| sha (40 chars) | text (0-13.4M chars) | id (2-117 chars) | tags (list) | created_at (25 chars) | metadata (2-31.7M chars) | last_modified (25 chars) |
---|---|---|---|---|---|---|
300ee6c5e5629d042bfc07cbc406e2f330b53659 |
# Dataset Card for ACL Anthology Corpus
[CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
This repository provides full text and metadata for the ACL Anthology collection (80k articles/posters as of September 2022), including the .pdf files and GROBID extractions of the PDFs.
## How is this different from what ACL anthology provides and what already exists?
- We provide PDFs, full text, references and other details extracted by GROBID from the PDFs, while [ACL Anthology](https://aclanthology.org/anthology+abstracts.bib.gz) only provides abstracts.
- A similar corpus called [ACL Anthology Network](https://clair.eecs.umich.edu/aan/about.php) exists, but it is now showing its age, with just 23k papers as of Dec 2016.
```python
>>> import pandas as pd
>>> df = pd.read_parquet('acl-publication-info.74k.parquet')
>>> df
acl_id abstract full_text corpus_paper_id pdf_hash ... number volume journal editor isbn
0 O02-2002 There is a need to measure word similarity whe... There is a need to measure word similarity whe... 18022704 0b09178ac8d17a92f16140365363d8df88c757d0 ... None None None None None
1 L02-1310 8220988 8d5e31610bc82c2abc86bc20ceba684c97e66024 ... None None None None None
2 R13-1042 Thread disentanglement is the task of separati... Thread disentanglement is the task of separati... 16703040 3eb736b17a5acb583b9a9bd99837427753632cdb ... None None None None None
3 W05-0819 In this paper, we describe a word alignment al... In this paper, we describe a word alignment al... 1215281 b20450f67116e59d1348fc472cfc09f96e348f55 ... None None None None None
4 L02-1309 18078432 011e943b64a78dadc3440674419821ee080f0de3 ... None None None None None
... ... ... ... ... ... ... ... ... ... ... ...
73280 P99-1002 This paper describes recent progress and the a... This paper describes recent progress and the a... 715160 ab17a01f142124744c6ae425f8a23011366ec3ee ... None None None None None
73281 P00-1009 We present an LFG-DOP parser which uses fragme... We present an LFG-DOP parser which uses fragme... 1356246 ad005b3fd0c867667118482227e31d9378229751 ... None None None None None
73282 P99-1056 The processes through which readers evoke ment... The processes through which readers evoke ment... 7277828 924cf7a4836ebfc20ee094c30e61b949be049fb6 ... None None None None None
73283 P99-1051 This paper examines the extent to which verb d... This paper examines the extent to which verb d... 1829043 6b1f6f28ee36de69e8afac39461ee1158cd4d49a ... None None None None None
73284 P00-1013 Spoken dialogue managers have benefited from u... Spoken dialogue managers have benefited from u... 10903652 483c818c09e39d9da47103fbf2da8aaa7acacf01 ... None None None None None
[73285 rows x 21 columns]
```
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/shauryr/ACL-anthology-corpus
- **Point of Contact:** [email protected]
### Dataset Summary
A dataframe with extracted metadata (see the table below for details) and the full text of the collection, ready for analysis: **size 489M**
### Languages
en, zh and others
## Dataset Structure
Dataframe
### Data Instances
Each row is a paper from the ACL Anthology.
### Data Fields
| **Column name** | **Description** |
| :---------------: | :---------------------------: |
| `acl_id` | unique ACL id |
| `abstract` | abstract extracted by GROBID |
| `full_text` | full text extracted by GROBID |
| `corpus_paper_id` | Semantic Scholar ID |
| `pdf_hash` | sha1 hash of the pdf |
| `numcitedby` | number of citations from S2 |
| `url` | link of publication |
| `publisher` | - |
| `address` | Address of conference |
| `year` | - |
| `month` | - |
| `booktitle` | - |
| `author` | list of authors |
| `title` | title of paper |
| `pages` | - |
| `doi` | - |
| `number` | - |
| `volume` | - |
| `journal` | - |
| `editor` | - |
| `isbn` | - |
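As a small illustration of how the fields above can be queried, here is a minimal sketch, assuming the `acl-publication-info.74k.parquet` file from the usage example at the top of this card has been downloaded (the dtype of `year` is not guaranteed, so it is compared as a string):

```python
import pandas as pd

# Load the parquet file shown in the usage example above.
df = pd.read_parquet("acl-publication-info.74k.parquet")

# Keep a few of the columns described in the table.
subset = df[["acl_id", "title", "year", "booktitle"]]

# Example query: papers published in 2020.
papers_2020 = subset[subset["year"].astype(str) == "2020"]
print(len(papers_2020), "papers from 2020")
print(papers_2020.head())
```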
## Dataset Creation
The corpus contains all the papers in the ACL Anthology as of September 2022.
### Source Data
- [ACL Anthology](https://aclanthology.org)
- [Semantic Scholar](https://www.semanticscholar.org)
## Additional Information
### Licensing Information
The ACL OCL corpus is released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). By using this corpus, you are agreeing to its usage terms.
### Citation Information
If you use this corpus in your research please use the following BibTeX entry:
@Misc{acl-ocl,
author = {Shaurya Rohatgi and Yanxia Qin and Benjamin Aw and Niranjana Unnithan and Min-Yen Kan},
title = {The ACL OCL Corpus: Advancing Open Science in Computational Linguistics},
howpublished = {arXiv},
year = {2022},
url = {https://huggingface.co/datasets/ACL-OCL/ACL-OCL-Corpus}
}
### Acknowledgements
We thank Semantic Scholar for providing access to the citation-related data in this corpus.
### Contributions
Thanks to [@shauryr](https://github.com/shauryr), [Yanxia Qin](https://github.com/qolina) and [Benjamin Aw](https://github.com/Benjamin-Aw-93) for adding this dataset. | WINGNUS/ACL-OCL | [
"task_categories:token-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"research papers",
"acl",
"region:us"
] | 2022-11-15T21:15:08+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "acronym-identification", "pretty_name": "acl-ocl-corpus", "tags": ["research papers", "acl"], "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]} | 2023-09-20T23:57:32+00:00 |
e0a908a181ab222d8b8ddb3e75e864ae4a67040d | This dataset for the Ukrainian language contains 200 original sentences manually labeled 0 (negative) or 1 (positive). | SergiiGurbych/sent_anal_ukr_binary | [
"region:us"
] | 2022-11-15T23:18:40+00:00 | {} | 2022-11-20T19:18:38+00:00 |
8146020cb5609af98a9f3013167538d6bad4f022 | Zanter/MyDataset | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-16T00:45:30+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-16T02:10:54+00:00 |
|
cceb4696560317e920d6512b906263bb425883a1 | Home page & Original source: https://github.com/yasumasaonoe/creak | amydeng2000/CREAK | [
"region:us"
] | 2022-11-16T01:03:14+00:00 | {} | 2023-02-24T01:13:57+00:00 |
ea91f2e742ddc5791c57f27b2939a836e43314ba | # Dataset Card for "olm-october-2022-tokenized-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | olm/olm-october-2022-tokenized-512 | [
"region:us"
] | 2022-11-16T01:24:02+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 79589759460, "num_examples": 25807315}], "download_size": 21375344353, "dataset_size": 79589759460}} | 2022-11-16T01:47:11+00:00 |
a5afb4e4fb86585ce4fba473c7660db197bbdfe9 | # Dataset Card for "diana_uribe"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/diana_uribe | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"base",
"region:us"
] | 2022-11-16T01:38:32+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23288573, "num_examples": 370}], "download_size": 11339946, "dataset_size": 23288573}, "tags": ["whisper", "whispering", "base"]} | 2022-11-19T19:57:00+00:00 |
8e54aa032996e146b47b98d91a8ce414a616b554 | # Dataset Card for "olm-october-2022-tokenized-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | olm/olm-october-2022-tokenized-1024 | [
"region:us"
] | 2022-11-16T02:16:14+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 79468727400, "num_examples": 12909150}], "download_size": 21027268683, "dataset_size": 79468727400}} | 2022-11-16T02:50:17+00:00 |
e963e16ce22be14a22b9f9760f5d241935b4d650 |
# Dataset Card for Teyvat BLIP captions
Dataset used to train [Teyvat characters text to image model](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion).
BLIP-generated captions for character images from the [genshin-impact fandom wiki](https://genshin-impact.fandom.com/wiki/Character#Playable_Characters) and the [biligame wiki for genshin impact](https://wiki.biligame.com/ys/%E8%A7%92%E8%89%B2).
For each row the dataset contains `image` and `text` keys. `image` is a varying-size PIL PNG, and `text` is the accompanying text caption. Only a train split is provided.
The `text` includes the tags `Teyvat`, `Name`, `Element`, `Weapon`, `Region`, `Model type`, and `Description`; the `Description` is captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
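A minimal loading sketch (assuming the repository loads directly with the `datasets` library under the id `Fazzie/Teyvat`):

```python
from datasets import load_dataset

# Only a train split is provided.
dataset = load_dataset("Fazzie/Teyvat", split="train")

sample = dataset[0]
image = sample["image"]   # a PIL image of varying size
caption = sample["text"]  # "Teyvat, Name:..., Element:..., ..." style caption

print(image.size)
print(caption)
```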
## Examples
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Ganyu_001.png" title = "Ganyu_001.png" style="max-width: 20%;" >
> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Ganyu_002.png" title = "Ganyu_002.png" style="max-width: 20%;" >
> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Keqing_003.png" title = "Keqing_003.png" style="max-width: 20%;" >
> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:a anime girl with long white hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Keqing_004.png" title = "Keqing_004.png" style="max-width: 20%;" >
> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:an anime character wearing a purple dress and cat ears | Fazzie/Teyvat | [
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"language_creators:found",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-16T03:47:33+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 71202, "num_examples": 234}], "download_size": 466995417, "dataset_size": 71202}} | 2022-12-13T02:09:42+00:00 |
1c510d8fba5836df9983f4600a832f226667892d | # Dataset Card for "espn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | illorg/espn | [
"region:us"
] | 2022-11-16T04:59:06+00:00 | {"dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44761, "num_examples": 4}], "download_size": 28603, "dataset_size": 44761}} | 2022-11-16T04:59:09+00:00 |
7e5ded70f2d2bb9ce0119a4c11507aad4205b5f6 | # AutoTrain Dataset for project: mm
## Dataset Description
This dataset has been automatically processed by AutoTrain for project mm.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Email from attorney A Dutkanych regarding executed Settlement Agreement",
"target": "Email from attorney A Dutkanych regarding executed Settlement Agreement"
},
{
"text": "Telephone conference with A Royer regarding additional factual background information relating to O Stapletons Charge of Discrimination allegations",
"target": "Telephone conference with A Royer regarding additional factual background information as to O Stapletons Charge of Discrimination allegations"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 88 |
| valid | 22 |
| alanila/autotrain-data-mm | [
"region:us"
] | 2022-11-16T06:27:09+00:00 | {"task_categories": ["conditional-text-generation"]} | 2022-11-16T06:27:30+00:00 |
9bed3be927cdb7ff24e120ba77ddca329fe3f868 | mike008/wedo | [
"license:openrail",
"region:us"
] | 2022-11-16T07:43:53+00:00 | {"license": "openrail"} | 2022-11-16T08:07:12+00:00 |
|
5fbbc2483212a46d4b9ee29e0eef8ac27c4d77c8 | # Romanian paraphrase dataset
This dataset was created by me, specifically for paraphrasing.
[t5-small-paraphrase-ro](https://huggingface.co/BlackKakapo/t5-small-paraphrase-ro)
[t5-small-paraphrase-ro-v2](https://huggingface.co/BlackKakapo/t5-small-paraphrase-ro-v2)
[t5-base-paraphrase-ro](https://huggingface.co/BlackKakapo/t5-base-paraphrase-ro)
[t5-base-paraphrase-ro-v2](https://huggingface.co/BlackKakapo/t5-base-paraphrase-ro-v2)
Here you can find ~100k examples of paraphrase. | BlackKakapo/paraphrase-ro | [
"task_categories:text2text-generation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ro",
"license:apache-2.0",
"region:us"
] | 2022-11-16T07:58:38+00:00 | {"language": "ro", "license": "apache-2.0", "multilinguality": "monolingual", "size_categories": "10K<n<100K", "task_categories": ["text2text-generation"]} | 2023-04-19T05:56:17+00:00 |
0e212142427b14722bc7ebd85e95fe2ed83dbcc7 | # Romanian grammar dataset
This dataset was created by me, specifically for grammar tasks.
Here you can find:
- ~1600k grammar examples (TRAIN)
- ~220k grammar examples (TEST) | BlackKakapo/grammar-ro | [
"task_categories:text2text-generation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ro",
"license:apache-2.0",
"region:us"
] | 2022-11-16T08:03:13+00:00 | {"language": "ro", "license": "apache-2.0", "multilinguality": "monolingual", "size_categories": "10K<n<100K", "task_categories": ["text2text-generation"]} | 2023-04-19T05:56:48+00:00 |
2340a7b0db028c7253d3ca63eb1bc6258922047f | minoassad/SDhistory | [
"license:afl-3.0",
"region:us"
] | 2022-11-16T08:21:06+00:00 | {"license": "afl-3.0"} | 2022-11-16T21:58:32+00:00 |
|
6331ea3b86d2c8f414dc60da4a1a6d6f560df0cf | # Dataset Card for "whisper-transcripts-linustechtips"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Whispering-GPT](https://github.com/matallanas/whisper_gpt_pipeline)
- **Repository:** [whisper_gpt_pipeline](https://github.com/matallanas/whisper_gpt_pipeline)
- **Paper:** [whisper](https://cdn.openai.com/papers/whisper.pdf) and [gpt](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
- **Point of Contact:** [Whispering-GPT organization](https://huggingface.co/Whispering-GPT)
### Dataset Summary
This dataset was created by applying Whisper to the videos of the YouTube channel [Linus Tech Tips](https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw). The dataset was created with a medium-size Whisper model.
### Languages
- **Language**: English
## Dataset Structure
The dataset consists of a single train split with the fields described below.
### Data Fields
The dataset is composed of:
- **id**: Id of the youtube video.
- **channel**: Name of the channel.
- **channel\_id**: Id of the youtube channel.
- **title**: Title given to the video.
- **categories**: Category of the video.
- **description**: Description added by the author.
- **text**: Whole transcript of the video.
- **segments**: A list with the timing and transcription of the video.
  - **start**: When the transcription segment starts.
  - **end**: When the transcription segment ends.
  - **text**: The text of the transcription.
### Data Splits
- Train split.
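A minimal sketch of reading the fields described above (assuming the dataset loads directly with the `datasets` library):

```python
from datasets import load_dataset

# Only a train split is provided.
dataset = load_dataset("Whispering-GPT/whisper-transcripts-linustechtips", split="train")

video = dataset[0]
print(video["title"])

# Each segment carries a start time, an end time, and the transcribed text.
for segment in video["segments"][:5]:
    print(f'{segment["start"]:.1f}-{segment["end"]:.1f}s: {segment["text"]}')
```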
## Dataset Creation
### Source Data
The transcriptions are from the videos of [Linus Tech Tips Channel](https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw)
### Contributions
Thanks to [Whispering-GPT](https://huggingface.co/Whispering-GPT) organization for adding this dataset. | Whispering-GPT/whisper-transcripts-linustechtips | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"medium",
"region:us"
] | 2022-11-16T08:29:52+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "categories", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "segments", "list": [{"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 177776633.92326075, "num_examples": 5655}], "download_size": 100975518, "dataset_size": 177776633.92326075}, "tags": ["whisper", "whispering", "medium"]} | 2022-12-06T13:10:26+00:00 |
a5b6dea1da418d7d505d261a5946055ee46d7a74 | iwaaaaa/aleechan | [
"license:artistic-2.0",
"region:us"
] | 2022-11-16T08:52:47+00:00 | {"license": "artistic-2.0"} | 2022-11-16T08:53:38+00:00 |
|
40ea8f976ff90ee137ac6ea16eeebf36fd33c8ce | # Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/venelink/ETPC/
- **Repository:**
- **Paper:** [ETPC - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation](http://www.lrec-conf.org/proceedings/lrec2018/pdf/661.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with paraphrase and negation. Both new resources contribute to better understanding the paraphrase phenomenon, and allow for studying the relationship between paraphrasing and negation. To the developers of Paraphrase Identification systems ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.
### Supported Tasks and Leaderboards
- `text-classification`
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Fields
- `idx`: Monotonically increasing index ID.
- `sentence1`: The first sentence of the paraphrase candidate pair.
- `sentence2`: The second sentence of the paraphrase candidate pair.
- `etpc_label`: Whether the text pair is a paraphrase, either "yes" (1) or "no" (0), according to the ETPC annotation schema.
- `mrpc_label`: Whether the text pair is a paraphrase, either "yes" (1) or "no" (0), according to the MRPC annotation schema.
- `negation`: Whether one sentence is a negation of the other, either "yes" (1) or "no" (0).
### Data Splits
train: 5801
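An illustrative sketch of working with these fields (assuming the dataset loads with the `datasets` library and exposes the fields listed above):

```python
from datasets import load_dataset

# ETPC ships a single train split of 5801 sentence pairs.
dataset = load_dataset("jpwahle/etpc", split="train")

example = dataset[0]
print(example["sentence1"])
print(example["sentence2"])
print("ETPC label:", example["etpc_label"], "| MRPC label:", example["mrpc_label"])

# Count pairs where the two annotation schemas disagree.
disagreements = sum(ex["etpc_label"] != ex["mrpc_label"] for ex in dataset)
print("Pairs where ETPC and MRPC labels differ:", disagreements)
```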
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
News article writers (the text pairs follow the Microsoft Research Paraphrase Corpus; see the `mrpc_label` field).
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```bibtex
@inproceedings{kovatchev-etal-2018-etpc,
title = "{ETPC} - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation",
author = "Kovatchev, Venelin and
Mart{\'\i}, M. Ant{\`o}nia and
Salam{\'o}, Maria",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1221",
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset. | jpwahle/etpc | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-16T08:54:46+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Extended Paraphrase Typology Corpus"} | 2023-10-02T15:05:00+00:00 |
84b8c52511486ba4fd5eb145ffbe4e693fba552c | thefivespace/dashandataset | [
"license:apache-2.0",
"region:us"
] | 2022-11-16T08:59:20+00:00 | {"license": "apache-2.0"} | 2022-11-16T08:59:20+00:00 |
|
7d09ef7987036af7b3c83a9375e4ee030891c616 |
# Dataset Card for ATCOSIM corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCOSIM homepage](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)
- **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
- **Paper:** [The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech](https://aclanthology.org/L08-1507/)
- **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
### Dataset Summary
The ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer ([description here](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)).
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
### Languages and other details
The text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German or Swiss French native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.
## Dataset Structure
### Data Fields
- `id (string)`: a string recording identifier for each example.
- `audio (audio)`: audio data for the given ID.
- `text (string)`: transcript of the file, already normalized. Follow these repositories for more details: [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as segment_end_time - segment_start_time
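A minimal usage sketch, assuming the corpus loads with the `datasets` library and exposes the fields listed above:

```python
from datasets import load_dataset

# The corpus provides train and test splits of 16 kHz ATC recordings.
dataset = load_dataset("Jzuluaga/atcosim_corpus", split="train")

example = dataset[0]
audio = example["audio"]            # dict with "array", "path" and "sampling_rate"
print(example["id"], "-", example["duration"], "seconds")
print("sampling rate:", audio["sampling_rate"])
print(example["text"][:80])         # normalized transcript
```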
## Additional Information
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [ATCOSIM corpus](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html) creators.
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
Authors of the dataset:
```
@inproceedings{hofbauer-etal-2008-atcosim,
title = "The {ATCOSIM} Corpus of Non-Prompted Clean Air Traffic Control Speech",
author = "Hofbauer, Konrad and
Petrik, Stefan and
Hering, Horst",
booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
month = may,
year = "2008",
address = "Marrakech, Morocco",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2008/pdf/545_paper.pdf",
}
```
| Jzuluaga/atcosim_corpus | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"robust-speech-recognition",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2203.16822",
"region:us"
] | 2022-11-16T09:04:42+00:00 | {"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "segment_start_time", "dtype": "float32"}, {"name": "segment_end_time", "dtype": "float32"}, {"name": "duration", "dtype": "float32"}], "splits": [{"name": "test", "num_bytes": 471628915.76, "num_examples": 1901}, {"name": "train", "num_bytes": 1934757106.88, "num_examples": 7638}], "download_size": 0, "dataset_size": 2406386022.6400003}, "tags": ["audio", "automatic-speech-recognition", "en-atc", "en", "robust-speech-recognition", "noisy-speech-recognition", "speech-recognition"]} | 2022-12-05T11:14:57+00:00 |
d4bfcca433547321d83ef9718b645805087bf70d |
# Dataset Card for Danish WIT
## Dataset Description
- **Repository:** <https://gist.github.com/saattrupdan/bb6c9c52d9f4b35258db2b2456d31224>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
### Dataset Summary
Google presented the Wikipedia Image Text (WIT) dataset in [July
2021](https://dl.acm.org/doi/abs/10.1145/3404835.3463257), a dataset which contains
scraped images from Wikipedia along with their descriptions. WikiMedia released
WIT-Base in [September
2021](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/),
being a modified version of WIT where they have removed the images with empty
"reference descriptions", as well as removing images where a person's face covers more
than 10% of the image surface, along with inappropriate images that are candidate for
deletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of
roughly 160,000 images with associated Danish descriptions. We release the dataset
under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), in
accordance with WIT-Base's [identical
license](https://huggingface.co/datasets/wikimedia/wit_base#licensing-information).
### Supported Tasks and Leaderboards
Training machine learning models for caption generation, zero-shot image classification
and text-image search are the intended tasks for this dataset. No leaderboard is active
at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
An example from the `train` split looks as follows.
```
{
"image": {
"bytes": b"\xff\xd8\xff\xe0\x00\x10JFIF...",
"path": None
},
"image_url": "https://upload.wikimedia.org/wikipedia/commons/4/45/Bispen_-_inside.jpg",
"embedding": [2.8568285, 2.9562542, 0.33794892, 8.753725, ...],
"metadata_url": "http://commons.wikimedia.org/wiki/File:Bispen_-_inside.jpg",
"original_height": 3161,
"original_width": 2316,
"mime_type": "image/jpeg",
"caption_attribution_description": "Kulturhuset Bispen set indefra. Biblioteket er til venstre",
"page_url": "https://da.wikipedia.org/wiki/Bispen",
"attribution_passes_lang_id": True,
"caption_alt_text_description": None,
"caption_reference_description": "Bispen set indefra fra 1. sal, hvor ....",
"caption_title_and_reference_description": "Bispen [SEP] Bispen set indefra ...",
"context_page_description": "Bispen er navnet på det offentlige kulturhus i ...",
"context_section_description": "Bispen er navnet på det offentlige kulturhus i ...",
"hierarchical_section_title": "Bispen",
"is_main_image": True,
"page_changed_recently": True,
"page_title": "Bispen",
"section_title": None
}
```
### Data Fields
The data fields are the same among all splits.
- `image`: a `dict` feature.
- `image_url`: a `str` feature.
- `embedding`: a `list` feature.
- `metadata_url`: a `str` feature.
- `original_height`: an `int` or `NaN` feature.
- `original_width`: an `int` or `NaN` feature.
- `mime_type`: a `str` or `None` feature.
- `caption_attribution_description`: a `str` or `None` feature.
- `page_url`: a `str` feature.
- `attribution_passes_lang_id`: a `bool` or `None` feature.
- `caption_alt_text_description`: a `str` or `None` feature.
- `caption_reference_description`: a `str` or `None` feature.
- `caption_title_and_reference_description`: a `str` or `None` feature.
- `context_page_description`: a `str` or `None` feature.
- `context_section_description`: a `str` or `None` feature.
- `hierarchical_section_title`: a `str` feature.
- `is_main_image`: a `bool` or `None` feature.
- `page_changed_recently`: a `bool` or `None` feature.
- `page_title`: a `str` feature.
- `section_title`: a `str` or `None` feature.
### Data Splits
Roughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split
the resulting 168,740 samples into a training set, validation set and testing set of
the following sizes:
| split | samples |
|---------|--------:|
| train | 167,460 |
| val | 256 |
| test | 1,024 |
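A short loading sketch (assuming the repository loads with the `datasets` library and the split names shown in the table above):

```python
from datasets import load_dataset

# Load the small validation split (256 samples) for a quick look.
dataset = load_dataset("severo/danish-wit", split="val")

sample = dataset[0]
print(sample["page_title"])
print(sample["caption_reference_description"])
print(sample["original_height"], "x", sample["original_width"])
```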
## Dataset Creation
### Curation Rationale
It is quite cumbersome to extract the Danish portion of the WIT-Base dataset,
especially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT
is purely to make it easier to work with the Danish portion of it.
### Source Data
The original data was collected from WikiMedia's
[WIT-Base](https://huggingface.co/datasets/wikimedia/wit_base) dataset, which in turn
comes from Google's [WIT](https://huggingface.co/datasets/google/wit) dataset.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).
| severo/danish-wit | [
"task_categories:image-to-text",
"task_categories:zero-shot-image-classification",
"task_categories:feature-extraction",
"task_ids:image-captioning",
"size_categories:100K<n<1M",
"source_datasets:wikimedia/wit_base",
"language:da",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-16T09:07:30+00:00 | {"language": ["da"], "license": ["cc-by-sa-4.0"], "size_categories": ["100K<n<1M"], "source_datasets": ["wikimedia/wit_base"], "task_categories": ["image-to-text", "zero-shot-image-classification", "feature-extraction"], "task_ids": ["image-captioning"], "pretty_name": "Danish WIT"} | 2022-11-14T11:01:24+00:00 |
eb26a6e109ccbe16dc493559a48d0b5ed4caa6c0 | minoassad/abcdc | [
"license:afl-3.0",
"doi:10.57967/hf/0111",
"region:us"
] | 2022-11-16T09:19:24+00:00 | {"license": "afl-3.0"} | 2022-11-16T09:28:16+00:00 |
|
5782fe07bd37ec0535ab0ef253a4ed7868a6c05a | siberspace/keke2 | [
"region:us"
] | 2022-11-16T09:26:24+00:00 | {} | 2022-11-16T09:28:28+00:00 |
|
cf4f3f82e3c7ab23e28768c8cdd03c761b1d739e | ascento/dota2 | [
"license:unlicense",
"region:us"
] | 2022-11-16T10:37:30+00:00 | {"license": "unlicense"} | 2022-11-16T10:42:15+00:00 |
|
9ee42a9b16a81f9553990103d6153e6e019e965d | kaliansh/BMW | [
"license:unknown",
"region:us"
] | 2022-11-16T12:13:54+00:00 | {"license": "unknown"} | 2022-12-25T05:59:09+00:00 |
|
d6339da797fc00d558d0b2c0354235a8ccf6b66e |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | taejunkim/djmix | [
"region:us"
] | 2022-11-16T13:28:37+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "The DJ Mix Dataset", "tags": []} | 2023-07-29T01:55:37+00:00 |
1abb5e627925e8a6689c0aa1c44c59fbac7953dd | # Dataset Card for "processed_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | taejunkim/processed_demo | [
"region:us"
] | 2022-11-16T14:22:14+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "package_name", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "star", "dtype": "int64"}, {"name": "version_id", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 956, "num_examples": 5}, {"name": "train", "num_bytes": 1508, "num_examples": 5}], "download_size": 7783, "dataset_size": 2464}} | 2022-11-16T14:22:33+00:00 |
575b4d50337307354318a0d21bbf4a701639d539 | # Dataset Card for "binomial_3blue1brown_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/binomial_3blue1brown_test | [
"region:us"
] | 2022-11-16T14:40:20+00:00 | {"dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59462, "num_examples": 2}], "download_size": 44700, "dataset_size": 59462}} | 2022-11-16T14:40:23+00:00 |
f599c406b0b7a26af81802dfbc9054a04be30c98 | # Dataset Card for "test_push_og"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push_og | [
"region:us"
] | 2022-11-16T14:56:03+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46, "num_examples": 3}, {"name": "test", "num_bytes": 32, "num_examples": 2}], "download_size": 1674, "dataset_size": 78}} | 2022-11-16T15:04:14+00:00 |
1ca34e4aefebfefc32f658afa3543126f959b464 | AmanK1202/Pokemon_playground | [
"license:other",
"region:us"
] | 2022-11-16T15:24:54+00:00 | {"license": "other"} | 2022-11-16T16:25:00+00:00 |
|
f1c8c125bcc621b03c73bd5bccdd38579521c627 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068523 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-16T15:57:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-16T16:43:43+00:00 |
d42f42526b7f46be81b6e46696be4bf516d13433 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068526 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-16T15:57:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-16T16:25:39+00:00 |
247e3b4ec632602bead7a90a4fd838450c69c780 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068524 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-16T15:58:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-16T17:45:44+00:00 |
cf77295d81f17cafdac7d0152765e8b42392e296 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068525 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-16T15:58:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-16T16:35:35+00:00 |
df149fbf9bcca94959d9177c4e99526172e530bf | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068527 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-16T15:58:07+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-16T16:31:49+00:00 |
3d7fb7d0c4be6a2f1c2772cb625f9d941273f3a3 | tofighi/bitcoin | [
"license:apache-2.0",
"region:us"
] | 2022-11-16T16:40:13+00:00 | {"license": "apache-2.0"} | 2022-11-16T16:40:59+00:00 |
|
131f0b6c9736853611c0294edea5346d8f0990cc | # Dataset Card for "zalo-ai-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hungngocphat01/zalo-ai-train | [
"region:us"
] | 2022-11-16T16:51:42+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 642229551.79, "num_examples": 9217}], "download_size": 641925455, "dataset_size": 642229551.79}} | 2022-11-19T05:06:32+00:00 |
456f0334dd95c31b2b458fff77626e024e87af03 | Den4ikAI/mailru-QA-old | [
"license:mit",
"region:us"
] | 2022-11-16T17:41:48+00:00 | {"license": "mit"} | 2022-11-16T18:01:57+00:00 |
|
6b2c98066ce597b9de0fb040e6baec52eadbbc75 |
# Dataset Card for Wikipedia
This repo is a wrapper around [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) that just concatenates data from the EU languages.
Please refer to it for a complete data card.
The EU languages we include are:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
As with `olm/wikipedia` you will need to install a few dependencies:
```
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
```python
from datasets import load_dataset
load_dataset("dlwh/eu_wikipedias", date="20221101")
```
Please refer to the original olm/wikipedia for a complete data card.
| dlwh/eu_wikipedias | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | 2022-11-16T18:03:07+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["cc-by-sa-3.0", "gfdl"], "multilinguality": ["multilingual"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Wikipedia"} | 2022-11-17T08:13:51+00:00 |
9ff900bee6cf6db545000652535d44345757fd51 | # VietNews-Abs-Sum
A dataset for the Vietnamese abstractive summarization task.
It includes all articles from the Vietnews (VNDS) dataset, which was released by Van-Hau Nguyen et al.
The articles were collected by the authors from the tuoitre.vn, vnexpress.net, and nguoiduatin.vn online newspapers.
# Introduction
This dataset was extracted from the Train/Val/Test split of the Vietnews dataset. All files from the *test_tokenized*, *train_tokenized* and *val_tokenized* directories are fetched and preprocessed with punctuation normalization. The subsets are then stored in the *raw* directory as 3 files: *train.tsv*, *valid.tsv*, and *test.tsv*. These files are considered the original raw dataset, as nothing changes except the punctuation normalization.
As pointed out in *BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese*, there are lots of duplicated samples across subsets. Therefore, we run an additional preprocessing pass to remove all duplicated samples. The process includes the following steps:
- First, remove all duplicates from each subset
- Second, merge all subsets into 1 set with the following order: test + val + train
- Finally, remove all duplicates from that merged set and then split out into 3 new subsets
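A rough sketch of the three deduplication steps above, using pandas; the column names and the exact TSV layout are assumptions for illustration only:

```python
import pandas as pd

# Column names are an assumption for illustration; the raw TSVs may differ.
cols = ["guid", "title", "abstract", "article"]

def load(path, subset):
    df = pd.read_csv(path, sep="\t", names=cols)
    df["subset"] = subset
    # Step 1: remove duplicates inside each subset.
    return df.drop_duplicates(subset=cols)

test = load("raw/test.tsv", "test")
val = load("raw/val.tsv", "valid")
train = load("raw/train.tsv", "train")

# Step 2: merge the subsets in the order test + val + train.
merged = pd.concat([test, val, train], ignore_index=True)

# Step 3: drop duplicates across subsets (the first occurrence wins, so a sample that
# appears in both test and train is kept only in test), then split back out.
merged = merged.drop_duplicates(subset=cols)
for name in ["train", "valid", "test"]:
    part = merged[merged["subset"] == name].drop(columns="subset")
    part.to_csv(f"processed/{name}_no_dups.tsv", sep="\t", index=False)
```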
The final subsets are the same as the original subsets, but with all duplicates removed. Each subset now has the following number of samples:
- train_no_dups.tsv: 99134 samples
- valid_no_dups.tsv: 22184 samples
- test_no_dups.tsv: 22498 samples
In total, we have 99134 + 22184 + 22498 = 143816 samples after filtering!
Note that this count is not the same as the number of samples reported in the BARTpho paper, but there are no duplicates inside each subset or across subsets anymore.
These filtered subsets are also exported into JSONLINE format to support future training scripts that require this data format.
# Directory structure
- raw: contains 3 raw subset files fetched from Vietnews directories
- train.tsv
- val.tsv
- test.tsv
- processed: contains duplicates filtered subsets
- test.tsv
- train.tsv
- valid.tsv
- test.jsonl
- train.jsonl
- valid.jsonl
- [and other variants]
# Credits
- Special thanks to Vietnews (VNDS) authors: https://github.com/ThanhChinhBK/vietnews
| ithieund/VietNews-Abs-Sum | [
"region:us"
] | 2022-11-16T18:26:54+00:00 | {} | 2022-11-17T10:46:16+00:00 |
f74aeef8979f2227041e35811b1a774270e7b9f6 | Artmann/coauthor | [
"license:mit",
"region:us"
] | 2022-11-16T18:45:10+00:00 | {"license": "mit"} | 2022-11-16T18:45:10+00:00 |
|
6eca9828d803494f43b9623a6e952c37a595778d | # Dataset Card for "testnnk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/testnnk | [
"region:us"
] | 2022-11-16T19:33:19+00:00 | {"dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 382632, "num_examples": 1}], "download_size": 176707, "dataset_size": 382632}} | 2022-11-16T19:33:22+00:00 |
a99195d7d7197eb9547133cea5046fb81b19a4aa | # Dataset Card for "logo-blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | salmonhumorous/logo-blip-caption | [
"region:us"
] | 2022-11-16T19:35:45+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24808769.89, "num_examples": 1435}], "download_size": 24242906, "dataset_size": 24808769.89}} | 2022-11-16T19:35:54+00:00 |
55de12c96f4bc4cc14351b3660e009c8c5186088 | # Dataset Card for "ChristmasClaymation-blip-captions"
All captions end with the suffix ", Christmas claymation style" | Norod78/ChristmasClaymation-blip-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-16T20:12:20+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "pretty_name": "Christmas claymation style, BLIP captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 128397390.0, "num_examples": 401}], "download_size": 125229613, "dataset_size": 128397390.0}, "tags": []} | 2022-11-16T20:18:18+00:00 |
49b676d5016b3f1c19df199f08d406f062ce400c | # viWikiHow-Abs-Sum
A dataset for the Vietnamese Abstractive Summarization task.
It includes all Vietnamese articles from WikiHow that were released in the WikiLingua dataset.
# Introduction
This dataset was extracted from the Train/Test split of the WikiLingua dataset. As the target language is Vietnamese, we removed all other files and kept only train.\*.vi, test.\*.vi, and val.\*.vi for the Vietnamese Abstractive Summarization task. The raw files are stored in the *raw* directory; we then ran a Python script to generate ready-to-use data files in TSV and JSONLINE formats, which are stored in the *processed* directory so they can easily be used by future training scripts.
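A minimal sketch of that conversion step is given below. The field names `source` and `target` are illustrative assumptions (the actual column names are not specified here), and the file names follow the directory listing in the next section.
```python
import csv
import json
import os

os.makedirs("processed", exist_ok=True)

# Map the raw WikiLingua split names to the output split names.
for raw_name, out_name in [("train", "train"), ("val", "valid"), ("test", "test")]:
    with open(f"raw/{raw_name}.src.vi", encoding="utf-8") as f_src, \
         open(f"raw/{raw_name}.tgt.vi", encoding="utf-8") as f_tgt:
        rows = [{"source": src.strip(), "target": tgt.strip()} for src, tgt in zip(f_src, f_tgt)]

    # TSV output
    with open(f"processed/{out_name}.tsv", "w", encoding="utf-8", newline="") as f_out:
        writer = csv.DictWriter(f_out, fieldnames=["source", "target"], delimiter="\t")
        writer.writeheader()
        writer.writerows(rows)

    # JSONLINE output
    with open(f"processed/{out_name}.jsonl", "w", encoding="utf-8") as f_out:
        for row in rows:
            f_out.write(json.dumps(row, ensure_ascii=False) + "\n")
```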
# Directory structure
- raw: contains raw text files from WikiLingua
- test.src.vi
- test.tgt.vi
- train.src.vi
- train.tgt.vi
- val.src.vi
- val.tgt.vi
- processed: contains generated TSV and JSONLINE files
- test.tsv
- train.tsv
- valid.tsv
- test.jsonl
- train.jsonl
- valid.jsonl
- [and other variants]
# Credits
- Special thanks to WikiLingua authors: https://github.com/esdurmus/Wikilingua
- Article provided by <a href="https://www.wikihow.com/Main-Page" target="_blank">wikiHow</a>, a wiki that is building the world's largest and highest quality how-to manual. Please edit this article and find author credits at the original wikiHow article on How to Tie a Tie. Content on wikiHow can be shared under a <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/" target="_blank">Creative Commons License</a>.
| ithieund/viWikiHow-Abs-Sum | [
"region:us"
] | 2022-11-16T20:34:58+00:00 | {} | 2022-11-16T20:50:46+00:00 |
483c7e0850992ddd10470da4892e80690e240362 | AmanK1202/BeneLogos | [
"license:other",
"region:us"
] | 2022-11-16T20:41:36+00:00 | {"license": "other"} | 2022-11-16T20:42:44+00:00 |
|
4fae6cd92b5b65284d944ac0348649012d349876 | Drozdik/tattoo_v0 | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:Drozdik/tattoo_v0",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-16T20:54:24+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["Drozdik/tattoo_v0"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Tattoo BLIP caption", "tags": []} | 2022-11-16T21:40:55+00:00 |
|
be7a8a072e974e015b08309f1b3df244d54f3b2c | # Dataset Card for "dataset_readmes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/dataset_readmes | [
"region:us"
] | 2022-11-16T21:16:16+00:00 | {"dataset_info": {"features": [{"name": "author", "dtype": "string"}, {"name": "cardData", "dtype": "null"}, {"name": "citation", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "disabled", "dtype": "bool"}, {"name": "downloads", "dtype": "float64"}, {"name": "gated", "dtype": "bool"}, {"name": "id", "dtype": "string"}, {"name": "lastModified", "dtype": "string"}, {"name": "paperswithcode_id", "dtype": "string"}, {"name": "private", "dtype": "bool"}, {"name": "sha", "dtype": "string"}, {"name": "siblings", "sequence": "null"}, {"name": "tags", "sequence": "string"}, {"name": "readme_url", "dtype": "string"}, {"name": "readme", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30248502, "num_examples": 7356}], "download_size": 9717727, "dataset_size": 30248502}} | 2022-11-16T21:16:19+00:00 |
f9c6c6198b775072d90d5d00fd3b01c1d18beba1 | # Dataset Card for "nn_to_hero"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | osanseviero/nn_to_hero | [
"whisper",
"region:us"
] | 2022-11-16T21:31:56+00:00 | {"tags": ["whisper"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1673867, "num_examples": 12}], "download_size": 765920, "dataset_size": 1673867}} | 2022-11-16T21:48:03+00:00 |
59ef766e66329eb9224c52814061dc93f134e42a | robertmyers/genesis | [
"license:bigscience-openrail-m",
"region:us"
] | 2022-11-16T21:56:18+00:00 | {"license": "bigscience-openrail-m"} | 2022-11-16T21:56:18+00:00 |
|
dec17e9391b767791e3808a655654467605a9d49 |
# Dataset Card for Twitter US Airline Sentiment
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/crowdflower/twitter-airline-sentiment
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
*This data originally came from [Crowdflower's Data for Everyone library](http://www.crowdflower.com/data-for-everyone).*
As the original source says,
> A sentiment analysis job about the problems of each major U.S. airline. Twitter data was scraped from February of 2015 and contributors were asked to first classify positive, negative, and neutral tweets, followed by categorizing negative reasons (such as "late flight" or "rude service").
The data we're providing on Kaggle is a slightly reformatted version of the original source. It includes both a CSV file and SQLite database. The code that does these transformations is [available on GitHub](https://github.com/benhamner/crowdflower-airline-twitter-sentiment)
For example, it contains whether the sentiment of the tweets in this set was positive, neutral, or negative for six US airlines:
[](https://www.kaggle.com/benhamner/d/crowdflower/twitter-airline-sentiment/exploring-airline-twitter-sentiment-data)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@crowdflower](https://kaggle.com/crowdflower)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | osanseviero/twitter-airline-sentiment | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-16T22:31:43+00:00 | {"license": ["cc-by-nc-sa-4.0"], "converted_from": "kaggle", "kaggle_id": "crowdflower/twitter-airline-sentiment"} | 2022-11-16T22:31:48+00:00 |
bc19a70b03111a6012f6c0a20211087668093f77 | # Dataset Card for "my-image-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ak8618/my-image-captioning-dataset | [
"region:us"
] | 2022-11-17T00:14:11+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 182262.0, "num_examples": 3}], "download_size": 164273, "dataset_size": 182262.0}} | 2022-11-17T00:14:17+00:00 |
3061d3821d70f52a8894fa1e30bccc995b94eeb0 | Vested-Sigil/Akira | [
"license:openrail",
"region:us"
] | 2022-11-17T00:20:25+00:00 | {"license": "openrail"} | 2022-11-17T00:20:25+00:00 |
|
1498ecae7c86e1a50efc2003d3d613483cb410c2 | # Dataset Card for "my-image-captioning-dataset1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ak8618/my-image-captioning-dataset1 | [
"region:us"
] | 2022-11-17T00:23:11+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 171096.0, "num_examples": 3}], "download_size": 163572, "dataset_size": 171096.0}} | 2022-11-17T00:23:16+00:00 |
d38a96426497e3b2a8643e86183fd575e09da88a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: 51la5/bert-large-NER
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@aniketrawat97](https://huggingface.co/aniketrawat97) for evaluating this model. | autoevaluate/autoeval-eval-conll2003-conll2003-c67e3d-2126868713 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-17T01:35:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "51la5/bert-large-NER", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-11-17T01:38:57+00:00 |
0243ab65168e9f9e2bdda0f201b43b4f84774561 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: 51la5/distilbert-base-NER
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@aniketrawat97](https://huggingface.co/aniketrawat97) for evaluating this model. | autoevaluate/autoeval-eval-conll2003-conll2003-c67e3d-2126868714 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-17T01:36:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "51la5/distilbert-base-NER", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-11-17T01:37:13+00:00 |
927244529f6f460a72a2cffc54ebacd58fca6250 | wkrmbm/regularisation_images | [
"license:cc0-1.0",
"region:us"
] | 2022-11-17T04:12:52+00:00 | {"license": "cc0-1.0"} | 2022-11-17T04:12:52+00:00 |
|
50d23cacadf49ee61dfe2b0fe57377c6367d5983 | martiwey/gh-java-methods | [
"task_categories:text-generation",
"task_categories:summarization",
"size_categories:10M<n<100M",
"license:mit",
"java",
"github",
"region:us"
] | 2022-11-17T04:23:06+00:00 | {"license": "mit", "size_categories": ["10M<n<100M"], "task_categories": ["text-generation", "summarization"], "tags": ["java", "github"]} | 2023-07-08T11:00:08+00:00 |
|
c3443fae8da8cc473b1f1b6ced73ae07b7d14529 |
# IMaSC: ICFOSS Malayalam Speech Corpus
**IMaSC** is a Malayalam text and speech corpus made available by [ICFOSS](https://icfoss.in/) for the purpose of developing speech technology for Malayalam, particularly text-to-speech. The corpus contains 34,473 text-audio pairs of Malayalam sentences spoken by 8 speakers, totalling approximately 50 hours of audio.
## Dataset Description
- **Paper:** [IMaSC — ICFOSS Malayalam Speech Corpus](https://arxiv.org/abs/2211.12796)
- **Point of Contact:** [Thennal D K](mailto:[email protected])
## Dataset Structure
The dataset consists of 34,473 instances with fields `text`, `speaker`, and `audio`. The audio is mono, sampled at 16 kHz. The transcription is normalized and only includes Malayalam characters and common punctuation. The table given below specifies how the 34,473 instances are split between the speakers, along with some basic speaker info:
| Speaker | Gender | Age | Time (HH:MM:SS) | Sentences |
| --- | --- | --- | --- | --- |
| Joji | Male | 28 | 06:08:55 | 4,332 |
| Sonia | Female | 43 | 05:22:39 | 4,294 |
| Jijo | Male | 26 | 05:34:05 | 4,093 |
| Greeshma | Female | 22 | 06:32:39 | 4,416 |
| Anil | Male | 48 | 05:58:34 | 4,239 |
| Vidhya | Female | 23 | 04:21:56 | 3,242 |
| Sonu | Male | 25 | 06:04:43 | 4,219 |
| Simla | Female | 24 | 09:34:21 | 5,638 |
| **Total** | | | **49:37:54** | **34,473** |
### Data Instances
An example instance is given below:
```python
{'text': 'സർവ്വകലാശാല വൈസ് ചാൻസലർ ഡോ. ചന്ദ്രബാബുവിനും സംഭവം തലവേദനയാവുകയാണ്',
'speaker': 'Sonia',
'audio': {'path': None,
'array': array([ 0.00921631, 0.00930786, 0.00939941, ..., -0.00497437,
-0.00497437, -0.00497437]),
'sampling_rate': 16000}}
```
### Data Fields
- **text** (str): Transcription of the audio file
- **speaker** (str): The name of the speaker
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio (always None)
### Data Splits
We provide all the data in a single `train` split. The loaded dataset object thus looks like this:
```python
DatasetDict({
train: Dataset({
features: ['text', 'speaker', 'audio'],
num_rows: 34473
})
})
```
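For reference, a minimal loading sketch with the 🤗 `datasets` library is shown below (it assumes the library's audio decoding dependencies, e.g. `soundfile`, are installed):
```python
from datasets import load_dataset

# Load the full corpus; all data lives in the single "train" split.
imasc = load_dataset("thennal/IMaSC", split="train")

# Inspect one text-audio pair.
sample = imasc[0]
print(sample["speaker"], sample["text"])
print(sample["audio"]["sampling_rate"])  # 16000
```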
### Dataset Creation
The text is sourced from [Malayalam Wikipedia](https://ml.wikipedia.org), and read by our speakers in studio conditions. Extensive error correction was conducted to provide a clean, accurate database. Further details are given in our paper, accessible at [https://arxiv.org/abs/2211.12796](https://arxiv.org/abs/2211.12796).
## Additional Information
### Licensing
The corpus is made available under the [Creative Commons license (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation
```
@misc{gopinath2022imasc,
title={IMaSC -- ICFOSS Malayalam Speech Corpus},
author={Deepa P Gopinath and Thennal D K and Vrinda V Nair and Swaraj K S and Sachin G},
year={2022},
eprint={2211.12796},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
| thennal/IMaSC | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ml",
"license:cc-by-sa-4.0",
"arxiv:2211.12796",
"region:us"
] | 2022-11-17T05:16:00+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ml"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-to-speech", "automatic-speech-recognition"], "task_ids": [], "pretty_name": "ICFOSS Malayalam Speech Corpus", "tags": []} | 2022-12-08T17:21:02+00:00 |
3010ef33933c715abd286c21aa7f7efb1370d388 | napatswift/th-txt-img | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:th",
"license:cc",
"500k",
"region:us"
] | 2022-11-17T05:55:33+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["th"], "license": ["cc"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "ThaiTextImage", "tags": ["500k"]} | 2022-11-22T13:52:38+00:00 |
|
52cc3f9653a75e6b972a3e8be232554b405569cd | file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/core
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/DFLIMG
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/doc
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/facelib
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/flaskr
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/localization
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/mainscripts
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/merger
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/models
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/samplelib
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/utils
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/XSegEditor
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/DeepfacelabMe%202022%20Sp1_Ver2.55.txt
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/DeepfacelabMe%202022%20Sp1_Ver2.55_p1.txt
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/environment.yml
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/main.py
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/requirements-colab.txt
file:///home/mzhh/%E6%A1%8C%E9%9D%A2/DFL_Me2.55/_internal/requirements-cuda.txt
| sdssfdf/deepfacelabme | [
"region:us"
] | 2022-11-17T07:57:47+00:00 | {} | 2022-11-17T08:05:34+00:00 |
530f80a26babad9381cb6c13ea768c63a07eda6c | # Dataset Card for "github-issues"
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: Hugging Face GitHub Issues
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- bio
- paper
task_categories:
- text-classification
- table-to-text
task_ids:
- multi-class-classification
- sentiment-classification
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | taeseokyi/github-issues | [
"region:us"
] | 2022-11-17T08:11:18+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "dtype": "null"}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": 
"active_lock_reason", "dtype": "null"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 386968, "num_examples": 100}], "download_size": 169642, "dataset_size": 386968}} | 2022-11-17T08:28:08+00:00 |
9015b3dfab8fbd3de4c6783ae770985b3caff4a8 | carlosdanielhernandezmena/dummy_corpus_asr_es | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-17T08:23:08+00:00 | {"license": "cc-by-4.0"} | 2023-02-24T22:23:26+00:00 |
|
022bd3ea57091b057df3cf9e570ae0cb8c2c29a4 |
# Dataset Card for [np20ng]
## Table of Contents
- [Dataset Card for [np20ng]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** To be updated
- **Repository:** To be updated
- **Paper:** Submitted for review
- **Leaderboard:** To be updated
- **Point of Contact:** To be updated
### Dataset Summary
This is a multi-class Nepali text classification dataset. Texts are the news documents and labels are the news categories. It consists of over 200,000 documents categorized into 20 different Nepali news groups. News documents from 10 different news sources are compiled into this dataset. Labeling is done using the category-specific news from the respective news portals.
### Supported Tasks and Leaderboards
- Multi-class text classification from news document
- Multi-class text classification from news headings
- News heading generation from news document
### Languages
- Nepali
## Dataset Structure
### Data Instances
The dataset consists of over 200,000 Nepali news documents categorized into 20 different news categories.
### Data Fields
- **category:** News category
- **content:** News document (main text)
- **headline:** News headline
- **source:** News source from where the news is taken from
### Data Splits
The dataset is provided as a single split; it is not divided into train/validation/test subsets.
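A minimal loading sketch with the 🤗 `datasets` library is given below; it assumes the data is exposed as a single `train` split, which may differ from the released configuration.
```python
from datasets import load_dataset

# Load the whole dataset (no predefined train/validation/test partition).
np20ng = load_dataset("Suyogyart/np20ng", split="train")

# Each example carries the news category, the document text, the headline and the source portal.
example = np20ng[0]
print(example["category"], example["source"])
print(example["headline"])
```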
## Dataset Creation
### Curation Rationale
To develop a large-scale Nepali text classification dataset and release it to the public for further research and development
### Source Data
#### Initial Data Collection and Normalization
Data are scraped from popular Nepali news portals such as Onlinekhabar, Nepalkhabar, Ekantipur, Ratopati, Gorkhapatra, Nepalipatra, Educationpati, Crimenews, etc.
#### Who are the source language producers?
News portals
### Annotations
#### Annotation process
Category labeling of news documents is done automatically, as the documents are scraped from category-specific URLs of each news source
#### Who are the annotators?
News portals
### Personal and Sensitive Information
This dataset does not contain any personal or sensitive information. However, the news content may include biased or irregular information that could be sensitive and is not attributable to the original author of the dataset
## Considerations for Using the Data
### Social Impact of Dataset
No issues.
### Discussion of Biases
Category labels depend on how the news portals categorized their articles, which may introduce some bias between categories.
### Other Known Limitations
News summaries are not included
## Additional Information
### Dataset Curators
Me myself.
### Licensing Information
Apache-2.0
### Citation Information
To be updated later (Paper submission in process)
### Contributions
Thanks to [@Suyogyart](https://github.com/Suyogyart) for adding this dataset.
| Suyogyart/np20ng | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ne",
"license:apache-2.0",
"nepali-newsgroups",
"nepali-20-newsgroups",
"np20ng",
"nepali text classification",
"natural language processing",
"news",
"headline",
"region:us"
] | 2022-11-17T09:13:15+00:00 | {"annotations_creators": ["other"], "language_creators": ["machine-generated"], "language": ["ne"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "np20ng", "tags": ["nepali-newsgroups", "nepali-20-newsgroups", "np20ng", "nepali text classification", "natural language processing", "news", "headline"]} | 2022-11-17T14:14:33+00:00 |
0912bb6c9393c76d62a7c5ee81c4c817ff47c9f4 |
# STS-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://alt.qcri.org/semeval2014/task10/
- **Point of Contact:** [Aitor Gonzalez]([email protected])
### Dataset Summary
For Semantic Text Similarity, we collected the Spanish test sets from SemEval-2014 (Agirre et al., 2014) and SemEval-2015 (Agirre et al., 2015). Since no training data was provided for the Spanish subtask, we randomly sampled both datasets into 1,321 sentences for the train set, 78 sentences for the development set, and 156 sentences for the test set. To make the task harder for the models, we purposely made the development set smaller than the test set.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Semantic Text Similarity Scoring
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
```
{
  'sentence1': 'El "tendón de Aquiles" ("tendo Achillis") o "tendón calcáneo" ("tendo calcaneus") es un tendón de la parte posterior de la pierna.',
  'sentence2': 'El tendón de Aquiles es la extensión tendinosa de los tres músculos de la pantorrilla: gemelo, sóleo y plantar delgado.',
  'label': 2.8
}
```
### Data Fields
- sentence1: String
- sentence2: String
- label: Float
### Data Splits
- train: 1,321 instances
- dev: 78 instances
- test: 156 instances
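A typical usage sketch for this scoring task is given below. The split names and loading arguments are assumptions, and the word-overlap baseline is only illustrative; STS systems are conventionally evaluated with the Pearson correlation against the gold scores.
```python
from datasets import load_dataset
from scipy.stats import pearsonr

# Load the corpus (split names and loading arguments are assumptions).
sts = load_dataset("PlanTL-GOB-ES/sts-es")
test = sts["test"]

# A crude baseline: Jaccard word overlap between the two sentences.
def jaccard(s1: str, s2: str) -> float:
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

predictions = [jaccard(ex["sentence1"], ex["sentence2"]) for ex in test]

# Evaluate against the gold similarity scores with Pearson correlation.
correlation, _ = pearsonr(predictions, test["label"])
print(f"Pearson r: {correlation:.3f}")
```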
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The source data came from the Spanish Wikipedia (2013 dump) and texts from Spanish news (2014).
For more information visit the paper from the SemEval-2014 Shared Task [(Agirre et al., 2014)](https://aclanthology.org/S14-2010.pdf) and the SemEval-2015 Shared Task [(Agirre et al., 2015)](https://aclanthology.org/S15-2045.pdf).
#### Initial Data Collection and Normalization
For more information visit the paper from the SemEval-2014 Shared Task [(Agirre et al., 2014)](https://aclanthology.org/S14-2010.pdf) and the SemEval-2015 Shared Task [(Agirre et al., 2015)](https://aclanthology.org/S15-2045.pdf).
#### Who are the source language producers?
Journalists and Wikipedia contributors.
### Annotations
#### Annotation process
For more information visit the paper from the SemEval-2014 Shared Task [(Agirre et al., 2014)](https://aclanthology.org/S14-2010.pdf) and the SemEval-2015 Shared Task [(Agirre et al., 2015)](https://aclanthology.org/S15-2045.pdf).
#### Who are the annotators?
For more information visit the paper from the SemEval-2014 Shared Task [(Agirre et al., 2014)](https://aclanthology.org/S14-2010.pdf) and the SemEval-2015 Shared Task [(Agirre et al., 2015)](https://aclanthology.org/S15-2045.pdf).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
No postprocessing steps were applied to mitigate potential social biases.
## Additional Information
### Citation Information
The following papers must be cited when using this corpus:
```
@inproceedings{agirre2015semeval,
title={Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability},
author={Agirre, Eneko and Banea, Carmen and Cardie, Claire and Cer, Daniel and Diab, Mona and Gonzalez-Agirre, Aitor and Guo, Weiwei and Lopez-Gazpio, Inigo and Maritxalar, Montse and Mihalcea, Rada and others},
booktitle={Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015)},
pages={252--263},
year={2015}
}
@inproceedings{agirre2014semeval,
title={SemEval-2014 Task 10: Multilingual Semantic Textual Similarity.},
author={Agirre, Eneko and Banea, Carmen and Cardie, Claire and Cer, Daniel M and Diab, Mona T and Gonzalez-Agirre, Aitor and Guo, Weiwei and Mihalcea, Rada and Rigau, German and Wiebe, Janyce},
booktitle={SemEval@ COLING},
pages={81--91},
year={2014}
}
```
| PlanTL-GOB-ES/sts-es | [
"task_categories:text-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:es",
"region:us"
] | 2022-11-17T12:11:58+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-scoring", "text-scoring"], "pretty_name": "STS-es", "tags": []} | 2023-01-19T09:45:42+00:00 |
719918f7e4ce82d329ab8a0e2610e7fb239bd0c1 | # Dataset Card for "mm_tiny_imagenet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | israfelsr/mm_tiny_imagenet | [
"region:us"
] | 2022-11-17T12:44:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "n01443537", "1": "n01629819", "2": "n01641577", "3": "n01644900", "4": "n01698640", "5": "n01742172", "6": "n01768244", "7": "n01770393", "8": "n01774384", "9": "n01774750", "10": "n01784675", "11": "n01882714", "12": "n01910747", "13": "n01917289", "14": "n01944390", "15": "n01950731", "16": "n01983481", "17": "n01984695", "18": "n02002724", "19": "n02056570", "20": "n02058221", "21": "n02074367", "22": "n02094433", "23": "n02099601", "24": "n02099712", "25": "n02106662", "26": "n02113799", "27": "n02123045", "28": "n02123394", "29": "n02124075", "30": "n02125311", "31": "n02129165", "32": "n02132136", "33": "n02165456", "34": "n02226429", "35": "n02231487", "36": "n02233338", "37": "n02236044", "38": "n02268443", "39": "n02279972", "40": "n02281406", "41": "n02321529", "42": "n02364673", "43": "n02395406", "44": "n02403003", "45": "n02410509", "46": "n02415577", "47": "n02423022", "48": "n02437312", "49": "n02480495", "50": "n02481823", "51": "n02486410", "52": "n02504458", "53": "n02509815", "54": "n02666347", "55": "n02669723", "56": "n02699494", "57": "n02769748", "58": "n02788148", "59": "n02791270", "60": "n02793495", "61": "n02795169", "62": "n02802426", "63": "n02808440", "64": "n02814533", "65": "n02814860", "66": "n02815834", "67": "n02823428", "68": "n02837789", "69": "n02841315", "70": "n02843684", "71": "n02883205", "72": "n02892201", "73": "n02909870", "74": "n02917067", "75": "n02927161", "76": "n02948072", "77": "n02950826", "78": "n02963159", "79": "n02977058", "80": "n02988304", "81": "n03014705", "82": "n03026506", "83": "n03042490", "84": "n03085013", "85": "n03089624", "86": "n03100240", "87": "n03126707", "88": "n03160309", "89": "n03179701", "90": "n03201208", "91": "n03255030", "92": "n03355925", "93": "n03373237", "94": "n03388043", "95": "n03393912", "96": "n03400231", "97": "n03404251", "98": "n03424325", "99": "n03444034", "100": "n03447447", "101": "n03544143", "102": "n03584254", "103": "n03599486", "104": "n03617480", "105": "n03637318", "106": "n03649909", "107": "n03662601", "108": "n03670208", "109": "n03706229", "110": "n03733131", "111": "n03763968", "112": "n03770439", "113": "n03796401", "114": "n03814639", "115": "n03837869", "116": "n03838899", "117": "n03854065", "118": "n03891332", "119": "n03902125", "120": "n03930313", "121": "n03937543", "122": "n03970156", "123": "n03977966", "124": "n03980874", "125": "n03983396", "126": "n03992509", "127": "n04008634", "128": "n04023962", "129": "n04070727", "130": "n04074963", "131": "n04099969", "132": "n04118538", "133": "n04133789", "134": "n04146614", "135": "n04149813", "136": "n04179913", "137": "n04251144", "138": "n04254777", "139": "n04259630", "140": "n04265275", "141": "n04275548", "142": "n04285008", "143": "n04311004", "144": "n04328186", "145": "n04356056", "146": "n04366367", "147": "n04371430", "148": "n04376876", "149": "n04398044", "150": "n04399382", "151": "n04417672", "152": "n04456115", "153": "n04465666", "154": "n04486054", "155": "n04487081", "156": "n04501370", "157": "n04507155", "158": "n04532106", "159": "n04532670", "160": "n04540053", "161": "n04560804", "162": "n04562935", "163": "n04596742", "164": "n04598010", "165": "n06596364", "166": "n07056680", "167": "n07583066", "168": "n07614500", "169": "n07615774", "170": "n07646821", "171": "n07647870", "172": "n07657664", "173": "n07695742", "174": "n07711569", "175": 
"n07715103", "176": "n07720875", "177": "n07749582", "178": "n07753592", "179": "n07768694", "180": "n07871810", "181": "n07873807", "182": "n07875152", "183": "n07920052", "184": "n07975909", "185": "n08496334", "186": "n08620881", "187": "n08742578", "188": "n09193705", "189": "n09246464", "190": "n09256479", "191": "n09332890", "192": "n09428293", "193": "n12267677", "194": "n12520864", "195": "n13001041", "196": "n13652335", "197": "n13652994", "198": "n13719102", "199": "n14991210"}}}}, {"name": "caption", "dtype": "string"}, {"name": "label_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 159978960.0, "num_examples": 80000}, {"name": "validation", "num_bytes": 40004701.0, "num_examples": 20000}], "download_size": 149059401, "dataset_size": 199983661.0}} | 2022-12-16T11:19:54+00:00 |
6e5d367220c831c72fb41436a75345d8bfd8daee | dfghnbfg | alvaroec98/images_prueba | [
"region:us"
] | 2022-11-17T12:53:10+00:00 | {} | 2022-11-17T14:53:11+00:00 |
0ecd59e6c3eb60bae5e124ec827f60d5f8e2a2d1 |
# Dataset Card for librispeech_asr_dummy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:[email protected])
### Dataset Summary
This is a **truncated** version of the LibriSpeech dataset. It contains 20 samples from each of the splits. To view the full dataset, visit: https://huggingface.co/datasets/librispeech_asr
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
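For illustration, WER can be computed with the Hugging Face `evaluate` library (a minimal sketch with made-up strings, not results from this dataset):

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["a man said to the universe sir i exist"]
references = ["a man said to the universe sir i exists"]

# 1 substitution over a 9-word reference -> WER of about 0.111
print(wer_metric.compute(predictions=predictions, references=references))
```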
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
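For reference, a minimal sketch of loading this truncated dataset and inspecting one sample (the split name follows the configs declared in this card's metadata; adjust it if your version differs):

```python
from datasets import load_dataset

# Load the dummy "clean" validation split (20 examples, per this card).
ds = load_dataset("sanchit-gandhi/librispeech_asr_dummy", split="validation.clean")

sample = ds[0]
print(sample["id"], sample["speaker_id"], sample["chapter_id"])
print(sample["text"])
print(sample["audio"]["sampling_rate"])  # 16000
```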
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A short resampling sketch is shown after this list.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
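As referenced above, a minimal sketch of index-first access and on-the-fly resampling via `cast_column` (the split name follows this card's metadata and is otherwise an assumption):

```python
from datasets import Audio, load_dataset

ds = load_dataset("sanchit-gandhi/librispeech_asr_dummy", split="validation.clean")

# Query the sample index first, then the "audio" column, so only this one file is decoded.
print(ds[0]["audio"]["sampling_rate"])  # 16000

# Cast the audio column to resample lazily on access, e.g. down to 8 kHz.
ds = ds.cast_column("audio", Audio(sampling_rate=8_000))
print(ds[0]["audio"]["sampling_rate"])  # 8000
```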
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
| sanchit-gandhi/librispeech_asr_dummy | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-17T13:29:57+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "task_ids": ["speaker-identification"], "paperswithcode_id": "librispeech-1", "pretty_name": "LibriSpeech Dummy", "configs": [{"config_name": "default", "data_files": [{"split": "test.other", "path": "data/test.other-*"}, {"split": "train.other.500", "path": "data/train.other.500-*"}, {"split": "train.clean.360", "path": "data/train.clean.360-*"}, {"split": "validation.clean", "path": "data/validation.clean-*"}, {"split": "test.clean", "path": "data/test.clean-*"}, {"split": "validation.other", "path": "data/validation.other-*"}, {"split": "train.clean.100", "path": "data/train.clean.100-*"}]}, {"config_name": "short-form", "data_files": [{"split": "validation", "path": "short-form/validation-*"}]}], "dataset_info": {"config_name": "short-form", "features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 9677021.0, "num_examples": 73}], "download_size": 9192059, "dataset_size": 9677021.0}} | 2023-11-02T11:52:44+00:00 |
c7b2a2a29e43fa0e82ae56739900da375f1c417d | afschowdhury/faq_test | [
"license:mit",
"region:us"
] | 2022-11-17T14:38:05+00:00 | {"license": "mit"} | 2022-11-17T14:58:56+00:00 |
|
ae7e52141e910576fe9665f751b2043f900d097c | AmanK1202/LogoGeneration | [
"license:other",
"region:us"
] | 2022-11-17T14:47:06+00:00 | {"license": "other", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21212847.0, "num_examples": 821}], "download_size": 19963981, "dataset_size": 21212847.0}} | 2022-11-17T14:51:17+00:00 |
|
550b4b11a5ac147bf261ff150a65b98b01469b3f | nkandpa2/pretraining_entities | [
"license:bigscience-openrail-m",
"region:us"
] | 2022-11-17T15:35:51+00:00 | {"license": "bigscience-openrail-m"} | 2022-11-17T17:30:12+00:00 |
|
6c5fed17b4a853735e7d56709d184e50374af4a6 |
# Dataset Card for MNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://yann.lecun.com/exdb/mnist/
- **Repository:**
- **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.
Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist).
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its label:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>,
'label': 5
}
```
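A minimal sketch of loading the data and inspecting one example (the snippet loads the canonical `mnist` dataset; substitute this repository's id to load this copy):

```python
from datasets import load_dataset

mnist = load_dataset("mnist", split="train")

# Query the sample index first so only this single image is decoded.
example = mnist[0]
print(example["label"])                               # e.g. 5
print(example["image"].mode, example["image"].size)   # "L" (28, 28)
```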
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `label`: an integer between 0 and 9 representing the digit.
### Data Splits
The data is split into training and test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students.
The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.
### Source Data
#### Initial Data Collection and Normalization
The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
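To make the procedure concrete, here is a rough re-implementation sketch of the described normalization (an illustration only, not the original NIST/MNIST pipeline; the resampling filter and rounding choices are assumptions):

```python
import numpy as np
from PIL import Image

def nist_style_normalize(img: Image.Image) -> np.ndarray:
    """Illustrative sketch of the preprocessing described above (not the original code)."""
    # 1. Fit the digit into a 20x20 box while preserving the aspect ratio;
    #    anti-aliased resampling produces the grey levels mentioned above.
    img = img.convert("L")
    w, h = img.size
    scale = 20.0 / max(w, h)
    img = img.resize((max(1, round(w * scale)), max(1, round(h * scale))), Image.LANCZOS)
    digit = np.asarray(img, dtype=np.float32)

    # 2. Compute the pixel center of mass of the resized digit.
    total = digit.sum()
    if total == 0:  # blank input: fall back to the geometric center
        cy, cx = (digit.shape[0] - 1) / 2, (digit.shape[1] - 1) / 2
    else:
        ys, xs = np.indices(digit.shape)
        cy = (ys * digit).sum() / total
        cx = (xs * digit).sum() / total

    # 3. Paste into a 28x28 canvas so the center of mass lands at the center (14, 14).
    canvas = np.zeros((28, 28), dtype=np.float32)
    top, left = int(round(14 - cy)), int(round(14 - cx))
    y0, x0 = max(top, 0), max(left, 0)
    y1, x1 = min(top + digit.shape[0], 28), min(left + digit.shape[1], 28)
    canvas[y0:y1, x0:x1] = digit[y0 - top:y1 - top, x0 - left:x1 - left]
    return canvas
```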
#### Who are the source language producers?
Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
### Annotations
#### Annotation process
The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.
#### Who are the annotators?
Same as the source data creators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chris Burges, Corinna Cortes and Yann LeCun
### Licensing Information
MIT License
### Citation Information
```
@article{lecun2010mnist,
title={MNIST handwritten digit database},
author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
volume={2},
year={2010}
}
```
### Contributions
Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset. | severo/mnist | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-nist",
"language:en",
"license:mit",
"region:us"
] | 2022-11-17T16:33:16+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-nist"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "mnist", "pretty_name": "MNIST", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9"}}}}], "config_name": "mnist", "splits": [{"name": "test", "num_bytes": 2916440, "num_examples": 10000}, {"name": "train", "num_bytes": 17470848, "num_examples": 60000}], "download_size": 11594722, "dataset_size": 20387288}} | 2022-11-03T16:46:54+00:00 |
060a986afef8ef37e7410183b61d982472ec2860 | # Dataset Card for "LogoGeneration_png"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AmanK1202/LogoGeneration_png | [
"region:us"
] | 2022-11-17T16:56:53+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120298419.0, "num_examples": 821}], "download_size": 120174466, "dataset_size": 120298419.0}} | 2022-11-17T16:57:32+00:00 |
32b5c393dc9f8c6d9f278f61040c79f9235c44a0 | A subset of [diffusiondb](https://huggingface.co/datasets/poloclub/diffusiondb) containing only the unique prompts.
This subset was created for the [Prompt Extend](https://github.com/daspartho/prompt-extend) project. | daspartho/stable-diffusion-prompts | [
"language:en",
"region:us"
] | 2022-11-17T17:25:56+00:00 | {"language": "en", "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 284636288, "num_examples": 1819808}], "download_size": 101931289, "dataset_size": 284636288}} | 2023-08-25T13:33:31+00:00 |
6124bed5f88aac1f16b37b6b24e464b68c2853d5 | # Dataset Card for "olm-CC-MAIN-2022-40-sampling-ratio-0.0001-ne-language"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-CC-MAIN-2022-40-sampling-ratio-0.0001-ne-language | [
"region:us"
] | 2022-11-17T17:33:23+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}, {"name": "last_modified_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 136949.0, "num_examples": 37}], "download_size": 62812, "dataset_size": 136949.0}} | 2022-11-17T17:34:16+00:00 |
5e848b43d8c0ed4aa7ba7de05a7b510560d71100 |
# Stripe Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/stripe_style/resolve/main/stripe_style_showcase.jpg"/>
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"drawn by stripe_style"```
Personally, I would recommend to use my embeddings with a strength of 0.8, like ```"drawn by (stripe_style:0.8)"```
I trained the embedding for two epochs, up to 5,000 steps.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/stripe_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
] | 2022-11-17T17:47:24+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/stripe_style/resolve/main/stripe_style_showcase.jpg", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-11-17T17:55:11+00:00 |
b83d8fb92bfe1755463c32528f8b2146f06de8d5 | GZanc/Test | [
"license:openrail",
"region:us"
] | 2022-11-17T18:04:13+00:00 | {"license": "openrail"} | 2022-11-18T16:19:24+00:00 |
|
f94e826e12c3589ff908d338492211a4ebabe7a9 |
# Dataset Summary
AfriCLIRMatrix is a test collection for cross-lingual information retrieval research in 15 diverse African languages. This resource comprises English queries with query–document relevance judgments in 15 African languages, automatically mined from Wikipedia.
This dataset stores documents of AfriCLIRMatrix. To access the queries and judgments, please refer to [castorini/africlirmatrix](https://github.com/castorini/africlirmatrix).
# Dataset Structure
The only configuration here is the `language`.
An example of document data entry looks as follows:
```
{
'id': '62443',
'contents': 'Acyloin condensation jẹ́ ìyọkúrò àsopọ̀ àwọn carboxylic ester pẹ̀lú lílò metalic sodium lati ṣèdá α-hydroxyketone, tí wọ́n tún mọ̀ sí. Àdàpọ̀ ṣisẹ́ yìí jẹ́ èyí tó ...'
}
```
# Load Dataset
An example to load the dataset:
```
from datasets import load_dataset

language = 'yoruba'
dataset = load_dataset('castorini/africlirmatrix', language, split='train')
```
# Citation Information
```
coming soon
``` | castorini/africlirmatrix | [
"task_categories:text-retrieval",
"multilinguality:multilingual",
"language:af",
"language:am",
"language:arz",
"language:ha",
"language:ig",
"language:ary",
"language:nso",
"language:sn",
"language:so",
"language:sw",
"language:ti",
"language:tw",
"language:wo",
"language:yo",
"language:zu",
"license:apache-2.0",
"region:us"
] | 2022-11-17T18:41:37+00:00 | {"language": ["af", "am", "arz", "ha", "ig", "ary", "nso", "sn", "so", "sw", "ti", "tw", "wo", "yo", "zu"], "license": "apache-2.0", "multilinguality": ["multilingual"], "task_categories": ["text-retrieval"], "viewer": true} | 2022-11-17T22:45:16+00:00 |
8ef704329bd386ce35ab431822ddab563965eff2 | # Dataset Card for "hackathon_pil"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | akanksha8618/hackathon_pil | [
"region:us"
] | 2022-11-17T18:58:49+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 93369.0, "num_examples": 3}], "download_size": 93939, "dataset_size": 93369.0}} | 2022-11-17T18:59:04+00:00 |
1c7130f602fa130e3cdf1d72ff83da131efb3bbe | # Dataset Card for "hackathon_pil_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | akanksha8618/hackathon_pil_v2 | [
"region:us"
] | 2022-11-17T19:06:00+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 93369.0, "num_examples": 3}], "download_size": 93939, "dataset_size": 93369.0}} | 2022-11-17T19:06:16+00:00 |
e11886c8db6d0c80fc09fd88e95bdf9e5e60daef | carlosdanielhernandezmena/toy_corpus_asr_es | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-17T19:18:23+00:00 | {"license": "cc-by-4.0"} | 2024-01-31T16:10:00+00:00 |
|
d0925f0e223bcfb2840e66328835380f96f8f589 |
# Dataset Card for MultiLegalPile_Wikipedia_Filtered: A filtered version of the MultiLegalPile dataset, together with wikipedia articles
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:[email protected])
### Dataset Summary
The Multi_Legal_Pile is a large-scale multilingual legal dataset suited for pretraining language models.
It spans over 24 languages and four legal text types.
### Supported Tasks and Leaderboards
The dataset supports the tasks of fill-mask.
### Languages
The following languages are supported:
bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
It is structured in the following format: {language}_{text_type}_{shard}.jsonl.xz
text_type is one of the following:
- caselaw
- contracts
- legislation
- other
- wikipedia
Use the dataset like this:
```python
from datasets import load_dataset
config = 'en_contracts' # {language}_{text_type}
dataset = load_dataset('joelito/Multi_Legal_Pile', config, split='train', streaming=True)
```
'config' is a combination of language and text_type, e.g. 'en_contracts' or 'de_caselaw'.
To load all the languages or all the text_types, use 'all' instead of the language or text_type (e.g., 'all_legislation').
### Data Instances
The file format is jsonl.xz and there is a `train` and `validation` split available.
Since some configurations are very small or non-existent, they might not contain a train split or not be present at all.
The complete dataset consists of five large subsets:
- [Native Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile)
- [Eurlex Resources](https://huggingface.co/datasets/joelito/eurlex_resources)
- [MC4 Legal](https://huggingface.co/datasets/joelito/mc4_legal)
- [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law)
- [EU Wikipedias](https://huggingface.co/datasets/joelito/EU_Wikipedias)
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
This dataset has been created by combining the following datasets:
Native Multi Legal Pile, Eurlex Resources, MC4 Legal, Pile of Law, EU Wikipedias.
It has been filtered to remove short documents (less than 64 whitespace-separated tokens) and
documents with more than 30% punctuation or numbers (see prepare_legal_data.py for more details).
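A rough sketch of a filter implementing the stated criteria (an illustration only; the exact tokenization and thresholds are defined in `prepare_legal_data.py`):

```python
import string

def keep_document(text: str, min_tokens: int = 64, max_ratio: float = 0.30) -> bool:
    """Keep documents that are long enough and not dominated by punctuation or numbers."""
    if len(text.split()) < min_tokens:
        return False
    punct_or_digit = sum(ch in string.punctuation or ch.isdigit() for ch in text)
    return punct_or_digit / len(text) <= max_ratio
```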
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| joelniklaus/MultiLegalPile_Wikipedia_Filtered | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-4.0",
"region:us"
] | 2022-11-17T19:28:00+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["fill-mask"], "pretty_name": "MultiLegalPile_Wikipedia_Filtered: A filtered version of the MultiLegalPile dataset, together with wikipedia articles."} | 2022-11-29T21:52:23+00:00 |
26a7b45850bfdafeda574d1bc79b2f16700748e1 |
# Negative Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/main/bad_prompt_showcase.jpg"/>
## Idea
The idea behind this embedding was to somehow train the negative prompt as an embedding, thus unifying the basis of the negative prompt into one word or embedding.
Side note: Embedding has proven to be very helpful for the generation of hands! :)
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
**Please put the embedding in the negative prompt to get the right results!**
For special negative tags such as "malformed sword", you still need to add them yourself. The negative embedding is trained on a basic skeleton for the negative prompt, which should provide a high-resolution image as a result.
### Version 1:
Issue: Changing the style too much.
To use it in the negative prompt: ```"bad_prompt"```
Personally, I would recommend to use my embeddings with a strength of 0.8 even the negative embeddings, like ```"(bad_prompt:0.8)"```
### Version 2:
With this version I tried to reduce the number of vectors used, as well as the issue with the changing art style. The newer version is still a work in progress, but it's already way better than the first version. It's in the files section!
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/bad_prompt | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
] | 2022-11-17T20:47:06+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/main/bad_prompt_showcase.jpg", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-11-19T23:43:47+00:00 |
5b1bb2ed401d4c3384702e2bb011e4eb379b2396 | from datasets import load_dataset
| purplecat24/Russel | [
"region:us"
] | 2022-11-17T21:06:39+00:00 | {} | 2022-11-17T21:29:28+00:00 |
13054fc9d7475eebe9919802a5ae36f36abdc567 | # Dataset Card for "mtop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/mtop | [
"region:us"
] | 2022-11-17T21:54:47+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": " intent", "dtype": "string"}, {"name": " slot", "dtype": "string"}, {"name": " utterance", "dtype": "string"}, {"name": " domain", "dtype": "string"}, {"name": " locale", "dtype": "string"}, {"name": " dcp_form", "dtype": "string"}, {"name": " tokens", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "slot", "dtype": "string"}, {"name": "utterance", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "dcp_form", "dtype": "string"}, {"name": "tokens", "dtype": "string"}], "splits": [{"name": "eval_en", "num_bytes": 2077234, "num_examples": 2235}, {"name": "test_en", "num_bytes": 4090856, "num_examples": 4386}, {"name": "train_en", "num_bytes": 14501480, "num_examples": 15667}, {"name": "eval_de", "num_bytes": 1764320, "num_examples": 1815}, {"name": "test_de", "num_bytes": 3439946, "num_examples": 3549}, {"name": "train_de", "num_bytes": 13122042, "num_examples": 13424}, {"name": "eval_es", "num_bytes": 1594238, "num_examples": 1527}, {"name": "test_es", "num_bytes": 3089782, "num_examples": 2998}, {"name": "train_es", "num_bytes": 11277514, "num_examples": 10934}, {"name": "eval_fr", "num_bytes": 1607082, "num_examples": 1577}, {"name": "test_fr", "num_bytes": 3289276, "num_examples": 3193}, {"name": "train_fr", "num_bytes": 12147836, "num_examples": 11814}, {"name": "eval_hi", "num_bytes": 2618172, "num_examples": 2012}, {"name": "test_hi", "num_bytes": 3491690, "num_examples": 2789}, {"name": "train_hi", "num_bytes": 14225324, "num_examples": 11330}, {"name": "eval_th", "num_bytes": 2251378, "num_examples": 1671}, {"name": "test_th", "num_bytes": 3654864, "num_examples": 2765}, {"name": "train_th", "num_bytes": 14277512, "num_examples": 10759}], "download_size": 16165451, "dataset_size": 112520546}} | 2022-12-10T17:50:10+00:00 |
53404b8688a8bb2504a3717a345f8fc85c29ee61 | # Dataset Card for "hinglish_top"
License: https://github.com/google-research-datasets/Hinglish-TOP-Dataset/blob/main/LICENSE.md
Original Repo: https://github.com/google-research-datasets/Hinglish-TOP-Dataset
Paper Link For Citation: https://arxiv.org/pdf/2211.07514.pdf
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/hinglish_top | [
"arxiv:2211.07514",
"region:us"
] | 2022-11-17T22:01:20+00:00 | {"dataset_info": {"features": [{"name": "en_query", "dtype": "string"}, {"name": "cs_query", "dtype": "string"}, {"name": "en_parse", "dtype": "string"}, {"name": "cs_parse", "dtype": "string"}, {"name": "domain", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 411962, "num_examples": 1390}, {"name": "test", "num_bytes": 2003034, "num_examples": 6513}, {"name": "train", "num_bytes": 894606, "num_examples": 2993}], "download_size": 1553636, "dataset_size": 3309602}} | 2022-12-10T17:51:03+00:00 |
ade45482b1fa163b34177963c1e6f4d29621e24f |
**Homepage:** https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-56
Used lydfiler_16_1.tar.gz and metadata_se_csv.zip | jzju/nst | [
"task_categories:automatic-speech-recognition",
"language:sv",
"license:cc0-1.0",
"region:us"
] | 2022-11-17T22:47:45+00:00 | {"language": ["sv"], "license": ["cc0-1.0"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "NST"} | 2022-11-17T23:35:18+00:00 |
d9c4b7fe6948e8651d914b111367c4be9f2f0269 |
# Dataset Card for "Reddit Haiku"
This dataset contains haikus from the subreddit [/r/haiku](https://www.reddit.com/r/haiku/) scraped and filtered between October 19th and 10th 2022, combined with a [previous dump](https://zissou.infosci.cornell.edu/convokit/datasets/subreddit-corpus/corpus-zipped/hackintosh_ja~-~hamsters/) of that same subreddit packaged by [ConvoKit](https://convokit.cornell.edu/documentation/subreddit.html) as part of the Subreddit Corpus, which is itself a subset of [pushshift.io](https://pushshift.io/)'s big dump.
A main motivation for this dataset was to collect an alternative haiku dataset for evaluation, in particular for evaluating Fabian Mueller's Deep Haiku [model](https://huggingface.co/fabianmmueller/deep-haiku-gpt-j-6b-8bit), which was trained on the Haiku datasets of [hjhalani30](https://www.kaggle.com/datasets/hjhalani30/haiku-dataset) and [bfbarry](https://www.kaggle.com/datasets/bfbarry/haiku-dataset), which are also available on [huggingface hub](https://huggingface.co/datasets/statworx/haiku).
## Fields
The fields are post id (`id`), the content of the haiku (`processed_title`), upvotes (`ups`), and topic keywords (`keywords`). Topic keywords for each haiku have been extracted with the [KeyBERT library](https://maartengr.github.io/KeyBERT/guides/quickstart.html) and truncated to top-5 keywords.
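For reference, a keyword-extraction sketch along these lines with the KeyBERT library (an illustration, not the exact script used to build the dataset; the default model choice is an assumption):

```python
from keybert import KeyBERT

kw_model = KeyBERT()  # uses KeyBERT's default sentence-transformers backbone

haiku = "There's nothing inside/There is nothing outside me/I search on in hope."
keywords = kw_model.extract_keywords(haiku.replace("/", " "), top_n=5)
print(keywords)  # list of (keyword, score) tuples, e.g. [('inside', 0.53), ...]
```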
## Usage
This dataset is intended for evaluation, hence there is only one split which is `test`.
```python
from datasets import load_dataset
d = load_dataset('huanggab/reddit_haiku', data_files={'test': 'merged_with_keywords.csv'})  # use data_files or it will result in error
>>> print(d['test'][0])
#{'Unnamed: 0': 0, 'id': '1020ac', 'processed_title': "There's nothing inside/There is nothing outside me/I search on in hope.", 'ups': 5, 'keywords': "[('inside', 0.5268), ('outside', 0.3751), ('search', 0.3367), ('hope', 0.272)]"}
```
There is code for scraping and processing in `processing_code`, and a subset of the data with more fields such as author Karma, downvotes and posting time at `processing_code/reddit-2022-10-20-dump.csv`. | huanggab/reddit_haiku | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:unknown",
"haiku",
"poem",
"poetry",
"reddit",
"keybert",
"generation",
"region:us"
] | 2022-11-17T23:02:12+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "English haiku dataset scraped from Reddit's /r/haiku with topics extracted using KeyBERT", "tags": ["haiku", "poem", "poetry", "reddit", "keybert", "generation"]} | 2022-11-18T20:02:29+00:00 |
4556473b1043404d771aa6a91ba2c0ad5a6a1f27 | https://opus.nlpl.eu/XLEnt-v1.1.php
Uploaded from OPUS to Hugging Face by Argos Open Tech.
Corpus Name: XLEnt
Package: XLEnt.de-en in Moses format
Website: http://opus.nlpl.eu/XLEnt-v1.1.php
Release: v1.1
Release date: Sun May 23 08:35:55 EEST 2021
This corpus is part of OPUS - the open collection of parallel corpora
OPUS Website: http://opus.nlpl.eu
If you use the dataset or code, please cite:

```bibtex
@inproceedings{elkishky_xlent_2021,
  author    = {El-Kishky, Ahmed and Renduchintala, Adi and Cross, James and Guzmán, Francisco and Koehn, Philipp},
  booktitle = {Preprint},
  title     = {{XLEnt}: Mining Cross-lingual Entities with Lexical-Semantic-Phonetic Word Alignment},
  year      = {2021},
  address   = {Online},
}
```

Please also acknowledge OPUS for this service.
This corpus was created by mining CCAligned, CCMatrix, and WikiMatrix parallel sentences. These three sources were themselves extracted from web data from Commoncrawl Snapshots and Wikipedia snapshots. Entity pairs were obtained by performing named entity recognition and typing on English sentences and projecting labels to non-English aligned sentence pairs. No claims of intellectual property are made on the work of preparation of the corpus.

XLEnt consists of parallel entity mentions in 120 languages aligned with English. These entity pairs were constructed by performing named entity recognition (NER) and typing on English sentences from mined sentence pairs. These extracted English entity labels and types were projected to the non-English sentences through word alignment. Word alignment was performed by combining three alignment signals ((1) word co-occurrence alignment with FastAlign, (2) semantic alignment using LASER embeddings, and (3) phonetic alignment via transliteration) into a unified word-alignment model. This lexical/semantic/phonetic alignment approach yielded more than 160 million aligned entity pairs in 120 languages paired with English. Recognizing that each English entity is often aligned to multiple entities in different target languages, we can join on English entities to obtain aligned entity pairs that directly pair two non-English entities (e.g., Arabic-French).

The original distribution is available from http://data.statmt.org/xlent/

The difference to version 1 is that pivoting now only uses the link with the best score in case of alternative alignments for a pivot entity.
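A Moses-format release like this one ships as two aligned plain-text files, with line *n* in the German file paired with line *n* in the English file. A minimal reading sketch (the file names follow the usual OPUS naming convention and are assumptions):

```python
# Read the de-en entity pairs from the extracted Moses-format package.
def read_moses_pairs(src_path="XLEnt.de-en.de", tgt_path="XLEnt.de-en.en"):
    with open(src_path, encoding="utf-8") as src, open(tgt_path, encoding="utf-8") as tgt:
        for de_line, en_line in zip(src, tgt):
            yield de_line.rstrip("\n"), en_line.rstrip("\n")

for de, en in read_moses_pairs():
    print(de, "|||", en)
    break  # print just the first pair
```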
| argosopentech/xlent-de_en | [
"region:us"
] | 2022-11-17T23:15:36+00:00 | {} | 2022-11-17T23:22:09+00:00 |
cb8e75614830035a37f3a2a11de5e625eaf0bc31 |
# ProofNet
## Dataset Description
- **Repository:** [zhangir-azerbayev/ProofNet](https://github.com/zhangir-azerbayev/ProofNet)
- **Paper:** [ProofNet](https://mathai2022.github.io/papers/20.pdf)
- **Point of Contact:** [Zhangir Azerbayev](https://zhangir-azerbayev.github.io/)
### Dataset Summary
ProofNet is a benchmark for autoformalization and formal proving of undergraduate-level mathematics. The ProofNet benchmark consists of 371 examples, each comprising a formal theorem statement in Lean 3, a natural language theorem statement, and a natural language proof. The problems are primarily drawn from popular undergraduate pure mathematics textbooks and cover topics such as real and complex analysis, linear algebra, abstract algebra, and topology. We intend for ProofNet to be a challenging benchmark that will drive progress in autoformalization and automatic theorem proving.
**Citation**:
```bibtex
@misc{azerbayev2023proofnet,
title={ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics},
author={Zhangir Azerbayev and Bartosz Piotrowski and Hailey Schoelkopf and Edward W. Ayers and Dragomir Radev and Jeremy Avigad},
year={2023},
eprint={2302.12433},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Leaderboard
**Statement Autoformalization**
| Model | Typecheck Rate | Accuracy |
| ---------------------------------- | -------------- | -------- |
| Code-davinci-002 (prompt retrieval)| 45.2 | 16.1 |
| Code-davinci-002 (in-context learning) | 23.7 | 13.4 |
| proofGPT-1.3B | 10.7 | 3.2 |
**Statement Informalization**
| Model | Accuracy |
| ---------------------------------- | -------- |
| Code-davinci-002 (in-context learning)| 62.3 |
| proofGPT-6.7B (in-context learning) | 6.5 |
| proofGPT-1.3B (in-context learning) | 4.3 |
### Data Fields
- `id`: Unique string identifier for the problem.
- `nl_statement`: Natural language theorem statement.
- `nl_proof`: Natural language proof, in LaTeX. Depends on `amsthm, amsmath, amssymb` packages.
- `formal_statement`: Formal theorem statement in Lean 3.
- `src_header`: File header including imports, namespaces, and locales required for the formal statement. Note the local import of [common.lean](https://github.com/zhangir-azerbayev/ProofNet/blob/main/benchmark/benchmark_to_publish/formal/common.lean), which has to be manually downloaded and placed in the same directory as your `.lean` file containing the formal statement.
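For reference, a minimal sketch of loading the benchmark and inspecting the fields above (the split name is an assumption; check the dataset viewer for the exact splits):

```python
from datasets import load_dataset

proofnet = load_dataset("hoskinson-center/proofnet", split="validation")

example = proofnet[0]
print(example["id"])                 # unique problem identifier
print(example["nl_statement"])       # natural language theorem statement
print(example["formal_statement"])   # Lean 3 formalization
print(example["src_header"])         # imports/namespaces needed by the formal statement
```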
### Authors
Zhangir Azerbayev, Bartosz Piotrowski, Jeremy Avigad | hoskinson-center/proofnet | [
"license:mit",
"arxiv:2302.12433",
"region:us"
] | 2022-11-17T23:53:41+00:00 | {"license": "mit"} | 2023-03-17T21:25:37+00:00 |
5e382a8497d4dd28842cc0bfa85387f965ac9d8d | # Dataset Card for "top_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/top_v2 | [
"region:us"
] | 2022-11-18T00:41:44+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "utterance", "dtype": "string"}, {"name": "semantic_parse", "dtype": "string"}], "splits": [{"name": "eval", "num_bytes": 2650777, "num_examples": 17160}, {"name": "test", "num_bytes": 5947186, "num_examples": 38785}, {"name": "train", "num_bytes": 19433606, "num_examples": 124597}], "download_size": 9672445, "dataset_size": 28031569}} | 2022-12-10T17:52:27+00:00 |
677226ce59cda82b34387e1c4a0991966b00914d | # Dataset Card for "cstop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/cstop | [
"region:us"
] | 2022-11-18T00:46:55+00:00 | {"dataset_info": {"features": [{"name": "intent", "dtype": "string"}, {"name": " slots", "dtype": "string"}, {"name": " utterance", "dtype": "string"}, {"name": " semantic_parse", "dtype": "string"}, {"name": "slots", "dtype": "string"}, {"name": "utterance", "dtype": "string"}, {"name": "semantic_parse", "dtype": "string"}], "splits": [{"name": "eval", "num_bytes": 182981, "num_examples": 559}, {"name": "test", "num_bytes": 377805, "num_examples": 1167}, {"name": "train", "num_bytes": 1325564, "num_examples": 4077}], "download_size": 618573, "dataset_size": 1886350}} | 2022-12-10T17:53:33+00:00 |
fd202c37e1efe6e759fbcb07b341f78f7077b9f1 | goodfellowliu/Set5 | [
"language:en",
"license:openrail",
"region:us"
] | 2022-11-18T01:05:49+00:00 | {"language": ["en"], "license": "openrail"} | 2023-09-04T05:13:28+00:00 |
|
0db8283e38df0d4a430b8c1d7320e74e0c038fbe | JM138/logi138 | [
"region:us"
] | 2022-11-18T01:24:58+00:00 | {} | 2022-11-18T01:26:36+00:00 |
|
19d34f0d7e89739f05ed2e60627adcbfd8d5716b | MHCreaive/youtubeTranscript | [
"license:afl-3.0",
"region:us"
] | 2022-11-18T02:22:39+00:00 | {"license": "afl-3.0"} | 2022-11-18T02:22:39+00:00 |
|
94fc7c46882d3a75878bbce17a1bbf0449579826 | # Dataset Card for "parsed_sst2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/parsed_sst2 | [
"region:us"
] | 2022-11-18T02:46:25+00:00 | {"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "idx", "dtype": "int32"}, {"name": "parse_tree", "dtype": "string"}, {"name": "pure_parse_tree", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22647332, "num_examples": 67349}, {"name": "validation", "num_bytes": 560160, "num_examples": 872}, {"name": "test", "num_bytes": 1155733, "num_examples": 1821}], "download_size": 10913172, "dataset_size": 24363225}} | 2022-11-18T05:18:40+00:00 |
1efda3483472c864af69bb01923ab2e2851eb529 | Wulichao/single_cell_RNA_seq | [
"license:mit",
"region:us"
] | 2022-11-18T03:06:33+00:00 | {"license": "mit"} | 2022-11-18T03:06:33+00:00 |
|
a754585cf5449543a22daf2fa371957ff1d1353d | # Dataset Card for "Yannic-Kilcher"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/Yannic-Kilcher | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"region:us"
] | 2022-11-18T03:10:02+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28243998, "num_examples": 375}], "download_size": 12872792, "dataset_size": 28243998}, "tags": ["whisper", "whispering"]} | 2022-11-18T12:29:51+00:00 |
30a3566ac0cc8e45248a20919b6fdbaab365b540 | # Dataset Card for "urgent-triage-samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lokibabu/urgent-triage-samples | [
"region:us"
] | 2022-11-18T05:42:00+00:00 | {"dataset_info": {"features": [{"name": "img", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "name", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1021, "num_examples": 22}, {"name": "train", "num_bytes": 1021, "num_examples": 22}], "download_size": 2988, "dataset_size": 2042}} | 2022-11-18T06:05:09+00:00 |
9eb409dcb51be812b30a8c1cfe8b0ecb8e961305 | # Dataset Card for "Yannic-Kilcher"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | osanseviero/test_osan | [
"whisper",
"whispering",
"region:us"
] | 2022-11-18T06:39:19+00:00 | {"task_ids": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28243998, "num_examples": 375}], "download_size": 12872792, "dataset_size": 28243998}, "tags": ["whisper", "whispering"]} | 2022-11-18T06:47:04+00:00 |
fb245c1cb161755b49b1d214ed16880ef63eaba2 | martiwey/gh-java-methods-small | [
"task_categories:text-generation",
"task_categories:summarization",
"size_categories:1M<n<10M",
"license:mit",
"java",
"github",
"region:us"
] | 2022-11-18T06:42:49+00:00 | {"license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "summarization"], "tags": ["java", "github"]} | 2023-07-08T10:59:06+00:00 |