sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
e43b0d14aea1cb8f2a943fa2fdaf2362ff081d2a
|
For now, we host our datasets here: https://vault.cs.uwaterloo.ca/s/RTJ27g9Ek2kanRe
|
approach0/math-ir-datasets
|
[
"region:us"
] |
2023-04-09T03:11:35+00:00
|
{}
|
2023-04-09T03:13:19+00:00
|
fecc39d5b5b7c394dad335ae4f98d38f051333d4
|
# Dataset Card for "VALUE_rte_null_relcl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_rte_null_relcl
|
[
"region:us"
] |
2023-04-09T03:14:58+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "dev", "num_bytes": 27785, "num_examples": 62}, {"name": "test", "num_bytes": 269307, "num_examples": 602}, {"name": "train", "num_bytes": 229297, "num_examples": 501}], "download_size": 355526, "dataset_size": 526389}}
|
2023-04-09T03:15:03+00:00
|
f404eff6914dd049b119b50b137d7093f051f356
|
# Dataset Card for "image-caption-blip-for-training"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JerryMo/image-caption-blip-for-training
|
[
"region:us"
] |
2023-04-09T04:25:43+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 113643609.795, "num_examples": 2485}], "download_size": 112604582, "dataset_size": 113643609.795}}
|
2023-04-14T01:34:59+00:00
|
7a484549fd03eb9d12113e5f9e682c9460ed9397
|
# Dataset Card for "ta-oscar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
livinNector/ta-oscar
|
[
"region:us"
] |
2023-04-09T05:28:52+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9864365297, "num_examples": 556772}], "download_size": 3471268172, "dataset_size": 9864365297}}
|
2023-04-12T10:47:47+00:00
|
de2296920a535cf5d606355e80ab147430fab91c
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
0x7o/value_determinant
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-04-09T05:44:45+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "Value Determinant"}
|
2023-04-09T05:46:02+00:00
|
35192fb9efdffc5db9e3a4fcefd0a227672b7abe
|
# Dataset Card for "chunk_123"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_123
|
[
"region:us"
] |
2023-04-09T06:23:17+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 29561173200.125, "num_examples": 307775}], "download_size": 27785369593, "dataset_size": 29561173200.125}}
|
2023-04-09T06:41:41+00:00
|
dd438104f2fa1e8c88dc388b492518174042ed96
|
# Dataset Card for "iwslt2017_de_en_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
guangyil/iwslt2017_de_en_tokenized
|
[
"region:us"
] |
2023-04-09T06:43:27+00:00
|
{"dataset_info": {"features": [{"name": "bert_token", "sequence": "int64"}, {"name": "gpt2_token", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 55558335.89954976, "num_examples": 110923}, {"name": "test", "num_bytes": 226447.06306306308, "num_examples": 394}], "download_size": 7469304, "dataset_size": 55784782.96261282}}
|
2023-04-10T06:23:10+00:00
|
fb58a9007fd658c100a82de5f0c00f23e82f0065
|
# Dataset Card for "chunk_117"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_117
|
[
"region:us"
] |
2023-04-09T07:30:17+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 28353369600.0, "num_examples": 295200}], "download_size": 26228600521, "dataset_size": 28353369600.0}}
|
2023-04-09T07:53:25+00:00
|
22bfb915aa4c70f842d0881024cab980ca104269
|
# Dataset Card for ReLi-SA
## Dataset Description
- **Homepage:** [Corpus ReLi - Linguateca](https://linguateca.pt/Repositorio/ReLi/)
- **Paper:** [Sparkling Vampire... lol! Annotating Opinions in a Book Review Corpus](https://www.linguateca.pt/Repositorio/ReLi/Anais_ELC2012_Freitasetal.pdf)
- **Point of Contact:** [Cláudia Freitas]([email protected])
### Dataset Summary
ReLi is a dataset created by Cláudia Freitas within the framework of the project "Semantic Annotators based on Active Learning" at PUC-Rio. It consists of 1,600 book reviews manually annotated for the presence of opinions about the reviewed book and their polarity.
The dataset contains reviews in Brazilian Portuguese on books written by seven authors: Stephenie Meyer, Thalita Rebouças, Sidney Sheldon, Jorge Amado, George Orwell, José Saramago, and J.D. Salinger. The language used in the reviews varies from highly informal, with slang, abbreviations, neologisms, and emoticons, to more formal reviews with a more elaborate vocabulary.
ReLi-SA is an adaptation of the original ReLi dataset for the sentiment analysis task. We attribute a sentiment polarity to each sentence according to the sentiment annotations of its individual tokens.
### Supported Tasks and Leaderboards
- `sentiment-analysis`: The dataset can be used to train a model for sentiment analysis, which consists of classifying the sentiment expressed in a sentence as positive, negative, neutral, or mixed. Success on this task is typically measured by achieving a high [F1 score](https://huggingface.co/metrics/f1).
### Languages
This dataset is in Brazilian Portuguese.
## Dataset Structure
### Data Instances
```json
{
'source': 'ReLi-Orwell.txt',
'title': 'False',
'book': '1984',
'review_id': '0',
'score': 5.0,
'sentence_id': 102583,
'unique_review_id': 'ReLi-Orwell_1984_0',
'sentence': ' Um ótimo livro , além de ser um ótimo alerta para uma potencial distopia , em contraponto a utopia tão sonhada por os homens de o medievo e início de a modernidade .',
'label': 'positive'
}
```
### Data Fields
* `source`: The source file of the review.
* `title`: A boolean field indicating whether the sentence is a review title (True) or not (False).
* `book`: The book that the review is about.
* `review_id`: The review ID within the source file.
* `score`: The score the review attributes to the book.
* `sentence_id`: The sequential ID of the sentence (can be used to sort the sentences within a review).
* `unique_review_id`: A unique ID for the review a sentence belongs to.
* `sentence`: The sentence for which the label indicates the sentiment.
* `label`: The sentiment label, either `positive`, `neutral`, `negative`, or `mixed` if both positive and negative sentiment polarity tokens are found in the sentence.
### Data Splits
The dataset is divided into three splits:
| | train | validation | test |
|------------|--------:|----------:|-------:|
| Instances | 7,875 | 1,348 | 3,288 |
The splits are carefully made to avoid having reviews about a given author appear in more than one split.
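For quick inspection, a minimal loading sketch (assuming the repository loads directly through the Hugging Face `datasets` library and exposes the `train`, `validation`, and `test` splits described above):
```python
from collections import Counter

from datasets import load_dataset

# Split names are assumed from the table above.
reli_sa = load_dataset("ruanchaves/reli-sa")

print(reli_sa["train"][0])                 # one annotated sentence
print(Counter(reli_sa["train"]["label"]))  # label distribution of the training split
```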
## Additional Information
### Citation Information
If you use this dataset in your work, please cite the following publication:
```bibtex
@incollection{freitas2014sparkling,
title={Sparkling Vampire... lol! Annotating Opinions in a Book Review Corpus},
author={Freitas, Cl{\'a}udia and Motta, Eduardo and Milidi{\'u}, Ruy Luiz and C{\'e}sar, Juliana},
booktitle={New Language Technologies and Linguistic Research: A Two-Way Road},
editor={Alu{\'\i}sio, Sandra and Tagnin, Stella E. O.},
year={2014},
publisher={Cambridge Scholars Publishing},
pages={128--146}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset.
|
ruanchaves/reli-sa
|
[
"region:us"
] |
2023-04-09T07:43:22+00:00
|
{}
|
2023-04-13T14:24:11+00:00
|
bfa2787688de416376ae52f4f98226e7a3f8f89b
|
# Dataset Card for "chunk_121"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_121
|
[
"region:us"
] |
2023-04-09T07:43:56+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 26670320496.375, "num_examples": 277677}], "download_size": 24903690762, "dataset_size": 26670320496.375}}
|
2023-04-09T08:06:46+00:00
|
c7bc2a5f3555e03fdd9e4616fe205740c7a977e3
|
# Dataset Card for "chunk_124"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_124
|
[
"region:us"
] |
2023-04-09T07:48:44+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 28649869776.125, "num_examples": 298287}], "download_size": 26744267899, "dataset_size": 28649869776.125}}
|
2023-04-09T08:12:51+00:00
|
77272d75839d48314b34d9c650dd66bcd5f68f6f
|
# Dataset Card for "chunk_116"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_116
|
[
"region:us"
] |
2023-04-09T08:14:03+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 26227058976.25, "num_examples": 273062}], "download_size": 24139612095, "dataset_size": 26227058976.25}}
|
2023-04-09T09:01:10+00:00
|
8c14b5595f794e8305de147bc62781af27dc38b4
|
# Dataset Card for "chunk_119"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_119
|
[
"region:us"
] |
2023-04-09T08:28:14+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 25807041072.875, "num_examples": 268689}], "download_size": 24137669766, "dataset_size": 25807041072.875}}
|
2023-04-09T09:14:56+00:00
|
74e2bd685ded6641bb74fb24fb0701c13560632f
|
cqin/strawberry-disease
|
[
"size_categories:n<1K",
"language:en",
"language:zh",
"region:us"
] |
2023-04-09T08:30:52+00:00
|
{"language": ["en", "zh"], "size_categories": ["n<1K"]}
|
2023-04-09T08:42:31+00:00
|
|
467d25a839086383794b58055981221b82c0d107
|
[github](https://github.com/ntunlp/xCodeEval)
# xCodeEval
[xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval](https://arxiv.org/abs/2303.03004)
We introduce **xCodeEval**, the largest executable multilingual multitask benchmark to date, consisting of 25M document-level coding examples from about 7.5K unique problems covering up to 17 programming languages with execution-level parallelism. It features a total of seven tasks involving code understanding, generation, translation and retrieval, and it employs an execution-based evaluation. We develop a test-case based multilingual code execution engine, [**ExecEval**](https://github.com/ntunlp/ExecEval), that supports all the programming languages in **xCodeEval**. We also propose a novel data splitting and a data selection schema for balancing data distributions over multiple attributes based on geometric mean and graph-theoretic principles.
This repository contains the sample code and data link for xCodeEval [paper](https://arxiv.org/abs/2303.03004).
# Data Download
Currently this repository supports the Hugging Face [`load_dataset()`](https://huggingface.co/docs/datasets/v1.11.0/package_reference/loading_methods.html#datasets.load_dataset) API. Follow the example below to load the dataset for the individual tasks.
```
import datasets
prog_synthesis_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "program_synthesis")
code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")
tag_classification_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "tag_classification")
apr_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "apr")
pcode_compilation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_compilation")
retrieval_code_code_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "retrieval_code_code")
retrieval_nl_code_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "retrieval_nl_code")
retrieval_corpus_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "retrieval_corpus")
```
## HF large data download tricks
If you are facing a long delay during data processing, add `ignore_verifications=True`.
```
prog_synthesis_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "program_synthesis", ignore_verifications=True)
```
If you are facing a long delay during data downloading, use the Hugging Face streaming mode.
```
prog_synthesis_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "program_synthesis", streaming=True)
```
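In streaming mode the splits are returned as iterables, so you can inspect a few examples before committing to a full download. A minimal sketch (the split name `train` is an assumption here):
```python
import datasets

# Stream the program synthesis task and peek at the first example
# (split name "train" is assumed, not confirmed by this card).
prog_synthesis_stream = datasets.load_dataset(
    "NTU-NLP-sg/xCodeEval", "program_synthesis", split="train", streaming=True
)
first_example = next(iter(prog_synthesis_stream))
print(first_example.keys())
```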
## Just Give me the raw data (😠)
Data can also be downloaded as a Git LFS repo from Hugging Face.

You can download the full data using the following command.
```
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval
cd xCodeEval
git lfs pull
```
To download a specific part of the dataset,
```
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval
cd xCodeEval
git lfs pull --include "apr/test/*"
```
We propose 7 tasks:
1. [Tag Classification](https://github.com/ntunlp/xCodeEval/blob/main/apr.md)
2. [Code Compilation](https://github.com/ntunlp/xCodeEval/blob/main/code_compilation.md)
3. [Program Synthesis](https://github.com/ntunlp/xCodeEval/blob/main/program_synthesis.md)
4. [Code Translation](https://github.com/ntunlp/xCodeEval/blob/main/code_translation.md)
5. [Automatic Program Repair](https://github.com/ntunlp/xCodeEval/blob/main/apr.md)
6. [Code-Code Retrieval](https://github.com/ntunlp/xCodeEval/blob/main/retrieval.md)
7. [NL-Code Retrieval](https://github.com/ntunlp/xCodeEval/blob/main/retrieval.md)
# Common Data for different tasks
If you are not using the Hugging Face [`load_dataset()`](https://huggingface.co/docs/datasets/v1.11.0/package_reference/loading_methods.html#datasets.load_dataset) API, you may need to link some data with the different tasks.

We have two data files that are required for multiple tasks.
1. `problem_descriptions.jsonl`
2. `unittest_db.json`
You can find these two files in the root directory of the [main](https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval/tree/main) branch of the Hugging Face dataset repository. To avoid data redundancy we didn't include these data with the relevant tasks; instead, each task sample carries a unique id `src_uid` that can be used to retrieve them.
## Structure of `problem_descriptions.jsonl`
A sample,
```json
{
"description": "There are $$$n$$$ positive integers $$$a_1, a_2, \\dots, a_n$$$. For the one move you can choose any even value $$$c$$$ and divide by two all elements that equal $$$c$$$.For example, if $$$a=[6,8,12,6,3,12]$$$ and you choose $$$c=6$$$, and $$$a$$$ is transformed into $$$a=[3,8,12,3,3,12]$$$ after the move.You need to find the minimal number of moves for transforming $$$a$$$ to an array of only odd integers (each element shouldn't be divisible by $$$2$$$).",
"input_from": "standard input",
"output_to": "standard output",
"time_limit": "3 seconds",
"memory_limit": "256 megabytes",
"input_spec": "The first line of the input contains one integer $$$t$$$ ($$$1 \\le t \\le 10^4$$$) \u2014 the number of test cases in the input. Then $$$t$$$ test cases follow. The first line of a test case contains $$$n$$$ ($$$1 \\le n \\le 2\\cdot10^5$$$) \u2014 the number of integers in the sequence $$$a$$$. The second line contains positive integers $$$a_1, a_2, \\dots, a_n$$$ ($$$1 \\le a_i \\le 10^9$$$). The sum of $$$n$$$ for all test cases in the input doesn't exceed $$$2\\cdot10^5$$$.",
"output_spec": "For $$$t$$$ test cases print the answers in the order of test cases in the input. The answer for the test case is the minimal number of moves needed to make all numbers in the test case odd (i.e. not divisible by $$$2$$$).",
"notes": "NoteIn the first test case of the example, the optimal sequence of moves can be as follows: before making moves $$$a=[40, 6, 40, 3, 20, 1]$$$; choose $$$c=6$$$; now $$$a=[40, 3, 40, 3, 20, 1]$$$; choose $$$c=40$$$; now $$$a=[20, 3, 20, 3, 20, 1]$$$; choose $$$c=20$$$; now $$$a=[10, 3, 10, 3, 10, 1]$$$; choose $$$c=10$$$; now $$$a=[5, 3, 5, 3, 5, 1]$$$ \u2014 all numbers are odd. Thus, all numbers became odd after $$$4$$$ moves. In $$$3$$$ or fewer moves, you cannot make them all odd.",
"sample_inputs": [
"4\n6\n40 6 40 3 20 1\n1\n1024\n4\n2 4 8 16\n3\n3 1 7"
],
"sample_outputs": [
"4\n10\n4\n0"
],
"tags": [
"number theory",
"greedy"
],
"src_uid": "afcd41492158e68095b01ff1e88c3dd4",
"difficulty": 1200,
"created_at": 1576321500
}
```
### Key Definitions
1. `description`: Problem description in textual format, math operations are written in latex.
2. `input_from`: How the program should take the unit test.
3. `output_to`: Where the program should output the result of the unit test.
4. `time_limit`: Time limit to solve the problem.
5. `memory_limit`: Memory limit to solve the problem.
6. `input_spec`: How and in what order the input will be given to the program. It also includes the data ranges, types, and sizes.
7. `output_spec`: How the outputs should be printed. Most of the time the unit test results are matched with an *exact string match* or *floating point comparison* with a precision boundary.
8. `sample_inputs`: A sample input for the code that is expected to solve the problem described in `description`.
9. `sample_outputs`: The expected output for the `sample_input` that is expected to solve the problem described in `description`.
10. `notes`: Explanation of `sample_inputs` & `sample_outputs`.
11. `tags`: The problem categories.
12. `src_uid`: The unique id of the problem. This ID is referred to in the task data samples instead of putting all this information.
13. `difficulty`: How difficult the problem is for a human to solve (annotated by an expert human).
14. `created_at`: The Unix timestamp when the problem was released. Use the `datetime` library in Python to parse it into a human-readable format, as shown below.
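For example, a minimal snippet to convert the `created_at` value from the sample above:
```python
from datetime import datetime, timezone

created_at = 1576321500  # value from the sample above
print(datetime.fromtimestamp(created_at, tz=timezone.utc).isoformat())
# 2019-12-14T11:05:00+00:00
```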
## Structure of `unittest_db.json`
The structure of the `json` file,
```python
unittest_db = {
    "db884d679d9cfb1dc4bc511f83beedda": [
        {
            "input": "4\r\n3 2 3 2\r\n",
            "output": [
                "1"
            ],
        },
        {
            ...
        },
        ...
    ],
    "3bc096d8cd3418948d5be6bf297aa9b5": [
        ...
    ],
    ...
}
```
### Key Definitions
1. `unittest_db.json` dict keys i.e., `db884d679d9cfb1dc4bc511f83beedda` are the `src_uid` from `problem_descriptions.jsonl`.
2. `input`: Input of the unit test.
3. `output`: List of expected outputs for the unit test.
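Putting the two files together, here is a minimal linking sketch (it assumes both files have already been downloaded to the working directory, e.g. via the Git LFS instructions above):
```python
import json

# Index the problem descriptions by src_uid.
problem_descriptions = {}
with open("problem_descriptions.jsonl", encoding="utf-8") as f:
    for line in f:
        problem = json.loads(line)
        problem_descriptions[problem["src_uid"]] = problem

# Unit tests are already keyed by src_uid.
with open("unittest_db.json", encoding="utf-8") as f:
    unittest_db = json.load(f)

# Given a task sample carrying a src_uid, look up its description and unit tests.
src_uid = "afcd41492158e68095b01ff1e88c3dd4"  # src_uid from the sample above
description = problem_descriptions[src_uid]["description"]
unit_tests = unittest_db.get(src_uid, [])
print(description[:80], "...", len(unit_tests), "unit tests")
```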
# Citation
```
@misc{khan2023xcodeeval,
title={xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval},
author={Mohammad Abdullah Matin Khan and M Saiful Bari and Xuan Long Do and Weishi Wang and Md Rizwan Parvez and Shafiq Joty},
year={2023},
eprint={2303.03004},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Part of this work was submitted as a requirement for the Master of Science degree in Computer Science and Applications at the Islamic University of Technology by Muhammad Abdullah Matin Khan Zarzis. (The thesis or project report will be added upon publication).
```
@misc{khan2024xcodeeval,
  title={Development of a Code Search Engine Using Natural Language Processing Techniques},
  author={Mohammad Abdullah Matin Khan},
  year={2024},
  publication={Journal of Engineering and Technology (JET)},
  url={TBA}
}
```
|
NTU-NLP-sg/xCodeEval
|
[
"task_categories:translation",
"task_categories:token-classification",
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:question-answering",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:code",
"language:en",
"license:cc-by-nc-4.0",
"programming-language",
"code",
"program-synthesis",
"automatic-code-repair",
"code-retrieval",
"code-translation",
"code-classification",
"arxiv:2303.03004",
"region:us"
] |
2023-04-09T10:02:35+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found", "expert-generated"], "language": ["code", "en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M", "10M<n<100M"], "source_datasets": ["original"], "task_categories": ["translation", "token-classification", "text2text-generation", "text-retrieval", "text-generation", "text-classification", "feature-extraction", "question-answering"], "pretty_name": "xCodeEval", "tags": ["programming-language", "code", "program-synthesis", "automatic-code-repair", "code-retrieval", "code-translation", "code-classification"]}
|
2024-01-02T21:38:34+00:00
|
dc00621e6356df4ce4dc72834a274ef98a995f3b
|
# The SQuAD QG Dataset
## Description
[Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer/) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
This modified version is aimed at question generation:
each entry contains only a context and its questions, with all questions related to that context concatenated into a single string.
`The SQuAD` unites SQuAD 1.1 and 2.0 in two subsets, each containing a `train` and a `validation` split.
## Dataset Structure
### Data Instances
An example entry looks as follows:
```python
{
    "context": "This is a test context",
    "questions": ["Is this a test?", "Is this a test context?"]
}
```
### Data Fields
The dataset has the following fields:
* context: a string feature
* questions: a string feature
**NB:** The data fields are the same among all splits.
### Data Splits
| name | train | validation |
|------|-------|------------|
| v1 | 18891 | 2067 |
| v2 | 18877 | 1204 |
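A minimal loading sketch (assuming the two subsets are exposed as the configurations `v1` and `v2` shown above):
```python
from datasets import load_dataset

# Config name "v1" is assumed from the splits table above.
squad_qg_v1 = load_dataset("the-coorporation/the_squad_qg", "v1")

example = squad_qg_v1["train"][0]
print(example["context"][:80])
print(example["questions"])
```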
|
the-coorporation/the_squad_qg
|
[
"language:en",
"license:wtfpl",
"region:us"
] |
2023-04-09T10:16:54+00:00
|
{"language": ["en"], "license": "wtfpl", "pretty_name": "The SQuAD QG Dataset", "dataset_info": [{"config_name": "v2", "features": [{"name": "context", "dtype": "string"}, {"name": "questions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20328952, "num_examples": 18877}, {"name": "validation", "num_bytes": 1419411, "num_examples": 1204}], "download_size": 24163282, "dataset_size": 21748363}, {"config_name": "v1", "features": [{"name": "context", "dtype": "string"}, {"name": "questions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20391081, "num_examples": 18891}, {"name": "validation", "num_bytes": 2389185, "num_examples": 2067}], "download_size": 25308169, "dataset_size": 22780266}]}
|
2023-04-23T15:59:58+00:00
|
59a630c53c6200476d482d35fa666f4d8926d84f
|
dylanalloy/fin-gpt-selftalk_500k
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-04-09T10:22:35+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-04-09T10:31:25+00:00
|
|
46c8d5e23bb134a14692e92df00e0fb4b749e942
|
# Dataset Card for "turkishReviews-project"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kaaniince/turkishReviews-project
|
[
"region:us"
] |
2023-04-09T10:26:45+00:00
|
{"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1252876.2642514652, "num_examples": 3378}, {"name": "validation", "num_bytes": 139455.7357485349, "num_examples": 376}], "download_size": 896649, "dataset_size": 1392332.0}}
|
2023-04-09T10:26:49+00:00
|
d361531dcf88fe7d5ca16ba47d616e0bd58c70ca
|
# Dataset Card for "chunk_111"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_111
|
[
"region:us"
] |
2023-04-09T10:28:52+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 22166245584.125, "num_examples": 230783}], "download_size": 18016309353, "dataset_size": 22166245584.125}}
|
2023-04-09T10:46:15+00:00
|
7f41da1b7e772e095a682f67e190e58a2fbd1aa6
|
# Dataset Card for "chunk_120"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_120
|
[
"region:us"
] |
2023-04-09T10:53:01+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 26957119824.125, "num_examples": 280663}], "download_size": 25198915226, "dataset_size": 26957119824.125}}
|
2023-04-09T11:16:10+00:00
|
9d9da924dee1135cafb540ada70689afcbbc28aa
|
# Dataset Card for "Miyazaki-captioned-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Outrun32/Miyazaki-captioned-dataset
|
[
"region:us"
] |
2023-04-09T10:56:52+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14512281.0, "num_examples": 65}], "download_size": 14510623, "dataset_size": 14512281.0}}
|
2023-04-09T10:57:01+00:00
|
f74679cd6706829db62a7fea91de31b9ae48b3bf
|
mitsudate/colstone_singing_dataset
|
[
"license:other",
"region:us"
] |
2023-04-09T11:22:52+00:00
|
{"license": "other"}
|
2023-04-09T11:34:40+00:00
|
|
ee4ee69594e8e5accf01a38fc6da1f97d74e0230
|
# mini-imagenet-LT_longtail-dataset
Classification on long-tailed datasets is a fairly common topic, but assembling such datasets is tedious, and some of them, such as ImageNet-LT, are still rather large, so running experiments on them is costly when compute is limited. I therefore rebuilt a mini-ImageNet-LT long-tail dataset from mini-ImageNet, and compared two approaches on it: the RSG model and expanding the dataset with Stable Diffusion.
RSG method: allacc 72.62%, headacc 75.91%, middleacc 62.45%, tailacc 50.83%
SD method: allacc 75.88%, headacc 79.36%, middleacc 64.31%, tailacc 56.25%
The dataset was prepared as follows:
1. Download the original mini-ImageNet dataset, which consists of 100 classes sampled from ImageNet, with 600 images per class (60,000 images in total). From each class we take 10% of the images as the test set, 10% as the validation set, and the remaining 80% as the training set. The test and validation sets are recorded in val.csv and test.csv, which store the image paths and labels.
2. To build the long-tail dataset we resample the training set: from each class we randomly draw between 10 and 480 images, producing an unevenly distributed long-tail training set recorded in train.csv. The number of images per class is recorded in cls_label.json.
3. We use Stable Diffusion to expand the long-tail dataset, topping each class up from its 10-480 images to 480. The generated images are placed in the genimages folder, with their paths and labels recorded in gentrain.csv. Generation is done image-to-image: an existing image and its label are used as the prompt, and images are generated in turn until the class reaches 480. (Due to the randomness of the seed or problems with the source image, some generated images are corrupted all-black images; remember to filter them out in downstream tasks, as in the sketch below.) Semantic class names are stored in classname.txt.
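For downstream use, a minimal filtering sketch; the column names `path` and `label` are assumptions, since the card only says the CSV files record the image paths and labels:
```python
import numpy as np
import pandas as pd
from PIL import Image

# Assumed layout: train.csv / gentrain.csv each have "path" and "label" columns
# (hypothetical names; adjust to the actual headers in this repository).
train_df = pd.read_csv("train.csv")
gen_df = pd.read_csv("gentrain.csv")
full_df = pd.concat([train_df, gen_df], ignore_index=True)

def is_black(path: str, threshold: float = 1.0) -> bool:
    """Heuristic: an image whose mean grayscale value is near 0 is a corrupted all-black generation."""
    with Image.open(path) as img:
        return float(np.asarray(img.convert("L")).mean()) < threshold

# Drop the corrupted all-black generated images before training, as recommended above.
full_df = full_df[~full_df["path"].map(is_black)]
print(len(full_df), "usable training images")
```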
|
KITSCH/miniimagenet-LT
|
[
"license:openrail",
"region:us"
] |
2023-04-09T11:46:19+00:00
|
{"license": "openrail"}
|
2023-04-09T12:30:42+00:00
|
a27d46211ee56c856e5c090ca76bb21b9933a2cd
|
# Dataset Card for "chunk_127"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_127
|
[
"region:us"
] |
2023-04-09T12:14:38+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 22307436144.375, "num_examples": 232253}], "download_size": 20381118594, "dataset_size": 22307436144.375}}
|
2023-04-09T12:46:54+00:00
|
174cc1cb620c28cad745860384f022b6f3532a55
|
# Dataset Card for "chunk_118"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_118
|
[
"region:us"
] |
2023-04-09T12:23:48+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 26944633584.375, "num_examples": 280533}], "download_size": 24953577185, "dataset_size": 26944633584.375}}
|
2023-04-09T12:46:44+00:00
|
896a0f288f6e355e5fd5bd6316796144079a7707
|
# Dataset Card for "chunk_130"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_130
|
[
"region:us"
] |
2023-04-09T12:39:18+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24842431008.25, "num_examples": 258646}], "download_size": 23234676141, "dataset_size": 24842431008.25}}
|
2023-04-09T13:00:07+00:00
|
58575440d79a5987ec071207a68e85b2b5361588
|
## About Ask2Democracy-cfqa-salud-pension
Ask2Democracy-cfqa-salud-pension is an instructional, context-based generative dataset created from the texts of the Colombian health and pension system reforms, in Spanish (March 23).
The text was pre-processed and augmented using the chat-gpt-turbo API.
<div align="right">
Created by Jorge Henao 🇨🇴 <a href="https://twitter.com/jhenaotw" target='_blank'>Twitter</a> <a href="https://www.linkedin.com/in/henaojorge" target='_blank'>LinkedIn</a> <a href="https://linktr.ee/jorgehenao" target='_blank'>Linktree</a>
<br>
With the support of David Torres 🇨🇴 <a href="https://twitter.com/davinci137" target='_blank'>Twitter</a> <a href="https://github.com/datorresb" target='_blank'>GitHub</a>
</div>
Different prompt engineering experiments were conducted to obtain high-quality results in Spanish, while balancing simplicity of language and source-based generation.
Two styles of questions were created:
* Conceptual questions about the text's definitions and concepts related to health and pension reforms.
* First-person questions that address concerns of regular people from diverse population groups.
Conceptual questions and answers are useful for understanding complex concepts commonly found in public documents. These types of questions generally ask for easy-to-understand explanations of a particular concept or subject, such as "What is the definition of concept X or Y?"
On the other hand, first-person questions are designed to address specific concerns of various population groups, including young adults, middle-aged adults, and older adults, as well as those from different gender and labor backgrounds. These questions are tailored to address the specific needs and interests of these groups with regards to the text's topics.
These kinds of questions look like this: "How can concept X affect me as a Y person?", where Y might be an older adult, a working woman, a young adult, etc.
## Bias
The bias of the text generated by the Chat-GPT-Turbo model has not been evaluated, nor has the accuracy of OpenAI's knowledge about Colombia been verified.
Nevertheless, to reduce the risk of model hallucinations and biases, the text was pre-processed, and specific source-based contexts were provided on every API call.
In addition, grounding in the source text was explicitly instructed during the prompt engineering experimentation for each topic: the health and pension reforms.
## Dataset Structure
### Data Instances
#### plain_text
An example of conceptual questions looks like this.
```
{
"instruction": "Given the context please answer the question. Context:'ARTÍCULO 73. SISTEMA NACIONAL DE PROTECCIÓN SOCIAL...",
"input": "¿Qué es el Sistema Nacional de Protección Social Integral para la Vejez?",
"output": "El Sistema Nacional de Protección Social Integral para la Vejez es una iniciativa del gobierno colombiano para garantizar la protección social de las personas mayores. Incluye el Consejo Nacional de Protección a la Vejez y el Comité Técnico, que trabajan para desarrollar políticas y programas que mejoren la calidad de vida de las personas mayores.",
"topics": ["protección social", "vejez", "políticas públicas"]
},
```
An example of first person questions looks like this.
```
{
"instruction": "Given the context please answer the question. Context:'Artículo 1 Reforma Pensional...'
"input": "¿Cómo me beneficia el Sistema Público Único Integrado de Información en Salud (SPUIIS) como adulto de mediana edad?",
"output": "El SPUIIS permite el procesamiento digital de imágenes y otras tecnologías de última generación, lo que facilita el diagnóstico y tratamiento de enfermedades, mejorando la calidad de vida de las personas.",
"topics": ["Diagnóstico y tratamiento", "Tecnología de última generación", "Calidad de vida"]
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `instruction`: a `string` feature.
- `input`: a `string` feature.
- `output`: a `string` feature.
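For fine-tuning use, a minimal sketch that loads the dataset and assembles one training prompt from these fields (the prompt template itself is illustrative, not part of the dataset):
```python
from datasets import load_dataset

# The repository id and the train split come from this card's metadata.
ds = load_dataset("hackathon-somos-nlp-2023/ask2democracy-cfqa-salud-pension", split="train")

example = ds[0]
# Illustrative prompt template combining instruction, input, and output.
prompt = (
    f"{example['instruction']}\n\n"
    f"Pregunta: {example['input']}\n\n"
    f"Respuesta: {example['output']}"
)
print(prompt[:300])
print("Topics:", example["topics"])
```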
|
hackathon-somos-nlp-2023/ask2democracy-cfqa-salud-pension
|
[
"region:us"
] |
2023-04-09T12:40:18+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "topics", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 7711587, "num_examples": 3805}], "download_size": 880079, "dataset_size": 7711587}}
|
2023-04-11T02:08:45+00:00
|
c62d6caa446f16fbea2061a6b7a60eabf2467066
|
adabingw/lyrr-lorde
|
[
"region:us"
] |
2023-04-09T12:50:27+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 240828, "num_examples": 171}], "download_size": 0, "dataset_size": 240828}}
|
2023-04-09T13:41:03+00:00
|
|
ccb51d8e931afc47838ccca35d7e6f2a21a9d8e0
|
# Dataset Card for "chunk_125"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_125
|
[
"region:us"
] |
2023-04-09T13:22:34+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24537094416.625, "num_examples": 255467}], "download_size": 22689468637, "dataset_size": 24537094416.625}}
|
2023-04-09T13:57:56+00:00
|
34391ea4bfd22393d92b4b8d46e2117851d46425
|
# Dataset Card for "chunk_126"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_126
|
[
"region:us"
] |
2023-04-09T13:29:45+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23572868544.5, "num_examples": 245428}], "download_size": 21846573105, "dataset_size": 23572868544.5}}
|
2023-04-09T14:02:39+00:00
|
b9bd94fbdb590e93e46d26042f15818286899218
|
# Dataset Card for "chunk_131"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_131
|
[
"region:us"
] |
2023-04-09T13:33:51+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 15537492864.0, "num_examples": 161768}], "download_size": 13202430045, "dataset_size": 15537492864.0}}
|
2023-04-09T13:47:41+00:00
|
c23442a7e14f2c6187222552ae4595efe0f1b8f6
|
# Dataset Card for "chunk_128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_128
|
[
"region:us"
] |
2023-04-09T13:42:33+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21788392752.875, "num_examples": 226849}], "download_size": 20254695728, "dataset_size": 21788392752.875}}
|
2023-04-09T14:13:48+00:00
|
f702bc9af184a5d9f7a3252ccdfdafd79987516f
|
# Dataset Card for Participation and Division of Labor in User-driven Audits
The project website for the research associated with this dataset is: https://userdrivenaudits.github.io/
## Dataset Description
This dataset is the complete dataset we used for our research study, "Participation and Division of Labor in User-driven Audits."
- **Homepage:** https://userdrivenaudits.github.io/
- **Paper (preprint):** https://arxiv.org/pdf/2304.02134.pdf
- **Point of Contact:**
Please contact Sara Kingsley for additional information or if you have questions about this data-set. Sara can be reached at skingsle[at]cs.cmu.edu
### Dataset Summary
This dataset is the complete dataset we analyzed in our CHI'23 paper, `Participation and Division of Labor in User-driven Audits`.
## Dataset Structure
The dataset is provided in an excel file.
### Data Fields
The dataset contains the following features/column headers:
- created_at: date/timestamp for when a tweet was published
- text: this is the text of the tweet
- hashed conversation id: this is an anonymized identifier that represents the conversation to which the tweet belongs. Note: we hashed the original conversation ID to help protect user privacy
- hashed_author_id: this is an anonymized unique identifier for the author of the tweet. Note: we hashed the original author ID to help protect user privacy
- top_producer: this column identifies if the user was a top producer of content for an audit. 1 = top producer; 0 = not top producer
- top_broadcaster: this column identifies if the user was a top broadcaster of content for an audit. 1 = top broadcaster; 0 = not top broadcaster
- case: this column identifies which user-driven audit the tweet was associated with. AC = Apple Card; TC = Twitter Cropping; INR = ImageNet Roulette; PAI = PortraitAI
- DOL_Label_prediction: this is the label for the user role the tweet played in the audit, this label was automatically inferred using our SVM Division of Labor classifier
- DOL_Label_prediction_amplification: this column identifies whether the tweet was classified as one where the user's tweet played the role of 'amplification'
- DOL_Label_prediction_booster: this column identifies whether the tweet was classified as one where the user's tweet played the role of 'escalation'
- DOL_Label_prediction_contextualization: this column identifies whether the tweet was classified as one where the user's tweet played the role of 'contextualization'
- DOL_Label_prediction_data_collection: this column identifies whether the tweet was classified as one where the user's tweet played the role of 'evidence collection'
- DOL_Label_prediction_data_hypothesizing: this column identifies whether the tweet was classified as one where the user's tweet played the role of 'hypothesizing'
- DOL_Label_prediction_irrelevant: this column identifies whether the tweet was marked irrelevant by our classifier.
All the remaining column data are identical to the column data that can be retrieved (during the time of our study at least) from the Twitter API. Please refer to the Twitter API documentation for reference/information about these column data. Please contact Sara Kingsley if this information becomes unavailable at some point.
### Data Splits
We split this data 70-30 into train/test sets to train our classifiers.
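For replication, a minimal sketch of a comparable 70/30 split (the Excel file name below is a placeholder; the card does not state it):
```python
import pandas as pd
from sklearn.model_selection import train_test_split

# "userdrivenauditcases.xlsx" is a placeholder name for the provided Excel file.
df = pd.read_excel("userdrivenauditcases.xlsx")

# 70/30 split into train and test sets, as described above.
train_df, test_df = train_test_split(df, test_size=0.3, random_state=42)
print(len(train_df), "train rows;", len(test_df), "test rows")
```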
## Dataset Creation
We created this dataset starting in September/October 2020.
### Curation Rationale
Please see our CHI'23 paper's methodology section for more information on our rationale for creating the dataset. The preprint is available here: https://arxiv.org/pdf/2304.02134.pdf
### Source Data
Twitter API
#### Initial Data Collection and Normalization
Please see our CHI'23 paper's methodology section for more information on our rationale for creating the dataset. The preprint is available here: https://arxiv.org/pdf/2304.02134.pdf
#### Who are the source language producers?
Please see our CHI'23 paper's methodology section for more information on our rationale for creating the dataset. The preprint is available here: https://arxiv.org/pdf/2304.02134.pdf
### Annotations
Please see our CHI'23 paper's methodology section for more information on our rationale for creating the dataset. The preprint is available here: https://arxiv.org/pdf/2304.02134.pdf
#### Annotation process
Please see our CHI'23 paper's methodology section for more information on our rationale for creating the dataset. The preprint is available here: https://arxiv.org/pdf/2304.02134.pdf
#### Who are the annotators?
The authors of this paper: https://arxiv.org/pdf/2304.02134.pdf
### Personal and Sensitive Information
We hashed the original identifiers in the dataset that could allow people to retrieve information about the users who published the tweets in the dataset.
Warning: it is possible, if a tweet is still published publicly on twitter, to search for the original tweet using the text of the tweet. We ask and encourage users of our dataset not to do this.
## Considerations for Using the Data
People may use this dataset to replicate the work we did in this paper: https://arxiv.org/pdf/2304.02134.pdf
### Social Impact of Dataset
Please see: https://arxiv.org/pdf/2304.02134.pdf
### Discussion of Biases
Please see: https://arxiv.org/pdf/2304.02134.pdf
### Other Known Limitations
Please see this information in our paper: https://arxiv.org/pdf/2304.02134.pdf
## Additional Information
### Dataset Curators
Please see: https://arxiv.org/pdf/2304.02134.pdf
### Citation Information
Rena Li, Sara Kingsley, Chelsea Fan, Proteeti Sinha, Nora Wai, Jaimie Lee, Hong Shen, Motahhare Eslami, and Jason Hong. 2023. Participation and Division of Labor in User-Driven Algorithm Audits: How Do Everyday
Users Work Together to Surface Algorithmic Harms? In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 19 pages. https://doi.org/10.1145/3544548.3582074
Preprint is available, here: https://arxiv.org/pdf/2304.02134.pdf
### Contributions
Rena Li,
Sara Kingsley,
Chelsea Fan,
Proteeti Sinha,
Nora Wai,
Jaimie Lee,
Hong Shen,
Motahhare Eslami,
Jason Hong
|
saraki/userdrivenauditcases
|
[
"arxiv:2304.02134",
"region:us"
] |
2023-04-09T13:55:34+00:00
|
{}
|
2023-04-09T14:30:51+00:00
|
11b33b886d30fdec60a2f1dfa13632691d22afc3
|
Muhacker/Muhac
|
[
"license:other",
"region:us"
] |
2023-04-09T14:50:08+00:00
|
{"license": "other"}
|
2023-04-09T14:50:08+00:00
|
|
18f984eb7f4ecf7f2c822ce77218bc85a616098f
|
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/test
|
[
"region:us"
] |
2023-04-09T14:56:20+00:00
|
{"dataset_info": {"features": [{"name": "vocab", "dtype": "string"}, {"name": "descriptions", "sequence": "string"}, {"name": "prompt_descriptions", "sequence": "string"}], "splits": [{"name": "descriptors_lvis", "num_bytes": 679195, "num_examples": 1198}, {"name": "descriptors_oxfordpets", "num_bytes": 22322, "num_examples": 37}, {"name": "descriptors_visualgenome", "num_bytes": 1092697, "num_examples": 1913}, {"name": "descriptors_dtd", "num_bytes": 25204, "num_examples": 47}, {"name": "descriptors_fgvc", "num_bytes": 74126, "num_examples": 100}, {"name": "descriptors_cifar100", "num_bytes": 54081, "num_examples": 100}, {"name": "descriptors_coco", "num_bytes": 45186, "num_examples": 80}, {"name": "descriptors_sun397", "num_bytes": 243017, "num_examples": 362}, {"name": "descriptors_imagenet21k", "num_bytes": 604897, "num_examples": 998}, {"name": "descriptors_food101", "num_bytes": 58525, "num_examples": 101}, {"name": "descriptors_birdsnap", "num_bytes": 322488, "num_examples": 500}, {"name": "descriptors_oxfordflowers", "num_bytes": 58560, "num_examples": 102}, {"name": "descriptors_caltech101", "num_bytes": 56880, "num_examples": 102}, {"name": "descriptors_stanfordcars", "num_bytes": 157786, "num_examples": 196}, {"name": "lvis", "num_bytes": 679195, "num_examples": 1198}, {"name": "oxfordpets", "num_bytes": 22322, "num_examples": 37}, {"name": "visualgenome", "num_bytes": 1092697, "num_examples": 1913}, {"name": "dtd", "num_bytes": 25204, "num_examples": 47}, {"name": "fgvc", "num_bytes": 74126, "num_examples": 100}, {"name": "cifar100", "num_bytes": 54081, "num_examples": 100}, {"name": "coco", "num_bytes": 45186, "num_examples": 80}, {"name": "sun397", "num_bytes": 243017, "num_examples": 362}, {"name": "imagenet21k", "num_bytes": 604897, "num_examples": 998}, {"name": "food101", "num_bytes": 58525, "num_examples": 101}, {"name": "birdsnap", "num_bytes": 322488, "num_examples": 500}, {"name": "oxfordflowers", "num_bytes": 58560, "num_examples": 102}, {"name": "caltech101", "num_bytes": 56880, "num_examples": 102}, {"name": "stanfordcars", "num_bytes": 157786, "num_examples": 196}, {"name": "full", "num_bytes": 2999992, "num_examples": 4951}], "download_size": 5033751, "dataset_size": 9989920}}
|
2023-06-17T20:47:18+00:00
|
5cad6ff9ba831bd7cc576f551ef2dd4ab62292ba
|
# Dataset Card for "chunk_132"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_132
|
[
"region:us"
] |
2023-04-09T15:17:08+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20839438512.875, "num_examples": 216969}], "download_size": 19299011373, "dataset_size": 20839438512.875}}
|
2023-04-09T15:35:57+00:00
|
a0e80ce2134fe989958f9b28a2dcd25708f780f9
|
# Dataset Card for "NMSQA-CODE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
voidful/NMSQA-CODE
|
[
"language:en",
"region:us"
] |
2023-04-09T15:54:03+00:00
|
{"language": "en", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "audio_full_answer_end", "sequence": "float64"}, {"name": "audio_full_answer_start", "sequence": "float64"}, {"name": "audio_segment_answer_end", "sequence": "float64"}, {"name": "audio_segment_answer_start", "sequence": "float64"}, {"name": "text", "sequence": "string"}]}, {"name": "content_segment_audio_path", "dtype": "string"}, {"name": "content_full_audio_path", "dtype": "string"}, {"name": "content_audio_sampling_rate", "dtype": "float64"}, {"name": "content_audio_speaker", "dtype": "string"}, {"name": "content_segment_text", "dtype": "string"}, {"name": "content_segment_normalized_text", "dtype": "string"}, {"name": "question_audio_path", "dtype": "string"}, {"name": "question_audio_sampling_rate", "dtype": "float64"}, {"name": "question_audio_speaker", "dtype": "string"}, {"name": "question_normalized_text", "dtype": "string"}, {"name": "hubert_100_context_unit", "dtype": "string"}, {"name": "hubert_100_question_unit", "dtype": "string"}, {"name": "hubert_100_answer_unit", "dtype": "string"}, {"name": "mhubert_1000_context_unit", "dtype": "string"}, {"name": "mhubert_1000_question_unit", "dtype": "string"}, {"name": "mhubert_1000_answer_unit", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3329037982, "num_examples": 87599}, {"name": "test", "num_bytes": 1079782, "num_examples": 171}, {"name": "dev", "num_bytes": 411186265, "num_examples": 10570}], "download_size": 507994561, "dataset_size": 3741304029}}
|
2023-07-24T17:30:24+00:00
|
9e6ca4e0287b95c8b41b855cc66e85a70bda05b3
|
wanian/bfgnbgfng
|
[
"license:openrail",
"region:us"
] |
2023-04-09T16:04:02+00:00
|
{"license": "openrail"}
|
2023-04-09T16:04:02+00:00
|
|
a39fd43b2f29aae3be773e1465f99c793e56a6df
|
himanshu0410/Tomato-10-Diseases-Labelled
|
[
"license:mit",
"region:us"
] |
2023-04-09T16:23:24+00:00
|
{"license": "mit"}
|
2023-04-09T16:23:24+00:00
|
|
fdde1bcb97a9396cbf85d36db6e5b49f0580a552
|
# Dataset Card for "bak_rus_3M2023_scored"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
slone/bak_rus_3M2023_scored
|
[
"region:us"
] |
2023-04-09T17:08:40+00:00
|
{"dataset_info": {"features": [{"name": "ba", "dtype": "string"}, {"name": "ru", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "cosine_sim", "dtype": "float64"}, {"name": "cross_encoder_sim", "dtype": "float64"}, {"name": "joint_sim", "dtype": "float64"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1228138533, "num_examples": 3686157}, {"name": "validation", "num_bytes": 1161040, "num_examples": 3000}], "download_size": 706620038, "dataset_size": 1229299573}}
|
2023-04-09T17:12:00+00:00
|
6f643456957547c54d5595882913b7c723c20b01
|
# Dataset Card for "chunk_135"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_135
|
[
"region:us"
] |
2023-04-09T17:54:43+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 22342781808.375, "num_examples": 232621}], "download_size": 20376739990, "dataset_size": 22342781808.375}}
|
2023-04-09T18:06:00+00:00
|
1aeb9831f68b485945daf8d0f25fc536f517bee0
|
# Dataset Card for "chunk_140"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_140
|
[
"region:us"
] |
2023-04-09T17:58:21+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21473835552.25, "num_examples": 223574}], "download_size": 19214551736, "dataset_size": 21473835552.25}}
|
2023-04-09T18:09:15+00:00
|
43296755197da3c065deb28f6ae79ddeac658df8
|
HyperionHF/Anthropic-evals-persona
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-09T18:14:50+00:00
|
{"license": "cc-by-4.0"}
|
2023-04-09T18:29:23+00:00
|
|
bcfb10b4dfb70d045fed71e4101b1d35cc131054
|
# Dataset Card for "biored_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
c-x-he/biored_tokenized
|
[
"region:us"
] |
2023-04-09T18:20:16+00:00
|
{"dataset_info": {"features": [{"name": "pmid", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 576610, "num_examples": 97}, {"name": "train", "num_bytes": 2259680, "num_examples": 387}, {"name": "val", "num_bytes": 604670, "num_examples": 98}], "download_size": 1083243, "dataset_size": 3440960}}
|
2023-04-17T17:14:14+00:00
|
5bb3628845fce88ea694498433f6422c78afbd2d
|
# Dataset Card for "WikiMedia-v20210402-Ja_Zh-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
larryvrh/WikiMedia-v20210402-Ja_Zh-filtered
|
[
"region:us"
] |
2023-04-09T18:29:10+00:00
|
{"dataset_info": {"features": [{"name": "ja", "dtype": "string"}, {"name": "zh", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7517762, "num_examples": 15989}], "download_size": 4720167, "dataset_size": 7517762}}
|
2023-04-09T18:30:00+00:00
|
f84b49f9f81aeb06e7792bada57f8450988e9765
|
# Dataset Card for "chunk_129"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_129
|
[
"region:us"
] |
2023-04-09T18:30:31+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21313915632.375, "num_examples": 221909}], "download_size": 19737602839, "dataset_size": 21313915632.375}}
|
2023-04-09T18:50:57+00:00
|
fa2cd5703c69830dacadef5533aaf10c7cd5184a
|
# Dataset Card for "chunk_138"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_138
|
[
"region:us"
] |
2023-04-09T18:43:34+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21273959664.375, "num_examples": 221493}], "download_size": 19608450501, "dataset_size": 21273959664.375}}
|
2023-04-09T19:00:58+00:00
|
7d76974627448b73aaaf0783083579fe37b34a19
|
# Dataset Card for "BloomDemoFashion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
skar02/BloomDemoFashion
|
[
"region:us"
] |
2023-04-09T18:50:39+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "ad", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3397, "num_examples": 5}], "download_size": 7971, "dataset_size": 3397}}
|
2023-04-09T18:50:41+00:00
|
f0ffc782eb97840f4698a34af3be2d218faaf5c5
|
# Dataset Card for "cool_new_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
skar01/cool_new_dataset
|
[
"region:us"
] |
2023-04-09T18:53:29+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "ad", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3455, "num_examples": 5}], "download_size": 7984, "dataset_size": 3455}}
|
2023-04-09T18:53:30+00:00
|
5d61802c79b3bce6f5892b14fc8d2ebbb8636a60
|
# Dataset Card for "BloomDemoFashionDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sudipkar123/BloomDemoFashionDataset
|
[
"region:us"
] |
2023-04-09T18:54:25+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "ad", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2393, "num_examples": 5}], "download_size": 6047, "dataset_size": 2393}}
|
2023-04-09T18:54:27+00:00
|
cece8a4df4b410a41fa12fed5fc776727d217f45
|
# Dataset Card for "chunk_122"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_122
|
[
"region:us"
] |
2023-04-09T19:01:08+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 28831880736.25, "num_examples": 300182}], "download_size": 26718079040, "dataset_size": 28831880736.25}}
|
2023-04-09T19:44:01+00:00
|
ccb611961535cfc1edfaaa25872e2e32ca97fd61
|
dylanalloy/imdb-gpt-selftalk_500k
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-04-09T19:11:55+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-04-09T19:14:38+00:00
|
|
61eec791e59fd5b91213527b99430b13cf6b541d
|
# Dataset Card for "chunk_133"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_133
|
[
"region:us"
] |
2023-04-09T19:12:34+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24086821392.625, "num_examples": 250779}], "download_size": 21976509024, "dataset_size": 24086821392.625}}
|
2023-04-09T19:32:06+00:00
|
44fe47dac123794c7a0f3f286e6b41aebcc21f99
|
# Dataset Card for "chunk_142"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_142
|
[
"region:us"
] |
2023-04-09T19:24:08+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20192747328.5, "num_examples": 210236}], "download_size": 18478246133, "dataset_size": 20192747328.5}}
|
2023-04-09T19:34:34+00:00
|
18ad1b0c701eaa0de03d3cecfdd769cbc70ffbd0
|
# Dataset Card for MiniPile
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
[The MiniPile Challenge for Data-Efficient Language Models](https://arxiv.org/abs/2304.08442)
### Dataset Summary
MiniPile is a 6GB subset of the [deduplicated The Pile corpus](https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated). To curate MiniPile, we perform a simple, three-step data filtering process: we (1) infer embeddings for all documents of the Pile, (2) cluster the embedding space using k-means, and (3) filter out low-quality clusters.
The primary motivation for curating MiniPile is that (i) diverse pre-training datasets (like the Pile) are often too large for academic budgets and (ii) most smaller-scale datasets are fairly homogeneous and thereby unrepresentative of contemporary general-purpose language models. MiniPile aims to fill this gap and thereby facilitate data-efficient research on model architectures, training procedures, optimizers, etc.
More details on the MiniPile curation procedure and some pre-training results can be found in the [MiniPile paper](https://arxiv.org/abs/2304.08442).
For more details on the Pile corpus, we refer the reader to [the Pile datasheet](https://arxiv.org/abs/2201.07311).
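A minimal way to load the dataset with the 🤗 `datasets` library (a sketch; the repository ID and split names follow this card's metadata):
```python
from datasets import load_dataset

# Load the ~1M-document training split; "validation" and "test" splits are also available.
minipile = load_dataset("JeanKaddour/minipile", split="train")
print(minipile[0]["text"][:200])
```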
### Languages
English (`EN`)
## Additional Information
### Dataset Curators
MiniPile is a subset of the Pile, curated by Jean Kaddour. The Pile was created by Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy.
### Licensing Information
Since MiniPile is a subset of the Pile, the same MIT License holds.
### Citation Information
```
@article{kaddour2023minipile,
title={The MiniPile Challenge for Data-Efficient Language Models},
author={Kaddour, Jean},
journal={arXiv preprint arXiv:2304.08442},
year={2023}
}
@article{gao2020pile,
title={The {P}ile: An 800{GB} dataset of diverse text for language modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
|
JeanKaddour/minipile
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:2304.08442",
"arxiv:2201.07311",
"region:us"
] |
2023-04-09T19:32:58+00:00
|
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": "other", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "minipile", "pretty_name": "MiniPile", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5906108510, "num_examples": 1000000}, {"name": "validation", "num_bytes": 2779386, "num_examples": 500}, {"name": "test", "num_bytes": 58558191, "num_examples": 10000}], "download_size": 3177432813, "dataset_size": 5967446087}}
|
2023-06-20T09:08:26+00:00
|
96ad60f6b05bd41e3a50b39fe0652385e2c29c87
|
# Dataset Card for "chunk_136"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_136
|
[
"region:us"
] |
2023-04-09T19:36:57+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21927854448.375, "num_examples": 228301}], "download_size": 19947095246, "dataset_size": 21927854448.375}}
|
2023-04-09T20:07:58+00:00
|
e2c49f7cf387cdcc91012682d9920813fd7b8d8a
|
# Dataset Card for "chunk_137"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_137
|
[
"region:us"
] |
2023-04-09T19:42:27+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21919018032.875, "num_examples": 228209}], "download_size": 19725752844, "dataset_size": 21919018032.875}}
|
2023-04-09T20:12:34+00:00
|
8708693ad9b4b1cd14220023e0f000256dafa9fd
|
# Dataset Card for "OpenCaselistLI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yusuf5/OpenCaselistLI
|
[
"region:us"
] |
2023-04-09T19:47:55+00:00
|
{"dataset_info": {"features": [{"name": "rowNum", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "fileId", "dtype": "int64"}, {"name": "pocket", "dtype": "string"}, {"name": "hat", "dtype": "string"}, {"name": "block", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "fullcite", "dtype": "string"}, {"name": "cite", "dtype": "string"}, {"name": "bucketId", "dtype": "int64"}, {"name": "duplicateCount", "dtype": "int64"}, {"name": "textLength", "dtype": "float64"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 614838996.9803674, "num_examples": 1047870}, {"name": "validate", "num_bytes": 77016964.64603722, "num_examples": 131260}, {"name": "test", "num_bytes": 76885532.3735954, "num_examples": 131036}], "download_size": 236929052, "dataset_size": 768741494.0}}
|
2023-04-09T22:24:40+00:00
|
cf36144dc7c4e91956fc345a312e459c38b514b3
|
# Dataset Card for "chunk_134"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_134
|
[
"region:us"
] |
2023-04-09T19:49:54+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 22845304944.375, "num_examples": 237853}], "download_size": 20796713169, "dataset_size": 22845304944.375}}
|
2023-04-09T20:22:21+00:00
|
e17a1164a23ca78399cbe014fea546135134a6aa
|
# Dataset Card for "chunk_139"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_139
|
[
"region:us"
] |
2023-04-09T19:53:33+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21319390368.25, "num_examples": 221966}], "download_size": 19438740491, "dataset_size": 21319390368.25}}
|
2023-04-09T20:23:46+00:00
|
575de112b1a8aca0ba07ae26c8995c3b38831ed5
|
# Dataset Card for "chunk_141"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_141
|
[
"region:us"
] |
2023-04-09T20:46:14+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 19911614832.375, "num_examples": 207309}], "download_size": 18300991649, "dataset_size": 19911614832.375}}
|
2023-04-09T21:02:44+00:00
|
9eca81410f3eeba2dc2d8ce8d01d357a1bc583ba
|
ELI5 paired
This is a processed version of the [eli5](https://huggingface.co/datasets/eli5) dataset. It was created by closely following the steps used for the [stack-exchange-paired dataset](https://huggingface.co/datasets/lvwerra/stack-exchange-paired). The following steps were applied:
- Create pairs (response_j, response_k) where j was rated better than k
- Sample at most 10 pairs per question
- Shuffle the dataset globally
This dataset is designed to be used for preference learning using techniques such as Reinforcement Learning from Human Feedback. The processing notebook is in the repository as well.
If you want to construct a "question" column in this data, you can either use just the "title" column, or concatenate the "title" column with the "selftext" column as follows:
```python
from datasets import load_dataset

def get_question(example):
    title = example["title"]
    selftext = example["selftext"]
    if selftext:
        # Use ". " as the separator unless the selftext already ends with sentence punctuation.
        if selftext[-1] not in [".", "?", "!"]:
            separator = ". "
        else:
            separator = " "
        question = title + separator + selftext
    else:
        question = title
    example["question"] = question
    return example

dataset = load_dataset("vincentmin/eli5_askscience_askhistorians_rlhf")
dataset = dataset.map(get_question)
```
For the license, see the [eli5 dataset](https://huggingface.co/datasets/eli5) which states
"The licensing status of the dataset hinges on the legal status of the Pushshift.io data which is unclear."
at the time of creation of this dataset.
|
vincentmin/eli5_rlhf
|
[
"task_categories:conversational",
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"rlhf",
"reinforcement learning from human feedback",
"region:us"
] |
2023-04-09T21:00:04+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["conversational", "text2text-generation", "text-generation", "question-answering"], "pretty_name": "Reddit Explain Like I am Five dataset for Reinforcement Learning from Human Feedback", "tags": ["rlhf", "reinforcement learning from human feedback"]}
|
2023-04-10T06:58:18+00:00
|
ed30b217a6ac2fb106207856725b54542bce8993
|
# Dataset Card for "chunk_145"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_145
|
[
"region:us"
] |
2023-04-09T21:23:52+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 19801831968.25, "num_examples": 206166}], "download_size": 18126798717, "dataset_size": 19801831968.25}}
|
2023-04-09T21:34:15+00:00
|
bb0acd7161a2c41565482654cbc1eead4bc58358
|
zzj0402/nvd
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:bigscience-openrail-m",
"code",
"region:us"
] |
2023-04-09T21:29:47+00:00
|
{"language": ["en"], "license": "bigscience-openrail-m", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "nvd", "tags": ["code"]}
|
2023-04-10T19:41:30+00:00
|
|
f8824a7ceeccbd00446728dfc2eaf0dc81ac1b38
|
# Dataset Card for "chunk_143"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_143
|
[
"region:us"
] |
2023-04-09T21:48:42+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21448478880.25, "num_examples": 223310}], "download_size": 19238429165, "dataset_size": 21448478880.25}}
|
2023-04-09T22:08:11+00:00
|
6f3dd62636f51e214dd1f2220ae5c0040de3468f
|
# Dataset Card for Alpaca-Cleaned
- **Repository:** https://github.com/gururise/AlpacaDataCleaned
## Dataset Description
This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset:
1. **Hallucinations:** Many instructions in the original dataset had instructions referencing data on the internet, which just caused GPT3 to hallucinate an answer.
```
"instruction":"Summarize the given article in 200 Words.",
"input": "https://www.bbc.com/news/world-51461830",
"output": "The recent protests in Belarus are part of a growing wave of discontent against the rule of Alexander Lukashenko, who has led the country since 1994. Demonstrations against his government have been ongoing since August, and come in the wake of an election widely seen as rigged. As police and security forces have cracked down on protesters, the movement has only grown, with hundreds of thousands of people taking to the streets to call for new leadership. Human rights activists have accused the government of using extreme violence in order prosecute protesters, leading to an international outcry and sanctions from the European Union. As the situation in Belarus continues to unfold, the international community is monitoring the situation closely - hoping for a peaceful resolution, while also standing firm against the oppressive tactics employed by the Belarusian state."
"instruction": "Classify the following shape according to simple geometry.",
"input": "<Picture Attached>",
"output": "Triangle"
"instruction": "Given a recent news article, write a headline for it.",
"input": "https://www.news.example.com/article-1",
"output": "\"NASA Launches a Mars 2020 Mission to Search for Signs of Ancient Life\""
```
2. **Merged Instructions:** There were many instructions that were merged together in the original dataset for some reason.
```
"instruction": "Should I invest in stocks?",
"input": "",
"output": "It depends on your individual situation. Investing in stocks carries a certain amount of risk but can also offer potential rewards. Make sure to research the different types of stocks, understand the risks and rewards, and develop a plan that works for you.\n##\n18. Instruction: Identify the noun in the following sentence:\n18. Input:\nThe large apple was delicious.\n18. Output:\nApple"
```
3. **Empty outputs:** Some entries in the original dataset had empty outputs.
```
"instruction": "Calculate the area of the following shape in square centimeters.",
"input": "rectangle of size 4 cm x 5 cm",
"output": ""
```
4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code.
5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something obviously not possible.
```
"instruction": "Create a graphic or logo that visually represents the word \"courage\".",
"input": "",
"output": "<No Output>"
```
6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs.
7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty.
```
"input":"<no input>"
"input":"No input"
"input":"noinput"
"input":"<noinput>"
```
8. **Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers.
```
"instruction": "Calculate the median of the following data set.",
"input": "1, 2, 4, 5, 8, 9",
"output": "5"
"instruction": "Convert 25m to km.",
"input": "",
"output": "25km"
```
9. **Non-sensical/unclear instructions:** Many instructions are unclear; we clarify (or re-write) instructions that are non-sensical. Instructions that are only slightly unclear, where the meaning can still be deduced, are not altered.
```
"instruction": "Freeze the following sample of yogurt for 10 minutes.",
"input": "Yogurt sample",
"output": "<noinput>"
"instruction": "Increase the font size to 12 points.",
"input": "",
"output": "The font size has been increased to 12 points."
```
10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters.
### Original Alpaca Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Alpaca are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
### Data Splits
| | train |
|---------------|------:|
| alpaca | 52002 |
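A minimal example of loading the cleaned data with the 🤗 `datasets` library (a sketch; it uses this card's repository ID and assumes the data files load directly via `load_dataset`):
```python
from datasets import load_dataset

# Load the single "train" split of roughly 52K cleaned instruction/input/output records.
alpaca_cleaned = load_dataset("alexl83/AlpacaDataCleaned", split="train")
print(alpaca_cleaned[0]["instruction"])
```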
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpt from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed]
|
alexl83/AlpacaDataCleaned
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] |
2023-04-09T21:52:22+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Alpaca-Cleaned", "tags": ["instruction-finetuning"]}
|
2023-04-09T22:42:35+00:00
|
05a42266452c09ae15eef2faee949616b01b87e6
|
# Dataset Card for "alpaca-es-agentes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hackathon-somos-nlp-2023/alpaca-es-agentes
|
[
"region:us"
] |
2023-04-09T23:02:54+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 985053979, "num_examples": 52002}], "download_size": 655032424, "dataset_size": 985053979}}
|
2023-04-09T23:03:38+00:00
|
97435ce6065dc49ee9883e4528a4b4bd82862c33
|
# Dataset Card for "chunk_147"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_147
|
[
"region:us"
] |
2023-04-09T23:04:53+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20244421152.25, "num_examples": 210774}], "download_size": 18511704421, "dataset_size": 20244421152.25}}
|
2023-04-09T23:24:17+00:00
|
78deb28e3b3c1f8f8e71b005deb8b32ce79376c0
|
# Dataset Card for "biored_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
safiyaalavi/biored_tokenized
|
[
"region:us"
] |
2023-04-09T23:09:26+00:00
|
{"dataset_info": {"features": [{"name": "pmid", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 160097, "num_examples": 30}, {"name": "train", "num_bytes": 752283, "num_examples": 148}, {"name": "val", "num_bytes": 171371, "num_examples": 33}], "download_size": 392627, "dataset_size": 1083751}}
|
2023-04-10T17:20:36+00:00
|
060523e67ed642c6594e872ec59ce9f0cf16990e
|
# Dataset Card for "chunk_146"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_146
|
[
"region:us"
] |
2023-04-09T23:10:54+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 19708857504.25, "num_examples": 205198}], "download_size": 17829413682, "dataset_size": 19708857504.25}}
|
2023-04-09T23:37:10+00:00
|
be30ef19bc4269381485fbbec941d2953b5b47c1
|
# MOH Dataset
Creative Language Toolkit (CLTK) Metadata
- CL Type: Metaphor
- Task Type: detection, interpretation
- Size: 1k~2k
- Created time: 2016
**Description**:
The MOH dataset is a dataset for metaphor processing, released in the [paper](https://aclanthology.org/S16-2003.pdf).
For more details, please check the original paper.
## Citation
If you use this dataset, please cite:
```
@inproceedings{Mohammad2016MetaphorAA,
title={Metaphor as a Medium for Emotion: An Empirical Study},
author={Saif M. Mohammad and Ekaterina Shutova and Peter D. Turney},
booktitle={International Workshop on Semantic Evaluation},
year={2016}
}
```
## Contact
If you have any further queries, please open an issue or direct your queries to [mail](mailto:[email protected]).
|
CreativeLang/moh_metaphor
|
[
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-2.0",
"region:us"
] |
2023-04-09T23:22:06+00:00
|
{"language": ["en"], "license": "cc-by-2.0", "size_categories": ["1K<n<10K"], "pretty_name": "moh"}
|
2023-06-27T12:45:08+00:00
|
2234519372b465347bd693b096428b49e02a1fe1
|
prompt: one tall woman, species:dragon, dragon wings, large pear-shaped belly, large breasts, shoulder-length hair, long thick legs, wide hips, large dragon ears, long scaly tail
|
monmamo/dracquin
|
[
"size_categories:n<1K",
"language:en",
"license:cc",
"art",
"fantasy",
"anthrope",
"region:us"
] |
2023-04-09T23:33:04+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "pretty_name": "Dracquin", "tags": ["art", "fantasy", "anthrope"]}
|
2023-04-10T00:44:04+00:00
|
27b91d1ec6e96d61327a1bc1b78bed7cadd4d5de
|
# Dataset Card for "chunk_149"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_149
|
[
"region:us"
] |
2023-04-10T00:15:28+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 17601084144.375, "num_examples": 183253}], "download_size": 15549262193, "dataset_size": 17601084144.375}}
|
2023-04-10T00:30:02+00:00
|
ab0e4bfd84b9622967907d3a8a8de6fb30db9d54
|
image generation prompt:
- average-height woman
- large pear-shaped belly
- rough olive-brown subtropical skin
- shoulder-length brown hair
- large breasts
- thick legs
- wide hips
- long neck
- brown pupils
- smile
- large brown dragon ears
|
monmamo/rhea-fairheart
|
[
"size_categories:n<1K",
"language:en",
"license:cc",
"art",
"anthrope",
"female",
"region:us"
] |
2023-04-10T00:16:33+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "pretty_name": "Reah Fairheart", "tags": ["art", "anthrope", "female"]}
|
2023-04-14T23:41:03+00:00
|
7496ac60f5e0150a4bb5b39bd3cead7304e84fc4
|
### Dataset Summary
This is a 'Can Do' dataset stripped down from the original GPT4All dataset.
All prompts whose reply contains 'As an AI' have been removed, along with any other refusals (such as refusals caused by the input not being provided properly).
A new column named `text` has been added, with `### Response:` inserted between the prompt and the response, so that the Databricks Dolly scripts (https://github.com/databrickslabs/dolly) can use it directly.
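As a rough illustration of that transformation (a sketch only; the column names `prompt` and `response` and the exact separator formatting are assumptions, not taken from the actual processing script):
```python
from datasets import Dataset

# Toy stand-in for the original prompt/response pairs.
pairs = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "response": ["The capital of France is Paris."],
})

def build_text(example):
    # Concatenate prompt and response with the "### Response:" marker in between.
    example["text"] = example["prompt"] + "\n\n### Response:\n" + example["response"]
    return example

pairs = pairs.map(build_text)
print(pairs[0]["text"])
```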
|
Corianas/gpt4all_Stripped_dollyfied
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-10T00:36:07+00:00
|
{"license": "apache-2.0"}
|
2023-04-12T22:23:20+00:00
|
14aaaf5e883af676972c32e96842ad42f56d995d
|
# Dataset Card for "chunk_144"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_144
|
[
"region:us"
] |
2023-04-10T00:48:40+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20525841792.0, "num_examples": 213704}], "download_size": 18760055690, "dataset_size": 20525841792.0}}
|
2023-04-10T01:24:35+00:00
|
02ae4af002ae2c0d37d40c18632a6a56ce6904bc
|
monmamo/venenia-blossom
|
[
"size_categories:n<1K",
"language:en",
"license:cc",
"art",
"female",
"dracquin",
"anthrope",
"region:us"
] |
2023-04-10T01:02:00+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "pretty_name": "Venenia Blossom", "tags": ["art", "female", "dracquin", "anthrope"]}
|
2023-04-16T04:21:54+00:00
|
|
fb24524133943b3e1a7a6f6616123d43003717f3
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
nxt-g3n/papers
|
[
"region:us"
] |
2023-04-10T01:22:46+00:00
|
{}
|
2023-04-10T03:01:26+00:00
|
9ee5911498089e2cb22d96ade80cc113e2439fd9
|
csr/Image-Colorization
|
[
"license:mit",
"region:us"
] |
2023-04-10T02:02:26+00:00
|
{"license": "mit"}
|
2023-04-10T02:05:24+00:00
|
|
d88f340acb1057376eb9b8bf135d63621898fb1a
|
sweetcocoa/pop2piano_ci
|
[
"size_categories:n<1K",
"license:mit",
"region:us"
] |
2023-04-10T02:02:48+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "pretty_name": "p"}
|
2023-06-19T11:18:56+00:00
|
|
7670e796496a7c6a26eaf29f584b97342d50cbd4
|
# Dataset Card for "Alpaca_instruction_fine_tune_Punjabi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
japneets/Alpaca_instruction_fine_tune_Punjabi
|
[
"region:us"
] |
2023-04-10T03:32:41+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46649317, "num_examples": 52002}], "download_size": 18652304, "dataset_size": 46649317}}
|
2023-04-10T03:32:47+00:00
|
3ec6e19ea9e0dc4cb41c08ba9abdfd607cb077fa
|
# Dataset Card for "hands"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
camenduru/hands
|
[
"region:us"
] |
2023-04-10T03:47:33+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 890694993.116, "num_examples": 11076}], "download_size": 695555524, "dataset_size": 890694993.116}}
|
2023-04-10T03:48:03+00:00
|
c2f49a07f78bcb86cf9abddf8b24d290300620e1
|
# JinJinLeDao QA Dataset
## Dataset Description
**Repository**: https://github.com/tech-podcasts/JinJinLeDao_QA_Dataset
**HuggingFace**: https://huggingface.co/datasets/wavpub/JinJinLeDao_QA_Dataset
### Dataset Summary
The dataset contains over 18,000 Chinese question-answer pairs extracted from 281 episodes of the Chinese podcast "[JinJinLeDao](https://dao.fm/)". The subtitles were extracted using the OpenAI Whisper transcription tool, and the question-answer pairs were generated using GPT-3.5 by dividing the subtitles into blocks and prompting the model to generate questions and answers.
### Supported Tasks and Leaderboards
This dataset can be used for various natural language processing tasks, such as question answering and text generation, among others.
### Languages
The dataset is in Chinese (Mandarin).
## Dataset Structure
### Data Instances
The dataset contains over 18,000 question-answer pairs.
### Data Fields
Each data instance contains the following fields:
- `question`: The generated question based on the text block.
- `answer`: The corresponding answer to the generated question.
- `episode`: The title of the podcast episode from which the question-answer pair was extracted.
- `podcast`: The name of the specific program within the "[JinJinLeDao](https://dao.fm/)" podcast where the episode was featured.
### Data Splits
The dataset does not have predefined splits. Users can split the data according to their own requirements.
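For example, a held-out split can be created on the fly (a sketch assuming the 🤗 `datasets` library and that the repository loads under a default "train" split):
```python
from datasets import load_dataset

ds = load_dataset("wavpub/JinJinLeDao_QA_Dataset", split="train")
# Create a 90/10 train/test split, since the dataset ships without predefined splits.
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```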
## Dataset Creation
### Curation Rationale
The dataset was created to provide a resource for Chinese language natural language processing research.
### Source Data
#### Initial Data Collection and Normalization
The source data consists of 281 episodes of the Chinese podcast "[JinJinLeDao](https://dao.fm/)", which were transcribed using the OpenAI Whisper transcription tool.
#### Who are the source language producers?
The source language producers are the hosts of the "[JinJinLeDao](https://dao.fm/)" podcast.
### Annotations
#### Annotation process
The dataset was annotated using an automated process, in which GPT-3.5 was used to generate questions and answers based on text prompts.
#### Who are the annotators?
The initial annotation of the dataset was carried out through an automated process, without the involvement of human annotators. However, we later introduced a manual correction step to improve the accuracy of the data, and we would like to express our gratitude to [Chunhui Gao](https://www.ipip.net/) for taking the time to assist us with this task.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information, except for some user names mentioned in the audio content.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset was created for academic and research purposes only.
### Discussion of Biases
As the dataset was generated using an automated process, there may be biases in the generated questions and answers.
### Other Known Limitations
The dataset was generated using an automated process, which may result in lower quality data compared to manually annotated datasets.
## Additional Information
### Dataset Curators
The dataset was curated by [JinJinLeDao](https://dao.fm/) and [Hongyang Jin](https://github.com/GanymedeNil).
### Licensing Information
The dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
### Citation Information
If you use this dataset in your research, please cite the following paper:
N/A
### Contributions
Thanks to [JinJinLeDao](https://dao.fm/) for providing the data and to [Hongyang Jin](https://github.com/GanymedeNil) for curating and sharing this dataset. We would also like to express our gratitude to [Chunhui Gao](https://www.ipip.net/) for his assistance in improving the accuracy of the data.
|
wavpub/JinJinLeDao_QA_Dataset
|
[
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"region:us"
] |
2023-04-10T03:47:59+00:00
|
{"language": ["zh"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-generation"], "pretty_name": "JinJinLeDao QA Dataset"}
|
2023-04-16T07:19:58+00:00
|
adbb059fe001ba184ebd9a577d878aa7e1539dda
|
# Dataset Card for "comentario_youtube_lorea_sin_input"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ID3/comentario_youtube_lorea_sin_input
|
[
"region:us"
] |
2023-04-10T04:01:03+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4904984, "num_examples": 3538}], "download_size": 1682813, "dataset_size": 4904984}}
|
2023-04-10T04:01:06+00:00
|
3b8a7888bd423a17adacddf8508f44db4aedb97b
|
hustzx/sd_test
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-10T04:56:29+00:00
|
{"license": "apache-2.0"}
|
2023-04-10T04:56:29+00:00
|
|
049757c765080b37378e3e42897d61628eb5052e
|
# Dataset Card for "reklamation24_mode-schmuck-zubehoer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_mode-schmuck-zubehoer
|
[
"region:us"
] |
2023-04-10T06:00:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 175523, "num_examples": 400}, {"name": "test", "num_bytes": 43457, "num_examples": 100}], "download_size": 0, "dataset_size": 218980}}
|
2023-04-19T07:20:18+00:00
|
81d59a83c94ebd8348463591927ce40cd34200ff
|
# Dataset Card for "reklamation24_moebel-einrichtungshaeuser"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_moebel-einrichtungshaeuser
|
[
"region:us"
] |
2023-04-10T06:03:03+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 204461, "num_examples": 408}, {"name": "test", "num_bytes": 47795, "num_examples": 103}], "download_size": 0, "dataset_size": 252256}}
|
2023-04-19T07:22:24+00:00
|
cbc21cc24e2ff50deab5fdd17b94e35bcfc7d2b0
|
# Dataset Card for "wmt14_de_en_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
guangyil/wmt14_de_en_tokenized
|
[
"region:us"
] |
2023-04-10T06:20:03+00:00
|
{"dataset_info": {"features": [{"name": "bert_token", "sequence": "int64"}, {"name": "gpt2_token", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 830243434.9599016, "num_examples": 1207880}, {"name": "test", "num_bytes": 667680.9386666666, "num_examples": 1156}], "download_size": 98135244, "dataset_size": 830911115.8985683}}
|
2023-04-10T06:21:16+00:00
|
31aa998c1cbc36063c5bd5c0ca87b082d443da02
|
# Distil Whisper: TEDLIUM
This is a variant of the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/LIUM/tedlium).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-nc-nd-3.0.
|
distil-whisper/tedlium
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-nc-nd-3.0",
"region:us"
] |
2023-04-10T06:32:45+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-3.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "TEDLIUM"}
|
2023-09-25T09:30:14+00:00
|
d8a5b3876ae10b14446e7acf91cd24ce78307cba
|
magicgh/alpaca-cleaned
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-10T06:48:04+00:00
|
{"license": "cc-by-4.0"}
|
2023-04-10T06:48:32+00:00
|
|
02bea07e904efbafd468f94df8498804aa3bba58
|
WilliamWen/summarization_cata
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-10T07:12:43+00:00
|
{"license": "apache-2.0"}
|
2023-04-10T07:19:59+00:00
|
|
0b9c5e9e9578b522de6373ded662bf20c24451fb
|
Apocalypse-19/amazon-shoes
|
[
"license:mit",
"region:us"
] |
2023-04-10T07:20:42+00:00
|
{"license": "mit"}
|
2023-04-10T07:25:58+00:00
|
|
fc5958f3aab3a5a89b0ce7f89b4e5e6457ee8236
|
# Dataset Card for "so"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Hojjat/so
|
[
"region:us"
] |
2023-04-10T07:58:55+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1765581, "num_examples": 4777}, {"name": "test", "num_bytes": 497510, "num_examples": 1326}, {"name": "dev", "num_bytes": 194781, "num_examples": 530}], "download_size": 753841, "dataset_size": 2457872}}
|
2023-04-10T08:13:41+00:00
|
b9291ae17e2a684a324f6b1cec4b6f760d83c68d
|
# Dataset Card for "MultiPL-E-pass-rates"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nuprl/MultiPL-E-pass-rates
|
[
"region:us"
] |
2023-04-10T08:38:47+00:00
|
{"dataset_info": {"features": [{"name": "BaseDataset", "dtype": "string"}, {"name": "ProblemName", "dtype": "string"}, {"name": "Model", "dtype": "string"}, {"name": "Language", "dtype": "string"}, {"name": "Temperature", "dtype": "float64"}, {"name": "NumPassed", "dtype": "int64"}, {"name": "NumCompletions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 751894, "num_examples": 8928}], "download_size": 56002, "dataset_size": 751894}}
|
2023-04-10T08:38:49+00:00
|
138b4df284f19b46aa49eea3e98a480fc0be84ac
|
# Dataset Card for "grayscale_image_6k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ioclab/grayscale_image_6k
|
[
"region:us"
] |
2023-04-10T08:42:56+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "grayscale_image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1304330343.0, "num_examples": 6000}], "download_size": 0, "dataset_size": 1304330343.0}}
|
2023-04-10T10:28:39+00:00
|
b65525aa0e7c7e93b593c3251861afcf457203f0
|
besscater/trademarks_canada
|
[
"size_categories:100K<n<1M",
"language:en",
"legal",
"region:us"
] |
2023-04-10T08:44:50+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "pretty_name": "canadian_tm", "tags": ["legal"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6139857639.54, "num_examples": 102310}], "download_size": 6050785561, "dataset_size": 6139857639.54}}
|
2023-04-10T12:35:05+00:00
|
|
24e097fb39d8cd36250fb2b5ee64852eca0c7648
|
# Dataset Card for "chunk_148"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_148
|
[
"region:us"
] |
2023-04-10T08:47:43+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 18333354096.375, "num_examples": 190877}], "download_size": 16591054234, "dataset_size": 18333354096.375}}
|
2023-04-10T09:02:29+00:00
|
e9d95084cc71114fda43a61f1795ec1d6ffeaf46
|
# Dataset Card for "chunk_150"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_150
|
[
"region:us"
] |
2023-04-10T08:49:18+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 15983059536.125, "num_examples": 166407}], "download_size": 13526195097, "dataset_size": 15983059536.125}}
|
2023-04-10T09:02:03+00:00
|
3886feaf7cab626f257f23aa6db9deda374355c7
|
# Dataset Card for "chunk_151"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_151
|
[
"region:us"
] |
2023-04-10T08:50:56+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 18378016416.25, "num_examples": 191342}], "download_size": 16543295215, "dataset_size": 18378016416.25}}
|
2023-04-10T09:05:36+00:00
|
2cbc2aeb0ef6def9012c119016e1bfe5059d6826
|
# Dataset Card for "chunk_153"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_153
|
[
"region:us"
] |
2023-04-10T08:57:24+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21205861632.0, "num_examples": 220784}], "download_size": 19644649645, "dataset_size": 21205861632.0}}
|
2023-04-10T09:15:41+00:00
|