sha
stringlengths 40
40
| text
stringlengths 1
13.4M
| id
stringlengths 2
117
| tags
listlengths 1
7.91k
| created_at
stringlengths 25
25
| metadata
stringlengths 2
875k
| last_modified
stringlengths 25
25
| arxiv
listlengths 0
25
| languages
listlengths 0
7.91k
| tags_str
stringlengths 17
159k
| text_str
stringlengths 1
447k
| text_lists
listlengths 0
352
| processed_texts
listlengths 1
353
| tokens_length
listlengths 1
353
| input_texts
listlengths 1
40
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4ff480e81ecdd75221052127f0a7bc1367251690
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
ashu000999/medicalchatbot
|
[
"region:us"
] |
2023-09-20T17:28:29+00:00
|
{}
|
2023-09-21T08:55:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
921aec1e7a49d5558926ef039a477d0a2073a73a
|
# Dataset Card for "pokemon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
IsaacJu666/pokemon
|
[
"region:us"
] |
2023-09-20T17:38:05+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "text_blip", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56583875.0, "num_examples": 833}], "download_size": 50947153, "dataset_size": 56583875.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T20:15:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pokemon"
More Information needed
|
[
"# Dataset Card for \"pokemon\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pokemon\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pokemon\"\n\nMore Information needed"
] |
98c62f7547f3ecb64a0dad95c69c6e96da89fe7b
|
# Dataset
This dataset is a combination of guanaco, wizardlm instruct and wizard vicuna datasets (all of them were uncensored).
|
DanFosing/wizardlm-vicuna-guanaco-uncensored
|
[
"license:apache-2.0",
"region:us"
] |
2023-09-20T17:39:21+00:00
|
{"license": "apache-2.0"}
|
2023-09-27T17:45:31+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
# Dataset
This dataset is a combination of guanaco, wizardlm instruct and wizard vicuna datasets (all of them were uncensored).
|
[
"# Dataset\nThis dataset is a combination of guanaco, wizardlm instruct and wizard vicuna datasets (all of them were uncensored)."
] |
[
"TAGS\n#license-apache-2.0 #region-us \n",
"# Dataset\nThis dataset is a combination of guanaco, wizardlm instruct and wizard vicuna datasets (all of them were uncensored)."
] |
[
14,
39
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n# Dataset\nThis dataset is a combination of guanaco, wizardlm instruct and wizard vicuna datasets (all of them were uncensored)."
] |
4ef740712d270c3500b51c79730863f9c10baa62
|
# Dataset Card for "data_aug_full_0919"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhqyy/data_aug_full_0919
|
[
"region:us"
] |
2023-09-20T17:40:43+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1675925, "num_examples": 7907}, {"name": "test", "num_bytes": 143380, "num_examples": 688}], "download_size": 435475, "dataset_size": 1819305}}
|
2023-09-20T17:40:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_aug_full_0919"
More Information needed
|
[
"# Dataset Card for \"data_aug_full_0919\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_aug_full_0919\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_aug_full_0919\"\n\nMore Information needed"
] |
8302f071e16b8dd54dfe04567da79971bf47d69a
|
### Thesis Chile Dataset
### Dataset Summary
Thesis Chile is the dataset partially used to create the [DiscoEval in Spanish benchmark](https://github.com/OpenCENIA/Spanish-Sentence-Evaluation).
This dataset was created by scraping titles and abstracts of Chilean thesis from public repositories of the Pontificia Universidad Catolica de Chile (repositorio.uc.cl), Universidad de Chile (repositorio.uchile.cl) and Universidad Técnica Federico Santa María (biblioteca.usm.cl).
### Supported Tasks
We see the potential utility of this data for both discriminative and generative tasks. For classification purposes, the title-abstract pairs offer the opportunity to assess semantic similarity or entailment. Conversely, in generative tasks, the abstracts can serve as inputs for models to generate titles (summary).
### Citation Information
```
@inproceedings{araujo-etal-2022-evaluation,
title = "Evaluation Benchmarks for {S}panish Sentence Representations",
author = "Araujo, Vladimir and
Carvallo, Andr{\'e}s and
Kundu, Souvik and
Ca{\~n}ete, Jos{\'e} and
Mendoza, Marcelo and
Mercer, Robert E. and
Bravo-Marquez, Felipe and
Moens, Marie-Francine and
Soto, Alvaro",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.648",
pages = "6024--6034",
}
```
|
vgaraujov/thesis-chile
|
[
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:es",
"license:cc-by-4.0",
"region:us"
] |
2023-09-20T18:09:49+00:00
|
{"language": ["es"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["summarization", "text-generation", "text-classification"], "pretty_name": "Thesis Chile"}
|
2023-09-20T19:35:08+00:00
|
[] |
[
"es"
] |
TAGS
#task_categories-summarization #task_categories-text-generation #task_categories-text-classification #size_categories-1K<n<10K #language-Spanish #license-cc-by-4.0 #region-us
|
### Thesis Chile Dataset
### Dataset Summary
Thesis Chile is the dataset partially used to create the DiscoEval in Spanish benchmark.
This dataset was created by scraping titles and abstracts of Chilean thesis from public repositories of the Pontificia Universidad Catolica de Chile (URL), Universidad de Chile (URL) and Universidad Técnica Federico Santa María (URL).
### Supported Tasks
We see the potential utility of this data for both discriminative and generative tasks. For classification purposes, the title-abstract pairs offer the opportunity to assess semantic similarity or entailment. Conversely, in generative tasks, the abstracts can serve as inputs for models to generate titles (summary).
|
[
"### Thesis Chile Dataset",
"### Dataset Summary\n\nThesis Chile is the dataset partially used to create the DiscoEval in Spanish benchmark. \nThis dataset was created by scraping titles and abstracts of Chilean thesis from public repositories of the Pontificia Universidad Catolica de Chile (URL), Universidad de Chile (URL) and Universidad Técnica Federico Santa María (URL).",
"### Supported Tasks\n\nWe see the potential utility of this data for both discriminative and generative tasks. For classification purposes, the title-abstract pairs offer the opportunity to assess semantic similarity or entailment. Conversely, in generative tasks, the abstracts can serve as inputs for models to generate titles (summary)."
] |
[
"TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-text-classification #size_categories-1K<n<10K #language-Spanish #license-cc-by-4.0 #region-us \n",
"### Thesis Chile Dataset",
"### Dataset Summary\n\nThesis Chile is the dataset partially used to create the DiscoEval in Spanish benchmark. \nThis dataset was created by scraping titles and abstracts of Chilean thesis from public repositories of the Pontificia Universidad Catolica de Chile (URL), Universidad de Chile (URL) and Universidad Técnica Federico Santa María (URL).",
"### Supported Tasks\n\nWe see the potential utility of this data for both discriminative and generative tasks. For classification purposes, the title-abstract pairs offer the opportunity to assess semantic similarity or entailment. Conversely, in generative tasks, the abstracts can serve as inputs for models to generate titles (summary)."
] |
[
64,
7,
77,
81
] |
[
"passage: TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-text-classification #size_categories-1K<n<10K #language-Spanish #license-cc-by-4.0 #region-us \n### Thesis Chile Dataset### Dataset Summary\n\nThesis Chile is the dataset partially used to create the DiscoEval in Spanish benchmark. \nThis dataset was created by scraping titles and abstracts of Chilean thesis from public repositories of the Pontificia Universidad Catolica de Chile (URL), Universidad de Chile (URL) and Universidad Técnica Federico Santa María (URL).### Supported Tasks\n\nWe see the potential utility of this data for both discriminative and generative tasks. For classification purposes, the title-abstract pairs offer the opportunity to assess semantic similarity or entailment. Conversely, in generative tasks, the abstracts can serve as inputs for models to generate titles (summary)."
] |
0465054671ab6009aa3443d93ad2e83377fd5336
|
# Dataset Card for "OASST_Top1_2023-08-25-Zh_Only"
Filtered from [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
|
larryvrh/OASST_Top1_2023-08-25-Zh_Only
|
[
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:zh",
"region:us"
] |
2023-09-20T18:30:35+00:00
|
{"language": ["zh"], "size_categories": ["n<1K"], "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "conversation", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1008722, "num_examples": 662}], "download_size": 603882, "dataset_size": 1008722}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T18:33:28+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-n<1K #language-Chinese #region-us
|
# Dataset Card for "OASST_Top1_2023-08-25-Zh_Only"
Filtered from OpenAssistant/oasst_top1_2023-08-25.
|
[
"# Dataset Card for \"OASST_Top1_2023-08-25-Zh_Only\"\n\nFiltered from OpenAssistant/oasst_top1_2023-08-25."
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-n<1K #language-Chinese #region-us \n",
"# Dataset Card for \"OASST_Top1_2023-08-25-Zh_Only\"\n\nFiltered from OpenAssistant/oasst_top1_2023-08-25."
] |
[
42,
44
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-n<1K #language-Chinese #region-us \n# Dataset Card for \"OASST_Top1_2023-08-25-Zh_Only\"\n\nFiltered from OpenAssistant/oasst_top1_2023-08-25."
] |
289f4470eaf8f3c41d5e573820eb92dd1ed4d879
|
# Dataset Card for "queries"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Binaryy/queries
|
[
"region:us"
] |
2023-09-20T18:39:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0.1", "dtype": "int64"}, {"name": "Unnamed: 0", "dtype": "int64"}, {"name": "queries", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62531, "num_examples": 543}], "download_size": 24151, "dataset_size": 62531}}
|
2023-09-28T09:29:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "queries"
More Information needed
|
[
"# Dataset Card for \"queries\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"queries\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"queries\"\n\nMore Information needed"
] |
a2d012d694f45918e56d3ba5a14135127c2ed4ff
|
# Dataset Card for "const_dataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
316usman/const_dataset_2
|
[
"region:us"
] |
2023-09-20T19:03:53+00:00
|
{"dataset_info": {"features": [{"name": "train", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19352633, "num_examples": 8153}], "download_size": 4941592, "dataset_size": 19352633}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T19:04:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "const_dataset_2"
More Information needed
|
[
"# Dataset Card for \"const_dataset_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"const_dataset_2\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"const_dataset_2\"\n\nMore Information needed"
] |
c0424984401952f8fe57583edddf824bb15c09f4
|
# llms-book-bans-benchmark
<a href="https://lil.law.harvard.edu/blog/2023/09/25/ai-book-bans-freedom-to-read-case-study/"><img src="https://lil-blog-media.s3.amazonaws.com/ai-book-bans-lead-graphic.png"></a>
Data collected in the context of our experiment: [_"AI Book Bans: Are LLMs Champions of the Freedom to Read?"_](https://lil.law.harvard.edu/blog/2023/09/25/ai-book-bans-freedom-to-read-case-study/)
Pipeline and details about collection process available on [GitHub](https://github.com/harvard-lil/llms-book-bans-benchmark/).
---
# Directory structure
| File | Description |
| --- | --- |
| `data/gpt-3.5-turbo.csv` | Data collected by running the prompt against GPT-3.5-Turbo |
| `data/gpt-4.csv` | Data collected by running the prompt against GPT-4 |
| `data/llama2-13b.csv` | Data collected by running the prompt against Llama2-13b-chat |
| `data/llama2-70b.csv` | Data collected by running the prompt against Llama2-70b-chat |
| `data/palm2.csv` | Data collected by running the prompt against text-bison-001 (Palm2) |
| `analysis.csv` | Results of our analysis. Performed manually by the authors via a survey. |
|
harvard-lil/llms-book-bans-benchmark
|
[
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-20T19:06:46+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "viewer": false}
|
2023-09-25T21:08:31+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-cc-by-4.0 #region-us
|
llms-book-bans-benchmark
========================
<a href="URL src="URL
Data collected in the context of our experiment: *"AI Book Bans: Are LLMs Champions of the Freedom to Read?"*
Pipeline and details about collection process available on GitHub.
---
Directory structure
===================
|
[] |
[
"TAGS\n#language-English #license-cc-by-4.0 #region-us \n"
] |
[
19
] |
[
"passage: TAGS\n#language-English #license-cc-by-4.0 #region-us \n"
] |
8341e7b34f9bd9fc489bda819d804613fe26f359
|
# Dataset Card for "memories-semantic-memorization-filter-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
usvsnsp/memories-semantic-memorization-filter-results
|
[
"region:us"
] |
2023-09-20T19:08:28+00:00
|
{"dataset_info": {"features": [{"name": "sequence_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "sequence_duplicates", "dtype": "int64"}, {"name": "max_frequency", "dtype": "int64"}, {"name": "avg_frequency", "dtype": "float64"}, {"name": "min_frequency", "dtype": "int64"}, {"name": "median_frequency", "dtype": "float64"}, {"name": "p25_frequency", "dtype": "int64"}, {"name": "p75_frequency", "dtype": "int64"}, {"name": "frequencies", "sequence": "int64"}, {"name": "is_incrementing", "dtype": "bool"}, {"name": "tokens", "sequence": "int64"}, {"name": "repeating_offset", "dtype": "int32"}, {"name": "num_repeating", "dtype": "int32"}, {"name": "smallest_repeating_chunk", "sequence": "int64"}, {"name": "memorization_score", "dtype": "float64"}, {"name": "templating_frequency_0.9", "dtype": "int64"}, {"name": "templating_frequency_0.8", "dtype": "int64"}, {"name": "prompt_perplexity", "dtype": "float32"}, {"name": "generation_perplexity", "dtype": "float32"}, {"name": "sequence_perplexity", "dtype": "float32"}], "splits": [{"name": "memories.duped.70m", "num_bytes": 648141277, "num_examples": 463953}, {"name": "memories.duped.160m", "num_bytes": 955903849, "num_examples": 689673}, {"name": "memories.duped.410m", "num_bytes": 1337555782, "num_examples": 970341}, {"name": "memories.duped.1b", "num_bytes": 1725540452, "num_examples": 1256141}, {"name": "memories.duped.1.4b", "num_bytes": 1884519155, "num_examples": 1373722}, {"name": "memories.duped.2.8b", "num_bytes": 2292743123, "num_examples": 1675077}, {"name": "memories.duped.6.9b", "num_bytes": 2898035658, "num_examples": 2120976}, {"name": "memories.duped.12b", "num_bytes": 3252649684, "num_examples": 2382328}, {"name": "memories.deduped.70m", "num_bytes": 576211560, "num_examples": 411448}, {"name": "memories.deduped.160m", "num_bytes": 809545073, "num_examples": 581195}, {"name": "memories.deduped.410m", "num_bytes": 1126006111, "num_examples": 811039}, {"name": "memories.deduped.1b", "num_bytes": 1430399436, "num_examples": 1032865}, {"name": "memories.deduped.1.4b", "num_bytes": 1450336662, "num_examples": 1048097}, {"name": "memories.deduped.2.8b", "num_bytes": 1871907415, "num_examples": 1355211}, {"name": "memories.deduped.6.9b", "num_bytes": 2319039796, "num_examples": 1680294}, {"name": "memories.deduped.12b", "num_bytes": 2581349436, "num_examples": 1871216}], "download_size": 9223426756, "dataset_size": 27159884469}}
|
2023-09-20T19:16:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "memories-semantic-memorization-filter-results"
More Information needed
|
[
"# Dataset Card for \"memories-semantic-memorization-filter-results\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"memories-semantic-memorization-filter-results\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"memories-semantic-memorization-filter-results\"\n\nMore Information needed"
] |
f88430089e3911e117ad04ef38e9f22f342759c5
|
Synthetic dataset of PowerShell, Active Directory and I think some Office 365 Q&A
|
adamo1139/PS_AD_Office_01
|
[
"license:unknown",
"region:us"
] |
2023-09-20T19:20:48+00:00
|
{"license": "unknown"}
|
2023-09-20T19:22:42+00:00
|
[] |
[] |
TAGS
#license-unknown #region-us
|
Synthetic dataset of PowerShell, Active Directory and I think some Office 365 Q&A
|
[] |
[
"TAGS\n#license-unknown #region-us \n"
] |
[
13
] |
[
"passage: TAGS\n#license-unknown #region-us \n"
] |
c847c38d36346346df5a421e8f13ed5fb8f4eac6
|
# Dataset Card for "logits-mt-it-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
amitness/logits-mt-it-128
|
[
"region:us"
] |
2023-09-20T19:48:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 32877042660, "num_examples": 7259175}, {"name": "test", "num_bytes": 5801464960, "num_examples": 1281032}], "download_size": 1475328483, "dataset_size": 38678507620}}
|
2023-09-27T07:44:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "logits-mt-it-128"
More Information needed
|
[
"# Dataset Card for \"logits-mt-it-128\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"logits-mt-it-128\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"logits-mt-it-128\"\n\nMore Information needed"
] |
d22fe37d768e538f73d376df74a14900423f378c
|
Long context QA with the following augmentations
- Smart augmentation (changes the answer to the question and in the context)
- Changes the data around the answer within the chunk
- Random noise
- Random chunks of information
- Lots of varied lengths
- A few different prompt formats (aimed towards RWKV)
|
m8than/long-context-QA-augmented
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"language:en",
"license:cc-by-sa-3.0",
"language-modeling",
"masked-language-modeling",
"region:us"
] |
2023-09-20T19:49:25+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "task_categories": ["text-generation", "fill-mask"], "pretty_name": "LongContextQA", "tags": ["language-modeling", "masked-language-modeling"], "configs": [{"config_name": "default", "default": true, "data_files": [{"split": "train", "path": ["compiled/raccoon-xiii-large.jsonl"]}]}]}
|
2023-11-27T04:01:42+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-fill-mask #language-English #license-cc-by-sa-3.0 #language-modeling #masked-language-modeling #region-us
|
Long context QA with the following augmentations
- Smart augmentation (changes the answer to the question and in the context)
- Changes the data around the answer within the chunk
- Random noise
- Random chunks of information
- Lots of varied lengths
- A few different prompt formats (aimed towards RWKV)
|
[] |
[
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #language-English #license-cc-by-sa-3.0 #language-modeling #masked-language-modeling #region-us \n"
] |
[
56
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #language-English #license-cc-by-sa-3.0 #language-modeling #masked-language-modeling #region-us \n"
] |
44725e1284e190a82109cd7f1b5dc636e9d94126
|
# Drive Stats
[**Drive Stats**](https://www.backblaze.com/cloud-storage/resources/hard-drive-test-data) is a public data set of daily metrics on the hard drives in Backblaze’s [cloud storage infrastructure](https://www.backblaze.com/cloud-storage) that Backblaze has open-sourced since April 2013. Currently, Drive Stats comprises over 388 million records, rising by over 240,000 records per day. Drive Stats is an append-only dataset effectively logging daily statistics that once written are never updated or deleted.
This is our first Hugging Face dataset; feel free to suggest improvements by creating a new discussion on the [Community](https://huggingface.co/datasets/backblaze/Drive_Stats/discussions)!
## Drive Stats Q2 2023 Snapshot
* Drive Count: 240,940
* Drive Failures: 1,339
* Drive Days: 21.1M
* Annualized Failure Rate: 2.28%
## Overview of the Hard Drive Data
Each day in the Backblaze data center, we take a snapshot of each operational hard drive. This snapshot includes basic drive information along with the S.M.A.R.T. statistics reported by that drive. The daily snapshot of one drive is one record or row of data. All of the drive snapshots for a given day are collected into a file consisting of a row for each active hard drive. The format of this file is a "csv" (Comma Separated Values) file. Each day this file is named in the format YYYY-MM-DD.csv, for example, 2013-04-10.csv.
The first row of the each file contains the column names, the remaining rows are the actual data. The columns are as follows:
* Date – The date of the snapshot in yyyy-mm-dd format.
* Serial Number – The manufacturer-assigned serial number of the drive.
* Model – The manufacturer-assigned model number of the drive.
* Capacity – The drive capacity in bytes.
* Failure – Contains a “0” if the drive is OK. Contains a “1” if this is the last day the drive was operational before failing.
* SMART Stats:
* 2013-2014: 80 columns of data, that are the Raw and Normalized values for 40 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2015-2017: 90 columns of data, that are the Raw and Normalized values for 45 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q1): 100 columns of data, that are the Raw and Normalized values for 50 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q2): 104 columns of data, that are the Raw and Normalized values for 52 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q4): 124 columns of data, that are the Raw and Normalized values for 62 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
## Helpful Hints and Caveats
### Schema Changes
The schema may change from quarter to quarter. The basic information: date, serial_number, model, capacity_bytes, and failure will not change. All of the changes will be in the number of SMART attributes reported for all of the drives in a given quarter. There will never be more than 255 pair of SMART attributes reported. When you load the CSV files for each quarter you will need to account for the potential of a different number of SMART attributes from the previous quarter.
## How You Can Use the Data
You can download and use this data for free for your own purpose, all we ask is three things:
* you cite Backblaze as the source if you use the data,
* you accept that you are solely responsible for how you use the data, and
* you do not sell this data to anyone, it is free.
|
backblaze/Drive_Stats
|
[
"annotations_creators:machine-generated",
"size_categories:100M<n<1B",
"license:other",
"region:us"
] |
2023-09-20T19:51:43+00:00
|
{"annotations_creators": ["machine-generated"], "license": ["other"], "size_categories": ["100M<n<1B"], "pretty_name": "Drive Stats", "license_details": "https://www.backblaze.com/cloud-storage/resources/hard-drive-test-data#howYouCanUseTheData"}
|
2023-10-05T03:46:26+00:00
|
[] |
[] |
TAGS
#annotations_creators-machine-generated #size_categories-100M<n<1B #license-other #region-us
|
# Drive Stats
Drive Stats is a public data set of daily metrics on the hard drives in Backblaze’s cloud storage infrastructure that Backblaze has open-sourced since April 2013. Currently, Drive Stats comprises over 388 million records, rising by over 240,000 records per day. Drive Stats is an append-only dataset effectively logging daily statistics that once written are never updated or deleted.
This is our first Hugging Face dataset; feel free to suggest improvements by creating a new discussion on the Community!
## Drive Stats Q2 2023 Snapshot
* Drive Count: 240,940
* Drive Failures: 1,339
* Drive Days: 21.1M
* Annualized Failure Rate: 2.28%
## Overview of the Hard Drive Data
Each day in the Backblaze data center, we take a snapshot of each operational hard drive. This snapshot includes basic drive information along with the S.M.A.R.T. statistics reported by that drive. The daily snapshot of one drive is one record or row of data. All of the drive snapshots for a given day are collected into a file consisting of a row for each active hard drive. The format of this file is a "csv" (Comma Separated Values) file. Each day this file is named in the format URL, for example, URL.
The first row of the each file contains the column names, the remaining rows are the actual data. The columns are as follows:
* Date – The date of the snapshot in yyyy-mm-dd format.
* Serial Number – The manufacturer-assigned serial number of the drive.
* Model – The manufacturer-assigned model number of the drive.
* Capacity – The drive capacity in bytes.
* Failure – Contains a “0” if the drive is OK. Contains a “1” if this is the last day the drive was operational before failing.
* SMART Stats:
* 2013-2014: 80 columns of data, that are the Raw and Normalized values for 40 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2015-2017: 90 columns of data, that are the Raw and Normalized values for 45 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q1): 100 columns of data, that are the Raw and Normalized values for 50 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q2): 104 columns of data, that are the Raw and Normalized values for 52 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q4): 124 columns of data, that are the Raw and Normalized values for 62 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
## Helpful Hints and Caveats
### Schema Changes
The schema may change from quarter to quarter. The basic information: date, serial_number, model, capacity_bytes, and failure will not change. All of the changes will be in the number of SMART attributes reported for all of the drives in a given quarter. There will never be more than 255 pair of SMART attributes reported. When you load the CSV files for each quarter you will need to account for the potential of a different number of SMART attributes from the previous quarter.
## How You Can Use the Data
You can download and use this data for free for your own purpose, all we ask is three things:
* you cite Backblaze as the source if you use the data,
* you accept that you are solely responsible for how you use the data, and
* you do not sell this data to anyone, it is free.
|
[
"# Drive Stats\n\nDrive Stats is a public data set of daily metrics on the hard drives in Backblaze’s cloud storage infrastructure that Backblaze has open-sourced since April 2013. Currently, Drive Stats comprises over 388 million records, rising by over 240,000 records per day. Drive Stats is an append-only dataset effectively logging daily statistics that once written are never updated or deleted.\n\nThis is our first Hugging Face dataset; feel free to suggest improvements by creating a new discussion on the Community!",
"## Drive Stats Q2 2023 Snapshot\n\n* Drive Count: 240,940\n* Drive Failures: 1,339\n* Drive Days: 21.1M\n* Annualized Failure Rate: 2.28%",
"## Overview of the Hard Drive Data\n\nEach day in the Backblaze data center, we take a snapshot of each operational hard drive. This snapshot includes basic drive information along with the S.M.A.R.T. statistics reported by that drive. The daily snapshot of one drive is one record or row of data. All of the drive snapshots for a given day are collected into a file consisting of a row for each active hard drive. The format of this file is a \"csv\" (Comma Separated Values) file. Each day this file is named in the format URL, for example, URL.\n\nThe first row of the each file contains the column names, the remaining rows are the actual data. The columns are as follows:\n\n* Date – The date of the snapshot in yyyy-mm-dd format.\n* Serial Number – The manufacturer-assigned serial number of the drive.\n* Model – The manufacturer-assigned model number of the drive.\n* Capacity – The drive capacity in bytes.\n* Failure – Contains a “0” if the drive is OK. Contains a “1” if this is the last day the drive was operational before failing.\n* SMART Stats:\n * 2013-2014: 80 columns of data, that are the Raw and Normalized values for 40 different SMART stats as reported by the given drive. Each value is the number reported by the drive.\n * 2015-2017: 90 columns of data, that are the Raw and Normalized values for 45 different SMART stats as reported by the given drive. Each value is the number reported by the drive.\n * 2018 (Q1): 100 columns of data, that are the Raw and Normalized values for 50 different SMART stats as reported by the given drive. Each value is the number reported by the drive.\n * 2018 (Q2): 104 columns of data, that are the Raw and Normalized values for 52 different SMART stats as reported by the given drive. Each value is the number reported by the drive.\n * 2018 (Q4): 124 columns of data, that are the Raw and Normalized values for 62 different SMART stats as reported by the given drive. Each value is the number reported by the drive.",
"## Helpful Hints and Caveats",
"### Schema Changes\n\nThe schema may change from quarter to quarter. The basic information: date, serial_number, model, capacity_bytes, and failure will not change. All of the changes will be in the number of SMART attributes reported for all of the drives in a given quarter. There will never be more than 255 pair of SMART attributes reported. When you load the CSV files for each quarter you will need to account for the potential of a different number of SMART attributes from the previous quarter.",
"## How You Can Use the Data\n\nYou can download and use this data for free for your own purpose, all we ask is three things:\n\n* you cite Backblaze as the source if you use the data,\n* you accept that you are solely responsible for how you use the data, and\n* you do not sell this data to anyone, it is free."
] |
[
"TAGS\n#annotations_creators-machine-generated #size_categories-100M<n<1B #license-other #region-us \n",
"# Drive Stats\n\nDrive Stats is a public data set of daily metrics on the hard drives in Backblaze’s cloud storage infrastructure that Backblaze has open-sourced since April 2013. Currently, Drive Stats comprises over 388 million records, rising by over 240,000 records per day. Drive Stats is an append-only dataset effectively logging daily statistics that once written are never updated or deleted.\n\nThis is our first Hugging Face dataset; feel free to suggest improvements by creating a new discussion on the Community!",
"## Drive Stats Q2 2023 Snapshot\n\n* Drive Count: 240,940\n* Drive Failures: 1,339\n* Drive Days: 21.1M\n* Annualized Failure Rate: 2.28%",
"## Overview of the Hard Drive Data\n\nEach day in the Backblaze data center, we take a snapshot of each operational hard drive. This snapshot includes basic drive information along with the S.M.A.R.T. statistics reported by that drive. The daily snapshot of one drive is one record or row of data. All of the drive snapshots for a given day are collected into a file consisting of a row for each active hard drive. The format of this file is a \"csv\" (Comma Separated Values) file. Each day this file is named in the format URL, for example, URL.\n\nThe first row of the each file contains the column names, the remaining rows are the actual data. The columns are as follows:\n\n* Date – The date of the snapshot in yyyy-mm-dd format.\n* Serial Number – The manufacturer-assigned serial number of the drive.\n* Model – The manufacturer-assigned model number of the drive.\n* Capacity – The drive capacity in bytes.\n* Failure – Contains a “0” if the drive is OK. Contains a “1” if this is the last day the drive was operational before failing.\n* SMART Stats:\n * 2013-2014: 80 columns of data, that are the Raw and Normalized values for 40 different SMART stats as reported by the given drive. Each value is the number reported by the drive.\n * 2015-2017: 90 columns of data, that are the Raw and Normalized values for 45 different SMART stats as reported by the given drive. Each value is the number reported by the drive.\n * 2018 (Q1): 100 columns of data, that are the Raw and Normalized values for 50 different SMART stats as reported by the given drive. Each value is the number reported by the drive.\n * 2018 (Q2): 104 columns of data, that are the Raw and Normalized values for 52 different SMART stats as reported by the given drive. Each value is the number reported by the drive.\n * 2018 (Q4): 124 columns of data, that are the Raw and Normalized values for 62 different SMART stats as reported by the given drive. Each value is the number reported by the drive.",
"## Helpful Hints and Caveats",
"### Schema Changes\n\nThe schema may change from quarter to quarter. The basic information: date, serial_number, model, capacity_bytes, and failure will not change. All of the changes will be in the number of SMART attributes reported for all of the drives in a given quarter. There will never be more than 255 pair of SMART attributes reported. When you load the CSV files for each quarter you will need to account for the potential of a different number of SMART attributes from the previous quarter.",
"## How You Can Use the Data\n\nYou can download and use this data for free for your own purpose, all we ask is three things:\n\n* you cite Backblaze as the source if you use the data,\n* you accept that you are solely responsible for how you use the data, and\n* you do not sell this data to anyone, it is free."
] |
[
36,
119,
42,
493,
9,
109,
74
] |
[
"passage: TAGS\n#annotations_creators-machine-generated #size_categories-100M<n<1B #license-other #region-us \n# Drive Stats\n\nDrive Stats is a public data set of daily metrics on the hard drives in Backblaze’s cloud storage infrastructure that Backblaze has open-sourced since April 2013. Currently, Drive Stats comprises over 388 million records, rising by over 240,000 records per day. Drive Stats is an append-only dataset effectively logging daily statistics that once written are never updated or deleted.\n\nThis is our first Hugging Face dataset; feel free to suggest improvements by creating a new discussion on the Community!## Drive Stats Q2 2023 Snapshot\n\n* Drive Count: 240,940\n* Drive Failures: 1,339\n* Drive Days: 21.1M\n* Annualized Failure Rate: 2.28%"
] |
b0e7e942ca27603c970ad058d78b2956d81c0d12
|
# Dataset Card for "df2d5286"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/df2d5286
|
[
"region:us"
] |
2023-09-20T20:15:37+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 215, "num_examples": 10}], "download_size": 1374, "dataset_size": 215}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T20:15:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "df2d5286"
More Information needed
|
[
"# Dataset Card for \"df2d5286\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"df2d5286\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"df2d5286\"\n\nMore Information needed"
] |
4b957d3e625f52367e9f74506c907691663944b3
|
# Dataset Card for "9cc99eaf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/9cc99eaf
|
[
"region:us"
] |
2023-09-20T20:15:41+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 215, "num_examples": 10}], "download_size": 1374, "dataset_size": 215}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T20:15:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "9cc99eaf"
More Information needed
|
[
"# Dataset Card for \"9cc99eaf\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"9cc99eaf\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"9cc99eaf\"\n\nMore Information needed"
] |
c7a1e01d2329c1b02cf2ddcd2812be75ce4a72cf
|
license: other
---
|
EdgarsKatze/test
|
[
"region:us"
] |
2023-09-20T20:23:21+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "test/data-00000-of-00001.arrow"}]}]}
|
2023-09-20T21:05:49+00:00
|
[] |
[] |
TAGS
#region-us
|
license: other
---
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
94a545ab7fe9c60690210094788fd3c8fbd9fe47
|
# Dataset Card for "Lee_Souder_Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MaxReynolds/Lee_Souder_Dataset
|
[
"region:us"
] |
2023-09-20T20:46:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 804279.0, "num_examples": 9}], "download_size": 805499, "dataset_size": 804279.0}}
|
2023-09-20T20:46:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Lee_Souder_Dataset"
More Information needed
|
[
"# Dataset Card for \"Lee_Souder_Dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Lee_Souder_Dataset\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Lee_Souder_Dataset\"\n\nMore Information needed"
] |
e8b2d558374578df6fbc232e8e709c57079950be
|
# Dataset Card for "6845e847"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/6845e847
|
[
"region:us"
] |
2023-09-20T21:33:17+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 211, "num_examples": 10}], "download_size": 1393, "dataset_size": 211}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T21:33:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "6845e847"
More Information needed
|
[
"# Dataset Card for \"6845e847\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"6845e847\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"6845e847\"\n\nMore Information needed"
] |
09a5534d63d5c1556ec21282f357f419704ca4ef
|
# Dataset Card for "allsides"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liyucheng/allsides
|
[
"region:us"
] |
2023-09-20T22:19:00+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "camp", "dtype": "string"}, {"name": "full_stories", "dtype": "string"}, {"name": "articles", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4499065, "num_examples": 987}], "download_size": 2363071, "dataset_size": 4499065}}
|
2023-09-21T21:01:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "allsides"
More Information needed
|
[
"# Dataset Card for \"allsides\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"allsides\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"allsides\"\n\nMore Information needed"
] |
691aef3a2fa335b3f2bebbd0d1d7646833ab94b8
|
# Dataset Card for "logits-english-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
amitness/logits-english-128
|
[
"region:us"
] |
2023-09-20T22:24:26+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 157577415476, "num_examples": 34783711}], "download_size": 61008671571, "dataset_size": 157577415476}}
|
2023-09-21T09:28:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "logits-english-128"
More Information needed
|
[
"# Dataset Card for \"logits-english-128\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"logits-english-128\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"logits-english-128\"\n\nMore Information needed"
] |
f29723f82adaad31600532fe1172bc75d67943a4
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
orderofmagnitude/g123
|
[
"region:us"
] |
2023-09-20T22:26:22+00:00
|
{}
|
2023-09-21T15:57:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
915b163461fb52d97540dd41704ec882a041c143
|
# Dataset Card for "COMPAS.csv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AiresPucrs/COMPAS
|
[
"region:us"
] |
2023-09-20T22:38:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "first", "dtype": "string"}, {"name": "last", "dtype": "string"}, {"name": "sex", "dtype": "string"}, {"name": "dob", "dtype": "string"}, {"name": "age", "dtype": "int64"}, {"name": "age_cat", "dtype": "string"}, {"name": "race", "dtype": "string"}, {"name": "juv_fel_count", "dtype": "int64"}, {"name": "decile_score", "dtype": "int64"}, {"name": "juv_misd_count", "dtype": "int64"}, {"name": "juv_other_count", "dtype": "int64"}, {"name": "priors_count", "dtype": "int64"}, {"name": "days_b_screening_arrest", "dtype": "float64"}, {"name": "c_jail_in", "dtype": "string"}, {"name": "c_jail_out", "dtype": "string"}, {"name": "c_days_from_compas", "dtype": "float64"}, {"name": "c_charge_degree", "dtype": "string"}, {"name": "c_charge_desc", "dtype": "string"}, {"name": "is_recid", "dtype": "int64"}, {"name": "r_charge_degree", "dtype": "string"}, {"name": "r_days_from_arrest", "dtype": "float64"}, {"name": "r_offense_date", "dtype": "string"}, {"name": "r_charge_desc", "dtype": "string"}, {"name": "r_jail_in", "dtype": "string"}, {"name": "violent_recid", "dtype": "float64"}, {"name": "is_violent_recid", "dtype": "int64"}, {"name": "vr_charge_degree", "dtype": "string"}, {"name": "vr_offense_date", "dtype": "string"}, {"name": "vr_charge_desc", "dtype": "string"}, {"name": "type_of_assessment", "dtype": "string"}, {"name": "decile_score.1", "dtype": "int64"}, {"name": "score_text", "dtype": "string"}, {"name": "screening_date", "dtype": "string"}, {"name": "v_type_of_assessment", "dtype": "string"}, {"name": "v_decile_score", "dtype": "int64"}, {"name": "v_score_text", "dtype": "string"}, {"name": "priors_count.1", "dtype": "int64"}, {"name": "event", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7742099, "num_examples": 18316}], "download_size": 1350808, "dataset_size": 7742099}}
|
2023-09-20T22:38:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "URL"
More Information needed
|
[
"# Dataset Card for \"URL\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"URL\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"URL\"\n\nMore Information needed"
] |
705cfc9f24ce645ba801a9e7d491555f69f88f37
|
# Dataset Card for "autotree_automl_Higgs_gosdt_l512_d3_sd1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_Higgs_gosdt_l512_d3_sd1
|
[
"region:us"
] |
2023-09-20T23:00:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 12501600000, "num_examples": 100000}, {"name": "validation", "num_bytes": 1250160000, "num_examples": 10000}], "download_size": 9801806108, "dataset_size": 13751760000}}
|
2023-09-20T23:08:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_Higgs_gosdt_l512_d3_sd1"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3_sd1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3_sd1\"\n\nMore Information needed"
] |
[
6,
33
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3_sd1\"\n\nMore Information needed"
] |
302c011ffdb8dc00279d92ced11dd704394963b1
|
# Dataset Card for "cpt_v1_flan-niv2-notrans-sample_coig-pc-core"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yizhilll/cpt_v1_flan-niv2-notrans-sample_coig-pc-core
|
[
"region:us"
] |
2023-09-20T23:07:47+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2368807275, "num_examples": 1472505}], "download_size": 1236980127, "dataset_size": 2368807275}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T23:10:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cpt_v1_flan-niv2-notrans-sample_coig-pc-core"
More Information needed
|
[
"# Dataset Card for \"cpt_v1_flan-niv2-notrans-sample_coig-pc-core\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cpt_v1_flan-niv2-notrans-sample_coig-pc-core\"\n\nMore Information needed"
] |
[
6,
33
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cpt_v1_flan-niv2-notrans-sample_coig-pc-core\"\n\nMore Information needed"
] |
540e58f664d6e36de7359fadc275d73780564b94
|
# Dataset Card for "OASST_Top1_2023-08-25-En_Only"
Filtered from [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
|
larryvrh/OASST_Top1_2023-08-25-En_Only
|
[
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] |
2023-09-20T23:23:11+00:00
|
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "conversation", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9601409, "num_examples": 5010}], "download_size": 5257845, "dataset_size": 9601409}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T23:24:43+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-1K<n<10K #language-English #region-us
|
# Dataset Card for "OASST_Top1_2023-08-25-En_Only"
Filtered from OpenAssistant/oasst_top1_2023-08-25.
|
[
"# Dataset Card for \"OASST_Top1_2023-08-25-En_Only\"\n\nFiltered from OpenAssistant/oasst_top1_2023-08-25."
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-1K<n<10K #language-English #region-us \n",
"# Dataset Card for \"OASST_Top1_2023-08-25-En_Only\"\n\nFiltered from OpenAssistant/oasst_top1_2023-08-25."
] |
[
43,
43
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-1K<n<10K #language-English #region-us \n# Dataset Card for \"OASST_Top1_2023-08-25-En_Only\"\n\nFiltered from OpenAssistant/oasst_top1_2023-08-25."
] |
555a5358019cba0e90f89c22053c7f7b2af05c3d
|
# Dataset Card for "dataCC.csv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AiresPucrs/data-credit-card
|
[
"region:us"
] |
2023-09-20T23:54:12+00:00
|
{"dataset_info": {"features": [{"name": "b", "dtype": "string"}, {"name": "30.83", "dtype": "string"}, {"name": "0", "dtype": "float64"}, {"name": "u", "dtype": "string"}, {"name": "g", "dtype": "string"}, {"name": "w", "dtype": "string"}, {"name": "v", "dtype": "string"}, {"name": "1.25", "dtype": "float64"}, {"name": "t", "dtype": "string"}, {"name": "t.1", "dtype": "string"}, {"name": "01", "dtype": "int64"}, {"name": "f", "dtype": "string"}, {"name": "g.1", "dtype": "string"}, {"name": "00202", "dtype": "string"}, {"name": "0.1", "dtype": "int64"}, {"name": "+", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 69072, "num_examples": 689}], "download_size": 17253, "dataset_size": 69072}}
|
2023-09-20T23:54:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "URL"
More Information needed
|
[
"# Dataset Card for \"URL\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"URL\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"URL\"\n\nMore Information needed"
] |
f81d9e417adfadc7de238510ed7e1746d99ebc01
|
# Dataset Card for "adult_income.csv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AiresPucrs/adult-census-income
|
[
"region:us"
] |
2023-09-21T00:27:33+00:00
|
{"dataset_info": {"features": [{"name": "age", "dtype": "int64"}, {"name": "workclass", "dtype": "string"}, {"name": "fnlwgt", "dtype": "int64"}, {"name": "education", "dtype": "string"}, {"name": "education.num", "dtype": "int64"}, {"name": "marital.status", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "relationship", "dtype": "string"}, {"name": "race", "dtype": "string"}, {"name": "sex", "dtype": "string"}, {"name": "capital.gain", "dtype": "int64"}, {"name": "capital.loss", "dtype": "int64"}, {"name": "hours.per.week", "dtype": "int64"}, {"name": "native.country", "dtype": "string"}, {"name": "income", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5316802, "num_examples": 32561}], "download_size": 553790, "dataset_size": 5316802}}
|
2023-09-21T00:27:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "adult_income.csv"
More Information needed
|
[
"# Dataset Card for \"adult_income.csv\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"adult_income.csv\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"adult_income.csv\"\n\nMore Information needed"
] |
f3c99b3d94e7612813412a43930fa2c1651e6c5f
|
# Dataset Card for "synthetic-data-gen"
This is the synthetically generated dataset used for preliminary research results from [arcee's](https://www.arcee.ai/) open-source [DALM](https://github.com/arcee-ai/DALM/) repo,
implementing E2E Rag fine-tuning over a generator and retriever with cross-gradient propogation.
Implementation research from E2E Rag:
* TACL paper - https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00530/114590/Improving-the-Domain-Adaptation-of-Retrieval
* Previous code - https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag-end2end-retriever/README.md
|
arcee-ai/synthetic-data-gen
|
[
"region:us"
] |
2023-09-21T00:40:41+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Title", "dtype": "string"}, {"name": "Abstract", "dtype": "string"}, {"name": "Question", "dtype": "string"}, {"name": "Answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 633145356, "num_examples": 798682}, {"name": "test", "num_bytes": 158654392, "num_examples": 200278}], "download_size": 398488431, "dataset_size": 791799748}}
|
2023-09-21T00:46:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "synthetic-data-gen"
This is the synthetically generated dataset used for preliminary research results from arcee's open-source DALM repo,
implementing E2E Rag fine-tuning over a generator and retriever with cross-gradient propogation.
Implementation research from E2E Rag:
* TACL paper - URL
* Previous code - URL
|
[
"# Dataset Card for \"synthetic-data-gen\"\n\nThis is the synthetically generated dataset used for preliminary research results from arcee's open-source DALM repo, \nimplementing E2E Rag fine-tuning over a generator and retriever with cross-gradient propogation. \n\nImplementation research from E2E Rag:\n* TACL paper - URL\n* Previous code - URL"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"synthetic-data-gen\"\n\nThis is the synthetically generated dataset used for preliminary research results from arcee's open-source DALM repo, \nimplementing E2E Rag fine-tuning over a generator and retriever with cross-gradient propogation. \n\nImplementation research from E2E Rag:\n* TACL paper - URL\n* Previous code - URL"
] |
[
6,
88
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"synthetic-data-gen\"\n\nThis is the synthetically generated dataset used for preliminary research results from arcee's open-source DALM repo, \nimplementing E2E Rag fine-tuning over a generator and retriever with cross-gradient propogation. \n\nImplementation research from E2E Rag:\n* TACL paper - URL\n* Previous code - URL"
] |
8a44108890e1741fe79d8951fac0614616b71f7a
|
# 한국어 위키 데이터셋(Ko_wiki)
* 개요
- 이 데이터셋은 한국어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.
- 이 데이터셋을 제작한 주요 취지는 한국어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.
* 데이터 구조
- text: 위키 문서의 본문을 포함하는 문자열입니다.
* 사용 방법
1. huggingface dataset과 map을 활용하는 방법
```python3
from datasets import load_dataset
ko_dataset = load_dataset("text",
"daje/ko_wiki",
split="train",
streaming=True)
ko_wiki_tokenized = ko_dataset.map(lambda x : tokenizer(x["text"],
max_length=256,
padding="max_length",
truncation=True),
remove_columns=["text"])
```
2. 파이썬 스크립트를 사용하는 방법
```
import os
from tqdm import tqdm
from transformers import AutoTokenizer
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--input_path', type=str)
parser.add_argument('--output_path', type=str)
parser.add_argument('--model_name_or_path', type=str)
parser.add_argument('--max_seq_length', type=int, default=256)
parser.add_argument('--add_sep', default=True, action='store_true')
args = parser.parse_args()
def get_num_lines(fname):
res = os.popen(f'wc -l {fname}').read()
lines = res.strip().split()[0]
return int(lines)
def main(args):
seq_length = args.max_seq_length - 3 # room for [BOS], [EOS], [UNK]
input_fs = open(args.input_path, 'r')
output_fs = open(args.output_path, 'a')
total_line = get_num_lines(args.input_path)
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
buffer = []
for doc in tqdm(input_fs, total=total_line):
tokens = tokenizer.tokenize(doc)
buffer += tokens
if args.add_sep:
buffer += [tokenizer.eos_token] # 자신이 사용하는 tokenizer에 맞추어서 eos, sep을 넣으시면 됩니다.
while len(buffer) > seq_length:
text = ' '.join(buffer[:seq_length])
output_fs.write(text)
output_fs.write('\n')
buffer = buffer[seq_length:]
input_fs.close()
output_fs.close()
if __name__ == '__main__':
main(args)
```
|
daje/ko_wiki
|
[
"region:us"
] |
2023-09-21T00:42:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 986780351, "num_examples": 311237}], "download_size": 550489937, "dataset_size": 986780351}}
|
2023-09-21T04:38:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# 한국어 위키 데이터셋(Ko_wiki)
* 개요
- 이 데이터셋은 한국어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.
- 이 데이터셋을 제작한 주요 취지는 한국어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.
* 데이터 구조
- text: 위키 문서의 본문을 포함하는 문자열입니다.
* 사용 방법
1. huggingface dataset과 map을 활용하는 방법
2. 파이썬 스크립트를 사용하는 방법
|
[
"# 한국어 위키 데이터셋(Ko_wiki)\n* 개요 \n - 이 데이터셋은 한국어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.\n - 이 데이터셋을 제작한 주요 취지는 한국어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.\n\n* 데이터 구조\n - text: 위키 문서의 본문을 포함하는 문자열입니다.\n\n* 사용 방법\n 1. huggingface dataset과 map을 활용하는 방법 \n \n\n 2. 파이썬 스크립트를 사용하는 방법"
] |
[
"TAGS\n#region-us \n",
"# 한국어 위키 데이터셋(Ko_wiki)\n* 개요 \n - 이 데이터셋은 한국어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.\n - 이 데이터셋을 제작한 주요 취지는 한국어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.\n\n* 데이터 구조\n - text: 위키 문서의 본문을 포함하는 문자열입니다.\n\n* 사용 방법\n 1. huggingface dataset과 map을 활용하는 방법 \n \n\n 2. 파이썬 스크립트를 사용하는 방법"
] |
[
6,
127
] |
[
"passage: TAGS\n#region-us \n# 한국어 위키 데이터셋(Ko_wiki)\n* 개요 \n - 이 데이터셋은 한국어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.\n - 이 데이터셋을 제작한 주요 취지는 한국어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.\n\n* 데이터 구조\n - text: 위키 문서의 본문을 포함하는 문자열입니다.\n\n* 사용 방법\n 1. huggingface dataset과 map을 활용하는 방법 \n \n\n 2. 파이썬 스크립트를 사용하는 방법"
] |
fc6b7182891d8b21a4ae187c46a72be98ccb78cc
|
# Dataset Card for "all-lucidrain-code-python-tokenized-8192-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kye/all-lucidrain-code-python-tokenized-8192-2
|
[
"region:us"
] |
2023-09-21T00:56:05+00:00
|
{"dataset_info": {"features": [{"name": "python_code", "sequence": "string"}, {"name": "repo_name", "sequence": "string"}, {"name": "file_path", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 875787, "num_examples": 16}], "download_size": 2857, "dataset_size": 875787}}
|
2023-09-21T00:56:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "all-lucidrain-code-python-tokenized-8192-2"
More Information needed
|
[
"# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-2\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-2\"\n\nMore Information needed"
] |
fb1a3d37a7ad8dfdaac55cff699975d110cd906e
|
# Dataset Card for "all-lucidrain-code-python-tokenized-8192-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kye/all-lucidrain-code-python-tokenized-8192-3
|
[
"region:us"
] |
2023-09-21T00:58:26+00:00
|
{"dataset_info": {"features": [{"name": "repo_name", "sequence": "string"}, {"name": "file_path", "sequence": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2299336, "num_examples": 21}], "download_size": 349131, "dataset_size": 2299336}}
|
2023-09-21T00:58:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "all-lucidrain-code-python-tokenized-8192-3"
More Information needed
|
[
"# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-3\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-3\"\n\nMore Information needed"
] |
85239fd3fc455f45d03687d6b9c255b742aad243
|
# Dataset Card for "med_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nezhazheng/med_data
|
[
"region:us"
] |
2023-09-21T00:59:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 84926.7, "num_examples": 900}, {"name": "validation", "num_bytes": 9436.3, "num_examples": 100}], "download_size": 55416, "dataset_size": 94363.0}}
|
2023-09-21T00:59:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "med_data"
More Information needed
|
[
"# Dataset Card for \"med_data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"med_data\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"med_data\"\n\nMore Information needed"
] |
dcc1b40121680a8e4d393b9edc91c7cc2272a6f3
|
# Dataset Card for "data_for_synthesis_wer_25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/data_for_synthesis_wer_25
|
[
"region:us"
] |
2023-09-21T01:07:36+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "sentence_annotation", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "file", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "origin_transcription", "dtype": "string"}, {"name": "sentence_norm", "dtype": "string"}, {"name": "w2v2_large_transcription", "dtype": "string"}, {"name": "wer", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2606891904.5911727, "num_examples": 5034}], "download_size": 632502401, "dataset_size": 2606891904.5911727}}
|
2023-09-21T01:08:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_for_synthesis_wer_25"
More Information needed
|
[
"# Dataset Card for \"data_for_synthesis_wer_25\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_for_synthesis_wer_25\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_for_synthesis_wer_25\"\n\nMore Information needed"
] |
63280fef3c4a95501e5ce983ca878d25be90b81f
|
# 영어 위키 데이터셋(En_wiki)
* 개요
- 이 데이터셋은 영어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.
- 이 데이터셋을 제작한 주요 취지는 영어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.
- dataset map을 사용하실때는 반드시 streaming=True를 사용하셔야 합니다. 컴퓨팅 파워가 엄청 좋지 않다면, 램이 터질 수 있습니다.
* 데이터 구조
- text: 위키 문서의 본문을 포함하는 문자열입니다.
* 사용 방법
1. huggingface dataset과 map을 활용하는 방법
```python3
from datasets import load_dataset
ko_dataset = load_dataset("text",
"daje/en_wiki",
split="train",
streaming=True)
ko_wiki_tokenized = ko_dataset.map(lambda x : tokenizer(x["text"],
max_length=256,
padding="max_length",
truncation=True),
remove_columns=["text"])
```
2. 파이썬 스크립트를 사용하는 방법
```
import os
from tqdm import tqdm
from transformers import AutoTokenizer
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--input_path', type=str)
parser.add_argument('--output_path', type=str)
parser.add_argument('--model_name_or_path', type=str)
parser.add_argument('--max_seq_length', type=int, default=256)
parser.add_argument('--add_sep', default=True, action='store_true')
args = parser.parse_args()
def get_num_lines(fname):
res = os.popen(f'wc -l {fname}').read()
lines = res.strip().split()[0]
return int(lines)
def main(args):
seq_length = args.max_seq_length - 3 # room for [BOS], [EOS], [UNK]
input_fs = open(args.input_path, 'r')
output_fs = open(args.output_path, 'a')
total_line = get_num_lines(args.input_path)
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
buffer = []
for doc in tqdm(input_fs, total=total_line):
tokens = tokenizer.tokenize(doc)
buffer += tokens
if args.add_sep:
buffer += [tokenizer.eos_token] # 자신이 사용하는 tokenizer에 맞추어서 eos, sep을 넣으시면 됩니다.
while len(buffer) > seq_length:
text = ' '.join(buffer[:seq_length])
output_fs.write(text)
output_fs.write('\n')
buffer = buffer[seq_length:]
input_fs.close()
output_fs.close()
if __name__ == '__main__':
main(args)
```
|
daje/en_wiki
|
[
"region:us"
] |
2023-09-21T01:22:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15215273427, "num_examples": 5091142}], "download_size": 8903954435, "dataset_size": 15215273427}}
|
2023-09-21T04:40:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# 영어 위키 데이터셋(En_wiki)
* 개요
- 이 데이터셋은 영어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.
- 이 데이터셋을 제작한 주요 취지는 영어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.
- dataset map을 사용하실때는 반드시 streaming=True를 사용하셔야 합니다. 컴퓨팅 파워가 엄청 좋지 않다면, 램이 터질 수 있습니다.
* 데이터 구조
- text: 위키 문서의 본문을 포함하는 문자열입니다.
* 사용 방법
1. huggingface dataset과 map을 활용하는 방법
2. 파이썬 스크립트를 사용하는 방법
|
[
"# 영어 위키 데이터셋(En_wiki)\n* 개요 \n - 이 데이터셋은 영어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.\n - 이 데이터셋을 제작한 주요 취지는 영어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.\n - dataset map을 사용하실때는 반드시 streaming=True를 사용하셔야 합니다. 컴퓨팅 파워가 엄청 좋지 않다면, 램이 터질 수 있습니다. \n\n* 데이터 구조\n - text: 위키 문서의 본문을 포함하는 문자열입니다.\n\n* 사용 방법\n 1. huggingface dataset과 map을 활용하는 방법 \n \n\n 2. 파이썬 스크립트를 사용하는 방법"
] |
[
"TAGS\n#region-us \n",
"# 영어 위키 데이터셋(En_wiki)\n* 개요 \n - 이 데이터셋은 영어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.\n - 이 데이터셋을 제작한 주요 취지는 영어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.\n - dataset map을 사용하실때는 반드시 streaming=True를 사용하셔야 합니다. 컴퓨팅 파워가 엄청 좋지 않다면, 램이 터질 수 있습니다. \n\n* 데이터 구조\n - text: 위키 문서의 본문을 포함하는 문자열입니다.\n\n* 사용 방법\n 1. huggingface dataset과 map을 활용하는 방법 \n \n\n 2. 파이썬 스크립트를 사용하는 방법"
] |
[
6,
167
] |
[
"passage: TAGS\n#region-us \n# 영어 위키 데이터셋(En_wiki)\n* 개요 \n - 이 데이터셋은 영어 위키 데이터를 기반으로 만들어졌습니다. 원본 위키 데이터를 처리하기 위해 wikiextractor.py를 사용하여 텍스트 형식으로 변환하였습니다.\n - 이 데이터셋을 제작한 주요 취지는 영어 자연어 처리 연구와 애플리케이션 개발에 사용할 수 있는 광범위한 텍스트 데이터를 제공하기 위함입니다.\n - dataset map을 사용하실때는 반드시 streaming=True를 사용하셔야 합니다. 컴퓨팅 파워가 엄청 좋지 않다면, 램이 터질 수 있습니다. \n\n* 데이터 구조\n - text: 위키 문서의 본문을 포함하는 문자열입니다.\n\n* 사용 방법\n 1. huggingface dataset과 map을 활용하는 방법 \n \n\n 2. 파이썬 스크립트를 사용하는 방법"
] |
5086b6f7f2fcaadff60169cdb4767c8f6b161b73
|
# Dataset of Rikka
This is the dataset of Rikka, containing 284 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 284 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 599 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 284 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 284 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 284 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 284 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 284 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 599 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 599 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 599 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/rikka_4ninwasorezoreusootsuku
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T01:23:06+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-21T01:29:38+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Rikka
================
This is the dataset of Rikka, containing 284 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
4d12fbc21012a8bf3b73440c291bd665a521924e
|
# Dataset Card for "all-lucidrain-code-python-tokenized-8192-4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kye/all-lucidrain-code-python-tokenized-8192-4
|
[
"region:us"
] |
2023-09-21T01:23:42+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 170959464, "num_examples": 4173}], "download_size": 39435682, "dataset_size": 170959464}}
|
2023-09-21T01:24:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "all-lucidrain-code-python-tokenized-8192-4"
More Information needed
|
[
"# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-4\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"all-lucidrain-code-python-tokenized-8192-4\"\n\nMore Information needed"
] |
f861152fae484a40ed68fd382c6dce2574c104b7
|
# Note
> some rm data from public dataset
- format
```json
{
"history": [
"query1", "answer1",
"query2", "answer2"
],
"prompt": "query",
"input": "input for query",
"output": [
"output rank1",
"output rank2",
"output rank3"
]
}
```
Thanks
- [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) :
- [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn)
- [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k)
|
ticoAg/hh_rlhf_harmless_cn_test
|
[
"region:us"
] |
2023-09-21T01:44:53+00:00
|
{}
|
2023-09-21T13:44:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Note
> some rm data from public dataset
- format
Thanks
- beyond/rlhf-reward-single-round-trans_chinese :
- dikw/hh_rlhf_cn
- liyucheng/zhihu_rlhf_3k
|
[
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
"TAGS\n#region-us \n",
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
6,
60
] |
[
"passage: TAGS\n#region-us \n# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
74ae87650cc20a27bc38802b82cd35ce5e182d43
|
# Note
> some rm data from public dataset
- format
```json
{
"history": [
["query1", "answer1"],
["query2", "answer2"]
],
"prompt": "query",
"input": "input for query",
"output": [
"output rank1",
"output rank2",
"output rank3"
]
}
```
Thanks
- [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) :
- [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn)
- [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k)
|
ticoAg/hh_rlhf_harmless_cn_train
|
[
"region:us"
] |
2023-09-21T01:45:46+00:00
|
{}
|
2023-09-21T13:47:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Note
> some rm data from public dataset
- format
Thanks
- beyond/rlhf-reward-single-round-trans_chinese :
- dikw/hh_rlhf_cn
- liyucheng/zhihu_rlhf_3k
|
[
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
"TAGS\n#region-us \n",
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
6,
60
] |
[
"passage: TAGS\n#region-us \n# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
be5bc5098cca6bf5a74e6b82d5ca16ebbcde4151
|
# Dataset of Chiyo
This is the dataset of Chiyo, containing 266 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 266 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 565 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 266 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 266 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 266 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 266 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 266 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 565 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 565 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 565 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/chiyo_4ninwasorezoreusootsuku
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T01:47:54+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-21T01:50:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Chiyo
================
This is the dataset of Chiyo, containing 266 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
58198d0956030b6666e567fbe90461949489a3ed
|
# Note
> some rm data from public dataset
- format
```json
{
"history": [
["query1", "answer1"],
["query2", "answer2"]
],
"prompt": "query",
"input": "input for query",
"output": [
"output rank1",
"output rank2",
"output rank3"
]
}
```
Thanks
- [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) :
- [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn)
- [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k)
|
ticoAg/hh_rlhf_helpful_cn_test
|
[
"region:us"
] |
2023-09-21T01:51:35+00:00
|
{}
|
2023-09-21T13:47:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Note
> some rm data from public dataset
- format
Thanks
- beyond/rlhf-reward-single-round-trans_chinese :
- dikw/hh_rlhf_cn
- liyucheng/zhihu_rlhf_3k
|
[
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
"TAGS\n#region-us \n",
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
6,
60
] |
[
"passage: TAGS\n#region-us \n# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
bc3203bda02c4432c5e1ac4db0fae08023796422
|
# Note
> some rm data from public dataset
- format
```json
{
"history": [
"query1", "answer1",
"query2", "answer2"
],
"prompt": "query",
"input": "input for query",
"output": [
"output rank1",
"output rank2",
"output rank3"
]
}
```
Thanks
- [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) :
- [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn)
- [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k)
|
ticoAg/hh_rlhf_helpful_cn_train
|
[
"region:us"
] |
2023-09-21T01:53:01+00:00
|
{}
|
2023-09-21T13:37:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Note
> some rm data from public dataset
- format
Thanks
- beyond/rlhf-reward-single-round-trans_chinese :
- dikw/hh_rlhf_cn
- liyucheng/zhihu_rlhf_3k
|
[
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
"TAGS\n#region-us \n",
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
6,
60
] |
[
"passage: TAGS\n#region-us \n# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
6c866926adb4cbe265b8194ceb0fa819b6cea31e
|
# Dataset Card for "ce65a06b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/ce65a06b
|
[
"region:us"
] |
2023-09-21T01:55:45+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 187, "num_examples": 10}], "download_size": 1357, "dataset_size": 187}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T01:55:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ce65a06b"
More Information needed
|
[
"# Dataset Card for \"ce65a06b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ce65a06b\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ce65a06b\"\n\nMore Information needed"
] |
c1876b3fa1fdfedc7f72f06bb2bf0f05e7fcdfde
|
# Dataset of Sekine
This is the dataset of Sekine, containing 289 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 289 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 631 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 289 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 289 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 289 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 289 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 289 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 631 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 631 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 631 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/sekine_4ninwasorezoreusootsuku
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T02:10:03+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-21T02:14:58+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Sekine
=================
This is the dataset of Sekine, containing 289 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
a1fd632634a3811066fb7feb8909c6710349aad9
|
# Dataset Card for "data_for_synthesis_with_entities_align_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/data_for_synthesis_with_entities_align_v3
|
[
"region:us"
] |
2023-09-21T02:16:15+00:00
|
{"dataset_info": {"config_name": "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", "features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "sentence_annotation", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "file", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "origin_transcription", "dtype": "string"}, {"name": "sentence_norm", "dtype": "string"}, {"name": "w2v2_large_transcription", "dtype": "string"}, {"name": "wer", "dtype": "int64"}, {"name": "entities_norm", "list": [{"name": "filler", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "entities_align", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2667449542.4493446, "num_examples": 5029}], "download_size": 632908060, "dataset_size": 2667449542.4493446}, "configs": [{"config_name": "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", "data_files": [{"split": "train", "path": "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/train-*"}]}]}
|
2023-09-21T03:46:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_for_synthesis_with_entities_align_v3"
More Information needed
|
[
"# Dataset Card for \"data_for_synthesis_with_entities_align_v3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_for_synthesis_with_entities_align_v3\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_for_synthesis_with_entities_align_v3\"\n\nMore Information needed"
] |
478f992b0b2d3a1e807a310521fb2e12fa6345f4
|
# Note
> some rm data from public dataset
- format
```json
{
"history": [
"query1", "answer1",
"query2", "answer2"
],
"prompt": "query",
"input": "input for query",
"output": [
"output rank1",
"output rank2",
"output rank3"
]
}
```
Thanks
- [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) :
- [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn)
- [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k)
|
ticoAg/zhihu_3k_rlhf_test
|
[
"region:us"
] |
2023-09-21T02:20:30+00:00
|
{}
|
2023-09-21T02:22:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Note
> some rm data from public dataset
- format
Thanks
- beyond/rlhf-reward-single-round-trans_chinese :
- dikw/hh_rlhf_cn
- liyucheng/zhihu_rlhf_3k
|
[
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
"TAGS\n#region-us \n",
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
6,
60
] |
[
"passage: TAGS\n#region-us \n# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
8673f72b25c3700c0b159245e35b0bbdc624e3a3
|
# Note
> some rm data from public dataset
- format
```json
{
"history": [
"query1", "answer1",
"query2", "answer2"
],
"prompt": "query",
"input": "input for query",
"output": [
"output rank1",
"output rank2",
"output rank3"
]
}
```
Thanks
- [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) :
- [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn)
- [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k)
|
ticoAg/zhihu_3k_rlhf_train
|
[
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:zh",
"license:apache-2.0",
"region:us"
] |
2023-09-21T02:21:09+00:00
|
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]}
|
2023-09-21T08:53:46+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-Chinese #license-apache-2.0 #region-us
|
# Note
> some rm data from public dataset
- format
Thanks
- beyond/rlhf-reward-single-round-trans_chinese :
- dikw/hh_rlhf_cn
- liyucheng/zhihu_rlhf_3k
|
[
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Chinese #license-apache-2.0 #region-us \n",
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
43,
60
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Chinese #license-apache-2.0 #region-us \n# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
5cd4972bcb779b650112cb93963a2b7869d4ff2c
|
# Dataset Card for "final_law_clm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nc33/final_law_clm
|
[
"region:us"
] |
2023-09-21T02:25:03+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 364049188, "num_examples": 24334}], "download_size": 106506075, "dataset_size": 364049188}}
|
2023-09-21T05:50:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "final_law_clm"
More Information needed
|
[
"# Dataset Card for \"final_law_clm\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"final_law_clm\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"final_law_clm\"\n\nMore Information needed"
] |
6764ebe83f8a13b528f9a67e5de417ed654a2efe
|
# Dataset of Tsubasa
This is the dataset of Tsubasa, containing 295 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 295 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 652 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 295 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 295 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 295 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 295 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 295 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 652 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 652 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 652 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/tsubasa_4ninwasorezoreusootsuku
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T02:35:10+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-21T02:38:49+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Tsubasa
==================
This is the dataset of Tsubasa, containing 295 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
d8e006f5dbf8c064cb0ae421f87c4a861bc47455
|
## Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
|
CJ-gyuwonpark/merge-data-v33
|
[
"region:us"
] |
2023-09-21T03:04:12+00:00
|
{}
|
2023-10-22T01:32:42+00:00
|
[] |
[] |
TAGS
#region-us
|
s
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
169946c78bd300e33bc6303def3c79dc42cfc814
|
SARFish is a [Synthetic Aperture Radar (SAR)](https://sentinel.esa.int/web/sentinel/missions/sentinel-1/instrument-payload) imagery dataset for the purpose of training, validating and testing supervised machine learning models on the tasks of ship detection, classification, and length regression. The SARFish dataset builds on the excellent work of the [xView3-SAR dataset](https://iuu.xview.us/dataset) (2021) and consists of two parts:
1. Data - Extends the xView3-SAR dataset to include [Single Look Complex (SLC)](https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-algorithms/single-look-complex) as well as [Ground Range Detected (GRD)](https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-algorithms/ground-range-detected) imagery data taken directly from the European Space Agency (ESA) Copernicus Programme [Open Access Hub Website](https://scihub.copernicus.eu/).
2. Labels - Derives labels from the xView3-SAR dataset providing maritime object location, vessel classification and vessel length information.
### Quick Links
The following are links to the Kaggle competitions for each of the tracks of the SARFish challenge along with the SARFish dataset and GitHub repo:
- Data:
- [SARFish](https://huggingface.co/datasets/ConnorLuckettDSTG/SARFish)
- [SARFishSample](https://huggingface.co/datasets/ConnorLuckettDSTG/SARFishSample)
- [Labels](https://iuu.xview.us/download-links)
- Challenge:
- [Maritime Object Detection Track](https://www.kaggle.com/competitions/sarfish-maritime-object-detection)
- [Maritime Object Classification Track](https://www.kaggle.com/competitions/sarfish-maritime-object-classification)
- [Vessel Length Regression Track](https://www.kaggle.com/competitions/sarfish-vessel-length-regression)
- [GitHub repo](https://github.com/RitwikGupta/SARFish)
- [Mailbox]([email protected])
- [DAIRNet](https://www.dairnet.com.au/events/workshop-on-complex-valued-deep-learning-and-sarfish-challenge/)
The [GitHub repo](https://github.com/RitwikGupta/SARFish) describes how to:
- Download the dataset.
- Run the SARFish_demo jupyter notebook.
- Load imagery products and groundtruth labels,
- Train and evaluate a reference/baseline model using the dataset.
### Dataset summary - What does the SARFish dataset consist of?
The following table summarises the sizes of the full size and sample SARFish dataset.
| dataset | coincident GRD, SLC products | compressed (GB) | uncompressed (GB) |
| --- | --- | --- | --- |
| SARFishSample | 1 | 4.3 | 8.2 |
| SARFish | 753 | 3293 | 6468 |
The following table summarises the partitions of the dataset:
| Partition | Coincident products | Labels Provided | Unique maritime object labels | |
| --- | --- | --- | --- | --- |
| | | | SLC | GRD |
| train | 553 | True | 63071 | 64054 |
| validation | 50 | True | 18906 | 19222 |
| public | 150 | False | 58744 | 60008 |
| | | Total | 140721 | 143284 |
### How to access the SARFish dataset
The SARFish dataset is available for download at:
- [full SARFish dataset](https://huggingface.co/datasets/ConnorLuckettDSTG/SARFish)
- [sample SARFish dataset](https://huggingface.co/datasets/ConnorLuckettDSTG/SARFishSample)
#### Full SARFish dataset
Make sure you have at least enough storage space for the uncompressed dataset.
```bash
cd /path/to/large/storage/location
```
[Create|login] to a [huggingface](https://huggingface.co) account.
Login to the huggingface command line interface.
```bash
huggingface-cli login
```
Copy the access token in settings/Access Tokens from your huggingface account. Clone the dataset
```bash
git lfs install
git clone https://huggingface.co/datasets/ConnorLuckettDSTG/SARFish
```
#### SARFish sample dataset
Substitute the final command for the full dataset with the following:
```bash
git clone https://huggingface.co/datasets/ConnorLuckettDSTG/SARFishSample
```
Follow the instructions of the github repo README to check the md5sums of the data and unzip them.
#### Labels
The SARFish dataset labels are derived from the labels supplied with the [xView-3 SAR dataset](https://iuu.xview.us/dataset). The SARFish dataset labels are available for download from the [DIU website](https://iuu.xview.us/download-links). Be sure to take into account country restrictions.
### Data
SARFish extends the xView3-SAR dataset by providing products from the [Sentinel-1 C-band SAR satellite constellation](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) operated by the European Space Agency’s (ESA) Copernicus Programme available on their [Open Access Hub Website](https://scihub.copernicus.eu/) in both real-valued GRD and complex-valued SLC product types.

The above image shows a condensed summary of the image formation pipeline of the Sentinel-1 products provided by the Sentinel-1 Mission Performance Center. Note that the SLC and GRD products both share a common ancestor.

The above image shows the relationship between the xView3-SAR and SARFish datasets.
#### Summary table
The following table compares the GRD and SLC products of the SARFish dataset [3][4]
| | | |
| --- | --- | --- |
| Platform | Sentinel-1 (A, B) | |
| Operator | European Space Agency (ESA) Sentinel-1 Mission Performance Center | |
| Sensor | CBand SAR | |
| Mode | Interferometric Wide Swath (IW) | |
| Polarisations | VV, VH | |
| Ground range coverage (km) | 251.8 | |
| Product type | SLC | GRD |
| Pixel value | Complex | Magnitude Detected |
| Data type | Complex Int16 | Unsigned Int16 |
| Azimuth pixel spacing (m) | 2.3 | 10 |
| Range pixel spacing (m) | 14.1 | 10 |
#### Ground Range Detected (GRD) Products
GRD products consist of two 'detected' imagery products in VH, VV polarisations. The imagery data is stored in GeoTiff format. Also included in the dataset are no_data masks and shoreline files which are used to evaluate 'close-to-shore' maritime object detection tasks.
#### Single Look Complex (SLC) Products



The figures above show the 'swaths' comprising a SARFish SLC product in VH polarisation with groundtruth maritime object. labels The complex data has been 'detected' [3] by projecting the complex-valued data onto the real numbers for visualisation and displayed on decibel scale where the dynamic range is between 15 and 60 dB. Note that the SLC products have non-square (x, y): 2.3 × 14.1 m pixel spacing. The native format of the data is Complex Int16.

The figure above shows the footprint of the first swath of the example SLC product in context. The footprint was plotted using Clyde D'Cruz' ["openstreetmap WKT playground"](https://clydedacruz.github.io/openstreetmap-wkt-playground/).


The above images show detail of a labelled vessel in a SLC product in both VH (above) and VV (below) polarisations. Note the differences in the speckle and side-lobing artefacts on the vessel between polarisations and the non-square pixel spacing.
### Labels
#### Location labels
The labels denote the image pixel and geographic coordinate location of the maritime object.
| field | data_type | description |
| --------- | ----------- | --------- |
| detect\_lat | float | latitude of detection in World Geodetic System (WGS) 84 coordinates |
| detect\_lon | float | longitude of detection in WGS84 coordinates |
| detect\_scene\_row | int | pixel row of scene containing detection |
| detect\_scene\_column | int | pixel column of scene containing detection |
#### Classification Labels
The labels for the maritime object classification are organised in the same hierarchical structure as the xView3-SAR challenge labels:
```bash
label_heirarchy:
└── maritime_objects
└── vessels
└── fishing_vessels
```
They are denoted by the following columns in the labels:
| field | data_type | description |
| --------- | ----------- | --------- |
| is\_vessel | bool | True if detection is a vessel, False otherwise |
| is\_fishing | bool | True if detection is a fishing vessel, False otherwise |
The maritime object categories are labelled using boolean values to the following questions:
- is the maritime object a vessel?
- is the vessel a fishing vessel?
The following table shows the combinations of hierarchical classification labels present in the SARFish dataset:
| is\_vessel | is\_fishing |
|------------:|-------------:|
| False | nan |
| True | nan |
| | False |
| | True |
| nan | nan |
#### Vessel Length Labels
The vessel lengths are denoted in the following column in the labels:
| field | data_type | description |
| --------- | ----------- | --------- |
| vessel\_length\_m | float | length of vessel in meters; only provided where available from AIS |
#### Detailed labels summary
| field | data_type | description |
| --------- | ----------- | --------- |
| partition | str: \{"train", "validation"\} | split of the dataset |
| product\_type | str: \{"GRD", "SLC"\} | product type of the data |
| scene\_id | str | unique xView3 scene ID for challenge purposes |
| detect\_id | str | unique detection ID in the format: {scene\_id}\_{detect\_lat}\_{detect\_lon} |
| \{product\_type\}\_product\_identifier | str | The Copernicus Sentinel-1 product identifier for the designated product type |
| detect\_lat | float | latitude of detection in World Geodetic System (WGS) 84 coordinates |
| detect\_lon | float | longitude of detection in WGS84 coordinates |
| detect\_scene\_row | int | pixel row of scene containing detection |
| detect\_scene\_column | int | pixel column of scene containing detection |
| top | float | pixel row of the top left corner of the bounding box, where available |
| left | float | pixel column of the top left corner of the bounding box, where available |
| bottom | float | pixel row of the bottom right corner of the bounding box, where available |
| right | float | pixel column of the bottom right corner of the bounding box, where available |
| vessel\_length\_m | float | length of vessel in meters; only provided where available from AIS |
| source | str: \{AIS, AIS/Manual, Manual\} | source of detection (AIS, manual label, or both) |
| is\_vessel | bool | True if detection is a vessel, False otherwise |
| is\_fishing | bool | True if detection is a fishing vessel, False otherwise |
| global\_shoreline\_vector\_distance\_from\_shore\_km | float | distance from shore of detection in kilometers as determined using the global shoreline vectors projected into the pixel space of the SARFish products |
| xView3\_shoreline\_vector\_distance\_from\_shore\_km | float | distance from shore of detection in kilometers as determined using the xView3-SAR shoreline vectors projected into the pixel space of the SARFish products |
| confidence | str: \{HIGH, MEDIUM, LOW\} | level of confidence for is\_vessel and is\_fishing labels |
### Source
The Sentinel-1 GRD and SLC products were downloaded the University of Alaska's Alaska Satellite Facillity (ASF) which operates NASA's Distributed Active Archive Center (DAAC).
- [website](https://asf.alaska.edu/)
- [registration](https://urs.earthdata.nasa.gov/users/new)
- [download](https://datapool.asf.alaska.edu/)
- API docs
- [basics](https://docs.asf.alaska.edu/api/basics/)
- [keywords](https://docs.asf.alaska.edu/api/keywords/)
- [tools](https://docs.asf.alaska.edu/api/tools/)
[1]. Tri-Tan Cao, Connor Luckett, Jerome Williams, Tristrom Cooke, Ben Yip, Arvind Rajagopalan, and Sebastien Wong. Sarfish: Space-based maritime surveillance using complex synthetic aperture radar imagery. In 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1–8. IEEE, 2022.
[2] xview3-sar: Detecting dark fishing activity using synthetic aperture radar imagery. arXiv:2206.00897v4 [cs.CV], Nov 2022.
[3] M. Bourbigot, H. Johnsen, R. Piantanida, and G. Hajduch, Sentinel-1 Product Definition. Sentinel-1 Mission Performance Centre, 2016. [Online]. Available: https://sentinel.esa.int/web/sentinel/user-guides/sentinel-1-sar/document-library/-/asset_publisher/1dO7RF5fJMbd/content/sentinel-1-product-definition
[4] S. N. R. Chandra, J. Christopherson, and K. A. Casey, 2020 Joint Agency Commercial Imagery Evaluation—Remote sensing satellite compendium. US Geological Survey, 2020.
|
ConnorLuckettDSTG/SARFishSample
|
[
"task_categories:object-detection",
"task_categories:image-classification",
"size_categories:n<1K",
"license:apache-2.0",
"SARFish",
"Illegal Fishing",
"Computer Vision",
"Complex-Valued",
"Synthetic Aperture Radar",
"region:us"
] |
2023-09-21T03:40:13+00:00
|
{"license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["object-detection", "image-classification"], "pretty_name": "SARFish Sample Dataset", "tags": ["SARFish", "Illegal Fishing", "Computer Vision", "Complex-Valued", "Synthetic Aperture Radar"]}
|
2024-01-07T01:13:47+00:00
|
[] |
[] |
TAGS
#task_categories-object-detection #task_categories-image-classification #size_categories-n<1K #license-apache-2.0 #SARFish #Illegal Fishing #Computer Vision #Complex-Valued #Synthetic Aperture Radar #region-us
|
SARFish is a Synthetic Aperture Radar (SAR) imagery dataset for the purpose of training, validating and testing supervised machine learning models on the tasks of ship detection, classification, and length regression. The SARFish dataset builds on the excellent work of the xView3-SAR dataset (2021) and consists of two parts:
1. Data - Extends the xView3-SAR dataset to include Single Look Complex (SLC) as well as Ground Range Detected (GRD) imagery data taken directly from the European Space Agency (ESA) Copernicus Programme Open Access Hub Website.
2. Labels - Derives labels from the xView3-SAR dataset providing maritime object location, vessel classification and vessel length information.
### Quick Links
The following are links to the Kaggle competitions for each of the tracks of the SARFish challenge along with the SARFish dataset and GitHub repo:
* Data:
+ SARFish
+ SARFishSample
* Labels
* Challenge:
+ Maritime Object Detection Track
+ Maritime Object Classification Track
+ Vessel Length Regression Track
* GitHub repo
* Mailbox
* DAIRNet
The GitHub repo describes how to:
* Download the dataset.
* Run the SARFish\_demo jupyter notebook.
* Load imagery products and groundtruth labels,
* Train and evaluate a reference/baseline model using the dataset.
### Dataset summary - What does the SARFish dataset consist of?
The following table summarises the sizes of the full size and sample SARFish dataset.
The following table summarises the partitions of the dataset:
### How to access the SARFish dataset
The SARFish dataset is available for download at:
* full SARFish dataset
* sample SARFish dataset
#### Full SARFish dataset
Make sure you have at least enough storage space for the uncompressed dataset.
[Create|login] to a huggingface account.
Login to the huggingface command line interface.
Copy the access token in settings/Access Tokens from your huggingface account. Clone the dataset
#### SARFish sample dataset
Substitute the final command for the full dataset with the following:
Follow the instructions of the github repo README to check the md5sums of the data and unzip them.
#### Labels
The SARFish dataset labels are derived from the labels supplied with the xView-3 SAR dataset. The SARFish dataset labels are available for download from the DIU website. Be sure to take into account country restrictions.
### Data
SARFish extends the xView3-SAR dataset by providing products from the Sentinel-1 C-band SAR satellite constellation operated by the European Space Agency’s (ESA) Copernicus Programme available on their Open Access Hub Website in both real-valued GRD and complex-valued SLC product types.
 Products
GRD products consist of two 'detected' imagery products in VH, VV polarisations. The imagery data is stored in GeoTiff format. Also included in the dataset are no\_data masks and shoreline files which are used to evaluate 'close-to-shore' maritime object detection tasks.
#### Single Look Complex (SLC) Products
!SARFish Single Look Complex (SLC) example swath 1
!SARFish Single Look Complex (SLC) example swath 2
!SARFish Single Look Complex (SLC) example swath 3
The figures above show the 'swaths' comprising a SARFish SLC product in VH polarisation with groundtruth maritime object. labels The complex data has been 'detected' [3] by projecting the complex-valued data onto the real numbers for visualisation and displayed on decibel scale where the dynamic range is between 15 and 60 dB. Note that the SLC products have non-square (x, y): 2.3 × 14.1 m pixel spacing. The native format of the data is Complex Int16.
!SARFish SLC footprint
The figure above shows the footprint of the first swath of the example SLC product in context. The footprint was plotted using Clyde D'Cruz' "openstreetmap WKT playground".
!SARFish SLC VH polarisation ship example
!SARFish SLC VV polarisation ship example
The above images show detail of a labelled vessel in a SLC product in both VH (above) and VV (below) polarisations. Note the differences in the speckle and side-lobing artefacts on the vessel between polarisations and the non-square pixel spacing.
### Labels
#### Location labels
The labels denote the image pixel and geographic coordinate location of the maritime object.
field: detect\_lat, data\_type: float, description: latitude of detection in World Geodetic System (WGS) 84 coordinates
field: detect\_lon, data\_type: float, description: longitude of detection in WGS84 coordinates
field: detect\_scene\_row, data\_type: int, description: pixel row of scene containing detection
field: detect\_scene\_column, data\_type: int, description: pixel column of scene containing detection
#### Classification Labels
The labels for the maritime object classification are organised in the same hierarchical structure as the xView3-SAR challenge labels:
They are denoted by the following columns in the labels:
field: is\_vessel, data\_type: bool, description: True if detection is a vessel, False otherwise
field: is\_fishing, data\_type: bool, description: True if detection is a fishing vessel, False otherwise
The maritime object categories are labelled using boolean values to the following questions:
* is the maritime object a vessel?
* is the vessel a fishing vessel?
The following table shows the combinations of hierarchical classification labels present in the SARFish dataset:
#### Vessel Length Labels
The vessel lengths are denoted in the following column in the labels:
field: vessel\_length\_m, data\_type: float, description: length of vessel in meters; only provided where available from AIS
#### Detailed labels summary
field: partition, data\_type: str: {"train", "validation"}, description: split of the dataset
field: product\_type, data\_type: str: {"GRD", "SLC"}, description: product type of the data
field: scene\_id, data\_type: str, description: unique xView3 scene ID for challenge purposes
field: detect\_id, data\_type: str, description: unique detection ID in the format: {scene\_id}\_{detect\_lat}\_{detect\_lon}
field: {product\_type}\_product\_identifier, data\_type: str, description: The Copernicus Sentinel-1 product identifier for the designated product type
field: detect\_lat, data\_type: float, description: latitude of detection in World Geodetic System (WGS) 84 coordinates
field: detect\_lon, data\_type: float, description: longitude of detection in WGS84 coordinates
field: detect\_scene\_row, data\_type: int, description: pixel row of scene containing detection
field: detect\_scene\_column, data\_type: int, description: pixel column of scene containing detection
field: top, data\_type: float, description: pixel row of the top left corner of the bounding box, where available
field: left, data\_type: float, description: pixel column of the top left corner of the bounding box, where available
field: bottom, data\_type: float, description: pixel row of the bottom right corner of the bounding box, where available
field: right, data\_type: float, description: pixel column of the bottom right corner of the bounding box, where available
field: vessel\_length\_m, data\_type: float, description: length of vessel in meters; only provided where available from AIS
field: source, data\_type: str: {AIS, AIS/Manual, Manual}, description: source of detection (AIS, manual label, or both)
field: is\_vessel, data\_type: bool, description: True if detection is a vessel, False otherwise
field: is\_fishing, data\_type: bool, description: True if detection is a fishing vessel, False otherwise
field: global\_shoreline\_vector\_distance\_from\_shore\_km, data\_type: float, description: distance from shore of detection in kilometers as determined using the global shoreline vectors projected into the pixel space of the SARFish products
field: xView3\_shoreline\_vector\_distance\_from\_shore\_km, data\_type: float, description: distance from shore of detection in kilometers as determined using the xView3-SAR shoreline vectors projected into the pixel space of the SARFish products
field: confidence, data\_type: str: {HIGH, MEDIUM, LOW}, description: level of confidence for is\_vessel and is\_fishing labels
### Source
The Sentinel-1 GRD and SLC products were downloaded the University of Alaska's Alaska Satellite Facillity (ASF) which operates NASA's Distributed Active Archive Center (DAAC).
* website
* registration
* download
* API docs
+ basics
+ keywords
+ tools
[1]. Tri-Tan Cao, Connor Luckett, Jerome Williams, Tristrom Cooke, Ben Yip, Arvind Rajagopalan, and Sebastien Wong. Sarfish: Space-based maritime surveillance using complex synthetic aperture radar imagery. In 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1–8. IEEE, 2022.
[2] xview3-sar: Detecting dark fishing activity using synthetic aperture radar imagery. arXiv:2206.00897v4 [cs.CV], Nov 2022.
[3] M. Bourbigot, H. Johnsen, R. Piantanida, and G. Hajduch, Sentinel-1 Product Definition. Sentinel-1 Mission Performance Centre, 2016. [Online]. Available: URL
[4] S. N. R. Chandra, J. Christopherson, and K. A. Casey, 2020 Joint Agency Commercial Imagery Evaluation—Remote sensing satellite compendium. US Geological Survey, 2020.
|
[
"### Quick Links\n\n\nThe following are links to the Kaggle competitions for each of the tracks of the SARFish challenge along with the SARFish dataset and GitHub repo:\n\n\n* Data:\n\t+ SARFish\n\t+ SARFishSample\n* Labels\n* Challenge:\n\t+ Maritime Object Detection Track\n\t+ Maritime Object Classification Track\n\t+ Vessel Length Regression Track\n* GitHub repo\n* Mailbox\n* DAIRNet\n\n\nThe GitHub repo describes how to:\n\n\n* Download the dataset.\n* Run the SARFish\\_demo jupyter notebook.\n* Load imagery products and groundtruth labels,\n* Train and evaluate a reference/baseline model using the dataset.",
"### Dataset summary - What does the SARFish dataset consist of?\n\n\nThe following table summarises the sizes of the full size and sample SARFish dataset.\n\n\n\nThe following table summarises the partitions of the dataset:",
"### How to access the SARFish dataset\n\n\nThe SARFish dataset is available for download at:\n\n\n* full SARFish dataset\n* sample SARFish dataset",
"#### Full SARFish dataset\n\n\nMake sure you have at least enough storage space for the uncompressed dataset.\n\n\n[Create|login] to a huggingface account.\n\n\nLogin to the huggingface command line interface.\n\n\nCopy the access token in settings/Access Tokens from your huggingface account. Clone the dataset",
"#### SARFish sample dataset\n\n\nSubstitute the final command for the full dataset with the following:\n\n\nFollow the instructions of the github repo README to check the md5sums of the data and unzip them.",
"#### Labels\n\n\nThe SARFish dataset labels are derived from the labels supplied with the xView-3 SAR dataset. The SARFish dataset labels are available for download from the DIU website. Be sure to take into account country restrictions.",
"### Data\n\n\nSARFish extends the xView3-SAR dataset by providing products from the Sentinel-1 C-band SAR satellite constellation operated by the European Space Agency’s (ESA) Copernicus Programme available on their Open Access Hub Website in both real-valued GRD and complex-valued SLC product types.\n\n\n Products\n\n\nGRD products consist of two 'detected' imagery products in VH, VV polarisations. The imagery data is stored in GeoTiff format. Also included in the dataset are no\\_data masks and shoreline files which are used to evaluate 'close-to-shore' maritime object detection tasks.",
"#### Single Look Complex (SLC) Products\n\n\n!SARFish Single Look Complex (SLC) example swath 1\n\n\n!SARFish Single Look Complex (SLC) example swath 2\n\n\n!SARFish Single Look Complex (SLC) example swath 3\n\n\nThe figures above show the 'swaths' comprising a SARFish SLC product in VH polarisation with groundtruth maritime object. labels The complex data has been 'detected' [3] by projecting the complex-valued data onto the real numbers for visualisation and displayed on decibel scale where the dynamic range is between 15 and 60 dB. Note that the SLC products have non-square (x, y): 2.3 × 14.1 m pixel spacing. The native format of the data is Complex Int16.\n\n\n!SARFish SLC footprint\n\n\nThe figure above shows the footprint of the first swath of the example SLC product in context. The footprint was plotted using Clyde D'Cruz' \"openstreetmap WKT playground\".\n\n\n!SARFish SLC VH polarisation ship example\n\n\n!SARFish SLC VV polarisation ship example\n\n\nThe above images show detail of a labelled vessel in a SLC product in both VH (above) and VV (below) polarisations. Note the differences in the speckle and side-lobing artefacts on the vessel between polarisations and the non-square pixel spacing.",
"### Labels",
"#### Location labels\n\n\nThe labels denote the image pixel and geographic coordinate location of the maritime object.\n\n\nfield: detect\\_lat, data\\_type: float, description: latitude of detection in World Geodetic System (WGS) 84 coordinates\nfield: detect\\_lon, data\\_type: float, description: longitude of detection in WGS84 coordinates\nfield: detect\\_scene\\_row, data\\_type: int, description: pixel row of scene containing detection\nfield: detect\\_scene\\_column, data\\_type: int, description: pixel column of scene containing detection",
"#### Classification Labels\n\n\nThe labels for the maritime object classification are organised in the same hierarchical structure as the xView3-SAR challenge labels:\n\n\nThey are denoted by the following columns in the labels:\n\n\nfield: is\\_vessel, data\\_type: bool, description: True if detection is a vessel, False otherwise\nfield: is\\_fishing, data\\_type: bool, description: True if detection is a fishing vessel, False otherwise\n\n\nThe maritime object categories are labelled using boolean values to the following questions:\n\n\n* is the maritime object a vessel?\n* is the vessel a fishing vessel?\n\n\nThe following table shows the combinations of hierarchical classification labels present in the SARFish dataset:",
"#### Vessel Length Labels\n\n\nThe vessel lengths are denoted in the following column in the labels:\n\n\nfield: vessel\\_length\\_m, data\\_type: float, description: length of vessel in meters; only provided where available from AIS",
"#### Detailed labels summary\n\n\nfield: partition, data\\_type: str: {\"train\", \"validation\"}, description: split of the dataset\nfield: product\\_type, data\\_type: str: {\"GRD\", \"SLC\"}, description: product type of the data\nfield: scene\\_id, data\\_type: str, description: unique xView3 scene ID for challenge purposes\nfield: detect\\_id, data\\_type: str, description: unique detection ID in the format: {scene\\_id}\\_{detect\\_lat}\\_{detect\\_lon}\nfield: {product\\_type}\\_product\\_identifier, data\\_type: str, description: The Copernicus Sentinel-1 product identifier for the designated product type\nfield: detect\\_lat, data\\_type: float, description: latitude of detection in World Geodetic System (WGS) 84 coordinates\nfield: detect\\_lon, data\\_type: float, description: longitude of detection in WGS84 coordinates\nfield: detect\\_scene\\_row, data\\_type: int, description: pixel row of scene containing detection\nfield: detect\\_scene\\_column, data\\_type: int, description: pixel column of scene containing detection\nfield: top, data\\_type: float, description: pixel row of the top left corner of the bounding box, where available\nfield: left, data\\_type: float, description: pixel column of the top left corner of the bounding box, where available\nfield: bottom, data\\_type: float, description: pixel row of the bottom right corner of the bounding box, where available\nfield: right, data\\_type: float, description: pixel column of the bottom right corner of the bounding box, where available\nfield: vessel\\_length\\_m, data\\_type: float, description: length of vessel in meters; only provided where available from AIS\nfield: source, data\\_type: str: {AIS, AIS/Manual, Manual}, description: source of detection (AIS, manual label, or both)\nfield: is\\_vessel, data\\_type: bool, description: True if detection is a vessel, False otherwise\nfield: is\\_fishing, data\\_type: bool, description: True if detection is a fishing vessel, False otherwise\nfield: global\\_shoreline\\_vector\\_distance\\_from\\_shore\\_km, data\\_type: float, description: distance from shore of detection in kilometers as determined using the global shoreline vectors projected into the pixel space of the SARFish products\nfield: xView3\\_shoreline\\_vector\\_distance\\_from\\_shore\\_km, data\\_type: float, description: distance from shore of detection in kilometers as determined using the xView3-SAR shoreline vectors projected into the pixel space of the SARFish products\nfield: confidence, data\\_type: str: {HIGH, MEDIUM, LOW}, description: level of confidence for is\\_vessel and is\\_fishing labels",
"### Source\n\n\nThe Sentinel-1 GRD and SLC products were downloaded the University of Alaska's Alaska Satellite Facillity (ASF) which operates NASA's Distributed Active Archive Center (DAAC).\n\n\n* website\n* registration\n* download\n* API docs\n\t+ basics\n\t+ keywords\n\t+ tools\n\n\n[1]. Tri-Tan Cao, Connor Luckett, Jerome Williams, Tristrom Cooke, Ben Yip, Arvind Rajagopalan, and Sebastien Wong. Sarfish: Space-based maritime surveillance using complex synthetic aperture radar imagery. In 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1–8. IEEE, 2022.\n\n\n[2] xview3-sar: Detecting dark fishing activity using synthetic aperture radar imagery. arXiv:2206.00897v4 [cs.CV], Nov 2022.\n\n\n[3] M. Bourbigot, H. Johnsen, R. Piantanida, and G. Hajduch, Sentinel-1 Product Definition. Sentinel-1 Mission Performance Centre, 2016. [Online]. Available: URL\n\n\n[4] S. N. R. Chandra, J. Christopherson, and K. A. Casey, 2020 Joint Agency Commercial Imagery Evaluation—Remote sensing satellite compendium. US Geological Survey, 2020."
] |
[
"TAGS\n#task_categories-object-detection #task_categories-image-classification #size_categories-n<1K #license-apache-2.0 #SARFish #Illegal Fishing #Computer Vision #Complex-Valued #Synthetic Aperture Radar #region-us \n",
"### Quick Links\n\n\nThe following are links to the Kaggle competitions for each of the tracks of the SARFish challenge along with the SARFish dataset and GitHub repo:\n\n\n* Data:\n\t+ SARFish\n\t+ SARFishSample\n* Labels\n* Challenge:\n\t+ Maritime Object Detection Track\n\t+ Maritime Object Classification Track\n\t+ Vessel Length Regression Track\n* GitHub repo\n* Mailbox\n* DAIRNet\n\n\nThe GitHub repo describes how to:\n\n\n* Download the dataset.\n* Run the SARFish\\_demo jupyter notebook.\n* Load imagery products and groundtruth labels,\n* Train and evaluate a reference/baseline model using the dataset.",
"### Dataset summary - What does the SARFish dataset consist of?\n\n\nThe following table summarises the sizes of the full size and sample SARFish dataset.\n\n\n\nThe following table summarises the partitions of the dataset:",
"### How to access the SARFish dataset\n\n\nThe SARFish dataset is available for download at:\n\n\n* full SARFish dataset\n* sample SARFish dataset",
"#### Full SARFish dataset\n\n\nMake sure you have at least enough storage space for the uncompressed dataset.\n\n\n[Create|login] to a huggingface account.\n\n\nLogin to the huggingface command line interface.\n\n\nCopy the access token in settings/Access Tokens from your huggingface account. Clone the dataset",
"#### SARFish sample dataset\n\n\nSubstitute the final command for the full dataset with the following:\n\n\nFollow the instructions of the github repo README to check the md5sums of the data and unzip them.",
"#### Labels\n\n\nThe SARFish dataset labels are derived from the labels supplied with the xView-3 SAR dataset. The SARFish dataset labels are available for download from the DIU website. Be sure to take into account country restrictions.",
"### Data\n\n\nSARFish extends the xView3-SAR dataset by providing products from the Sentinel-1 C-band SAR satellite constellation operated by the European Space Agency’s (ESA) Copernicus Programme available on their Open Access Hub Website in both real-valued GRD and complex-valued SLC product types.\n\n\n Products\n\n\nGRD products consist of two 'detected' imagery products in VH, VV polarisations. The imagery data is stored in GeoTiff format. Also included in the dataset are no\\_data masks and shoreline files which are used to evaluate 'close-to-shore' maritime object detection tasks.",
"#### Single Look Complex (SLC) Products\n\n\n!SARFish Single Look Complex (SLC) example swath 1\n\n\n!SARFish Single Look Complex (SLC) example swath 2\n\n\n!SARFish Single Look Complex (SLC) example swath 3\n\n\nThe figures above show the 'swaths' comprising a SARFish SLC product in VH polarisation with groundtruth maritime object. labels The complex data has been 'detected' [3] by projecting the complex-valued data onto the real numbers for visualisation and displayed on decibel scale where the dynamic range is between 15 and 60 dB. Note that the SLC products have non-square (x, y): 2.3 × 14.1 m pixel spacing. The native format of the data is Complex Int16.\n\n\n!SARFish SLC footprint\n\n\nThe figure above shows the footprint of the first swath of the example SLC product in context. The footprint was plotted using Clyde D'Cruz' \"openstreetmap WKT playground\".\n\n\n!SARFish SLC VH polarisation ship example\n\n\n!SARFish SLC VV polarisation ship example\n\n\nThe above images show detail of a labelled vessel in a SLC product in both VH (above) and VV (below) polarisations. Note the differences in the speckle and side-lobing artefacts on the vessel between polarisations and the non-square pixel spacing.",
"### Labels",
"#### Location labels\n\n\nThe labels denote the image pixel and geographic coordinate location of the maritime object.\n\n\nfield: detect\\_lat, data\\_type: float, description: latitude of detection in World Geodetic System (WGS) 84 coordinates\nfield: detect\\_lon, data\\_type: float, description: longitude of detection in WGS84 coordinates\nfield: detect\\_scene\\_row, data\\_type: int, description: pixel row of scene containing detection\nfield: detect\\_scene\\_column, data\\_type: int, description: pixel column of scene containing detection",
"#### Classification Labels\n\n\nThe labels for the maritime object classification are organised in the same hierarchical structure as the xView3-SAR challenge labels:\n\n\nThey are denoted by the following columns in the labels:\n\n\nfield: is\\_vessel, data\\_type: bool, description: True if detection is a vessel, False otherwise\nfield: is\\_fishing, data\\_type: bool, description: True if detection is a fishing vessel, False otherwise\n\n\nThe maritime object categories are labelled using boolean values to the following questions:\n\n\n* is the maritime object a vessel?\n* is the vessel a fishing vessel?\n\n\nThe following table shows the combinations of hierarchical classification labels present in the SARFish dataset:",
"#### Vessel Length Labels\n\n\nThe vessel lengths are denoted in the following column in the labels:\n\n\nfield: vessel\\_length\\_m, data\\_type: float, description: length of vessel in meters; only provided where available from AIS",
"#### Detailed labels summary\n\n\nfield: partition, data\\_type: str: {\"train\", \"validation\"}, description: split of the dataset\nfield: product\\_type, data\\_type: str: {\"GRD\", \"SLC\"}, description: product type of the data\nfield: scene\\_id, data\\_type: str, description: unique xView3 scene ID for challenge purposes\nfield: detect\\_id, data\\_type: str, description: unique detection ID in the format: {scene\\_id}\\_{detect\\_lat}\\_{detect\\_lon}\nfield: {product\\_type}\\_product\\_identifier, data\\_type: str, description: The Copernicus Sentinel-1 product identifier for the designated product type\nfield: detect\\_lat, data\\_type: float, description: latitude of detection in World Geodetic System (WGS) 84 coordinates\nfield: detect\\_lon, data\\_type: float, description: longitude of detection in WGS84 coordinates\nfield: detect\\_scene\\_row, data\\_type: int, description: pixel row of scene containing detection\nfield: detect\\_scene\\_column, data\\_type: int, description: pixel column of scene containing detection\nfield: top, data\\_type: float, description: pixel row of the top left corner of the bounding box, where available\nfield: left, data\\_type: float, description: pixel column of the top left corner of the bounding box, where available\nfield: bottom, data\\_type: float, description: pixel row of the bottom right corner of the bounding box, where available\nfield: right, data\\_type: float, description: pixel column of the bottom right corner of the bounding box, where available\nfield: vessel\\_length\\_m, data\\_type: float, description: length of vessel in meters; only provided where available from AIS\nfield: source, data\\_type: str: {AIS, AIS/Manual, Manual}, description: source of detection (AIS, manual label, or both)\nfield: is\\_vessel, data\\_type: bool, description: True if detection is a vessel, False otherwise\nfield: is\\_fishing, data\\_type: bool, description: True if detection is a fishing vessel, False otherwise\nfield: global\\_shoreline\\_vector\\_distance\\_from\\_shore\\_km, data\\_type: float, description: distance from shore of detection in kilometers as determined using the global shoreline vectors projected into the pixel space of the SARFish products\nfield: xView3\\_shoreline\\_vector\\_distance\\_from\\_shore\\_km, data\\_type: float, description: distance from shore of detection in kilometers as determined using the xView3-SAR shoreline vectors projected into the pixel space of the SARFish products\nfield: confidence, data\\_type: str: {HIGH, MEDIUM, LOW}, description: level of confidence for is\\_vessel and is\\_fishing labels",
"### Source\n\n\nThe Sentinel-1 GRD and SLC products were downloaded the University of Alaska's Alaska Satellite Facillity (ASF) which operates NASA's Distributed Active Archive Center (DAAC).\n\n\n* website\n* registration\n* download\n* API docs\n\t+ basics\n\t+ keywords\n\t+ tools\n\n\n[1]. Tri-Tan Cao, Connor Luckett, Jerome Williams, Tristrom Cooke, Ben Yip, Arvind Rajagopalan, and Sebastien Wong. Sarfish: Space-based maritime surveillance using complex synthetic aperture radar imagery. In 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1–8. IEEE, 2022.\n\n\n[2] xview3-sar: Detecting dark fishing activity using synthetic aperture radar imagery. arXiv:2206.00897v4 [cs.CV], Nov 2022.\n\n\n[3] M. Bourbigot, H. Johnsen, R. Piantanida, and G. Hajduch, Sentinel-1 Product Definition. Sentinel-1 Mission Performance Centre, 2016. [Online]. Available: URL\n\n\n[4] S. N. R. Chandra, J. Christopherson, and K. A. Casey, 2020 Joint Agency Commercial Imagery Evaluation—Remote sensing satellite compendium. US Geological Survey, 2020."
] |
[
76,
151,
52,
37,
75,
49,
57,
152,
26,
89,
326,
3,
151,
177,
64,
755,
295
] |
[
"passage: TAGS\n#task_categories-object-detection #task_categories-image-classification #size_categories-n<1K #license-apache-2.0 #SARFish #Illegal Fishing #Computer Vision #Complex-Valued #Synthetic Aperture Radar #region-us \n### Quick Links\n\n\nThe following are links to the Kaggle competitions for each of the tracks of the SARFish challenge along with the SARFish dataset and GitHub repo:\n\n\n* Data:\n\t+ SARFish\n\t+ SARFishSample\n* Labels\n* Challenge:\n\t+ Maritime Object Detection Track\n\t+ Maritime Object Classification Track\n\t+ Vessel Length Regression Track\n* GitHub repo\n* Mailbox\n* DAIRNet\n\n\nThe GitHub repo describes how to:\n\n\n* Download the dataset.\n* Run the SARFish\\_demo jupyter notebook.\n* Load imagery products and groundtruth labels,\n* Train and evaluate a reference/baseline model using the dataset.### Dataset summary - What does the SARFish dataset consist of?\n\n\nThe following table summarises the sizes of the full size and sample SARFish dataset.\n\n\n\nThe following table summarises the partitions of the dataset:### How to access the SARFish dataset\n\n\nThe SARFish dataset is available for download at:\n\n\n* full SARFish dataset\n* sample SARFish dataset#### Full SARFish dataset\n\n\nMake sure you have at least enough storage space for the uncompressed dataset.\n\n\n[Create|login] to a huggingface account.\n\n\nLogin to the huggingface command line interface.\n\n\nCopy the access token in settings/Access Tokens from your huggingface account. Clone the dataset#### SARFish sample dataset\n\n\nSubstitute the final command for the full dataset with the following:\n\n\nFollow the instructions of the github repo README to check the md5sums of the data and unzip them.#### Labels\n\n\nThe SARFish dataset labels are derived from the labels supplied with the xView-3 SAR dataset. The SARFish dataset labels are available for download from the DIU website. Be sure to take into account country restrictions.",
"passage: ### Data\n\n\nSARFish extends the xView3-SAR dataset by providing products from the Sentinel-1 C-band SAR satellite constellation operated by the European Space Agency’s (ESA) Copernicus Programme available on their Open Access Hub Website in both real-valued GRD and complex-valued SLC product types.\n\n\n Products\n\n\nGRD products consist of two 'detected' imagery products in VH, VV polarisations. The imagery data is stored in GeoTiff format. Also included in the dataset are no\\_data masks and shoreline files which are used to evaluate 'close-to-shore' maritime object detection tasks.#### Single Look Complex (SLC) Products\n\n\n!SARFish Single Look Complex (SLC) example swath 1\n\n\n!SARFish Single Look Complex (SLC) example swath 2\n\n\n!SARFish Single Look Complex (SLC) example swath 3\n\n\nThe figures above show the 'swaths' comprising a SARFish SLC product in VH polarisation with groundtruth maritime object. labels The complex data has been 'detected' [3] by projecting the complex-valued data onto the real numbers for visualisation and displayed on decibel scale where the dynamic range is between 15 and 60 dB. Note that the SLC products have non-square (x, y): 2.3 × 14.1 m pixel spacing. The native format of the data is Complex Int16.\n\n\n!SARFish SLC footprint\n\n\nThe figure above shows the footprint of the first swath of the example SLC product in context. The footprint was plotted using Clyde D'Cruz' \"openstreetmap WKT playground\".\n\n\n!SARFish SLC VH polarisation ship example\n\n\n!SARFish SLC VV polarisation ship example\n\n\nThe above images show detail of a labelled vessel in a SLC product in both VH (above) and VV (below) polarisations. Note the differences in the speckle and side-lobing artefacts on the vessel between polarisations and the non-square pixel spacing.### Labels",
"passage: #### Location labels\n\n\nThe labels denote the image pixel and geographic coordinate location of the maritime object.\n\n\nfield: detect\\_lat, data\\_type: float, description: latitude of detection in World Geodetic System (WGS) 84 coordinates\nfield: detect\\_lon, data\\_type: float, description: longitude of detection in WGS84 coordinates\nfield: detect\\_scene\\_row, data\\_type: int, description: pixel row of scene containing detection\nfield: detect\\_scene\\_column, data\\_type: int, description: pixel column of scene containing detection#### Classification Labels\n\n\nThe labels for the maritime object classification are organised in the same hierarchical structure as the xView3-SAR challenge labels:\n\n\nThey are denoted by the following columns in the labels:\n\n\nfield: is\\_vessel, data\\_type: bool, description: True if detection is a vessel, False otherwise\nfield: is\\_fishing, data\\_type: bool, description: True if detection is a fishing vessel, False otherwise\n\n\nThe maritime object categories are labelled using boolean values to the following questions:\n\n\n* is the maritime object a vessel?\n* is the vessel a fishing vessel?\n\n\nThe following table shows the combinations of hierarchical classification labels present in the SARFish dataset:#### Vessel Length Labels\n\n\nThe vessel lengths are denoted in the following column in the labels:\n\n\nfield: vessel\\_length\\_m, data\\_type: float, description: length of vessel in meters; only provided where available from AIS"
] |
5fb65c6bbc721adb9183cf8172092ae4276d7f0a
|
# AutoTrain Dataset for project: hate_speech-testing
## Dataset Description
This dataset has been automatically processed by AutoTrain for project hate_speech-testing.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Sekiranya ada pelajar bercadang menyambung pengajian di luar negara seperti di Timur Tengah, ia akan memudahkan proses selanjutnya",
"target": 1
},
{
"text": "Oleh kerana proses pengambilalihan itu belum selesai, MRT Corp tidak mempunyai hak milik ke atas Ampang Park dan tidak dapat mengawal apa yang berlaku kepada bangunan itu ketika ini",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['Negative', 'Positive'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2398 |
| valid | 600 |
|
afiqlol/autotrain-data-hate_speech-testing
|
[
"task_categories:text-classification",
"language:en",
"region:us"
] |
2023-09-21T03:47:54+00:00
|
{"language": ["en"], "task_categories": ["text-classification"]}
|
2023-09-21T04:24:41+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #language-English #region-us
|
AutoTrain Dataset for project: hate\_speech-testing
===================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project hate\_speech-testing.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
|
[
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
21,
26,
17,
23,
27
] |
[
"passage: TAGS\n#task_categories-text-classification #language-English #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
07042b190218d40e3ef3bdfc0a951cb6e5e4db39
|
This data set is compiled from the [kmfoda/booksum](https://huggingface.co/datasets/kmfoda/booksum) data set, mainly for summary extraction tasks, and adjusted and optimized for the Llama2 model alpaca data format.
## Citation
```
@article{kryscinski2021booksum,
title={BookSum: A Collection of Datasets for Long-form Narrative Summarization},
author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev},
year={2021},
eprint={2105.08209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The code is released under the BSD-3 License (see LICENSE.txt for details).
|
ZhongshengWang/alpaca-booksum
|
[
"size_categories:1K<n<10K",
"language:en",
"license:bsd-3-clause",
"arxiv:2105.08209",
"region:us"
] |
2023-09-21T04:07:30+00:00
|
{"language": ["en"], "license": "bsd-3-clause", "size_categories": ["1K<n<10K"]}
|
2023-09-21T04:12:13+00:00
|
[
"2105.08209"
] |
[
"en"
] |
TAGS
#size_categories-1K<n<10K #language-English #license-bsd-3-clause #arxiv-2105.08209 #region-us
|
This data set is compiled from the kmfoda/booksum data set, mainly for summary extraction tasks, and adjusted and optimized for the Llama2 model alpaca data format.
## License
The code is released under the BSD-3 License (see URL for details).
|
[
"## License\n\nThe code is released under the BSD-3 License (see URL for details)."
] |
[
"TAGS\n#size_categories-1K<n<10K #language-English #license-bsd-3-clause #arxiv-2105.08209 #region-us \n",
"## License\n\nThe code is released under the BSD-3 License (see URL for details)."
] |
[
41,
18
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #language-English #license-bsd-3-clause #arxiv-2105.08209 #region-us \n## License\n\nThe code is released under the BSD-3 License (see URL for details)."
] |
00912d9a87da86e24c4024d3a1dee2508540a9e3
|
# Dataset Card for "ecommerce_purchase_history"
## Dataset Description
# Dataset Summary
이 데이터셋은 특정 이커머스 회사의 추천 시스템 연구 개발을 위한 데이터셋이다. 특정 기간에 대해 약 90일 동안의 구매 히스토리로부터 생성되었다. 구매 히스토리를 텍스트로 기술하였다.
llama2 토크나이저 기준 2,048 개의 토큰 미만의 예제 쌍만을 남기도록 수정하였다.
또한, test 스플릿의 경우 user_id, positive_prod_id 기준으로 train_split에 등장하지 않는 것만을 남겼다.
# Supported Tasks and Leaderboards
# Languages
This dataset is only made of `ko`(korean).
# Dataset Structure
|
jangmin/ecommerce_purchase_history
|
[
"size_categories:10K<n<100K",
"language:ko",
"region:us"
] |
2023-09-21T04:09:07+00:00
|
{"language": ["ko"], "size_categories": ["10K<n<100K"], "dataset_info": {"features": [{"name": "user_id", "dtype": "int64"}, {"name": "day", "dtype": "string"}, {"name": "order_ts", "dtype": "string"}, {"name": "positive_prod_id", "dtype": "int64"}, {"name": "negative_prod_id", "dtype": "int64"}, {"name": "negative_prod_ids", "sequence": "int64"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 122282877.9602969, "num_examples": 58535}, {"name": "test", "num_bytes": 52690471.08509643, "num_examples": 17332}, {"name": "rigorous_test", "num_bytes": 24661037.47070749, "num_examples": 8112}], "download_size": 33220918, "dataset_size": 199634386.51610082}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "rigorous_test", "path": "data/rigorous_test-*"}]}]}
|
2023-10-14T12:35:03+00:00
|
[] |
[
"ko"
] |
TAGS
#size_categories-10K<n<100K #language-Korean #region-us
|
# Dataset Card for "ecommerce_purchase_history"
## Dataset Description
# Dataset Summary
이 데이터셋은 특정 이커머스 회사의 추천 시스템 연구 개발을 위한 데이터셋이다. 특정 기간에 대해 약 90일 동안의 구매 히스토리로부터 생성되었다. 구매 히스토리를 텍스트로 기술하였다.
llama2 토크나이저 기준 2,048 개의 토큰 미만의 예제 쌍만을 남기도록 수정하였다.
또한, test 스플릿의 경우 user_id, positive_prod_id 기준으로 train_split에 등장하지 않는 것만을 남겼다.
# Supported Tasks and Leaderboards
# Languages
This dataset is only made of 'ko'(korean).
# Dataset Structure
|
[
"# Dataset Card for \"ecommerce_purchase_history\"",
"## Dataset Description",
"# Dataset Summary\n\n이 데이터셋은 특정 이커머스 회사의 추천 시스템 연구 개발을 위한 데이터셋이다. 특정 기간에 대해 약 90일 동안의 구매 히스토리로부터 생성되었다. 구매 히스토리를 텍스트로 기술하였다.\n\nllama2 토크나이저 기준 2,048 개의 토큰 미만의 예제 쌍만을 남기도록 수정하였다.\n\n또한, test 스플릿의 경우 user_id, positive_prod_id 기준으로 train_split에 등장하지 않는 것만을 남겼다.",
"# Supported Tasks and Leaderboards",
"# Languages\n\nThis dataset is only made of 'ko'(korean).",
"# Dataset Structure"
] |
[
"TAGS\n#size_categories-10K<n<100K #language-Korean #region-us \n",
"# Dataset Card for \"ecommerce_purchase_history\"",
"## Dataset Description",
"# Dataset Summary\n\n이 데이터셋은 특정 이커머스 회사의 추천 시스템 연구 개발을 위한 데이터셋이다. 특정 기간에 대해 약 90일 동안의 구매 히스토리로부터 생성되었다. 구매 히스토리를 텍스트로 기술하였다.\n\nllama2 토크나이저 기준 2,048 개의 토큰 미만의 예제 쌍만을 남기도록 수정하였다.\n\n또한, test 스플릿의 경우 user_id, positive_prod_id 기준으로 train_split에 등장하지 않는 것만을 남겼다.",
"# Supported Tasks and Leaderboards",
"# Languages\n\nThis dataset is only made of 'ko'(korean).",
"# Dataset Structure"
] |
[
23,
16,
4,
112,
9,
17,
6
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-Korean #region-us \n# Dataset Card for \"ecommerce_purchase_history\"## Dataset Description# Dataset Summary\n\n이 데이터셋은 특정 이커머스 회사의 추천 시스템 연구 개발을 위한 데이터셋이다. 특정 기간에 대해 약 90일 동안의 구매 히스토리로부터 생성되었다. 구매 히스토리를 텍스트로 기술하였다.\n\nllama2 토크나이저 기준 2,048 개의 토큰 미만의 예제 쌍만을 남기도록 수정하였다.\n\n또한, test 스플릿의 경우 user_id, positive_prod_id 기준으로 train_split에 등장하지 않는 것만을 남겼다.# Supported Tasks and Leaderboards# Languages\n\nThis dataset is only made of 'ko'(korean).# Dataset Structure"
] |
b6d695325595ec574cb55edaa99d52fa14ee4518
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
- This dataset contains a list of affixal negations and their non-negated counterpart (e.g. unintended - intended).
- This dataset is from [van Son et al. (2016)](https://aclanthology.org/W16-5007/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
joey234/affixal_negation
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-09-21T04:28:43+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "e"}
|
2023-10-13T00:33:00+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
- This dataset contains a list of affixal negations and their non-negated counterpart (e.g. unintended - intended).
- This dataset is from van Son et al. (2016).
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n- This dataset contains a list of affixal negations and their non-negated counterpart (e.g. unintended - intended).\n- This dataset is from van Son et al. (2016).",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n- This dataset contains a list of affixal negations and their non-negated counterpart (e.g. unintended - intended).\n- This dataset is from van Son et al. (2016).",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
41,
8,
24,
52,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n- This dataset contains a list of affixal negations and their non-negated counterpart (e.g. unintended - intended).\n- This dataset is from van Son et al. (2016).### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6498ec6bfe68a65a31440290da92e4d367d92278
|
# Dataset Card for "instagram_model_ocean_grunge_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/instagram_model_ocean_grunge_prompts
|
[
"region:us"
] |
2023-09-21T04:32:44+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 65721, "num_examples": 1000}], "download_size": 1451, "dataset_size": 65721}}
|
2023-09-21T05:29:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "instagram_model_ocean_grunge_prompts"
More Information needed
|
[
"# Dataset Card for \"instagram_model_ocean_grunge_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"instagram_model_ocean_grunge_prompts\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"instagram_model_ocean_grunge_prompts\"\n\nMore Information needed"
] |
e3f7e0543afdbd38306d353dc2763e359acff0e4
|
# Dataset Card for "global_street_style_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/global_street_style_prompts
|
[
"region:us"
] |
2023-09-21T04:39:25+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190230, "num_examples": 1000}], "download_size": 24159, "dataset_size": 190230}}
|
2023-09-21T04:39:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "global_street_style_prompts"
More Information needed
|
[
"# Dataset Card for \"global_street_style_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"global_street_style_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"global_street_style_prompts\"\n\nMore Information needed"
] |
ea3286611829903612a2239a88d732eac041e4d2
|
# Dataset Card for "global_elderly_woman_portrait_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/global_elderly_woman_portrait_prompts
|
[
"region:us"
] |
2023-09-21T04:43:38+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1144919, "num_examples": 10000}], "download_size": 123085, "dataset_size": 1144919}}
|
2023-09-21T04:43:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "global_elderly_woman_portrait_prompts"
More Information needed
|
[
"# Dataset Card for \"global_elderly_woman_portrait_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"global_elderly_woman_portrait_prompts\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"global_elderly_woman_portrait_prompts\"\n\nMore Information needed"
] |
b7d4ffdeeb8d1ad1c5f72615fec13caf42535c6b
|
# Dataset Card for "data_for_synthesis_entities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/data_for_synthesis_entities
|
[
"region:us"
] |
2023-09-21T04:44:40+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "null"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "entity_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 651816414, "num_examples": 7153}], "download_size": 161959315, "dataset_size": 651816414}}
|
2023-09-21T23:26:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_for_synthesis_entities"
More Information needed
|
[
"# Dataset Card for \"data_for_synthesis_entities\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_for_synthesis_entities\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_for_synthesis_entities\"\n\nMore Information needed"
] |
a705550bce50013ede9fba598be01133bb2e6362
|
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
q-allen/github-issues
|
[
"region:us"
] |
2023-09-21T04:55:50+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "labels", "list": [{"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "assignees", "list": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "milestone", "struct": [{"name": "closed_at", "dtype": "string"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "creator", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "description", "dtype": "string"}, {"name": "due_on", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "labels_url", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "open_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "updated_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "closed_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "float64"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "total_count", "dtype": "int64"}, {"name": "url", "dtype": "string"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "float64"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "float64"}, {"name": "pull_request", "struct": [{"name": "diff_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "merged_at", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 32112447, "num_examples": 6224}], "download_size": 9190190, "dataset_size": 32112447}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T05:29:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "github-issues"
More Information needed
|
[
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
855b85d1a7b2d46aecaff13344f0db9ae3a3a078
|
1. The quick brown fox jumps over the lazy dog.
2. A red rose blooms in the garden under the warm sun.
3. Birds chirp happily in the morning, welcoming a new day.
4. The old oak tree stands tall and strong in the forest.
5. Children laugh and play in the park, full of joy.
6. The ocean waves crash against the rocky shore.
7. In a cozy cafe, people sip coffee and chat.
8. The city skyline sparkles with lights at night.
9. Books line the shelves of a dusty library.
10. Hiking through the mountains, I find peace.
11. Raindrops tap gently on the windowpane.
12. A rainbow arches across the sky after the rain.
13. The smell of freshly baked bread fills the kitchen.
14. Stars twinkle in the dark night sky.
15. Friends gather around a bonfire, sharing stories.
16. A lone wolf howls at the moon in the wilderness.
17. The sound of waves lulls me to sleep at the beach.
18. An artist paints a masterpiece on a canvas.
19. The aroma of flowers fills the air in the garden.
20. A violin's melody brings tears to my eyes.
21. The first snowfall blankets the town in white.
22. Children build sandcastles on the sandy beach.
23. The aroma of popcorn wafts from the movie theater.
24. Time stands still as I watch the sunset.
25. The river flows peacefully through the valley.
26. A gentle breeze rustles the leaves in the forest.
27. A lighthouse guides ships safely to shore.
28. The smell of fresh-cut grass fills the air.
29. Autumn leaves paint the trees in vibrant colors.
30. A cat curls up by the fireplace, purring softly.
31. The laughter of children echoes in the playground.
32. A hot air balloon soars high above the landscape.
33. The smell of rain brings memories of childhood.
34. A bee buzzes around a field of wildflowers.
35. The sound of waves crashing is music to my ears.
36. The city streets bustle with life and activity.
37. A gentle river meanders through the countryside.
38. A shooting star streaks across the night sky.
39. The scent of fresh coffee wakes me in the morning.
40. Fireworks light up the sky on the Fourth of July.
41. The moon's reflection shimmers on the water's surface.
42. A snowy owl perches silently in a tree.
43. Children release colorful balloons into the sky.
44. The smell of barbecue fills the backyard.
45. A hiker reaches the summit of a mountain.
46. The scent of pine trees fills the forest.
47. Waves crash against the rocks along the shore.
48. A warm hug from a loved one is comforting.
49. The sound of crickets chirping signals evening.
50. The rain brings life to the parched earth.
51. A kitten plays with a ball of yarn.
52. Sunflowers turn to face the sun in the field.
53. The ocean's vastness fills me with wonder.
54. A campfire crackles and pops in the wilderness.
55. A rainbow stretches across the horizon.
56. The smell of cinnamon and apples fills the kitchen.
57. A bicycle ride through the park is invigorating.
58. Fireflies light up the summer night.
59. The first sip of hot chocolate warms the soul.
60. A baby's laughter is pure and infectious.
61. The scent of pine needles evokes memories of Christmas.
62. A waterfall cascades down the rocks.
63. The city skyline glistens with skyscrapers.
64. A gentle rain falls on the rooftop.
65. A hummingbird hovers near a vibrant flower.
66. The sound of a crackling fire is soothing.
67. A warm blanket wraps around me on a cold night.
68. The smell of the ocean breeze is refreshing.
69. Snowflakes dance in the air during a winter storm.
70. The laughter of friends fills the room.
71. A meadow is covered in a carpet of wildflowers.
72. The rustling of leaves in the forest is calming.
73. A shooting star grants a silent wish.
74. The scent of fresh-baked cookies is irresistible.
75. A horse gallops freely in a green pasture.
76. The city lights twinkle like stars.
77. A soft snowfall blankets the landscape.
78. A campfire's glow warms the chilly night.
79. The aroma of roses fills the garden.
80. A gentle stream flows through the meadow.
81. The sound of a babbling brook is peaceful.
82. A rainbow appears after a summer rain shower.
83. The scent of lavender relaxes the mind.
84. Birds sing their morning songs in the trees.
85. A kite soars high in the clear blue sky.
86. The city streets are alive with activity.
87. A sunflower field stretches to the horizon.
88. The aroma of fresh-baked bread is mouthwatering.
89. A gentle breeze rustles the leaves in the park.
90. The waves of the ocean crash against the shore.
91. A crackling fire warms the cabin in the woods.
92. The smell of pine trees fills the forest air.
93. A mountain peak touches the sky.
94. The laughter of children fills the playground.
95. A rainbow arches over a tranquil lake.
96. The scent of blooming flowers fills the garden.
97. A violin's melody tugs at the heartstrings.
98. The first snowflake falls silently to the ground.
99. A cat purrs contentedly in a sunbeam.
100. The sound of waves lulls me to sleep by the sea.
|
anonymouse03052002/custom_data.txt
|
[
"region:us"
] |
2023-09-21T04:57:37+00:00
|
{}
|
2023-09-21T04:58:08+00:00
|
[] |
[] |
TAGS
#region-us
|
1. The quick brown fox jumps over the lazy dog.
2. A red rose blooms in the garden under the warm sun.
3. Birds chirp happily in the morning, welcoming a new day.
4. The old oak tree stands tall and strong in the forest.
5. Children laugh and play in the park, full of joy.
6. The ocean waves crash against the rocky shore.
7. In a cozy cafe, people sip coffee and chat.
8. The city skyline sparkles with lights at night.
9. Books line the shelves of a dusty library.
10. Hiking through the mountains, I find peace.
11. Raindrops tap gently on the windowpane.
12. A rainbow arches across the sky after the rain.
13. The smell of freshly baked bread fills the kitchen.
14. Stars twinkle in the dark night sky.
15. Friends gather around a bonfire, sharing stories.
16. A lone wolf howls at the moon in the wilderness.
17. The sound of waves lulls me to sleep at the beach.
18. An artist paints a masterpiece on a canvas.
19. The aroma of flowers fills the air in the garden.
20. A violin's melody brings tears to my eyes.
21. The first snowfall blankets the town in white.
22. Children build sandcastles on the sandy beach.
23. The aroma of popcorn wafts from the movie theater.
24. Time stands still as I watch the sunset.
25. The river flows peacefully through the valley.
26. A gentle breeze rustles the leaves in the forest.
27. A lighthouse guides ships safely to shore.
28. The smell of fresh-cut grass fills the air.
29. Autumn leaves paint the trees in vibrant colors.
30. A cat curls up by the fireplace, purring softly.
31. The laughter of children echoes in the playground.
32. A hot air balloon soars high above the landscape.
33. The smell of rain brings memories of childhood.
34. A bee buzzes around a field of wildflowers.
35. The sound of waves crashing is music to my ears.
36. The city streets bustle with life and activity.
37. A gentle river meanders through the countryside.
38. A shooting star streaks across the night sky.
39. The scent of fresh coffee wakes me in the morning.
40. Fireworks light up the sky on the Fourth of July.
41. The moon's reflection shimmers on the water's surface.
42. A snowy owl perches silently in a tree.
43. Children release colorful balloons into the sky.
44. The smell of barbecue fills the backyard.
45. A hiker reaches the summit of a mountain.
46. The scent of pine trees fills the forest.
47. Waves crash against the rocks along the shore.
48. A warm hug from a loved one is comforting.
49. The sound of crickets chirping signals evening.
50. The rain brings life to the parched earth.
51. A kitten plays with a ball of yarn.
52. Sunflowers turn to face the sun in the field.
53. The ocean's vastness fills me with wonder.
54. A campfire crackles and pops in the wilderness.
55. A rainbow stretches across the horizon.
56. The smell of cinnamon and apples fills the kitchen.
57. A bicycle ride through the park is invigorating.
58. Fireflies light up the summer night.
59. The first sip of hot chocolate warms the soul.
60. A baby's laughter is pure and infectious.
61. The scent of pine needles evokes memories of Christmas.
62. A waterfall cascades down the rocks.
63. The city skyline glistens with skyscrapers.
64. A gentle rain falls on the rooftop.
65. A hummingbird hovers near a vibrant flower.
66. The sound of a crackling fire is soothing.
67. A warm blanket wraps around me on a cold night.
68. The smell of the ocean breeze is refreshing.
69. Snowflakes dance in the air during a winter storm.
70. The laughter of friends fills the room.
71. A meadow is covered in a carpet of wildflowers.
72. The rustling of leaves in the forest is calming.
73. A shooting star grants a silent wish.
74. The scent of fresh-baked cookies is irresistible.
75. A horse gallops freely in a green pasture.
76. The city lights twinkle like stars.
77. A soft snowfall blankets the landscape.
78. A campfire's glow warms the chilly night.
79. The aroma of roses fills the garden.
80. A gentle stream flows through the meadow.
81. The sound of a babbling brook is peaceful.
82. A rainbow appears after a summer rain shower.
83. The scent of lavender relaxes the mind.
84. Birds sing their morning songs in the trees.
85. A kite soars high in the clear blue sky.
86. The city streets are alive with activity.
87. A sunflower field stretches to the horizon.
88. The aroma of fresh-baked bread is mouthwatering.
89. A gentle breeze rustles the leaves in the park.
90. The waves of the ocean crash against the shore.
91. A crackling fire warms the cabin in the woods.
92. The smell of pine trees fills the forest air.
93. A mountain peak touches the sky.
94. The laughter of children fills the playground.
95. A rainbow arches over a tranquil lake.
96. The scent of blooming flowers fills the garden.
97. A violin's melody tugs at the heartstrings.
98. The first snowflake falls silently to the ground.
99. A cat purrs contentedly in a sunbeam.
100. The sound of waves lulls me to sleep by the sea.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
e6d2a57c10ad37fc79c25559c876820a617050c5
|
# Dataset Card for "varied_portrait_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/varied_portrait_prompts
|
[
"region:us"
] |
2023-09-21T05:01:31+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1589279, "num_examples": 10000}], "download_size": 241273, "dataset_size": 1589279}}
|
2023-09-21T05:01:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "varied_portrait_prompts"
More Information needed
|
[
"# Dataset Card for \"varied_portrait_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"varied_portrait_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"varied_portrait_prompts\"\n\nMore Information needed"
] |
a2bd46adb662f695ab1acfa267148b8977b968a5
|
# Dataset Card for "micro_photography_subjects"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/micro_photography_subjects
|
[
"region:us"
] |
2023-09-21T05:05:16+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 859747, "num_examples": 10000}], "download_size": 72310, "dataset_size": 859747}}
|
2023-09-21T05:05:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "micro_photography_subjects"
More Information needed
|
[
"# Dataset Card for \"micro_photography_subjects\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"micro_photography_subjects\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"micro_photography_subjects\"\n\nMore Information needed"
] |
9fef50b3a95fbec53f3126ac39033a3f0146db2f
|
# Dataset Card for "underwater_photography_subjects"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/underwater_photography_subjects
|
[
"region:us"
] |
2023-09-21T05:11:53+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 795203, "num_examples": 10000}], "download_size": 23715, "dataset_size": 795203}}
|
2023-09-21T05:11:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "underwater_photography_subjects"
More Information needed
|
[
"# Dataset Card for \"underwater_photography_subjects\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"underwater_photography_subjects\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"underwater_photography_subjects\"\n\nMore Information needed"
] |
5b6f31372e4a6b557aba6f1703577265047d6183
|
# Introduction
**LongBench** is the first benchmark for bilingual, multitask, and comprehensive assessment of **long context understanding** capabilities of large language models. LongBench includes different languages (Chinese and English) to provide a more comprehensive evaluation of the large models' multilingual capabilities on long contexts. In addition, LongBench is composed of six major categories and twenty one different tasks, covering key long-text application scenarios such as single-document QA, multi-document QA, summarization, few-shot learning, synthetic tasks and code completion.
We are fully aware of the potentially high costs involved in the model evaluation process, especially in the context of long context scenarios (such as manual annotation costs or API call costs). Therefore, we adopt a fully automated evaluation method, aimed at measuring and evaluating the model's ability to understand long contexts at the lowest cost.
LongBench includes 14 English tasks, 5 Chinese tasks, and 2 code tasks, with the average length of most tasks ranging from 5k to 15k, and a total of 4,750 test data. For detailed statistics and construction methods of LongBench tasks, please refer [here](task.md). In addition, we provide LongBench-E, a test set with a more uniform length distribution constructed by uniform sampling, with comparable amounts of data in the 0-4k, 4k-8k, and 8k+ length intervals to provide an analysis of the model's performance variations at different input lengths.
Github Repo for LongBench: https://github.com/THUDM/LongBench
Arxiv Paper for LongBench: https://arxiv.org/pdf/2308.14508.pdf
# How to use it?
#### Loading Data
```python
from datasets import load_dataset
datasets = ["narrativeqa", "qasper", "multifieldqa_en", "multifieldqa_zh", "hotpotqa", "2wikimqa", "musique", \
"dureader", "gov_report", "qmsum", "multi_news", "vcsum", "trec", "triviaqa", "samsum", "lsht", \
"passage_count", "passage_retrieval_en", "passage_retrieval_zh", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', dataset, split='test')
```
Similarly, you can load the **LongBench-E** data
```python
from datasets import load_dataset
datasets = ["qasper", "multifieldqa_en", "hotpotqa", "2wikimqa", "gov_report", "multi_news", "trec", \
"triviaqa", "samsum", "passage_count", "passage_retrieval_en", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', f"{dataset}_e", split='test')
```
Alternatively, you can download the folder from [this link](https://huggingface.co/datasets/THUDM/LongBench/resolve/main/data.zip) to load the data.
#### Data Format
All data in **LongBench** (LongBench-E) are standardized to the following format:
```json
{
"input": "The input/command for the task, usually short, such as questions in QA, queries in Few-shot tasks, etc",
"context": "The long context required for the task, such as documents, cross-file code, few-shot examples in Few-shot tasks",
"answers": "A List of all true answers",
"length": "Total length of the first three items (counted in characters for Chinese and words for English)",
"dataset": "The name of the dataset to which this piece of data belongs",
"language": "The language of this piece of data",
"all_classes": "All categories in classification tasks, null for non-classification tasks",
"_id": "Random id for each piece of data"
}
```
#### Evaluation
This repository provides data download for LongBench. If you wish to use this dataset for automated evaluation, please refer to our [github](https://github.com/THUDM/LongBench).
# Task statistics
| Task | Task Type | Eval metric | Avg len |Language | \#Sample |
| :-------- | :-----------:| :-----------: |:-------: | :-----------: |:--------: |
| HotpotQA | Multi-doc QA | F1 |9,151 |EN |200 |
| 2WikiMultihopQA| Multi-doc QA | F1 |4,887 |EN |200 |
| MuSiQue| Multi-doc QA | F1 |11,214 |EN |200 |
| DuReader| Multi-doc QA | Rouge-L |15,768 |ZH |200 |
| MultiFieldQA-en| Single-doc QA | F1 |4,559 |EN |150 |
| MultiFieldQA-zh| Single-doc QA | F1 |6,701 |ZH |200 |
| NarrativeQA| Single-doc QA | F1 |18,409 |EN |200 |
| Qasper| Single-doc QA | F1 |3,619 |EN |200 |
| GovReport| Summarization | Rouge-L |8,734 |EN |200 |
| QMSum| Summarization | Rouge-L |10,614 |EN |200 |
| MultiNews| Summarization | Rouge-L |2,113 |EN |200 |
| VCSUM| Summarization | Rouge-L |15,380 |ZH |200 |
| TriviaQA| Few shot | F1 |8,209 |EN |200 |
| SAMSum| Few shot | Rouge-L |6,258 |EN |200 |
| TREC| Few shot | Accuracy |5,177 |EN |200 |
| LSHT| Few shot | Accuracy |22,337 |ZH |200 |
| PassageRetrieval-en| Synthetic | Accuracy |9,289 |EN |200 |
| PassageCount| Synthetic | Accuracy |11,141 |EN |200 |
| PassageRetrieval-zh | Synthetic | Accuracy |6,745 |ZH |200 |
| LCC| Code | Edit Sim |1,235 |Python/C#/Java |500 |
| RepoBench-P| Code | Edit Sim |4,206 |Python/Java |500 |
> Note: In order to avoid discrepancies caused by different tokenizers, we use the word count (using Python's split function) to calculate the average length of English datasets and code datasets, and use the character count to calculate the average length of Chinese datasets.
# Task description
| Task | Task Description |
| :---------------- | :----------------------------------------------------------- |
| HotpotQA | Answer related questions based on multiple given documents |
| 2WikiMultihopQA | Answer related questions based on multiple given documents |
| MuSiQue | Answer related questions based on multiple given documents |
| DuReader | Answer related Chinese questions based on multiple retrieved documents |
| MultiFieldQA-en | Answer English questions based on a long article, which comes from a relatively diverse field |
| MultiFieldQA-zh | Answer Chinese questions based on a long article, which comes from a relatively diverse field |
| NarrativeQA | Answer questions based on stories or scripts, including understanding of important elements such as characters, plots, themes, etc. |
| Qasper | Answer questions based on a NLP research paper, questions proposed and answered by NLP practitioners |
| GovReport | A summarization task that requires summarizing government work reports |
| MultiNews | A multi-doc summarization that requires summarizing over multiple news |
| QMSum | A summarization task that requires summarizing meeting records based on user queries |
| VCSUM | A summarization task that requires summarizing Chinese meeting records |
| SAMSum | A dialogue summarization task, providing several few-shot examples |
| TriviaQA | Single document question answering task, providing several few-shot examples |
| NQ | Single document question answering task, providing several few-shot examples |
| TREC | A classification task that requires categorizing questions, includes 50 categories in total |
| LSHT | A Chinese classification task that requires categorizing news, includes 24 categories in total |
| PassageRetrieval-en | Given 30 English Wikipedia paragraphs, determine which paragraph the given summary corresponds to |
| PassageCount | Determine the total number of different paragraphs in a given repetitive article |
| PassageRetrieval-zh | Given several Chinese paragraphs from the C4 data set, determine which paragraph the given abstract corresponds to |
| LCC | Given a long piece of code, predict the next line of code |
| RepoBench-P | Given code in multiple files within a GitHub repository (including cross-file dependencies), predict the next line of code |
# Task construction
> Note: For all tasks constructed from existing datasets, we use data from the validation or test set of the existing dataset (except for VCSUM).
- The tasks of [HotpotQA](https://hotpotqa.github.io/), [2WikiMultihopQA](https://aclanthology.org/2020.coling-main.580/), [MuSiQue](https://arxiv.org/abs/2108.00573), and [DuReader](https://github.com/baidu/DuReader) are built based on the original datasets and processed to be suitable for long context evaluation. Specifically, for questions in the validation set, we select the evidence passage that contains the answer and several distracting articles. These articles together with the original question constitute the input of the tasks.
- The tasks of MultiFiedQA-zh and MultiFieldQA-en consist of long artical data from about 10 sources, including Latex papers, judicial documents, government work reports, and PDF documents indexed by Google. For each long artical, we invite several PhD and master students to annotate, i.e., to ask questions based on the long artical and give the correct answers. To better automate evaluation, we ask the annotators to propose questions with definitive answers as much as possible.
- The tasks of [NarrativeQA](https://arxiv.org/pdf/1712.07040.pdf), [Qasper](https://arxiv.org/pdf/2105.03011.pdf), [GovReport](https://arxiv.org/pdf/2104.02112.pdf), [QMSum](https://arxiv.org/pdf/2104.05938.pdf) and [MultiNews](https://aclanthology.org/P19-1102.pdf) directly use the data provided by the original papers. In the specific construction, we use the template provided by [ZeroSCROLLS](https://www.zero.scrolls-benchmark.com/) to convert the corresponding data into pure text input.
- The [VCSUM](https://arxiv.org/abs/2305.05280) task is built based on the original dataset, and we design a corresponding template to convert the corresponding data into pure text input.
- The [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) task is constructed in the manner of [CoLT5](https://arxiv.org/abs/2303.09752), which provides several examples of question and answering based on documents, and requires the language model to answer related questions based on new documents.
- The tasks of [SAMSum](https://aclanthology.org/D19-5409.pdf), [TREC](https://aclanthology.org/C02-1150.pdf) and [LSHT](http://tcci.ccf.org.cn/conference/2014/dldoc/evatask6.pdf) are built based on the original datasets. For each question in the validation set, we sample several data from the training set to form few-shot examples. These examples together with the questions in the validation set constitute the input for this task.
- The PassageRetrieval-en task is constructed based on English Wikipedia. For each piece of data, we randomly sample 30 paragraphs from English Wikipedia and select one for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
- The PassageCount task is constructed based on the English wiki. For each piece of data, we randomly sample several passages from English Wikipedia, repeat each paragraph at random several times, and finally shuffle the paragraphs. This task requires the model to determine the total number of different paragraphs in the given context.
- The PasskeyRetrieval-zh task is constructed based on [C4](https://arxiv.org/abs/1910.10683). For each piece of data, we randomly sample several Chinese paragraphs from C4 and select one of them for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
- For the [LCC](https://arxiv.org/abs/2306.14893) task, we sample from the original code completion dataset. In the [RepoBench-P](https://arxiv.org/abs/2306.03091) task, we select the most challenging XF-F (Cross-File-First) setting from the original dataset and refer to the Oracle-Filled scenario in the paper. For each original piece of data, we randomly extract multiple cross-file code snippets, including the gold cross-file code snippet, and concatenate them as input, requiring the model to effectively use cross-file code for completion.
# LongBench-E statistics
| Task | Task Type | \#data in 0-4k | \#data in 4-8k | \#data in 8k+|
| :--------- | :-----------:| :-----------: |:---------: | :-------------: |
| HotpotQA | Multi-doc QA | 100 |100 |100 |
| 2WikiMultihopQA| Multi-doc QA | 100 |100 |100 |
| MultiFieldQA-en| Single-doc QA | 67 |70 |13 |
| Qasper| Single-doc QA | 100 |100 |24 |
| GovReport| Summarization | 100 |100 |100 |
| MultiNews| Summarization | 100 |100 |94 |
| TriviaQA| Few shot | 100 |100 |100 |
| SAMSum| Few shot | 100 |100 |100 |
| TREC| Few shot | 100 |100 |100 |
| PassageRetrieval-en| Synthetic | 100 |100 |100 |
| PassageCount| Synthetic | 100 |100 |100 |
| LCC| Code | 100 |100 |100 |
| RepoBench-P| Code | 100 |100 |100 |
# Citation
```
@misc{bai2023longbench,
title={LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding},
author={Yushi Bai and Xin Lv and Jiajie Zhang and Hongchang Lyu and Jiankai Tang and Zhidian Huang and Zhengxiao Du and Xiao Liu and Aohan Zeng and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
year={2023},
eprint={2308.14508},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
bzantium/LongBench
|
[
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:zh",
"Long Context",
"arxiv:2308.14508",
"arxiv:2108.00573",
"arxiv:1712.07040",
"arxiv:2105.03011",
"arxiv:2104.02112",
"arxiv:2104.05938",
"arxiv:2305.05280",
"arxiv:2303.09752",
"arxiv:1910.10683",
"arxiv:2306.14893",
"arxiv:2306.03091",
"region:us"
] |
2023-09-21T05:13:03+00:00
|
{"language": ["en", "zh"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "text-generation", "summarization", "conversational", "text-classification"], "tags": ["Long Context"]}
|
2023-09-25T03:03:43+00:00
|
[
"2308.14508",
"2108.00573",
"1712.07040",
"2105.03011",
"2104.02112",
"2104.05938",
"2305.05280",
"2303.09752",
"1910.10683",
"2306.14893",
"2306.03091"
] |
[
"en",
"zh"
] |
TAGS
#task_categories-question-answering #task_categories-text-generation #task_categories-summarization #task_categories-conversational #task_categories-text-classification #size_categories-1K<n<10K #language-English #language-Chinese #Long Context #arxiv-2308.14508 #arxiv-2108.00573 #arxiv-1712.07040 #arxiv-2105.03011 #arxiv-2104.02112 #arxiv-2104.05938 #arxiv-2305.05280 #arxiv-2303.09752 #arxiv-1910.10683 #arxiv-2306.14893 #arxiv-2306.03091 #region-us
|
Introduction
============
LongBench is the first benchmark for bilingual, multitask, and comprehensive assessment of long context understanding capabilities of large language models. LongBench includes different languages (Chinese and English) to provide a more comprehensive evaluation of the large models' multilingual capabilities on long contexts. In addition, LongBench is composed of six major categories and twenty one different tasks, covering key long-text application scenarios such as single-document QA, multi-document QA, summarization, few-shot learning, synthetic tasks and code completion.
We are fully aware of the potentially high costs involved in the model evaluation process, especially in the context of long context scenarios (such as manual annotation costs or API call costs). Therefore, we adopt a fully automated evaluation method, aimed at measuring and evaluating the model's ability to understand long contexts at the lowest cost.
LongBench includes 14 English tasks, 5 Chinese tasks, and 2 code tasks, with the average length of most tasks ranging from 5k to 15k, and a total of 4,750 test data. For detailed statistics and construction methods of LongBench tasks, please refer here. In addition, we provide LongBench-E, a test set with a more uniform length distribution constructed by uniform sampling, with comparable amounts of data in the 0-4k, 4k-8k, and 8k+ length intervals to provide an analysis of the model's performance variations at different input lengths.
Github Repo for LongBench: URL
Arxiv Paper for LongBench: URL
How to use it?
==============
#### Loading Data
Similarly, you can load the LongBench-E data
Alternatively, you can download the folder from this link to load the data.
#### Data Format
All data in LongBench (LongBench-E) are standardized to the following format:
#### Evaluation
This repository provides data download for LongBench. If you wish to use this dataset for automated evaluation, please refer to our github.
Task statistics
===============
>
> Note: In order to avoid discrepancies caused by different tokenizers, we use the word count (using Python's split function) to calculate the average length of English datasets and code datasets, and use the character count to calculate the average length of Chinese datasets.
>
>
>
Task description
================
Task construction
=================
>
> Note: For all tasks constructed from existing datasets, we use data from the validation or test set of the existing dataset (except for VCSUM).
>
>
>
* The tasks of HotpotQA, 2WikiMultihopQA, MuSiQue, and DuReader are built based on the original datasets and processed to be suitable for long context evaluation. Specifically, for questions in the validation set, we select the evidence passage that contains the answer and several distracting articles. These articles together with the original question constitute the input of the tasks.
* The tasks of MultiFiedQA-zh and MultiFieldQA-en consist of long artical data from about 10 sources, including Latex papers, judicial documents, government work reports, and PDF documents indexed by Google. For each long artical, we invite several PhD and master students to annotate, i.e., to ask questions based on the long artical and give the correct answers. To better automate evaluation, we ask the annotators to propose questions with definitive answers as much as possible.
* The tasks of NarrativeQA, Qasper, GovReport, QMSum and MultiNews directly use the data provided by the original papers. In the specific construction, we use the template provided by ZeroSCROLLS to convert the corresponding data into pure text input.
* The VCSUM task is built based on the original dataset, and we design a corresponding template to convert the corresponding data into pure text input.
* The TriviaQA task is constructed in the manner of CoLT5, which provides several examples of question and answering based on documents, and requires the language model to answer related questions based on new documents.
* The tasks of SAMSum, TREC and LSHT are built based on the original datasets. For each question in the validation set, we sample several data from the training set to form few-shot examples. These examples together with the questions in the validation set constitute the input for this task.
* The PassageRetrieval-en task is constructed based on English Wikipedia. For each piece of data, we randomly sample 30 paragraphs from English Wikipedia and select one for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
* The PassageCount task is constructed based on the English wiki. For each piece of data, we randomly sample several passages from English Wikipedia, repeat each paragraph at random several times, and finally shuffle the paragraphs. This task requires the model to determine the total number of different paragraphs in the given context.
* The PasskeyRetrieval-zh task is constructed based on C4. For each piece of data, we randomly sample several Chinese paragraphs from C4 and select one of them for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
* For the LCC task, we sample from the original code completion dataset. In the RepoBench-P task, we select the most challenging XF-F (Cross-File-First) setting from the original dataset and refer to the Oracle-Filled scenario in the paper. For each original piece of data, we randomly extract multiple cross-file code snippets, including the gold cross-file code snippet, and concatenate them as input, requiring the model to effectively use cross-file code for completion.
LongBench-E statistics
======================
|
[
"#### Loading Data\n\n\nSimilarly, you can load the LongBench-E data\n\n\nAlternatively, you can download the folder from this link to load the data.",
"#### Data Format\n\n\nAll data in LongBench (LongBench-E) are standardized to the following format:",
"#### Evaluation\n\n\nThis repository provides data download for LongBench. If you wish to use this dataset for automated evaluation, please refer to our github.\n\n\nTask statistics\n===============\n\n\n\n\n> \n> Note: In order to avoid discrepancies caused by different tokenizers, we use the word count (using Python's split function) to calculate the average length of English datasets and code datasets, and use the character count to calculate the average length of Chinese datasets.\n> \n> \n> \n\n\nTask description\n================\n\n\n\nTask construction\n=================\n\n\n\n> \n> Note: For all tasks constructed from existing datasets, we use data from the validation or test set of the existing dataset (except for VCSUM).\n> \n> \n> \n\n\n* The tasks of HotpotQA, 2WikiMultihopQA, MuSiQue, and DuReader are built based on the original datasets and processed to be suitable for long context evaluation. Specifically, for questions in the validation set, we select the evidence passage that contains the answer and several distracting articles. These articles together with the original question constitute the input of the tasks.\n* The tasks of MultiFiedQA-zh and MultiFieldQA-en consist of long artical data from about 10 sources, including Latex papers, judicial documents, government work reports, and PDF documents indexed by Google. For each long artical, we invite several PhD and master students to annotate, i.e., to ask questions based on the long artical and give the correct answers. To better automate evaluation, we ask the annotators to propose questions with definitive answers as much as possible.\n* The tasks of NarrativeQA, Qasper, GovReport, QMSum and MultiNews directly use the data provided by the original papers. In the specific construction, we use the template provided by ZeroSCROLLS to convert the corresponding data into pure text input.\n* The VCSUM task is built based on the original dataset, and we design a corresponding template to convert the corresponding data into pure text input.\n* The TriviaQA task is constructed in the manner of CoLT5, which provides several examples of question and answering based on documents, and requires the language model to answer related questions based on new documents.\n* The tasks of SAMSum, TREC and LSHT are built based on the original datasets. For each question in the validation set, we sample several data from the training set to form few-shot examples. These examples together with the questions in the validation set constitute the input for this task.\n* The PassageRetrieval-en task is constructed based on English Wikipedia. For each piece of data, we randomly sample 30 paragraphs from English Wikipedia and select one for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.\n* The PassageCount task is constructed based on the English wiki. For each piece of data, we randomly sample several passages from English Wikipedia, repeat each paragraph at random several times, and finally shuffle the paragraphs. This task requires the model to determine the total number of different paragraphs in the given context.\n* The PasskeyRetrieval-zh task is constructed based on C4. For each piece of data, we randomly sample several Chinese paragraphs from C4 and select one of them for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.\n* For the LCC task, we sample from the original code completion dataset. In the RepoBench-P task, we select the most challenging XF-F (Cross-File-First) setting from the original dataset and refer to the Oracle-Filled scenario in the paper. For each original piece of data, we randomly extract multiple cross-file code snippets, including the gold cross-file code snippet, and concatenate them as input, requiring the model to effectively use cross-file code for completion.\n\n\nLongBench-E statistics\n======================"
] |
[
"TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-summarization #task_categories-conversational #task_categories-text-classification #size_categories-1K<n<10K #language-English #language-Chinese #Long Context #arxiv-2308.14508 #arxiv-2108.00573 #arxiv-1712.07040 #arxiv-2105.03011 #arxiv-2104.02112 #arxiv-2104.05938 #arxiv-2305.05280 #arxiv-2303.09752 #arxiv-1910.10683 #arxiv-2306.14893 #arxiv-2306.03091 #region-us \n",
"#### Loading Data\n\n\nSimilarly, you can load the LongBench-E data\n\n\nAlternatively, you can download the folder from this link to load the data.",
"#### Data Format\n\n\nAll data in LongBench (LongBench-E) are standardized to the following format:",
"#### Evaluation\n\n\nThis repository provides data download for LongBench. If you wish to use this dataset for automated evaluation, please refer to our github.\n\n\nTask statistics\n===============\n\n\n\n\n> \n> Note: In order to avoid discrepancies caused by different tokenizers, we use the word count (using Python's split function) to calculate the average length of English datasets and code datasets, and use the character count to calculate the average length of Chinese datasets.\n> \n> \n> \n\n\nTask description\n================\n\n\n\nTask construction\n=================\n\n\n\n> \n> Note: For all tasks constructed from existing datasets, we use data from the validation or test set of the existing dataset (except for VCSUM).\n> \n> \n> \n\n\n* The tasks of HotpotQA, 2WikiMultihopQA, MuSiQue, and DuReader are built based on the original datasets and processed to be suitable for long context evaluation. Specifically, for questions in the validation set, we select the evidence passage that contains the answer and several distracting articles. These articles together with the original question constitute the input of the tasks.\n* The tasks of MultiFiedQA-zh and MultiFieldQA-en consist of long artical data from about 10 sources, including Latex papers, judicial documents, government work reports, and PDF documents indexed by Google. For each long artical, we invite several PhD and master students to annotate, i.e., to ask questions based on the long artical and give the correct answers. To better automate evaluation, we ask the annotators to propose questions with definitive answers as much as possible.\n* The tasks of NarrativeQA, Qasper, GovReport, QMSum and MultiNews directly use the data provided by the original papers. In the specific construction, we use the template provided by ZeroSCROLLS to convert the corresponding data into pure text input.\n* The VCSUM task is built based on the original dataset, and we design a corresponding template to convert the corresponding data into pure text input.\n* The TriviaQA task is constructed in the manner of CoLT5, which provides several examples of question and answering based on documents, and requires the language model to answer related questions based on new documents.\n* The tasks of SAMSum, TREC and LSHT are built based on the original datasets. For each question in the validation set, we sample several data from the training set to form few-shot examples. These examples together with the questions in the validation set constitute the input for this task.\n* The PassageRetrieval-en task is constructed based on English Wikipedia. For each piece of data, we randomly sample 30 paragraphs from English Wikipedia and select one for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.\n* The PassageCount task is constructed based on the English wiki. For each piece of data, we randomly sample several passages from English Wikipedia, repeat each paragraph at random several times, and finally shuffle the paragraphs. This task requires the model to determine the total number of different paragraphs in the given context.\n* The PasskeyRetrieval-zh task is constructed based on C4. For each piece of data, we randomly sample several Chinese paragraphs from C4 and select one of them for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.\n* For the LCC task, we sample from the original code completion dataset. In the RepoBench-P task, we select the most challenging XF-F (Cross-File-First) setting from the original dataset and refer to the Oracle-Filled scenario in the paper. For each original piece of data, we randomly extract multiple cross-file code snippets, including the gold cross-file code snippet, and concatenate them as input, requiring the model to effectively use cross-file code for completion.\n\n\nLongBench-E statistics\n======================"
] |
[
180,
33,
26,
920
] |
[
"passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-summarization #task_categories-conversational #task_categories-text-classification #size_categories-1K<n<10K #language-English #language-Chinese #Long Context #arxiv-2308.14508 #arxiv-2108.00573 #arxiv-1712.07040 #arxiv-2105.03011 #arxiv-2104.02112 #arxiv-2104.05938 #arxiv-2305.05280 #arxiv-2303.09752 #arxiv-1910.10683 #arxiv-2306.14893 #arxiv-2306.03091 #region-us \n#### Loading Data\n\n\nSimilarly, you can load the LongBench-E data\n\n\nAlternatively, you can download the folder from this link to load the data.#### Data Format\n\n\nAll data in LongBench (LongBench-E) are standardized to the following format:"
] |
b67b96e2cb5c8d1d624dcc357b29a1e1c571d1af
|
# Dataset Card for "three_styles_prompted_500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kewu93/three_styles_prompted_500
|
[
"region:us"
] |
2023-09-21T05:27:33+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 34576478.8, "num_examples": 1200}, {"name": "val", "num_bytes": 8468533.6, "num_examples": 300}], "download_size": 42069788, "dataset_size": 43045012.4}}
|
2023-09-21T05:27:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "three_styles_prompted_500"
More Information needed
|
[
"# Dataset Card for \"three_styles_prompted_500\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"three_styles_prompted_500\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"three_styles_prompted_500\"\n\nMore Information needed"
] |
61fc549a182ef2fbe2949687f3d3b3fb8fad3479
|
# Dataset Card for "autotree_automl_Higgs_gosdt_l512_d3_sd2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_Higgs_gosdt_l512_d3_sd2
|
[
"region:us"
] |
2023-09-21T05:30:02+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 12501600000, "num_examples": 100000}, {"name": "validation", "num_bytes": 1250160000, "num_examples": 10000}], "download_size": 9801930446, "dataset_size": 13751760000}}
|
2023-09-21T05:38:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_Higgs_gosdt_l512_d3_sd2"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3_sd2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3_sd2\"\n\nMore Information needed"
] |
[
6,
33
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3_sd2\"\n\nMore Information needed"
] |
becf0eb69155275164f38811950ca8cf93da8642
|
# Dataset Card for "eval_tag_squad_v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/eval_tag_squad_v0
|
[
"region:us"
] |
2023-09-21T05:47:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 87035544, "num_examples": 87599}, {"name": "validation", "num_bytes": 11397371, "num_examples": 10570}], "download_size": 21419187, "dataset_size": 98432915}}
|
2023-09-21T14:52:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eval_tag_squad_v0"
More Information needed
|
[
"# Dataset Card for \"eval_tag_squad_v0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eval_tag_squad_v0\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eval_tag_squad_v0\"\n\nMore Information needed"
] |
0d0a376bbe9c7464c9f2826753a9cde914efe297
|
# coedit-reworded
This is Grammarly's [coedit](https://huggingface.co/datasets/grammarly/coedit) dataset parsed into Alpaca-style `instruction`, `input`, and `output` rows, with the original `instruction` values replaced with a more diverse set of procedurally generated instructions. Contains 23930 unique values of `instruction`, as compared to the original 144. See [`coedit_reword.py`](https://huggingface.co/datasets/chargoddard/coedit-reworded/blob/main/coedit_reword.py) for how these were generated.
All credit to the original authors of this dataset.
# Citation
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
chargoddard/coedit-reworded
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"arxiv:2305.09857",
"region:us"
] |
2023-09-21T05:53:36+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "task", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "original_instruction", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24317220, "num_examples": 82466}], "download_size": 12064503, "dataset_size": 24317220}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T06:14:35+00:00
|
[
"2305.09857"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #arxiv-2305.09857 #region-us
|
# coedit-reworded
This is Grammarly's coedit dataset parsed into Alpaca-style 'instruction', 'input', and 'output' rows, with the original 'instruction' values replaced with a more diverse set of procedurally generated instructions. Contains 23930 unique values of 'instruction', as compared to the original 144. See 'coedit_reword.py' for how these were generated.
All credit to the original authors of this dataset.
|
[
"# coedit-reworded\n\nThis is Grammarly's coedit dataset parsed into Alpaca-style 'instruction', 'input', and 'output' rows, with the original 'instruction' values replaced with a more diverse set of procedurally generated instructions. Contains 23930 unique values of 'instruction', as compared to the original 144. See 'coedit_reword.py' for how these were generated.\n\nAll credit to the original authors of this dataset."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #arxiv-2305.09857 #region-us \n",
"# coedit-reworded\n\nThis is Grammarly's coedit dataset parsed into Alpaca-style 'instruction', 'input', and 'output' rows, with the original 'instruction' values replaced with a more diverse set of procedurally generated instructions. Contains 23930 unique values of 'instruction', as compared to the original 144. See 'coedit_reword.py' for how these were generated.\n\nAll credit to the original authors of this dataset."
] |
[
50,
112
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #arxiv-2305.09857 #region-us \n# coedit-reworded\n\nThis is Grammarly's coedit dataset parsed into Alpaca-style 'instruction', 'input', and 'output' rows, with the original 'instruction' values replaced with a more diverse set of procedurally generated instructions. Contains 23930 unique values of 'instruction', as compared to the original 144. See 'coedit_reword.py' for how these were generated.\n\nAll credit to the original authors of this dataset."
] |
4ed459ee69ef949621c7ae4d1a7ac92bd4d2a44f
|
# Dataset Card for "bailamvan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pphuc25/bailamvan
|
[
"region:us"
] |
2023-09-21T05:57:20+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9514569, "num_examples": 888}], "download_size": 4680823, "dataset_size": 9514569}}
|
2023-09-21T06:01:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bailamvan"
More Information needed
|
[
"# Dataset Card for \"bailamvan\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bailamvan\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bailamvan\"\n\nMore Information needed"
] |
27d53d18107323a409be945dd4a5f88814ee2185
|
# Dataset Card for "wildlife_photography_subjects"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/wildlife_photography_subjects
|
[
"region:us"
] |
2023-09-21T05:58:10+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 682404, "num_examples": 10000}], "download_size": 13383, "dataset_size": 682404}}
|
2023-09-21T05:58:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wildlife_photography_subjects"
More Information needed
|
[
"# Dataset Card for \"wildlife_photography_subjects\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wildlife_photography_subjects\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wildlife_photography_subjects\"\n\nMore Information needed"
] |
cd395c6858c50dd12840f75ef8b0ae626e06953f
|
# Dataset Card for "baivanhay"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pphuc25/baivanhay
|
[
"region:us"
] |
2023-09-21T06:02:39+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60911721, "num_examples": 9913}], "download_size": 30468207, "dataset_size": 60911721}}
|
2023-09-21T06:02:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "baivanhay"
More Information needed
|
[
"# Dataset Card for \"baivanhay\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"baivanhay\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"baivanhay\"\n\nMore Information needed"
] |
bd66695b49b60aeef87a0e1c7d2fd243e9d9ca28
|
# Dataset Card for "vanmau_edu"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pphuc25/vanmau_edu
|
[
"region:us"
] |
2023-09-21T06:04:49+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31792839, "num_examples": 5717}], "download_size": 16545654, "dataset_size": 31792839}}
|
2023-09-21T06:04:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "vanmau_edu"
More Information needed
|
[
"# Dataset Card for \"vanmau_edu\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"vanmau_edu\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"vanmau_edu\"\n\nMore Information needed"
] |
ec8bd1b16e4c8ca28ce1874552b2ddaa833418cb
|
# Dataset Card for "cabo_da_roca_light_conditions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/cabo_da_roca_light_conditions
|
[
"region:us"
] |
2023-09-21T06:04:58+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 707338, "num_examples": 10000}], "download_size": 11725, "dataset_size": 707338}}
|
2023-09-21T06:05:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cabo_da_roca_light_conditions"
More Information needed
|
[
"# Dataset Card for \"cabo_da_roca_light_conditions\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cabo_da_roca_light_conditions\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cabo_da_roca_light_conditions\"\n\nMore Information needed"
] |
87ebd7fecb5114b2df28956b8a9bffc30771ef2d
|
# Dataset Description
Detailed description: [www.kaggle.com/competitions/malware-classification/overview/description](https://www.kaggle.com/competitions/malware-classification/overview/description)
Warning: this dataset is almost half a terabyte uncompressed! We have compressed the data using 7zip to achieve the smallest file size possible. Note that the rules do not allow sharing of the data outside of Kaggle, including bit torrent ([why not?](https://www.kaggle.com/wiki/ANoteOnTorrents)).
You are provided with a set of known malware files representing a mix of 9 different families. Each malware file has an Id, a 20 character hash value uniquely identifying the file, and a Class, an integer representing one of 9 family names to which the malware may belong:
* Ramnit
* Lollipop
* Kelihos_ver3
* Vundo
* Simda
* Tracur
* Kelihos_ver1
* Obfuscator.ACY
* Gatak
For each file, the raw data contains the hexadecimal representation of the file's binary content, without the PE header (to ensure sterility). You are also provided a metadata manifest, which is a log containing various metadata information extracted from the binary, such as function calls, strings, etc. This was generated using the IDA disassembler tool. Your task is to develop the best mechanism for classifying files in the test set into their respective family affiliations.
The dataset contains the following files:
* train.7z - the raw data for the training set (MD5 hash = 4fedb0899fc2210a6c843889a70952ed)
* trainLabels.csv - the class labels associated with the training set
* test.7z - the raw data for the test set (MD5 hash = 84b6fbfb9df3c461ed2cbbfa371ffb43)
* sampleSubmission.csv - a file showing the valid submission format
* dataSample.csv - a sample of the dataset to preview before downloading
|
c01dsnap/MaliciousPEs
|
[
"license:other",
"region:us"
] |
2023-09-21T06:08:34+00:00
|
{"license": "other"}
|
2023-09-27T08:25:33+00:00
|
[] |
[] |
TAGS
#license-other #region-us
|
# Dataset Description
Detailed description: URL
Warning: this dataset is almost half a terabyte uncompressed! We have compressed the data using 7zip to achieve the smallest file size possible. Note that the rules do not allow sharing of the data outside of Kaggle, including bit torrent (why not?).
You are provided with a set of known malware files representing a mix of 9 different families. Each malware file has an Id, a 20 character hash value uniquely identifying the file, and a Class, an integer representing one of 9 family names to which the malware may belong:
* Ramnit
* Lollipop
* Kelihos_ver3
* Vundo
* Simda
* Tracur
* Kelihos_ver1
* Obfuscator.ACY
* Gatak
For each file, the raw data contains the hexadecimal representation of the file's binary content, without the PE header (to ensure sterility). You are also provided a metadata manifest, which is a log containing various metadata information extracted from the binary, such as function calls, strings, etc. This was generated using the IDA disassembler tool. Your task is to develop the best mechanism for classifying files in the test set into their respective family affiliations.
The dataset contains the following files:
* train.7z - the raw data for the training set (MD5 hash = 4fedb0899fc2210a6c843889a70952ed)
* URL - the class labels associated with the training set
* test.7z - the raw data for the test set (MD5 hash = 84b6fbfb9df3c461ed2cbbfa371ffb43)
* URL - a file showing the valid submission format
* URL - a sample of the dataset to preview before downloading
|
[
"# Dataset Description\n\nDetailed description: URL\n\nWarning: this dataset is almost half a terabyte uncompressed! We have compressed the data using 7zip to achieve the smallest file size possible. Note that the rules do not allow sharing of the data outside of Kaggle, including bit torrent (why not?).\n\nYou are provided with a set of known malware files representing a mix of 9 different families. Each malware file has an Id, a 20 character hash value uniquely identifying the file, and a Class, an integer representing one of 9 family names to which the malware may belong:\n\n\n* Ramnit\n* Lollipop\n* Kelihos_ver3\n* Vundo\n* Simda\n* Tracur\n* Kelihos_ver1\n* Obfuscator.ACY\n* Gatak\n\nFor each file, the raw data contains the hexadecimal representation of the file's binary content, without the PE header (to ensure sterility). You are also provided a metadata manifest, which is a log containing various metadata information extracted from the binary, such as function calls, strings, etc. This was generated using the IDA disassembler tool. Your task is to develop the best mechanism for classifying files in the test set into their respective family affiliations.\n\nThe dataset contains the following files:\n\n* train.7z - the raw data for the training set (MD5 hash = 4fedb0899fc2210a6c843889a70952ed)\n* URL - the class labels associated with the training set\n* test.7z - the raw data for the test set (MD5 hash = 84b6fbfb9df3c461ed2cbbfa371ffb43)\n* URL - a file showing the valid submission format\n* URL - a sample of the dataset to preview before downloading"
] |
[
"TAGS\n#license-other #region-us \n",
"# Dataset Description\n\nDetailed description: URL\n\nWarning: this dataset is almost half a terabyte uncompressed! We have compressed the data using 7zip to achieve the smallest file size possible. Note that the rules do not allow sharing of the data outside of Kaggle, including bit torrent (why not?).\n\nYou are provided with a set of known malware files representing a mix of 9 different families. Each malware file has an Id, a 20 character hash value uniquely identifying the file, and a Class, an integer representing one of 9 family names to which the malware may belong:\n\n\n* Ramnit\n* Lollipop\n* Kelihos_ver3\n* Vundo\n* Simda\n* Tracur\n* Kelihos_ver1\n* Obfuscator.ACY\n* Gatak\n\nFor each file, the raw data contains the hexadecimal representation of the file's binary content, without the PE header (to ensure sterility). You are also provided a metadata manifest, which is a log containing various metadata information extracted from the binary, such as function calls, strings, etc. This was generated using the IDA disassembler tool. Your task is to develop the best mechanism for classifying files in the test set into their respective family affiliations.\n\nThe dataset contains the following files:\n\n* train.7z - the raw data for the training set (MD5 hash = 4fedb0899fc2210a6c843889a70952ed)\n* URL - the class labels associated with the training set\n* test.7z - the raw data for the test set (MD5 hash = 84b6fbfb9df3c461ed2cbbfa371ffb43)\n* URL - a file showing the valid submission format\n* URL - a sample of the dataset to preview before downloading"
] |
[
11,
405
] |
[
"passage: TAGS\n#license-other #region-us \n# Dataset Description\n\nDetailed description: URL\n\nWarning: this dataset is almost half a terabyte uncompressed! We have compressed the data using 7zip to achieve the smallest file size possible. Note that the rules do not allow sharing of the data outside of Kaggle, including bit torrent (why not?).\n\nYou are provided with a set of known malware files representing a mix of 9 different families. Each malware file has an Id, a 20 character hash value uniquely identifying the file, and a Class, an integer representing one of 9 family names to which the malware may belong:\n\n\n* Ramnit\n* Lollipop\n* Kelihos_ver3\n* Vundo\n* Simda\n* Tracur\n* Kelihos_ver1\n* Obfuscator.ACY\n* Gatak\n\nFor each file, the raw data contains the hexadecimal representation of the file's binary content, without the PE header (to ensure sterility). You are also provided a metadata manifest, which is a log containing various metadata information extracted from the binary, such as function calls, strings, etc. This was generated using the IDA disassembler tool. Your task is to develop the best mechanism for classifying files in the test set into their respective family affiliations.\n\nThe dataset contains the following files:\n\n* train.7z - the raw data for the training set (MD5 hash = 4fedb0899fc2210a6c843889a70952ed)\n* URL - the class labels associated with the training set\n* test.7z - the raw data for the test set (MD5 hash = 84b6fbfb9df3c461ed2cbbfa371ffb43)\n* URL - a file showing the valid submission format\n* URL - a sample of the dataset to preview before downloading"
] |
186c9824f3af37865b6d617c175241e4f2aef81d
|
# Dataset Card for "vanmauvip_com"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pphuc25/vanmauvip_com
|
[
"region:us"
] |
2023-09-21T06:11:19+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 71040692, "num_examples": 13390}], "download_size": 35161324, "dataset_size": 71040692}}
|
2023-09-21T06:11:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "vanmauvip_com"
More Information needed
|
[
"# Dataset Card for \"vanmauvip_com\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"vanmauvip_com\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"vanmauvip_com\"\n\nMore Information needed"
] |
2e063c063d8789be9c48b2567d4cafa72eb1e532
|
# Dataset Card for "family_lifestyle_photography"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/family_lifestyle_photography
|
[
"region:us"
] |
2023-09-21T06:22:19+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1039539, "num_examples": 10000}], "download_size": 22749, "dataset_size": 1039539}}
|
2023-09-21T06:22:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "family_lifestyle_photography"
More Information needed
|
[
"# Dataset Card for \"family_lifestyle_photography\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"family_lifestyle_photography\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"family_lifestyle_photography\"\n\nMore Information needed"
] |
da7fbee5f342ef03d852c141d46e5ef65c7be8ab
|
# Dataset Card for "trialdata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
18moumi/trialdata
|
[
"region:us"
] |
2023-09-21T06:35:33+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48673, "num_examples": 142}], "download_size": 21464, "dataset_size": 48673}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T06:36:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "trialdata"
More Information needed
|
[
"# Dataset Card for \"trialdata\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"trialdata\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"trialdata\"\n\nMore Information needed"
] |
1d4be2c68635fb47a081a430223e63be05ff6391
|
# Dataset Card for "data-parsing-new-dataset-v4-updated-labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dostai/data-parsing-new-dataset-v4-updated-labels
|
[
"region:us"
] |
2023-09-21T06:45:17+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "struct": [{"name": "gt_parse", "struct": [{"name": "VendorCompanyName", "dtype": "string"}, {"name": "VendorCompanyID", "dtype": "string"}, {"name": "InvoiceID", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 293781936.0, "num_examples": 146}], "download_size": 31041936, "dataset_size": 293781936.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T06:45:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data-parsing-new-dataset-v4-updated-labels"
More Information needed
|
[
"# Dataset Card for \"data-parsing-new-dataset-v4-updated-labels\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data-parsing-new-dataset-v4-updated-labels\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data-parsing-new-dataset-v4-updated-labels\"\n\nMore Information needed"
] |
f5afe37a2499d923dc48c7ec603327534188f5d0
|
annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- other
multilinguality:
- monolingual
pretty_name: codedataset
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text2text-generation
- text-generation
- question-answering
task_ids:
- explanation-generation
- open-book-qa
- closed-book-qa
- abstractive-qa
- language-modeling
- dialogue-modeling
- extractive-qa
|
tanmay2798/codedataset
|
[
"license:llama2",
"region:us"
] |
2023-09-21T06:51:37+00:00
|
{"license": "llama2"}
|
2023-09-21T07:02:59+00:00
|
[] |
[] |
TAGS
#license-llama2 #region-us
|
annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- other
multilinguality:
- monolingual
pretty_name: codedataset
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text2text-generation
- text-generation
- question-answering
task_ids:
- explanation-generation
- open-book-qa
- closed-book-qa
- abstractive-qa
- language-modeling
- dialogue-modeling
- extractive-qa
|
[] |
[
"TAGS\n#license-llama2 #region-us \n"
] |
[
13
] |
[
"passage: TAGS\n#license-llama2 #region-us \n"
] |
fab4f5de09e1c31ab7134278410d11f36e70eee3
|
# Dataset Card for "photojournalism_fisherwoman"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/photojournalism_fisherwoman
|
[
"region:us"
] |
2023-09-21T06:55:20+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1219362, "num_examples": 10000}], "download_size": 25612, "dataset_size": 1219362}}
|
2023-09-21T06:55:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "photojournalism_fisherwoman"
More Information needed
|
[
"# Dataset Card for \"photojournalism_fisherwoman\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"photojournalism_fisherwoman\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"photojournalism_fisherwoman\"\n\nMore Information needed"
] |
122de666be5f0d0e354ad1de52bf14b0173ba3ec
|
# Dataset Card for "JSON_expert_huy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Jackoon/JSON_expert_huy
|
[
"region:us"
] |
2023-09-21T06:57:32+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 178537, "num_examples": 173}], "download_size": 40306, "dataset_size": 178537}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T06:57:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "JSON_expert_huy"
More Information needed
|
[
"# Dataset Card for \"JSON_expert_huy\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"JSON_expert_huy\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"JSON_expert_huy\"\n\nMore Information needed"
] |
dbca9f002aff66640f5573302c0f16f5f005837e
|
# Dataset Card for "luft_versorgen_2830-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/luft_versorgen_2830-undersampled
|
[
"region:us"
] |
2023-09-21T07:00:52+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "LuftBereitstellen", "1": "LuftVerteilen"}}}}], "splits": [{"name": "train", "num_bytes": 1118270.5721056196, "num_examples": 5660}, {"name": "test", "num_bytes": 290707, "num_examples": 1477}, {"name": "valid", "num_bytes": 290707, "num_examples": 1477}], "download_size": 603236, "dataset_size": 1699684.5721056196}}
|
2023-09-21T07:00:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "luft_versorgen_2830-undersampled"
More Information needed
|
[
"# Dataset Card for \"luft_versorgen_2830-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"luft_versorgen_2830-undersampled\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"luft_versorgen_2830-undersampled\"\n\nMore Information needed"
] |
a3c0389267f4e1c82417cf4de4facb709c5e90fe
|
# ERR Newsroom Keyphrases
This dataset is a subset of [ERR Newsroom](https://huggingface.co/datasets/TalTechNLP/err-newsroom), with up to 5 keyphrases assigned to each news article. The keyphrases are generated using the OpenAI API, using the `gpt-3.5-turbo` model (see the script `extract-keywords-openai.py`).
|
TalTechNLP/err-newsroom-keyphrases
|
[
"task_categories:summarization",
"task_categories:text2text-generation",
"language:et",
"license:cc-by-4.0",
"region:us"
] |
2023-09-21T07:02:32+00:00
|
{"language": ["et"], "license": "cc-by-4.0", "task_categories": ["summarization", "text2text-generation"], "pretty_name": "ERR Newsroom Keyphrases"}
|
2023-09-21T07:15:54+00:00
|
[] |
[
"et"
] |
TAGS
#task_categories-summarization #task_categories-text2text-generation #language-Estonian #license-cc-by-4.0 #region-us
|
# ERR Newsroom Keyphrases
This dataset is a subset of ERR Newsroom, with up to 5 keyphrases assigned to each news article. The keyphrases are generated using the OpenAI API, using the 'gpt-3.5-turbo' model (see the script 'URL').
|
[
"# ERR Newsroom Keyphrases\n\nThis dataset is a subset of ERR Newsroom, with up to 5 keyphrases assigned to each news article. The keyphrases are generated using the OpenAI API, using the 'gpt-3.5-turbo' model (see the script 'URL')."
] |
[
"TAGS\n#task_categories-summarization #task_categories-text2text-generation #language-Estonian #license-cc-by-4.0 #region-us \n",
"# ERR Newsroom Keyphrases\n\nThis dataset is a subset of ERR Newsroom, with up to 5 keyphrases assigned to each news article. The keyphrases are generated using the OpenAI API, using the 'gpt-3.5-turbo' model (see the script 'URL')."
] |
[
43,
66
] |
[
"passage: TAGS\n#task_categories-summarization #task_categories-text2text-generation #language-Estonian #license-cc-by-4.0 #region-us \n# ERR Newsroom Keyphrases\n\nThis dataset is a subset of ERR Newsroom, with up to 5 keyphrases assigned to each news article. The keyphrases are generated using the OpenAI API, using the 'gpt-3.5-turbo' model (see the script 'URL')."
] |
cdc19c9e5d0e4d02e44e19848b2bbad32d6d482d
|
# Dataset Card for "tokenized_kowiki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
daje/tokenized_kowiki
|
[
"region:us"
] |
2023-09-21T07:02:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1656585668, "num_examples": 1706411}], "download_size": 682692770, "dataset_size": 1656585668}}
|
2023-09-21T07:04:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tokenized_kowiki"
More Information needed
|
[
"# Dataset Card for \"tokenized_kowiki\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized_kowiki\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tokenized_kowiki\"\n\nMore Information needed"
] |
d10153aa1f0dbe83817f379360260d8ba9377eaf
|
# Dataset Card for "data_for_synthesis_with_entities_align_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/data_for_synthesis_with_entities_align_v4
|
[
"region:us"
] |
2023-09-21T07:17:03+00:00
|
{"dataset_info": {"config_name": "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", "features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "sentence_annotation", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "file", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "origin_transcription", "dtype": "string"}, {"name": "sentence_norm", "dtype": "string"}, {"name": "w2v2_large_transcription", "dtype": "string"}, {"name": "wer", "dtype": "int64"}, {"name": "entities_norm", "list": [{"name": "filler", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "entities_align", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2667449542.4493446, "num_examples": 5029}], "download_size": 632908060, "dataset_size": 2667449542.4493446}, "configs": [{"config_name": "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", "data_files": [{"split": "train", "path": "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/train-*"}]}]}
|
2023-09-21T07:17:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_for_synthesis_with_entities_align_v4"
More Information needed
|
[
"# Dataset Card for \"data_for_synthesis_with_entities_align_v4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_for_synthesis_with_entities_align_v4\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_for_synthesis_with_entities_align_v4\"\n\nMore Information needed"
] |
9b92169ef76416215d1c2f664c341fad98f861f8
|
# Dataset Card for "data_aug_full_0919_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhqyy/data_aug_full_0919_new
|
[
"region:us"
] |
2023-09-21T07:20:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "entities", "list": [{"name": "filler", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1621998, "num_examples": 7687}, {"name": "test", "num_bytes": 138487, "num_examples": 660}], "download_size": 366308, "dataset_size": 1760485}}
|
2023-09-21T07:20:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_aug_full_0919_new"
More Information needed
|
[
"# Dataset Card for \"data_aug_full_0919_new\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_aug_full_0919_new\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_aug_full_0919_new\"\n\nMore Information needed"
] |
7c0b4f46253199310b7da3b17ba0cc3c25b89823
|
# Dataset Card for "samoan_fire_photography"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/samoan_fire_photography
|
[
"region:us"
] |
2023-09-21T07:22:17+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1538494, "num_examples": 10000}], "download_size": 27005, "dataset_size": 1538494}}
|
2023-09-21T07:27:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "samoan_fire_photography"
More Information needed
|
[
"# Dataset Card for \"samoan_fire_photography\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"samoan_fire_photography\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"samoan_fire_photography\"\n\nMore Information needed"
] |
5546445aa1a398c08a35844faf212811a22177cc
|
# Dataset Card for "wildreceipts_ocr_eval"
see train dataset for full detail: https://huggingface.co/datasets/mychen76/wildreceipts_ocr_train
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/wildreceipts_ocr_eval
|
[
"region:us"
] |
2023-09-21T07:31:48+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3239963.0, "num_examples": 20}], "download_size": 3034931, "dataset_size": 3239963.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T11:34:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wildreceipts_ocr_eval"
see train dataset for full detail: URL
More Information needed
|
[
"# Dataset Card for \"wildreceipts_ocr_eval\"\n\nsee train dataset for full detail: URL\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wildreceipts_ocr_eval\"\n\nsee train dataset for full detail: URL\n\n\nMore Information needed"
] |
[
6,
30
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wildreceipts_ocr_eval\"\n\nsee train dataset for full detail: URL\n\n\nMore Information needed"
] |
b3cdf597e0305a066d857b5ab1caf962f67be7f8
|
# Dataset Card for "wildreceipts_ocr_test"
see train dataset for full detail:
https://huggingface.co/datasets/mychen76/wildreceipts_ocr_train
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/wildreceipts_ocr_test
|
[
"region:us"
] |
2023-09-21T07:31:50+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 53982770.0, "num_examples": 452}], "download_size": 49734928, "dataset_size": 53982770.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T09:12:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wildreceipts_ocr_test"
see train dataset for full detail:
URL
More Information needed
|
[
"# Dataset Card for \"wildreceipts_ocr_test\"\n\nsee train dataset for full detail:\nURL\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wildreceipts_ocr_test\"\n\nsee train dataset for full detail:\nURL\n\n\nMore Information needed"
] |
[
6,
29
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wildreceipts_ocr_test\"\n\nsee train dataset for full detail:\nURL\n\n\nMore Information needed"
] |
4bafe98927851bd8fbb6a5fc2a261a71be9b3e56
|
# Dataset Card for "wildreceipts_ocr_train"
Dataset Summary
-----------------------------
This is collection of receipts images with enhanced text information source from Wildreceipts and additional curated receipt images.
It contains photo and OCRs information of each image including words, bounding box, labels and key information extraction data in json and xml format.
Features and Data Structure
-----------------------------
visual data
- Receipt image represent complex layouts, the effects are well demonstrated on each image.
text data
- ocr_json - represent extracted receipt key information data in json format
- ocr_boxes - represent up-to-date ocr scan result as grouth truth in raw format
- ocr_words - represent ocr detected and recognized words from the receipt image
- ocr_labels - represent original mapping of labels class and text position (may deviate from actual ocr scan result)
- ocr_xml - represent xml format of the key information
- ocr_kie - represent extraction of key information from the receipt image
Languages
The language of the data is primarily English.
Data Instances
A data instance in this dataset represents entries from the Receipt collection which have been augmented.
Data Samples
-----------------------------
Image:
file_name: receipt_0.jpeg
Sample: ocr_words
-----------------------------
['CHO EUN', 'KOREAN RESTAURANT', '2621 ORANGETHORPE AVE,FULLERTON.', '714879-3574', 'THANKYOU!!', 'DATE12/30/2016 FRI', 'TIME19:19', 'BIBIM.OCTOPU T1', '$13.99', 'S-FOODP.CAKT1', '$14.99', 'PORK DUMPLIN T1', '$8.99', 'LA BEEF RIB T1', '$17.99', '4.00xITEMS', 'SUBTOTAL', '$55.96', 'TAX1', '$4.48', 'TOTAL', '$60.44', '$60AA']
Sample: ocr_json
-----------------------------
{"store_name": "CHOEUN KOREANRESTAURANT", "store_addr": "2621ORANGETHORPEAVE,FULLERTON.", "telephone": "(714)879-3574", "date": "12/30/2016FRI", "time": "19:19", "subtotal": "$55.96", "tax": "$4.48", "total": "$60.44", "ignore": " ", "tips": "", "line_items": [{"item_key": "", "item_name": "BIBIM.OCTOPUT1", "item_value": "$13.99", "item_quantity": "1"}, {"item_key": "", "item_name": "S-FOODP.CAKT1", "item_value": "$14.99", "item_quantity": "1"}, {"item_key": "", "item_name": "PORKDUMPLINT1", "item_value": "$8.99", "item_quantity": "1"}, {"item_key": "", "item_name": "LABEEFRIBT1", "item_value": "\uffe517.99", "item_quantity": "1"}, {"item_key": "4.00xITEMS", "item_name": "", "item_value": "", "item_quantity": ""}]}
Sample: ocr_xml
-----------------------------
<s_receipt><s_total>$60.44</s_total><s_tips></s_tips><s_time>19:19</s_time><s_telephone>(714)879-3574</s_telephone><s_tax>$4.48</s_tax><s_subtotal>$55.96</s_subtotal><s_store_name>CHOEUN KOREANRESTAURANT</s_store_name><s_store_addr>2621ORANGETHORPEAVE,FULLERTON.</s_store_addr><s_line_items><s_item_value>$13.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>BIBIM.OCTOPUT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$14.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>S-FOODP.CAKT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$8.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>PORKDUMPLINT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>¥17.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>LABEEFRIBT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value></s_item_value><s_item_quantity></s_item_quantity><s_item_name></s_item_name><s_item_key>4.00xITEMS</s_item_key></s_line_items><s_ignore> </s_ignore><s_date>12/30/2016FRI</s_date></s_receipt>
Sample: ocr_kie
-----------------------------
[{'label': 'Store_name_value', 'transcription': 'CHOEUN'}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT'}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.'}, {'label': 'Tel_value', 'transcription': '(714)879-3574'}, {'label': 'Others', 'transcription': 'THANKYOU!!'}, {'label': 'Date_key', 'transcription': 'DATE'}, {'label': 'Date_value', 'transcription': '12/30/2016FRI'}, {'label': 'Time_value', 'transcription': '19:19'}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1'}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1'}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1'}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1'}, {'label': 'Prod_price_value', 'transcription': '$13.99'}, {'label': 'Prod_price_value', 'transcription': '$14.99'}, {'label': 'Prod_price_value', 'transcription': '$8.99'}, {'label': 'Prod_price_value', 'transcription': '¥17.99'}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS'}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL'}, {'label': 'Tax_key', 'transcription': 'TAX1'}, {'label': 'Total_key', 'transcription': 'TOTAL'}, {'label': 'Subtotal_value', 'transcription': '$55.96'}, {'label': 'Tax_value', 'transcription': '$4.48'}, {'label': 'Total_value', 'transcription': '$60.44'}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Time_key', 'transcription': 'TIME'}]
Sample: ocr_labels
-----------------------------
[{'label': 'Store_name_value', 'transcription': 'CHOEUN', 'points': [[114.0, 19.0], [230.0, 19.0], [230.0, 1.0], [114.0, 1.0]]}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT', 'points': [[97.0, 35.0], [236.0, 35.0], [236.0, 19.0], [97.0, 19.0]]}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.', 'points': [[29.0, 56.0], [295.0, 56.0], [295.0, 34.0], [29.0, 34.0]]}, {'label': 'Tel_value', 'transcription': '(714)879-3574', 'points': [[48.0, 73.0], [280.0, 73.0], [280.0, 54.0], [48.0, 54.0]]}, {'label': 'Others', 'transcription': 'THANKYOU!!', 'points': [[79.0, 92.0], [259.0, 92.0], [259.0, 74.0], [79.0, 74.0]]}, {'label': 'Date_key', 'transcription': 'DATE', 'points': [[22.0, 130.0], [61.0, 130.0], [61.0, 112.0], [22.0, 112.0]]}, {'label': 'Date_value', 'transcription': '12/30/2016FRI', 'points': [[70.0, 131.0], [192.0, 131.0], [192.0, 112.0], [70.0, 112.0]]}, {'label': 'Time_value', 'transcription': '19:19', 'points': [[263.0, 128.0], [307.0, 128.0], [307.0, 111.0], [263.0, 111.0]]}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1', 'points': [[19.0, 168.0], [157.0, 168.0], [157.0, 149.0], [19.0, 149.0]]}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1', 'points': [[17.0, 190.0], [158.0, 190.0], [158.0, 171.0], [17.0, 171.0]]}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1', 'points': [[14.0, 214.0], [158.0, 214.0], [158.0, 192.0], [14.0, 192.0]]}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1', 'points': [[14.0, 236.0], [151.0, 236.0], [151.0, 215.0], [14.0, 215.0]]}, {'transcription': '$13.99', 'points': [[254.0, 168.0], [312.0, 168.0], [312.0, 149.0], [254.0, 149.0]]}, {'transcription': '$14.99', 'points': [[257.0, 189.0], [314.0, 189.0], [314.0, 170.0], [257.0, 170.0]]}, {'transcription': '$8.99', 'points': [[268.0, 212.0], [316.0, 212.0], [316.0, 191.0], [268.0, 191.0]]}, {'transcription': '¥17.99', 'points': [[261.0, 234.0], [318.0, 234.0], [318.0, 213.0], [261.0, 213.0]]}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS', 'points': [[118.0, 260.0], [217.0, 260.0], [217.0, 239.0], [118.0, 239.0]]}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL', 'points': [[8.0, 285.0], [91.0, 285.0], [91.0, 264.0], [8.0, 264.0]]}, {'label': 'Tax_key', 'transcription': 'TAX1', 'points': [[8.0, 312.0], [49.0, 312.0], [49.0, 291.0], [8.0, 291.0]]}, {'label': 'Total_key', 'transcription': 'TOTAL', 'points': [[8.0, 336.0], [61.0, 336.0], [61.0, 316.0], [8.0, 316.0]]}, {'label': 'Subtotal_value', 'transcription': '$55.96', 'points': [[263.0, 283.0], [325.0, 283.0], [325.0, 260.0], [263.0, 260.0]]}, {'label': 'Tax_value', 'transcription': '$4.48', 'points': [[274.0, 308.0], [326.0, 308.0], [326.0, 286.0], [274.0, 286.0]]}, {'label': 'Total_value', 'transcription': '$60.44', 'points': [[267.0, 334.0], [328.0, 334.0], [328.0, 310.0], [267.0, 310.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[269.0, 347.0], [328.0, 347.0], [328.0, 336.0], [269.0, 336.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[11.0, 347.0], [50.0, 347.0], [50.0, 342.0], [11.0, 342.0]]}, {'label': 'Time_key', 'transcription': 'TIME', 'points': [[215.0, 128.0], [253.0, 128.0], [253.0, 112.0], [215.0, 112.0]]}]
Sample: ocr_boxes
-----------------------------
[[[[113.0, 0.0], [228.0, 3.0], [227.0, 20.0], [113.0, 17.0]], ('CHO EUN', 0.9466678500175476)], [[[96.0, 17.0], [236.0, 21.0], [236.0, 38.0], [96.0, 33.0]], ('KOREAN RESTAURANT', 0.9685913324356079)], [[[28.0, 32.0], [293.0, 37.0], [292.0, 56.0], [28.0, 51.0]], ('2621 ORANGETHORPE AVE,FULLERTON.', 0.951709508895874)], [[[48.0, 53.0], [279.0, 56.0], [279.0, 73.0], [47.0, 70.0]], ('714879-3574', 0.9919183850288391)], [[[81.0, 75.0], [256.0, 75.0], [256.0, 89.0], [81.0, 89.0]], ('THANKYOU!!', 0.9518492817878723)], [[[24.0, 113.0], [191.0, 113.0], [191.0, 127.0], [24.0, 127.0]], ('DATE12/30/2016 FRI', 0.9638745784759521)], [[[214.0, 111.0], [305.0, 109.0], [306.0, 125.0], [215.0, 128.0]], ('TIME19:19', 0.9523274898529053)], [[[18.0, 150.0], [156.0, 149.0], [156.0, 167.0], [18.0, 168.0]], ('BIBIM.OCTOPU T1', 0.9491282105445862)], [[[253.0, 147.0], [312.0, 144.0], [313.0, 166.0], [254.0, 168.0]], ('$13.99', 0.9204174876213074)], [[[16.0, 172.0], [157.0, 170.0], [157.0, 187.0], [16.0, 189.0]], ('S-FOODP.CAKT1', 0.9633263945579529)], [[[255.0, 168.0], [313.0, 168.0], [313.0, 189.0], [255.0, 189.0]], ('$14.99', 0.9975371956825256)], [[[15.0, 194.0], [157.0, 192.0], [157.0, 210.0], [15.0, 212.0]], ('PORK DUMPLIN T1', 0.9503927826881409)], [[[265.0, 190.0], [317.0, 188.0], [318.0, 209.0], [266.0, 212.0]], ('$8.99', 0.9171518087387085)], [[[12.0, 217.0], [149.0, 213.0], [149.0, 233.0], [12.0, 236.0]], ('LA BEEF RIB T1', 0.925663948059082)], [[[258.0, 213.0], [319.0, 210.0], [320.0, 232.0], [259.0, 235.0]], ('$17.99', 0.9976120591163635)], [[[119.0, 237.0], [217.0, 237.0], [217.0, 258.0], [119.0, 258.0]], ('4.00xITEMS', 0.9557921290397644)], [[[9.0, 264.0], [90.0, 262.0], [90.0, 284.0], [9.0, 286.0]], ('SUBTOTAL', 0.9968011379241943)], [[[263.0, 261.0], [324.0, 259.0], [325.0, 281.0], [264.0, 283.0]], ('$55.96', 0.9971590042114258)], [[[8.0, 289.0], [50.0, 289.0], [50.0, 311.0], [8.0, 311.0]], ('TAX1', 0.9973537921905518)], [[[273.0, 286.0], [326.0, 283.0], [328.0, 306.0], [274.0, 309.0]], ('$4.48', 0.991606593132019)], [[[9.0, 315.0], [61.0, 315.0], [61.0, 337.0], [9.0, 337.0]], ('TOTAL', 0.9985822439193726)], [[[266.0, 312.0], [328.0, 309.0], [328.0, 331.0], [267.0, 333.0]], ('$60.44', 0.9942547678947449)], [[[269.0, 334.0], [326.0, 334.0], [326.0, 347.0], [269.0, 347.0]], ('$60AA', 0.7674070596694946)]]
Curation Rationale
-----------------------------
The curated dataset was created to provide a source of OCR augmented text data for own personal AI research use. The datapoints are intended primarily to provide an enhancement of the core Receipt Image Collection data which relies upon the key information extraction from receipt image.
Data Source and Prepratation
-----------------------------
1) This dataset use the great work from WildReceipt is a large receipt dataset collected from document images of unseen templates in the wild. It contains 25 key information categories, a total of about 69000 text boxes. Offical dataset: https://download.openmmlab.com/mmocr/data/wildreceipt.tar
2) OCR text data is generated using techniques OCR scaned on each image.
3) Additional Post progressing OCR result into XML, JSON and Words format
License:
Please check out the license of each subset in our curated dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/wildreceipts_ocr_train
|
[
"region:us"
] |
2023-09-21T07:32:11+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 132661697.28, "num_examples": 1265}], "download_size": 118220818, "dataset_size": 132661697.28}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T09:10:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wildreceipts_ocr_train"
Dataset Summary
-----------------------------
This is collection of receipts images with enhanced text information source from Wildreceipts and additional curated receipt images.
It contains photo and OCRs information of each image including words, bounding box, labels and key information extraction data in json and xml format.
Features and Data Structure
-----------------------------
visual data
- Receipt image represent complex layouts, the effects are well demonstrated on each image.
text data
- ocr_json - represent extracted receipt key information data in json format
- ocr_boxes - represent up-to-date ocr scan result as grouth truth in raw format
- ocr_words - represent ocr detected and recognized words from the receipt image
- ocr_labels - represent original mapping of labels class and text position (may deviate from actual ocr scan result)
- ocr_xml - represent xml format of the key information
- ocr_kie - represent extraction of key information from the receipt image
Languages
The language of the data is primarily English.
Data Instances
A data instance in this dataset represents entries from the Receipt collection which have been augmented.
Data Samples
-----------------------------
Image:
file_name: receipt_0.jpeg
Sample: ocr_words
-----------------------------
['CHO EUN', 'KOREAN RESTAURANT', '2621 ORANGETHORPE AVE,FULLERTON.', '714879-3574', 'THANKYOU!!', 'DATE12/30/2016 FRI', 'TIME19:19', 'BIBIM.OCTOPU T1', '$13.99', 'S-FOODP.CAKT1', '$14.99', 'PORK DUMPLIN T1', '$8.99', 'LA BEEF RIB T1', '$17.99', '4.00xITEMS', 'SUBTOTAL', '$55.96', 'TAX1', '$4.48', 'TOTAL', '$60.44', '$60AA']
Sample: ocr_json
-----------------------------
{"store_name": "CHOEUN KOREANRESTAURANT", "store_addr": "2621ORANGETHORPEAVE,FULLERTON.", "telephone": "(714)879-3574", "date": "12/30/2016FRI", "time": "19:19", "subtotal": "$55.96", "tax": "$4.48", "total": "$60.44", "ignore": " ", "tips": "", "line_items": [{"item_key": "", "item_name": "BIBIM.OCTOPUT1", "item_value": "$13.99", "item_quantity": "1"}, {"item_key": "", "item_name": "S-FOODP.CAKT1", "item_value": "$14.99", "item_quantity": "1"}, {"item_key": "", "item_name": "PORKDUMPLINT1", "item_value": "$8.99", "item_quantity": "1"}, {"item_key": "", "item_name": "LABEEFRIBT1", "item_value": "\uffe517.99", "item_quantity": "1"}, {"item_key": "4.00xITEMS", "item_name": "", "item_value": "", "item_quantity": ""}]}
Sample: ocr_xml
-----------------------------
<s_receipt><s_total>$60.44</s_total><s_tips></s_tips><s_time>19:19</s_time><s_telephone>(714)879-3574</s_telephone><s_tax>$4.48</s_tax><s_subtotal>$55.96</s_subtotal><s_store_name>CHOEUN KOREANRESTAURANT</s_store_name><s_store_addr>2621ORANGETHORPEAVE,FULLERTON.</s_store_addr><s_line_items><s_item_value>$13.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>BIBIM.OCTOPUT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$14.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>S-FOODP.CAKT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$8.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>PORKDUMPLINT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>¥17.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>LABEEFRIBT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value></s_item_value><s_item_quantity></s_item_quantity><s_item_name></s_item_name><s_item_key>4.00xITEMS</s_item_key></s_line_items><s_ignore> </s_ignore><s_date>12/30/2016FRI</s_date></s_receipt>
Sample: ocr_kie
-----------------------------
[{'label': 'Store_name_value', 'transcription': 'CHOEUN'}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT'}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.'}, {'label': 'Tel_value', 'transcription': '(714)879-3574'}, {'label': 'Others', 'transcription': 'THANKYOU!!'}, {'label': 'Date_key', 'transcription': 'DATE'}, {'label': 'Date_value', 'transcription': '12/30/2016FRI'}, {'label': 'Time_value', 'transcription': '19:19'}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1'}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1'}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1'}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1'}, {'label': 'Prod_price_value', 'transcription': '$13.99'}, {'label': 'Prod_price_value', 'transcription': '$14.99'}, {'label': 'Prod_price_value', 'transcription': '$8.99'}, {'label': 'Prod_price_value', 'transcription': '¥17.99'}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS'}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL'}, {'label': 'Tax_key', 'transcription': 'TAX1'}, {'label': 'Total_key', 'transcription': 'TOTAL'}, {'label': 'Subtotal_value', 'transcription': '$55.96'}, {'label': 'Tax_value', 'transcription': '$4.48'}, {'label': 'Total_value', 'transcription': '$60.44'}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Time_key', 'transcription': 'TIME'}]
Sample: ocr_labels
-----------------------------
[{'label': 'Store_name_value', 'transcription': 'CHOEUN', 'points': [[114.0, 19.0], [230.0, 19.0], [230.0, 1.0], [114.0, 1.0]]}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT', 'points': [[97.0, 35.0], [236.0, 35.0], [236.0, 19.0], [97.0, 19.0]]}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.', 'points': [[29.0, 56.0], [295.0, 56.0], [295.0, 34.0], [29.0, 34.0]]}, {'label': 'Tel_value', 'transcription': '(714)879-3574', 'points': [[48.0, 73.0], [280.0, 73.0], [280.0, 54.0], [48.0, 54.0]]}, {'label': 'Others', 'transcription': 'THANKYOU!!', 'points': [[79.0, 92.0], [259.0, 92.0], [259.0, 74.0], [79.0, 74.0]]}, {'label': 'Date_key', 'transcription': 'DATE', 'points': [[22.0, 130.0], [61.0, 130.0], [61.0, 112.0], [22.0, 112.0]]}, {'label': 'Date_value', 'transcription': '12/30/2016FRI', 'points': [[70.0, 131.0], [192.0, 131.0], [192.0, 112.0], [70.0, 112.0]]}, {'label': 'Time_value', 'transcription': '19:19', 'points': [[263.0, 128.0], [307.0, 128.0], [307.0, 111.0], [263.0, 111.0]]}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1', 'points': [[19.0, 168.0], [157.0, 168.0], [157.0, 149.0], [19.0, 149.0]]}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1', 'points': [[17.0, 190.0], [158.0, 190.0], [158.0, 171.0], [17.0, 171.0]]}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1', 'points': [[14.0, 214.0], [158.0, 214.0], [158.0, 192.0], [14.0, 192.0]]}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1', 'points': [[14.0, 236.0], [151.0, 236.0], [151.0, 215.0], [14.0, 215.0]]}, {'transcription': '$13.99', 'points': [[254.0, 168.0], [312.0, 168.0], [312.0, 149.0], [254.0, 149.0]]}, {'transcription': '$14.99', 'points': [[257.0, 189.0], [314.0, 189.0], [314.0, 170.0], [257.0, 170.0]]}, {'transcription': '$8.99', 'points': [[268.0, 212.0], [316.0, 212.0], [316.0, 191.0], [268.0, 191.0]]}, {'transcription': '¥17.99', 'points': [[261.0, 234.0], [318.0, 234.0], [318.0, 213.0], [261.0, 213.0]]}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS', 'points': [[118.0, 260.0], [217.0, 260.0], [217.0, 239.0], [118.0, 239.0]]}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL', 'points': [[8.0, 285.0], [91.0, 285.0], [91.0, 264.0], [8.0, 264.0]]}, {'label': 'Tax_key', 'transcription': 'TAX1', 'points': [[8.0, 312.0], [49.0, 312.0], [49.0, 291.0], [8.0, 291.0]]}, {'label': 'Total_key', 'transcription': 'TOTAL', 'points': [[8.0, 336.0], [61.0, 336.0], [61.0, 316.0], [8.0, 316.0]]}, {'label': 'Subtotal_value', 'transcription': '$55.96', 'points': [[263.0, 283.0], [325.0, 283.0], [325.0, 260.0], [263.0, 260.0]]}, {'label': 'Tax_value', 'transcription': '$4.48', 'points': [[274.0, 308.0], [326.0, 308.0], [326.0, 286.0], [274.0, 286.0]]}, {'label': 'Total_value', 'transcription': '$60.44', 'points': [[267.0, 334.0], [328.0, 334.0], [328.0, 310.0], [267.0, 310.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[269.0, 347.0], [328.0, 347.0], [328.0, 336.0], [269.0, 336.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[11.0, 347.0], [50.0, 347.0], [50.0, 342.0], [11.0, 342.0]]}, {'label': 'Time_key', 'transcription': 'TIME', 'points': [[215.0, 128.0], [253.0, 128.0], [253.0, 112.0], [215.0, 112.0]]}]
Sample: ocr_boxes
-----------------------------
[[[[113.0, 0.0], [228.0, 3.0], [227.0, 20.0], [113.0, 17.0]], ('CHO EUN', 0.9466678500175476)], [[[96.0, 17.0], [236.0, 21.0], [236.0, 38.0], [96.0, 33.0]], ('KOREAN RESTAURANT', 0.9685913324356079)], [[[28.0, 32.0], [293.0, 37.0], [292.0, 56.0], [28.0, 51.0]], ('2621 ORANGETHORPE AVE,FULLERTON.', 0.951709508895874)], [[[48.0, 53.0], [279.0, 56.0], [279.0, 73.0], [47.0, 70.0]], ('714879-3574', 0.9919183850288391)], [[[81.0, 75.0], [256.0, 75.0], [256.0, 89.0], [81.0, 89.0]], ('THANKYOU!!', 0.9518492817878723)], [[[24.0, 113.0], [191.0, 113.0], [191.0, 127.0], [24.0, 127.0]], ('DATE12/30/2016 FRI', 0.9638745784759521)], [[[214.0, 111.0], [305.0, 109.0], [306.0, 125.0], [215.0, 128.0]], ('TIME19:19', 0.9523274898529053)], [[[18.0, 150.0], [156.0, 149.0], [156.0, 167.0], [18.0, 168.0]], ('BIBIM.OCTOPU T1', 0.9491282105445862)], [[[253.0, 147.0], [312.0, 144.0], [313.0, 166.0], [254.0, 168.0]], ('$13.99', 0.9204174876213074)], [[[16.0, 172.0], [157.0, 170.0], [157.0, 187.0], [16.0, 189.0]], ('S-FOODP.CAKT1', 0.9633263945579529)], [[[255.0, 168.0], [313.0, 168.0], [313.0, 189.0], [255.0, 189.0]], ('$14.99', 0.9975371956825256)], [[[15.0, 194.0], [157.0, 192.0], [157.0, 210.0], [15.0, 212.0]], ('PORK DUMPLIN T1', 0.9503927826881409)], [[[265.0, 190.0], [317.0, 188.0], [318.0, 209.0], [266.0, 212.0]], ('$8.99', 0.9171518087387085)], [[[12.0, 217.0], [149.0, 213.0], [149.0, 233.0], [12.0, 236.0]], ('LA BEEF RIB T1', 0.925663948059082)], [[[258.0, 213.0], [319.0, 210.0], [320.0, 232.0], [259.0, 235.0]], ('$17.99', 0.9976120591163635)], [[[119.0, 237.0], [217.0, 237.0], [217.0, 258.0], [119.0, 258.0]], ('4.00xITEMS', 0.9557921290397644)], [[[9.0, 264.0], [90.0, 262.0], [90.0, 284.0], [9.0, 286.0]], ('SUBTOTAL', 0.9968011379241943)], [[[263.0, 261.0], [324.0, 259.0], [325.0, 281.0], [264.0, 283.0]], ('$55.96', 0.9971590042114258)], [[[8.0, 289.0], [50.0, 289.0], [50.0, 311.0], [8.0, 311.0]], ('TAX1', 0.9973537921905518)], [[[273.0, 286.0], [326.0, 283.0], [328.0, 306.0], [274.0, 309.0]], ('$4.48', 0.991606593132019)], [[[9.0, 315.0], [61.0, 315.0], [61.0, 337.0], [9.0, 337.0]], ('TOTAL', 0.9985822439193726)], [[[266.0, 312.0], [328.0, 309.0], [328.0, 331.0], [267.0, 333.0]], ('$60.44', 0.9942547678947449)], [[[269.0, 334.0], [326.0, 334.0], [326.0, 347.0], [269.0, 347.0]], ('$60AA', 0.7674070596694946)]]
Curation Rationale
-----------------------------
The curated dataset was created to provide a source of OCR augmented text data for own personal AI research use. The datapoints are intended primarily to provide an enhancement of the core Receipt Image Collection data which relies upon the key information extraction from receipt image.
Data Source and Prepratation
-----------------------------
1) This dataset use the great work from WildReceipt is a large receipt dataset collected from document images of unseen templates in the wild. It contains 25 key information categories, a total of about 69000 text boxes. Offical dataset: URL
2) OCR text data is generated using techniques OCR scaned on each image.
3) Additional Post progressing OCR result into XML, JSON and Words format
License:
Please check out the license of each subset in our curated dataset.
More Information needed
|
[
"# Dataset Card for \"wildreceipts_ocr_train\"\n\n\n\nDataset Summary\n-----------------------------\nThis is collection of receipts images with enhanced text information source from Wildreceipts and additional curated receipt images. \nIt contains photo and OCRs information of each image including words, bounding box, labels and key information extraction data in json and xml format.\n\nFeatures and Data Structure\n-----------------------------\n\nvisual data\n- Receipt image represent complex layouts, the effects are well demonstrated on each image.\n\ntext data \n- ocr_json - represent extracted receipt key information data in json format\n- ocr_boxes - represent up-to-date ocr scan result as grouth truth in raw format\n- ocr_words - represent ocr detected and recognized words from the receipt image \n- ocr_labels - represent original mapping of labels class and text position (may deviate from actual ocr scan result)\n- ocr_xml - represent xml format of the key information\n- ocr_kie - represent extraction of key information from the receipt image\n\nLanguages\nThe language of the data is primarily English.\n\nData Instances\nA data instance in this dataset represents entries from the Receipt collection which have been augmented. \n\nData Samples\n-----------------------------\nImage:\nfile_name: receipt_0.jpeg\n\nSample: ocr_words\n-----------------------------\n['CHO EUN', 'KOREAN RESTAURANT', '2621 ORANGETHORPE AVE,FULLERTON.', '714879-3574', 'THANKYOU!!', 'DATE12/30/2016 FRI', 'TIME19:19', 'BIBIM.OCTOPU T1', '$13.99', 'S-FOODP.CAKT1', '$14.99', 'PORK DUMPLIN T1', '$8.99', 'LA BEEF RIB T1', '$17.99', '4.00xITEMS', 'SUBTOTAL', '$55.96', 'TAX1', '$4.48', 'TOTAL', '$60.44', '$60AA']\n\nSample: ocr_json\n-----------------------------\n{\"store_name\": \"CHOEUN KOREANRESTAURANT\", \"store_addr\": \"2621ORANGETHORPEAVE,FULLERTON.\", \"telephone\": \"(714)879-3574\", \"date\": \"12/30/2016FRI\", \"time\": \"19:19\", \"subtotal\": \"$55.96\", \"tax\": \"$4.48\", \"total\": \"$60.44\", \"ignore\": \" \", \"tips\": \"\", \"line_items\": [{\"item_key\": \"\", \"item_name\": \"BIBIM.OCTOPUT1\", \"item_value\": \"$13.99\", \"item_quantity\": \"1\"}, {\"item_key\": \"\", \"item_name\": \"S-FOODP.CAKT1\", \"item_value\": \"$14.99\", \"item_quantity\": \"1\"}, {\"item_key\": \"\", \"item_name\": \"PORKDUMPLINT1\", \"item_value\": \"$8.99\", \"item_quantity\": \"1\"}, {\"item_key\": \"\", \"item_name\": \"LABEEFRIBT1\", \"item_value\": \"\\uffe517.99\", \"item_quantity\": \"1\"}, {\"item_key\": \"4.00xITEMS\", \"item_name\": \"\", \"item_value\": \"\", \"item_quantity\": \"\"}]}\n\nSample: ocr_xml\n-----------------------------\n<s_receipt><s_total>$60.44</s_total><s_tips></s_tips><s_time>19:19</s_time><s_telephone>(714)879-3574</s_telephone><s_tax>$4.48</s_tax><s_subtotal>$55.96</s_subtotal><s_store_name>CHOEUN KOREANRESTAURANT</s_store_name><s_store_addr>2621ORANGETHORPEAVE,FULLERTON.</s_store_addr><s_line_items><s_item_value>$13.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>BIBIM.OCTOPUT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$14.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>S-FOODP.CAKT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$8.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>PORKDUMPLINT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>¥17.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>LABEEFRIBT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value></s_item_value><s_item_quantity></s_item_quantity><s_item_name></s_item_name><s_item_key>4.00xITEMS</s_item_key></s_line_items><s_ignore> </s_ignore><s_date>12/30/2016FRI</s_date></s_receipt>\n\nSample: ocr_kie\n-----------------------------\n[{'label': 'Store_name_value', 'transcription': 'CHOEUN'}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT'}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.'}, {'label': 'Tel_value', 'transcription': '(714)879-3574'}, {'label': 'Others', 'transcription': 'THANKYOU!!'}, {'label': 'Date_key', 'transcription': 'DATE'}, {'label': 'Date_value', 'transcription': '12/30/2016FRI'}, {'label': 'Time_value', 'transcription': '19:19'}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1'}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1'}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1'}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1'}, {'label': 'Prod_price_value', 'transcription': '$13.99'}, {'label': 'Prod_price_value', 'transcription': '$14.99'}, {'label': 'Prod_price_value', 'transcription': '$8.99'}, {'label': 'Prod_price_value', 'transcription': '¥17.99'}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS'}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL'}, {'label': 'Tax_key', 'transcription': 'TAX1'}, {'label': 'Total_key', 'transcription': 'TOTAL'}, {'label': 'Subtotal_value', 'transcription': '$55.96'}, {'label': 'Tax_value', 'transcription': '$4.48'}, {'label': 'Total_value', 'transcription': '$60.44'}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Time_key', 'transcription': 'TIME'}]\n\nSample: ocr_labels\n-----------------------------\n[{'label': 'Store_name_value', 'transcription': 'CHOEUN', 'points': [[114.0, 19.0], [230.0, 19.0], [230.0, 1.0], [114.0, 1.0]]}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT', 'points': [[97.0, 35.0], [236.0, 35.0], [236.0, 19.0], [97.0, 19.0]]}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.', 'points': [[29.0, 56.0], [295.0, 56.0], [295.0, 34.0], [29.0, 34.0]]}, {'label': 'Tel_value', 'transcription': '(714)879-3574', 'points': [[48.0, 73.0], [280.0, 73.0], [280.0, 54.0], [48.0, 54.0]]}, {'label': 'Others', 'transcription': 'THANKYOU!!', 'points': [[79.0, 92.0], [259.0, 92.0], [259.0, 74.0], [79.0, 74.0]]}, {'label': 'Date_key', 'transcription': 'DATE', 'points': [[22.0, 130.0], [61.0, 130.0], [61.0, 112.0], [22.0, 112.0]]}, {'label': 'Date_value', 'transcription': '12/30/2016FRI', 'points': [[70.0, 131.0], [192.0, 131.0], [192.0, 112.0], [70.0, 112.0]]}, {'label': 'Time_value', 'transcription': '19:19', 'points': [[263.0, 128.0], [307.0, 128.0], [307.0, 111.0], [263.0, 111.0]]}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1', 'points': [[19.0, 168.0], [157.0, 168.0], [157.0, 149.0], [19.0, 149.0]]}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1', 'points': [[17.0, 190.0], [158.0, 190.0], [158.0, 171.0], [17.0, 171.0]]}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1', 'points': [[14.0, 214.0], [158.0, 214.0], [158.0, 192.0], [14.0, 192.0]]}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1', 'points': [[14.0, 236.0], [151.0, 236.0], [151.0, 215.0], [14.0, 215.0]]}, {'transcription': '$13.99', 'points': [[254.0, 168.0], [312.0, 168.0], [312.0, 149.0], [254.0, 149.0]]}, {'transcription': '$14.99', 'points': [[257.0, 189.0], [314.0, 189.0], [314.0, 170.0], [257.0, 170.0]]}, {'transcription': '$8.99', 'points': [[268.0, 212.0], [316.0, 212.0], [316.0, 191.0], [268.0, 191.0]]}, {'transcription': '¥17.99', 'points': [[261.0, 234.0], [318.0, 234.0], [318.0, 213.0], [261.0, 213.0]]}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS', 'points': [[118.0, 260.0], [217.0, 260.0], [217.0, 239.0], [118.0, 239.0]]}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL', 'points': [[8.0, 285.0], [91.0, 285.0], [91.0, 264.0], [8.0, 264.0]]}, {'label': 'Tax_key', 'transcription': 'TAX1', 'points': [[8.0, 312.0], [49.0, 312.0], [49.0, 291.0], [8.0, 291.0]]}, {'label': 'Total_key', 'transcription': 'TOTAL', 'points': [[8.0, 336.0], [61.0, 336.0], [61.0, 316.0], [8.0, 316.0]]}, {'label': 'Subtotal_value', 'transcription': '$55.96', 'points': [[263.0, 283.0], [325.0, 283.0], [325.0, 260.0], [263.0, 260.0]]}, {'label': 'Tax_value', 'transcription': '$4.48', 'points': [[274.0, 308.0], [326.0, 308.0], [326.0, 286.0], [274.0, 286.0]]}, {'label': 'Total_value', 'transcription': '$60.44', 'points': [[267.0, 334.0], [328.0, 334.0], [328.0, 310.0], [267.0, 310.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[269.0, 347.0], [328.0, 347.0], [328.0, 336.0], [269.0, 336.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[11.0, 347.0], [50.0, 347.0], [50.0, 342.0], [11.0, 342.0]]}, {'label': 'Time_key', 'transcription': 'TIME', 'points': [[215.0, 128.0], [253.0, 128.0], [253.0, 112.0], [215.0, 112.0]]}]\n\nSample: ocr_boxes\n-----------------------------\n[[[[113.0, 0.0], [228.0, 3.0], [227.0, 20.0], [113.0, 17.0]], ('CHO EUN', 0.9466678500175476)], [[[96.0, 17.0], [236.0, 21.0], [236.0, 38.0], [96.0, 33.0]], ('KOREAN RESTAURANT', 0.9685913324356079)], [[[28.0, 32.0], [293.0, 37.0], [292.0, 56.0], [28.0, 51.0]], ('2621 ORANGETHORPE AVE,FULLERTON.', 0.951709508895874)], [[[48.0, 53.0], [279.0, 56.0], [279.0, 73.0], [47.0, 70.0]], ('714879-3574', 0.9919183850288391)], [[[81.0, 75.0], [256.0, 75.0], [256.0, 89.0], [81.0, 89.0]], ('THANKYOU!!', 0.9518492817878723)], [[[24.0, 113.0], [191.0, 113.0], [191.0, 127.0], [24.0, 127.0]], ('DATE12/30/2016 FRI', 0.9638745784759521)], [[[214.0, 111.0], [305.0, 109.0], [306.0, 125.0], [215.0, 128.0]], ('TIME19:19', 0.9523274898529053)], [[[18.0, 150.0], [156.0, 149.0], [156.0, 167.0], [18.0, 168.0]], ('BIBIM.OCTOPU T1', 0.9491282105445862)], [[[253.0, 147.0], [312.0, 144.0], [313.0, 166.0], [254.0, 168.0]], ('$13.99', 0.9204174876213074)], [[[16.0, 172.0], [157.0, 170.0], [157.0, 187.0], [16.0, 189.0]], ('S-FOODP.CAKT1', 0.9633263945579529)], [[[255.0, 168.0], [313.0, 168.0], [313.0, 189.0], [255.0, 189.0]], ('$14.99', 0.9975371956825256)], [[[15.0, 194.0], [157.0, 192.0], [157.0, 210.0], [15.0, 212.0]], ('PORK DUMPLIN T1', 0.9503927826881409)], [[[265.0, 190.0], [317.0, 188.0], [318.0, 209.0], [266.0, 212.0]], ('$8.99', 0.9171518087387085)], [[[12.0, 217.0], [149.0, 213.0], [149.0, 233.0], [12.0, 236.0]], ('LA BEEF RIB T1', 0.925663948059082)], [[[258.0, 213.0], [319.0, 210.0], [320.0, 232.0], [259.0, 235.0]], ('$17.99', 0.9976120591163635)], [[[119.0, 237.0], [217.0, 237.0], [217.0, 258.0], [119.0, 258.0]], ('4.00xITEMS', 0.9557921290397644)], [[[9.0, 264.0], [90.0, 262.0], [90.0, 284.0], [9.0, 286.0]], ('SUBTOTAL', 0.9968011379241943)], [[[263.0, 261.0], [324.0, 259.0], [325.0, 281.0], [264.0, 283.0]], ('$55.96', 0.9971590042114258)], [[[8.0, 289.0], [50.0, 289.0], [50.0, 311.0], [8.0, 311.0]], ('TAX1', 0.9973537921905518)], [[[273.0, 286.0], [326.0, 283.0], [328.0, 306.0], [274.0, 309.0]], ('$4.48', 0.991606593132019)], [[[9.0, 315.0], [61.0, 315.0], [61.0, 337.0], [9.0, 337.0]], ('TOTAL', 0.9985822439193726)], [[[266.0, 312.0], [328.0, 309.0], [328.0, 331.0], [267.0, 333.0]], ('$60.44', 0.9942547678947449)], [[[269.0, 334.0], [326.0, 334.0], [326.0, 347.0], [269.0, 347.0]], ('$60AA', 0.7674070596694946)]]\n\n\nCuration Rationale\n-----------------------------\n\nThe curated dataset was created to provide a source of OCR augmented text data for own personal AI research use. The datapoints are intended primarily to provide an enhancement of the core Receipt Image Collection data which relies upon the key information extraction from receipt image. \n\nData Source and Prepratation\n-----------------------------\n1) This dataset use the great work from WildReceipt is a large receipt dataset collected from document images of unseen templates in the wild. It contains 25 key information categories, a total of about 69000 text boxes. Offical dataset: URL\n2) OCR text data is generated using techniques OCR scaned on each image. \n3) Additional Post progressing OCR result into XML, JSON and Words format\n\n\nLicense: \nPlease check out the license of each subset in our curated dataset. \n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wildreceipts_ocr_train\"\n\n\n\nDataset Summary\n-----------------------------\nThis is collection of receipts images with enhanced text information source from Wildreceipts and additional curated receipt images. \nIt contains photo and OCRs information of each image including words, bounding box, labels and key information extraction data in json and xml format.\n\nFeatures and Data Structure\n-----------------------------\n\nvisual data\n- Receipt image represent complex layouts, the effects are well demonstrated on each image.\n\ntext data \n- ocr_json - represent extracted receipt key information data in json format\n- ocr_boxes - represent up-to-date ocr scan result as grouth truth in raw format\n- ocr_words - represent ocr detected and recognized words from the receipt image \n- ocr_labels - represent original mapping of labels class and text position (may deviate from actual ocr scan result)\n- ocr_xml - represent xml format of the key information\n- ocr_kie - represent extraction of key information from the receipt image\n\nLanguages\nThe language of the data is primarily English.\n\nData Instances\nA data instance in this dataset represents entries from the Receipt collection which have been augmented. \n\nData Samples\n-----------------------------\nImage:\nfile_name: receipt_0.jpeg\n\nSample: ocr_words\n-----------------------------\n['CHO EUN', 'KOREAN RESTAURANT', '2621 ORANGETHORPE AVE,FULLERTON.', '714879-3574', 'THANKYOU!!', 'DATE12/30/2016 FRI', 'TIME19:19', 'BIBIM.OCTOPU T1', '$13.99', 'S-FOODP.CAKT1', '$14.99', 'PORK DUMPLIN T1', '$8.99', 'LA BEEF RIB T1', '$17.99', '4.00xITEMS', 'SUBTOTAL', '$55.96', 'TAX1', '$4.48', 'TOTAL', '$60.44', '$60AA']\n\nSample: ocr_json\n-----------------------------\n{\"store_name\": \"CHOEUN KOREANRESTAURANT\", \"store_addr\": \"2621ORANGETHORPEAVE,FULLERTON.\", \"telephone\": \"(714)879-3574\", \"date\": \"12/30/2016FRI\", \"time\": \"19:19\", \"subtotal\": \"$55.96\", \"tax\": \"$4.48\", \"total\": \"$60.44\", \"ignore\": \" \", \"tips\": \"\", \"line_items\": [{\"item_key\": \"\", \"item_name\": \"BIBIM.OCTOPUT1\", \"item_value\": \"$13.99\", \"item_quantity\": \"1\"}, {\"item_key\": \"\", \"item_name\": \"S-FOODP.CAKT1\", \"item_value\": \"$14.99\", \"item_quantity\": \"1\"}, {\"item_key\": \"\", \"item_name\": \"PORKDUMPLINT1\", \"item_value\": \"$8.99\", \"item_quantity\": \"1\"}, {\"item_key\": \"\", \"item_name\": \"LABEEFRIBT1\", \"item_value\": \"\\uffe517.99\", \"item_quantity\": \"1\"}, {\"item_key\": \"4.00xITEMS\", \"item_name\": \"\", \"item_value\": \"\", \"item_quantity\": \"\"}]}\n\nSample: ocr_xml\n-----------------------------\n<s_receipt><s_total>$60.44</s_total><s_tips></s_tips><s_time>19:19</s_time><s_telephone>(714)879-3574</s_telephone><s_tax>$4.48</s_tax><s_subtotal>$55.96</s_subtotal><s_store_name>CHOEUN KOREANRESTAURANT</s_store_name><s_store_addr>2621ORANGETHORPEAVE,FULLERTON.</s_store_addr><s_line_items><s_item_value>$13.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>BIBIM.OCTOPUT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$14.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>S-FOODP.CAKT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$8.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>PORKDUMPLINT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>¥17.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>LABEEFRIBT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value></s_item_value><s_item_quantity></s_item_quantity><s_item_name></s_item_name><s_item_key>4.00xITEMS</s_item_key></s_line_items><s_ignore> </s_ignore><s_date>12/30/2016FRI</s_date></s_receipt>\n\nSample: ocr_kie\n-----------------------------\n[{'label': 'Store_name_value', 'transcription': 'CHOEUN'}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT'}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.'}, {'label': 'Tel_value', 'transcription': '(714)879-3574'}, {'label': 'Others', 'transcription': 'THANKYOU!!'}, {'label': 'Date_key', 'transcription': 'DATE'}, {'label': 'Date_value', 'transcription': '12/30/2016FRI'}, {'label': 'Time_value', 'transcription': '19:19'}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1'}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1'}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1'}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1'}, {'label': 'Prod_price_value', 'transcription': '$13.99'}, {'label': 'Prod_price_value', 'transcription': '$14.99'}, {'label': 'Prod_price_value', 'transcription': '$8.99'}, {'label': 'Prod_price_value', 'transcription': '¥17.99'}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS'}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL'}, {'label': 'Tax_key', 'transcription': 'TAX1'}, {'label': 'Total_key', 'transcription': 'TOTAL'}, {'label': 'Subtotal_value', 'transcription': '$55.96'}, {'label': 'Tax_value', 'transcription': '$4.48'}, {'label': 'Total_value', 'transcription': '$60.44'}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Time_key', 'transcription': 'TIME'}]\n\nSample: ocr_labels\n-----------------------------\n[{'label': 'Store_name_value', 'transcription': 'CHOEUN', 'points': [[114.0, 19.0], [230.0, 19.0], [230.0, 1.0], [114.0, 1.0]]}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT', 'points': [[97.0, 35.0], [236.0, 35.0], [236.0, 19.0], [97.0, 19.0]]}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.', 'points': [[29.0, 56.0], [295.0, 56.0], [295.0, 34.0], [29.0, 34.0]]}, {'label': 'Tel_value', 'transcription': '(714)879-3574', 'points': [[48.0, 73.0], [280.0, 73.0], [280.0, 54.0], [48.0, 54.0]]}, {'label': 'Others', 'transcription': 'THANKYOU!!', 'points': [[79.0, 92.0], [259.0, 92.0], [259.0, 74.0], [79.0, 74.0]]}, {'label': 'Date_key', 'transcription': 'DATE', 'points': [[22.0, 130.0], [61.0, 130.0], [61.0, 112.0], [22.0, 112.0]]}, {'label': 'Date_value', 'transcription': '12/30/2016FRI', 'points': [[70.0, 131.0], [192.0, 131.0], [192.0, 112.0], [70.0, 112.0]]}, {'label': 'Time_value', 'transcription': '19:19', 'points': [[263.0, 128.0], [307.0, 128.0], [307.0, 111.0], [263.0, 111.0]]}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1', 'points': [[19.0, 168.0], [157.0, 168.0], [157.0, 149.0], [19.0, 149.0]]}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1', 'points': [[17.0, 190.0], [158.0, 190.0], [158.0, 171.0], [17.0, 171.0]]}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1', 'points': [[14.0, 214.0], [158.0, 214.0], [158.0, 192.0], [14.0, 192.0]]}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1', 'points': [[14.0, 236.0], [151.0, 236.0], [151.0, 215.0], [14.0, 215.0]]}, {'transcription': '$13.99', 'points': [[254.0, 168.0], [312.0, 168.0], [312.0, 149.0], [254.0, 149.0]]}, {'transcription': '$14.99', 'points': [[257.0, 189.0], [314.0, 189.0], [314.0, 170.0], [257.0, 170.0]]}, {'transcription': '$8.99', 'points': [[268.0, 212.0], [316.0, 212.0], [316.0, 191.0], [268.0, 191.0]]}, {'transcription': '¥17.99', 'points': [[261.0, 234.0], [318.0, 234.0], [318.0, 213.0], [261.0, 213.0]]}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS', 'points': [[118.0, 260.0], [217.0, 260.0], [217.0, 239.0], [118.0, 239.0]]}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL', 'points': [[8.0, 285.0], [91.0, 285.0], [91.0, 264.0], [8.0, 264.0]]}, {'label': 'Tax_key', 'transcription': 'TAX1', 'points': [[8.0, 312.0], [49.0, 312.0], [49.0, 291.0], [8.0, 291.0]]}, {'label': 'Total_key', 'transcription': 'TOTAL', 'points': [[8.0, 336.0], [61.0, 336.0], [61.0, 316.0], [8.0, 316.0]]}, {'label': 'Subtotal_value', 'transcription': '$55.96', 'points': [[263.0, 283.0], [325.0, 283.0], [325.0, 260.0], [263.0, 260.0]]}, {'label': 'Tax_value', 'transcription': '$4.48', 'points': [[274.0, 308.0], [326.0, 308.0], [326.0, 286.0], [274.0, 286.0]]}, {'label': 'Total_value', 'transcription': '$60.44', 'points': [[267.0, 334.0], [328.0, 334.0], [328.0, 310.0], [267.0, 310.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[269.0, 347.0], [328.0, 347.0], [328.0, 336.0], [269.0, 336.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[11.0, 347.0], [50.0, 347.0], [50.0, 342.0], [11.0, 342.0]]}, {'label': 'Time_key', 'transcription': 'TIME', 'points': [[215.0, 128.0], [253.0, 128.0], [253.0, 112.0], [215.0, 112.0]]}]\n\nSample: ocr_boxes\n-----------------------------\n[[[[113.0, 0.0], [228.0, 3.0], [227.0, 20.0], [113.0, 17.0]], ('CHO EUN', 0.9466678500175476)], [[[96.0, 17.0], [236.0, 21.0], [236.0, 38.0], [96.0, 33.0]], ('KOREAN RESTAURANT', 0.9685913324356079)], [[[28.0, 32.0], [293.0, 37.0], [292.0, 56.0], [28.0, 51.0]], ('2621 ORANGETHORPE AVE,FULLERTON.', 0.951709508895874)], [[[48.0, 53.0], [279.0, 56.0], [279.0, 73.0], [47.0, 70.0]], ('714879-3574', 0.9919183850288391)], [[[81.0, 75.0], [256.0, 75.0], [256.0, 89.0], [81.0, 89.0]], ('THANKYOU!!', 0.9518492817878723)], [[[24.0, 113.0], [191.0, 113.0], [191.0, 127.0], [24.0, 127.0]], ('DATE12/30/2016 FRI', 0.9638745784759521)], [[[214.0, 111.0], [305.0, 109.0], [306.0, 125.0], [215.0, 128.0]], ('TIME19:19', 0.9523274898529053)], [[[18.0, 150.0], [156.0, 149.0], [156.0, 167.0], [18.0, 168.0]], ('BIBIM.OCTOPU T1', 0.9491282105445862)], [[[253.0, 147.0], [312.0, 144.0], [313.0, 166.0], [254.0, 168.0]], ('$13.99', 0.9204174876213074)], [[[16.0, 172.0], [157.0, 170.0], [157.0, 187.0], [16.0, 189.0]], ('S-FOODP.CAKT1', 0.9633263945579529)], [[[255.0, 168.0], [313.0, 168.0], [313.0, 189.0], [255.0, 189.0]], ('$14.99', 0.9975371956825256)], [[[15.0, 194.0], [157.0, 192.0], [157.0, 210.0], [15.0, 212.0]], ('PORK DUMPLIN T1', 0.9503927826881409)], [[[265.0, 190.0], [317.0, 188.0], [318.0, 209.0], [266.0, 212.0]], ('$8.99', 0.9171518087387085)], [[[12.0, 217.0], [149.0, 213.0], [149.0, 233.0], [12.0, 236.0]], ('LA BEEF RIB T1', 0.925663948059082)], [[[258.0, 213.0], [319.0, 210.0], [320.0, 232.0], [259.0, 235.0]], ('$17.99', 0.9976120591163635)], [[[119.0, 237.0], [217.0, 237.0], [217.0, 258.0], [119.0, 258.0]], ('4.00xITEMS', 0.9557921290397644)], [[[9.0, 264.0], [90.0, 262.0], [90.0, 284.0], [9.0, 286.0]], ('SUBTOTAL', 0.9968011379241943)], [[[263.0, 261.0], [324.0, 259.0], [325.0, 281.0], [264.0, 283.0]], ('$55.96', 0.9971590042114258)], [[[8.0, 289.0], [50.0, 289.0], [50.0, 311.0], [8.0, 311.0]], ('TAX1', 0.9973537921905518)], [[[273.0, 286.0], [326.0, 283.0], [328.0, 306.0], [274.0, 309.0]], ('$4.48', 0.991606593132019)], [[[9.0, 315.0], [61.0, 315.0], [61.0, 337.0], [9.0, 337.0]], ('TOTAL', 0.9985822439193726)], [[[266.0, 312.0], [328.0, 309.0], [328.0, 331.0], [267.0, 333.0]], ('$60.44', 0.9942547678947449)], [[[269.0, 334.0], [326.0, 334.0], [326.0, 347.0], [269.0, 347.0]], ('$60AA', 0.7674070596694946)]]\n\n\nCuration Rationale\n-----------------------------\n\nThe curated dataset was created to provide a source of OCR augmented text data for own personal AI research use. The datapoints are intended primarily to provide an enhancement of the core Receipt Image Collection data which relies upon the key information extraction from receipt image. \n\nData Source and Prepratation\n-----------------------------\n1) This dataset use the great work from WildReceipt is a large receipt dataset collected from document images of unseen templates in the wild. It contains 25 key information categories, a total of about 69000 text boxes. Offical dataset: URL\n2) OCR text data is generated using techniques OCR scaned on each image. \n3) Additional Post progressing OCR result into XML, JSON and Words format\n\n\nLicense: \nPlease check out the license of each subset in our curated dataset. \n\n\nMore Information needed"
] |
[
6,
5377
] |
[
"passage: TAGS\n#region-us \n"
] |
40640be104136f2926e143a4a2d4dbb25bf29841
|
# Dataset Card for "indic-sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ai4bharat/IndicSentiment-Translated
|
[
"region:us"
] |
2023-09-21T07:35:23+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "GENERIC CATEGORIES", "dtype": "string"}, {"name": "CATEGORY", "dtype": "string"}, {"name": "SUB-CATEGORY", "dtype": "string"}, {"name": "PRODUCT", "dtype": "string"}, {"name": "BRAND", "dtype": "string"}, {"name": "ASPECTS", "dtype": "string"}, {"name": "ASPECT COMBO", "dtype": "string"}, {"name": "ENGLISH REVIEW", "dtype": "string"}, {"name": "LABEL", "dtype": "string"}, {"name": "INDIC REVIEW", "dtype": "string"}, {"name": "ITV2 HI REVIEW", "dtype": "string"}, {"name": "ITV2 TE REVIEW", "dtype": "string"}, {"name": "ITV2 KN REVIEW", "dtype": "string"}, {"name": "ITV2 GU REVIEW", "dtype": "string"}, {"name": "ITV2 OR REVIEW", "dtype": "string"}, {"name": "ITV2 ML REVIEW", "dtype": "string"}, {"name": "ITV2 BD REVIEW", "dtype": "string"}, {"name": "ITV2 UR REVIEW", "dtype": "string"}, {"name": "ITV2 AS REVIEW", "dtype": "string"}, {"name": "ITV2 BN REVIEW", "dtype": "string"}, {"name": "ITV2 MR REVIEW", "dtype": "string"}, {"name": "ITV2 PA REVIEW", "dtype": "string"}, {"name": "ITV2 TA REVIEW", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 382343, "num_examples": 156}, {"name": "test", "num_bytes": 2447023, "num_examples": 1000}], "download_size": 1710213, "dataset_size": 2829366}}
|
2023-10-05T16:36:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "indic-sentiment"
More Information needed
|
[
"# Dataset Card for \"indic-sentiment\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"indic-sentiment\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"indic-sentiment\"\n\nMore Information needed"
] |
e308df0b3caa95858c2b01c31909daa0b9f4b2a6
|
# Dataset Card for "high_speed_photography"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/high_speed_photography
|
[
"region:us"
] |
2023-09-21T07:40:37+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 873953, "num_examples": 10000}], "download_size": 14576, "dataset_size": 873953}}
|
2023-09-21T07:40:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "high_speed_photography"
More Information needed
|
[
"# Dataset Card for \"high_speed_photography\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"high_speed_photography\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"high_speed_photography\"\n\nMore Information needed"
] |
d513958e827e08c79e68c9602cbc6dd04e5e3a50
|
# Dataset Card for "tokenized_enwiki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
daje/tokenized_enwiki
|
[
"region:us"
] |
2023-09-21T07:40:53+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24408319844, "num_examples": 16370815}], "download_size": 10890317773, "dataset_size": 24408319844}}
|
2023-09-21T08:00:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tokenized_enwiki"
More Information needed
|
[
"# Dataset Card for \"tokenized_enwiki\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized_enwiki\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tokenized_enwiki\"\n\nMore Information needed"
] |
c068e8e5c18f6efbff0a12e655d9f193f8a4ef0d
|
# Dataset Card for "retro_style_photography"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/retro_style_photography
|
[
"region:us"
] |
2023-09-21T07:49:33+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 984913, "num_examples": 10000}], "download_size": 13621, "dataset_size": 984913}}
|
2023-09-21T07:49:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "retro_style_photography"
More Information needed
|
[
"# Dataset Card for \"retro_style_photography\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"retro_style_photography\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"retro_style_photography\"\n\nMore Information needed"
] |
2688bef5b99398ff526bfc49588f6d11755252e8
|
# gandalf_ignore_instructions

This is a dataset of prompt injections from [Gandalf](https://gandalf.lakera.ai/) by [Lakera](https://www.lakera.ai/).
Note that we might update the dataset occasionally by cleaning the data or adding more samples.
## How the data was obtained
There are millions of prompts and many of them are not actual prompt injections (people ask Gandalf all kinds of things).
We used the following process to obtain relevant data:
- Start with all prompts submitted to Gandalf in July 2023.
- Use [OpenAI text embeddings](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) (`text-embedding-ada-002`) to compute the similarity between the prompts and the phrase "Ignore all previous instructions".
- Filter to those prompts that have similarity of at least 0.825.
- Use [Lakera Guard](https://platform.lakera.ai/docs)'s personally identifiable information detector to filter out prompts that potentially contain PII data.
- Remove near-duplicates from the data (prompts that differ only by a few letters) using an approximate algorithm. This helps reduce leakage between the data splits.
- Sample 1000 prompts.
- Split the data into train-val-test with an 80/10/10 ratio. Each sample is assigned independently so the size of the train split is not _exactly_ 80% and so on.
Note that there is a small amount of noise in the data since an automatic method was used to obtain it: a few of the samples might not be real prompt injections.
## Citation
If you use this dataset in your research, please cite it as
```
@InProceedings{gandalf_ignore_instructions,
title = {gandalf_ignore_instructions},
author={Lakera AI (https://www.lakera.ai)},
year={2023}
}
```
## Licensing Information
gandalf_ignore_instructions is distributed under the [MIT License](https://opensource.org/license/mit/).
|
Lakera/gandalf_ignore_instructions
|
[
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"prompt injection",
"region:us"
] |
2023-09-21T07:49:47+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "similarity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 66400, "num_examples": 777}, {"name": "validation", "num_bytes": 9633, "num_examples": 111}, {"name": "test", "num_bytes": 9747, "num_examples": 112}], "download_size": 51515, "dataset_size": 85780}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "tags": ["prompt injection"]}
|
2023-10-02T08:26:29+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-1K<n<10K #language-English #license-mit #prompt injection #region-us
|
# gandalf_ignore_instructions
.
We used the following process to obtain relevant data:
- Start with all prompts submitted to Gandalf in July 2023.
- Use OpenAI text embeddings ('text-embedding-ada-002') to compute the similarity between the prompts and the phrase "Ignore all previous instructions".
- Filter to those prompts that have similarity of at least 0.825.
- Use Lakera Guard's personally identifiable information detector to filter out prompts that potentially contain PII data.
- Remove near-duplicates from the data (prompts that differ only by a few letters) using an approximate algorithm. This helps reduce leakage between the data splits.
- Sample 1000 prompts.
- Split the data into train-val-test with an 80/10/10 ratio. Each sample is assigned independently so the size of the train split is not _exactly_ 80% and so on.
Note that there is a small amount of noise in the data since an automatic method was used to obtain it: a few of the samples might not be real prompt injections.
If you use this dataset in your research, please cite it as
## Licensing Information
gandalf_ignore_instructions is distributed under the MIT License.
|
[
"# gandalf_ignore_instructions\n\n.\n\nWe used the following process to obtain relevant data:\n- Start with all prompts submitted to Gandalf in July 2023.\n- Use OpenAI text embeddings ('text-embedding-ada-002') to compute the similarity between the prompts and the phrase \"Ignore all previous instructions\".\n- Filter to those prompts that have similarity of at least 0.825.\n- Use Lakera Guard's personally identifiable information detector to filter out prompts that potentially contain PII data.\n- Remove near-duplicates from the data (prompts that differ only by a few letters) using an approximate algorithm. This helps reduce leakage between the data splits.\n- Sample 1000 prompts.\n- Split the data into train-val-test with an 80/10/10 ratio. Each sample is assigned independently so the size of the train split is not _exactly_ 80% and so on.\n\nNote that there is a small amount of noise in the data since an automatic method was used to obtain it: a few of the samples might not be real prompt injections.\n\nIf you use this dataset in your research, please cite it as",
"## Licensing Information\n\ngandalf_ignore_instructions is distributed under the MIT License."
] |
[
"TAGS\n#size_categories-1K<n<10K #language-English #license-mit #prompt injection #region-us \n",
"# gandalf_ignore_instructions\n\n.\n\nWe used the following process to obtain relevant data:\n- Start with all prompts submitted to Gandalf in July 2023.\n- Use OpenAI text embeddings ('text-embedding-ada-002') to compute the similarity between the prompts and the phrase \"Ignore all previous instructions\".\n- Filter to those prompts that have similarity of at least 0.825.\n- Use Lakera Guard's personally identifiable information detector to filter out prompts that potentially contain PII data.\n- Remove near-duplicates from the data (prompts that differ only by a few letters) using an approximate algorithm. This helps reduce leakage between the data splits.\n- Sample 1000 prompts.\n- Split the data into train-val-test with an 80/10/10 ratio. Each sample is assigned independently so the size of the train split is not _exactly_ 80% and so on.\n\nNote that there is a small amount of noise in the data since an automatic method was used to obtain it: a few of the samples might not be real prompt injections.\n\nIf you use this dataset in your research, please cite it as",
"## Licensing Information\n\ngandalf_ignore_instructions is distributed under the MIT License."
] |
[
33,
51,
288,
21
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #language-English #license-mit #prompt injection #region-us \n# gandalf_ignore_instructions\n\n.\n\nWe used the following process to obtain relevant data:\n- Start with all prompts submitted to Gandalf in July 2023.\n- Use OpenAI text embeddings ('text-embedding-ada-002') to compute the similarity between the prompts and the phrase \"Ignore all previous instructions\".\n- Filter to those prompts that have similarity of at least 0.825.\n- Use Lakera Guard's personally identifiable information detector to filter out prompts that potentially contain PII data.\n- Remove near-duplicates from the data (prompts that differ only by a few letters) using an approximate algorithm. This helps reduce leakage between the data splits.\n- Sample 1000 prompts.\n- Split the data into train-val-test with an 80/10/10 ratio. Each sample is assigned independently so the size of the train split is not _exactly_ 80% and so on.\n\nNote that there is a small amount of noise in the data since an automatic method was used to obtain it: a few of the samples might not be real prompt injections.\n\nIf you use this dataset in your research, please cite it as## Licensing Information\n\ngandalf_ignore_instructions is distributed under the MIT License."
] |
2af82f57071eef7c5aea47b5d4a04981a1f05219
|
# Dataset Card for "pubmed_subset_wiki_20p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/pubmed_subset_wiki_20p
|
[
"region:us"
] |
2023-09-21T08:09:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3560448613.8489647, "num_examples": 1250378}, {"name": "test", "num_bytes": 1024229, "num_examples": 1000}], "download_size": 1090915329, "dataset_size": 3561472842.8489647}}
|
2023-09-21T08:12:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed_subset_wiki_20p"
More Information needed
|
[
"# Dataset Card for \"pubmed_subset_wiki_20p\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed_subset_wiki_20p\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_subset_wiki_20p\"\n\nMore Information needed"
] |
60f66e8f300aea1f8f68d6a0d669709a14ab66d8
|
# Dataset Card for "medien_versorgen_303-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/medien_versorgen_303-undersampled
|
[
"region:us"
] |
2023-09-21T08:11:59+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Bereitstellen", "1": "Entsorgen", "2": "Speichern", "3": "Verteilen"}}}}], "splits": [{"name": "train", "num_bytes": 59754.580327868855, "num_examples": 303}, {"name": "test", "num_bytes": 14725, "num_examples": 77}, {"name": "valid", "num_bytes": 14725, "num_examples": 77}], "download_size": 42321, "dataset_size": 89204.58032786885}}
|
2023-09-21T08:12:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "medien_versorgen_303-undersampled"
More Information needed
|
[
"# Dataset Card for \"medien_versorgen_303-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"medien_versorgen_303-undersampled\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"medien_versorgen_303-undersampled\"\n\nMore Information needed"
] |
4747c9fd61d73fffa33565128f73f35ac84ca686
|
<div align="center">
<h1> CulturaX </h1>
<h3> Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages </h3>
</div>
## Dataset Description
- **Repository:** [https://github.com/nlp-uoregon/CulturaX](https://github.com/nlp-uoregon/CulturaX)
- **Papers:** [CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages](https://arxiv.org/abs/2309.09400)
## Dataset Summary
We present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. We employ MinHash at document level to achieve fuzzy deduplication for the datasets in different languages. Our data cleaning framework includes diverse criteria and threshold selections, guided by extensive data samples, ensuring comprehensive noise filtering in various aspects. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs.
Our dataset combines the most recent iteration of mC4 (version 3.1.0) [1] with all accessible OSCAR corpora up to the present year, including 20.19, 21.09, 22.01, and 23.01 [2]. After deep cleaning and deduplication, CulturaX involves 16TB data in the parquet format (expanding to 27TB when unpacked). More than a half of our dataset is dedicated to non-English languages to significantly boost the data size and enhance the feasibility of training models in multilingual scenarios.
To obtain perplexity scores for data cleaning, we train a SentencePiece tokenizer and 5-gram Kneser-Ney language models as provided in the KenLM library [3] using the 20230501 dumps of Wikipedia. Our KenLM models are also released in HuggingFace: https://huggingface.co/uonlp/kenlm.
Details for the dataset can be found in our technical paper: [https://arxiv.org/abs/2309.09400](https://arxiv.org/abs/2309.09400)
You can download the dataset using Hugging Face datasets:
```python
from datasets import load_dataset
ds = load_dataset("uonlp/CulturaX", "en")
```
### Languages
The supported languages and statistics for our dataset can be found below:
*(Note that the language code `als` and `eml` refer to `gsw` and `x-eml` in the OSCAR-2301 dataset.)*
| | Code | Language | # Documents | # Tokens | # Tokens (%) |
|----:|:-------|:-------------------------|:----------------|:--------------------|:------|
| ... | ... | ... |... | ... | ... |
| 10 | vi | Vietnamese | 102,411,180 | 98,453,464,077 | 1.56 |
| ... | ... | ... | ... | ... | ... |
### Dataset Structure
```json
{
"text": ...,
"timestamp": ...,
"url": ...,
"source": "mc4" | "OSCAR-xxxx",
}
```
## Considerations for Using the Data
As CulturaX is the cleaned version of the mC4 and OSCAR datasets, which were both extracted from CommonCrawl, personal and sensitive information might still contain personal and sensitive information.
This must be considered prior to using this dataset for any purpose, such as training deep learning models, etc.
## License Information
The licence terms for CulturaX strictly follows those of `mC4` and `OSCAR`. Please refer to both below licenses when using this dataset.
- [mC4 license](https://huggingface.co/datasets/allenai/c4#license)
- [OSCAR license](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information)
## Citation
To cite CulturaX, please use:
```
@misc{nguyen2023culturax,
title={CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages},
author={Thuat Nguyen and Chien Van Nguyen and Viet Dac Lai and Hieu Man and Nghia Trung Ngo and Franck Dernoncourt and Ryan A. Rossi and Thien Huu Nguyen},
year={2023},
eprint={2309.09400},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Reference
[1] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual
pre-trained text-to-text transformer. In NAACL 2021. https://huggingface.co/datasets/mc4
[2] Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-
7) 2019. https://oscar-project.org/
[3] KenLM: Faster and smaller language model queries. In Proceedings of the Sixth
Workshop on Statistical Machine Translation, 2011.
|
vietgpt/CulturaX
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:vi",
"arxiv:2309.09400",
"region:us"
] |
2023-09-21T08:17:15+00:00
|
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["vi"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "CulturaX", "extra_gated_prompt": "By completing the form below, you acknowledge that the provided data is offered as is. Although we anticipate no problems, you accept full responsibility for any repercussions resulting from the use of this data. Furthermore, you agree that the data must not be utilized for malicious or harmful purposes towards humanity.", "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Country": "text", "Usecase": "text", "I have explicitly check with my jurisdiction and I confirm that downloading CulturaX is legal in the country/region where I am located right now, and for the use case that I have described above": "checkbox", "You agree to not attempt to determine the identity of individuals in this dataset": "checkbox"}}
|
2023-09-22T00:39:40+00:00
|
[
"2309.09400"
] |
[
"vi"
] |
TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-Vietnamese #arxiv-2309.09400 #region-us
|
CulturaX
=========
### Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages
Dataset Description
-------------------
* Repository: URL
* Papers: CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages
Dataset Summary
---------------
We present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. We employ MinHash at document level to achieve fuzzy deduplication for the datasets in different languages. Our data cleaning framework includes diverse criteria and threshold selections, guided by extensive data samples, ensuring comprehensive noise filtering in various aspects. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs.
Our dataset combines the most recent iteration of mC4 (version 3.1.0) [1] with all accessible OSCAR corpora up to the present year, including 20.19, 21.09, 22.01, and 23.01 [2]. After deep cleaning and deduplication, CulturaX involves 16TB data in the parquet format (expanding to 27TB when unpacked). More than a half of our dataset is dedicated to non-English languages to significantly boost the data size and enhance the feasibility of training models in multilingual scenarios.
To obtain perplexity scores for data cleaning, we train a SentencePiece tokenizer and 5-gram Kneser-Ney language models as provided in the KenLM library [3] using the 20230501 dumps of Wikipedia. Our KenLM models are also released in HuggingFace: URL
Details for the dataset can be found in our technical paper: URL
You can download the dataset using Hugging Face datasets:
### Languages
The supported languages and statistics for our dataset can be found below:
*(Note that the language code 'als' and 'eml' refer to 'gsw' and 'x-eml' in the OSCAR-2301 dataset.)*
### Dataset Structure
Considerations for Using the Data
---------------------------------
As CulturaX is the cleaned version of the mC4 and OSCAR datasets, which were both extracted from CommonCrawl, personal and sensitive information might still contain personal and sensitive information.
This must be considered prior to using this dataset for any purpose, such as training deep learning models, etc.
License Information
-------------------
The licence terms for CulturaX strictly follows those of 'mC4' and 'OSCAR'. Please refer to both below licenses when using this dataset.
* mC4 license
* OSCAR license
To cite CulturaX, please use:
Reference
---------
[1] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual
pre-trained text-to-text transformer. In NAACL 2021. URL
[2] Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-
7) 2019. URL
[3] KenLM: Faster and smaller language model queries. In Proceedings of the Sixth
Workshop on Statistical Machine Translation, 2011.
|
[
"### Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages\n\n\n\nDataset Description\n-------------------\n\n\n* Repository: URL\n* Papers: CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages\n\n\nDataset Summary\n---------------\n\n\nWe present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. We employ MinHash at document level to achieve fuzzy deduplication for the datasets in different languages. Our data cleaning framework includes diverse criteria and threshold selections, guided by extensive data samples, ensuring comprehensive noise filtering in various aspects. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs.\n\n\nOur dataset combines the most recent iteration of mC4 (version 3.1.0) [1] with all accessible OSCAR corpora up to the present year, including 20.19, 21.09, 22.01, and 23.01 [2]. After deep cleaning and deduplication, CulturaX involves 16TB data in the parquet format (expanding to 27TB when unpacked). More than a half of our dataset is dedicated to non-English languages to significantly boost the data size and enhance the feasibility of training models in multilingual scenarios.\n\n\nTo obtain perplexity scores for data cleaning, we train a SentencePiece tokenizer and 5-gram Kneser-Ney language models as provided in the KenLM library [3] using the 20230501 dumps of Wikipedia. Our KenLM models are also released in HuggingFace: URL\n\n\nDetails for the dataset can be found in our technical paper: URL\n\n\nYou can download the dataset using Hugging Face datasets:",
"### Languages\n\n\nThe supported languages and statistics for our dataset can be found below:\n\n\n*(Note that the language code 'als' and 'eml' refer to 'gsw' and 'x-eml' in the OSCAR-2301 dataset.)*",
"### Dataset Structure\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nAs CulturaX is the cleaned version of the mC4 and OSCAR datasets, which were both extracted from CommonCrawl, personal and sensitive information might still contain personal and sensitive information.\nThis must be considered prior to using this dataset for any purpose, such as training deep learning models, etc.\n\n\nLicense Information\n-------------------\n\n\nThe licence terms for CulturaX strictly follows those of 'mC4' and 'OSCAR'. Please refer to both below licenses when using this dataset.\n\n\n* mC4 license\n* OSCAR license\n\n\nTo cite CulturaX, please use:\n\n\nReference\n---------\n\n\n[1] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual\npre-trained text-to-text transformer. In NAACL 2021. URL\n\n\n[2] Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-\n7) 2019. URL\n\n\n[3] KenLM: Faster and smaller language model queries. In Proceedings of the Sixth\nWorkshop on Statistical Machine Translation, 2011."
] |
[
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-Vietnamese #arxiv-2309.09400 #region-us \n",
"### Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages\n\n\n\nDataset Description\n-------------------\n\n\n* Repository: URL\n* Papers: CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages\n\n\nDataset Summary\n---------------\n\n\nWe present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. We employ MinHash at document level to achieve fuzzy deduplication for the datasets in different languages. Our data cleaning framework includes diverse criteria and threshold selections, guided by extensive data samples, ensuring comprehensive noise filtering in various aspects. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs.\n\n\nOur dataset combines the most recent iteration of mC4 (version 3.1.0) [1] with all accessible OSCAR corpora up to the present year, including 20.19, 21.09, 22.01, and 23.01 [2]. After deep cleaning and deduplication, CulturaX involves 16TB data in the parquet format (expanding to 27TB when unpacked). More than a half of our dataset is dedicated to non-English languages to significantly boost the data size and enhance the feasibility of training models in multilingual scenarios.\n\n\nTo obtain perplexity scores for data cleaning, we train a SentencePiece tokenizer and 5-gram Kneser-Ney language models as provided in the KenLM library [3] using the 20230501 dumps of Wikipedia. Our KenLM models are also released in HuggingFace: URL\n\n\nDetails for the dataset can be found in our technical paper: URL\n\n\nYou can download the dataset using Hugging Face datasets:",
"### Languages\n\n\nThe supported languages and statistics for our dataset can be found below:\n\n\n*(Note that the language code 'als' and 'eml' refer to 'gsw' and 'x-eml' in the OSCAR-2301 dataset.)*",
"### Dataset Structure\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nAs CulturaX is the cleaned version of the mC4 and OSCAR datasets, which were both extracted from CommonCrawl, personal and sensitive information might still contain personal and sensitive information.\nThis must be considered prior to using this dataset for any purpose, such as training deep learning models, etc.\n\n\nLicense Information\n-------------------\n\n\nThe licence terms for CulturaX strictly follows those of 'mC4' and 'OSCAR'. Please refer to both below licenses when using this dataset.\n\n\n* mC4 license\n* OSCAR license\n\n\nTo cite CulturaX, please use:\n\n\nReference\n---------\n\n\n[1] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual\npre-trained text-to-text transformer. In NAACL 2021. URL\n\n\n[2] Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-\n7) 2019. URL\n\n\n[3] KenLM: Faster and smaller language model queries. In Proceedings of the Sixth\nWorkshop on Statistical Machine Translation, 2011."
] |
[
115,
475,
60,
305
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-Vietnamese #arxiv-2309.09400 #region-us \n"
] |
542642fab0157e1088c31ef8e2c018018e08bb9f
|
# Dataset Card for "eval_tag_nq_dev_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/eval_tag_nq_dev_v2
|
[
"region:us"
] |
2023-09-21T08:21:37+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2372, "num_examples": 10}, {"name": "validation", "num_bytes": 1672810, "num_examples": 6515}], "download_size": 937279, "dataset_size": 1675182}}
|
2023-09-21T14:51:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eval_tag_nq_dev_v2"
More Information needed
|
[
"# Dataset Card for \"eval_tag_nq_dev_v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eval_tag_nq_dev_v2\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eval_tag_nq_dev_v2\"\n\nMore Information needed"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.