sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9e2ddecb7d98a6d1a02e6f08aab77934252bc79c | # Dataset Card for "doclaynet_processed"
Clean version of [DocLayNet](https://github.com/DS4SD/DocLayNet) ready for finetuning. | vikp/doclaynet_processed | [
"region:us"
]
| 2023-11-22T06:48:12+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "bboxes", "sequence": {"sequence": "float64"}}, {"name": "labels", "sequence": "int64"}, {"name": "words", "sequence": "string"}, {"name": "split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32034973965.125, "num_examples": 80863}], "download_size": 0, "dataset_size": 32034973965.125}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-30T19:14:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "doclaynet_processed"
Clean version of DocLayNet ready for finetuning. | [
"# Dataset Card for \"doclaynet_processed\"\n\nClean version of DocLayNet ready for finetuning."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"doclaynet_processed\"\n\nClean version of DocLayNet ready for finetuning."
]
| [
6,
26
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"doclaynet_processed\"\n\nClean version of DocLayNet ready for finetuning."
]
|
d0d9e66ced3d2afec7592a11b98580143438b70d | # Dataset Card for "aloi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jxie/aloi | [
"region:us"
]
| 2023-11-22T07:07:26+00:00 | {"dataset_info": {"features": [{"name": "inputs", "sequence": "float64"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 71608320, "num_examples": 69120}, {"name": "val", "num_bytes": 17902080, "num_examples": 17280}, {"name": "test", "num_bytes": 22377600, "num_examples": 21600}], "download_size": 4459430, "dataset_size": 111888000}} | 2023-11-22T07:07:31+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "aloi"
More Information needed | [
"# Dataset Card for \"aloi\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"aloi\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"aloi\"\n\nMore Information needed"
]
|
e15b72fa6ceff3c460737dcd278f0a0dfdd540ac |
```python
# These dictionaries are useful for this dataset
pos_2_id = {'#': 0, '$': 1, "''": 2, '(': 3, ')': 4, '.': 5, ':': 6, 'CC': 7, 'CD': 8, 'DT': 9, 'EX': 10, 'FW': 11, 'IN': 12, 'JJ': 13, 'JJR': 14, 'JJS': 15, 'MD': 16, 'NN': 17, 'NNP': 18, 'NNPS': 19, 'NNS': 20, 'PDT': 21, 'POS': 22, 'PRP': 23, 'PRP$': 24, 'RB': 25, 'RBR': 26, 'RBS': 27, 'RP': 28, 'TO': 29, 'VB': 30, 'VBD': 31, 'VBG': 32, 'VBN': 33, 'VBP': 34, 'VBZ': 35, 'WDT': 36, 'WP': 37, 'WP$': 38, 'WRB': 39}
id_2_pos = {0: '#', 1: '$', 2: "''", 3: '(', 4: ')', 5: '.', 6: ':', 7: 'CC', 8: 'CD', 9: 'DT', 10: 'EX', 11: 'FW', 12: 'IN', 13: 'JJ', 14: 'JJR', 15: 'JJS', 16: 'MD', 17: 'NN', 18: 'NNP', 19: 'NNPS', 20: 'NNS', 21: 'PDT', 22: 'POS', 23: 'PRP', 24: 'PRP$', 25: 'RB', 26: 'RBR', 27: 'RBS', 28: 'RP', 29: 'TO', 30: 'VB', 31: 'VBD', 32: 'VBG', 33: 'VBN', 34: 'VBP', 35: 'VBZ', 36: 'WDT', 37: 'WP', 38: 'WP$', 39: 'WRB'}
tag_2_id = {'B-application': 0, 'B-cve id': 1, 'B-edition': 2, 'B-file': 3, 'B-function': 4, 'B-hardware': 5, 'B-language': 6, 'B-method': 7, 'B-os': 8, 'B-parameter': 9, 'B-programming language': 10, 'B-relevant_term': 11, 'B-update': 12, 'B-vendor': 13, 'B-version': 14, 'I-application': 15, 'I-edition': 16, 'I-hardware': 17, 'I-os': 18, 'I-relevant_term': 19, 'I-update': 20, 'I-vendor': 21, 'I-version': 22, 'O': 23}
id_2_tag = {0: 'B-application', 1: 'B-cve id', 2: 'B-edition', 3: 'B-file', 4: 'B-function', 5: 'B-hardware', 6: 'B-language', 7: 'B-method', 8: 'B-os', 9: 'B-parameter', 10: 'B-programming language', 11: 'B-relevant_term', 12: 'B-update', 13: 'B-vendor', 14: 'B-version', 15: 'I-application', 16: 'I-edition', 17: 'I-hardware', 18: 'I-os', 19: 'I-relevant_term', 20: 'I-update', 21: 'I-vendor', 22: 'I-version', 23: 'O'}
```
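As a minimal usage sketch (continuing from the dictionaries above; the `words`, `POS`, and `tag` column names come from this repo's metadata), one row can be decoded back to readable labels:

```python
from datasets import load_dataset

# Column names follow the dataset metadata; id_2_pos / id_2_tag are defined above.
dataset = load_dataset("thangvip/cti-dataset", split="train")

sample = dataset[0]
for word, pos_id, tag_id in zip(sample["words"], sample["POS"], sample["tag"]):
    print(f"{word}\t{id_2_pos[pos_id]}\t{id_2_tag[tag_id]}")
```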
| thangvip/cti-dataset | [
"region:us"
]
| 2023-11-22T07:30:02+00:00 | {"dataset_info": {"features": [{"name": "sentence_idx", "dtype": "int64"}, {"name": "words", "sequence": "string"}, {"name": "POS", "sequence": "int64"}, {"name": "tag", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 13350196.989130436, "num_examples": 13794}, {"name": "test", "num_bytes": 3338033.1604691073, "num_examples": 3449}], "download_size": 2511496, "dataset_size": 16688230.149599543}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-11-22T09:01:30+00:00 | []
| []
| TAGS
#region-us
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
|
25ef1078844ba3c20be31581020deadebc25579e | # Dataset Card for "indic-superb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | collabora/indic-superb | [
"region:us"
]
| 2023-11-22T07:35:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}, {"name": "duration", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 46776721194.64, "num_examples": 24872}, {"name": "test", "num_bytes": 1592137067.0, "num_examples": 872}], "download_size": 46065024050, "dataset_size": 48368858261.64}} | 2023-11-22T08:13:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "indic-superb"
More Information needed | [
"# Dataset Card for \"indic-superb\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"indic-superb\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"indic-superb\"\n\nMore Information needed"
]
|
5660ecd7f04958c35447fba7c94d72b2e747fa10 |
```python
# These dictionaries are useful for this dataset
pos_2_id = {'#': 0, '$': 1, "''": 2, '(': 3, ')': 4, '.': 5, ':': 6, 'CC': 7, 'CD': 8, 'DT': 9, 'EX': 10, 'FW': 11, 'IN': 12, 'JJ': 13, 'JJR': 14, 'JJS': 15, 'MD': 16, 'NN': 17, 'NNP': 18, 'NNPS': 19, 'NNS': 20, 'PDT': 21, 'POS': 22, 'PRP': 23, 'PRP$': 24, 'RB': 25, 'RBR': 26, 'RBS': 27, 'RP': 28, 'TO': 29, 'VB': 30, 'VBD': 31, 'VBG': 32, 'VBN': 33, 'VBP': 34, 'VBZ': 35, 'WDT': 36, 'WP': 37, 'WP$': 38, 'WRB': 39}
id_2_pos = {0: '#', 1: '$', 2: "''", 3: '(', 4: ')', 5: '.', 6: ':', 7: 'CC', 8: 'CD', 9: 'DT', 10: 'EX', 11: 'FW', 12: 'IN', 13: 'JJ', 14: 'JJR', 15: 'JJS', 16: 'MD', 17: 'NN', 18: 'NNP', 19: 'NNPS', 20: 'NNS', 21: 'PDT', 22: 'POS', 23: 'PRP', 24: 'PRP$', 25: 'RB', 26: 'RBR', 27: 'RBS', 28: 'RP', 29: 'TO', 30: 'VB', 31: 'VBD', 32: 'VBG', 33: 'VBN', 34: 'VBP', 35: 'VBZ', 36: 'WDT', 37: 'WP', 38: 'WP$', 39: 'WRB'}
tag_2_id = {'B-application': 0, 'B-cve id': 1, 'B-edition': 2, 'B-file': 3, 'B-function': 4, 'B-hardware': 5, 'B-language': 6, 'B-method': 7, 'B-os': 8, 'B-parameter': 9, 'B-programming language': 10, 'B-relevant_term': 11, 'B-update': 12, 'B-vendor': 13, 'B-version': 14, 'I-application': 15, 'I-edition': 16, 'I-hardware': 17, 'I-os': 18, 'I-relevant_term': 19, 'I-update': 20, 'I-vendor': 21, 'I-version': 22, 'O': 23}
id_2_tag = {0: 'B-application', 1: 'B-cve id', 2: 'B-edition', 3: 'B-file', 4: 'B-function', 5: 'B-hardware', 6: 'B-language', 7: 'B-method', 8: 'B-os', 9: 'B-parameter', 10: 'B-programming language', 11: 'B-relevant_term', 12: 'B-update', 13: 'B-vendor', 14: 'B-version', 15: 'I-application', 16: 'I-edition', 17: 'I-hardware', 18: 'I-os', 19: 'I-relevant_term', 20: 'I-update', 21: 'I-vendor', 22: 'I-version', 23: 'O'}
```
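A minimal sketch of loading the named configs that this repo's metadata declares (default, subset1, subset2):

```python
from datasets import load_dataset

# Config and split names come from the dataset metadata below.
full_train = load_dataset("thangvip/cti-dataset-split", "default", split="train")
subset1_train = load_dataset("thangvip/cti-dataset-split", "subset1", split="train")
subset2_test = load_dataset("thangvip/cti-dataset-split", "subset2", split="test")
print(len(full_train), len(subset1_train), len(subset2_test))
```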
| thangvip/cti-dataset-split | [
"region:us"
]
| 2023-11-22T07:44:17+00:00 | {"dataset_info": [{"config_name": "default", "features": [{"name": "sentence_idx", "dtype": "int64"}, {"name": "words", "sequence": "string"}, {"name": "POS", "sequence": "int64"}, {"name": "tag", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 16917605, "num_examples": 17480}], "download_size": 2164774, "dataset_size": 16917605}, {"config_name": "subset1", "features": [{"name": "sentence_idx", "dtype": "int64"}, {"name": "words", "sequence": "string"}, {"name": "POS", "sequence": "int64"}, {"name": "tag", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 13350196.989130436, "num_examples": 13794}], "download_size": 2008529, "dataset_size": 13350196.989130436}, {"config_name": "subset2", "features": [{"name": "sentence_idx", "dtype": "int64"}, {"name": "words", "sequence": "string"}, {"name": "POS", "sequence": "int64"}, {"name": "tag", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 3338033.1604691073, "num_examples": 3449}], "download_size": 502967, "dataset_size": 3338033.1604691073}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "subset1", "data_files": [{"split": "train", "path": "subset1/train-*"}]}, {"config_name": "subset2", "data_files": [{"split": "test", "path": "subset2/test-*"}]}]} | 2023-11-22T08:59:44+00:00 | []
| []
| TAGS
#region-us
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
|
7646cc140d995c0aac3e947486ba4d6beb502a07 |
# Dataset Details
- **Title:** Dravidianmultimodality: A dataset for multi-modal sentiment analysis in Tamil and Malayalam
- **Authors:** Bharathi Raja Chakravarthi et al.
- **Link to Paper:** [arXiv:2106.04853](https://arxiv.org/abs/2106.04853)
- **Published:** 2021
- **Source:** arXiv preprint
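For illustration, a minimal sketch of loading this dataset and reading one labelled sample (the feature names Audio, Text, and label are taken from the repo metadata below; audio decoding assumes the standard `datasets` audio dependencies are installed):

```python
from datasets import load_dataset

ds = load_dataset("eswardivi/Tamil_MSA_Audio_Text", split="train")
sample = ds[0]
# "label" is a ClassLabel with names Negative / Neutral / Positive.
print(sample["Text"], "->", ds.features["label"].int2str(sample["label"]))
# Decoded audio arrives as a numpy array plus its sampling rate.
print(sample["Audio"]["sampling_rate"], len(sample["Audio"]["array"]))
```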
| eswardivi/Tamil_MSA_Audio_Text | [
"task_categories:text-classification",
"task_categories:audio-classification",
"size_categories:n<1K",
"language:ta",
"arxiv:2106.04853",
"region:us"
]
| 2023-11-22T08:22:28+00:00 | {"language": ["ta"], "size_categories": ["n<1K"], "task_categories": ["text-classification", "audio-classification"], "dataset_info": {"features": [{"name": "Audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Negative", "1": "Neutral", "2": "Positive"}}}}, {"name": "FilePath", "dtype": "string"}, {"name": "Text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 436903500, "num_examples": 64}], "download_size": 435262950, "dataset_size": 436903500}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-17T10:51:33+00:00 | [
"2106.04853"
]
| [
"ta"
]
| TAGS
#task_categories-text-classification #task_categories-audio-classification #size_categories-n<1K #language-Tamil #arxiv-2106.04853 #region-us
|
# Dataset Details
- Title: Dravidianmultimodality: A dataset for multi-modal sentiment analysis in Tamil and Malayalam
- Authors: Bharathi Raja Chakravarthi et al.
- Link to Paper: arXiv:2106.04853
- Published: 2021
- Source: arXiv preprint
| [
"# Dataset Details\n\n- Title: Dravidianmultimodality: A dataset for multi-modal sentiment analysis in Tamil and Malayalam\n- Authors: Bharathi Raja Chakravarthi et al.\n- Link to Paper: arXiv:2106.04853\n- Published: 2021\n- Source: arXiv preprint"
]
| [
"TAGS\n#task_categories-text-classification #task_categories-audio-classification #size_categories-n<1K #language-Tamil #arxiv-2106.04853 #region-us \n",
"# Dataset Details\n\n- Title: Dravidianmultimodality: A dataset for multi-modal sentiment analysis in Tamil and Malayalam\n- Authors: Bharathi Raja Chakravarthi et al.\n- Link to Paper: arXiv:2106.04853\n- Published: 2021\n- Source: arXiv preprint"
]
| [
52,
67
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-audio-classification #size_categories-n<1K #language-Tamil #arxiv-2106.04853 #region-us \n# Dataset Details\n\n- Title: Dravidianmultimodality: A dataset for multi-modal sentiment analysis in Tamil and Malayalam\n- Authors: Bharathi Raja Chakravarthi et al.\n- Link to Paper: arXiv:2106.04853\n- Published: 2021\n- Source: arXiv preprint"
]
|
f4471d200d3a7ea58398dcce9fa676050022236e | # Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset is still a work in progress; it's pretty small at the moment.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The ExistenceTypes dataset is a highly-contextual logic puzzle classifier that is intended to help with contextual understanding. The model will have to pay attention to the surrounding text and not just the keywords. The statements themselves are structured like logic puzzles and some examples require multi-step reasoning.
This is to help the model deduce and infer based on given premises. The statements are either true, false or ambiguous based on the IP's rules and consistency, as well as Godless, Godliving or Mixed Domain. The overall idea is for the model to understand and generate content within the thematic and conceptual framework of my IP.
- **Curated by:** [niltheory]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | niltheory/ExistenceTypes | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:cc-by-sa-4.0",
"complex reasoning",
"creative writing",
"logic puzzle",
"region:us"
]
| 2023-11-22T09:15:11+00:00 | {"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["text-classification", "question-answering", "text-generation"], "pretty_name": "occybyte", "tags": ["complex reasoning", "creative writing", "logic puzzle"]} | 2023-12-15T16:14:23+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-sa-4.0 #complex reasoning #creative writing #logic puzzle #region-us
| # Dataset Card for Dataset Name
This dataset is still a work in progress; it's pretty small at the moment.
## Dataset Details
### Dataset Description
The ExistenceTypes dataset is a highly-contextual logic puzzle classifier that is intended to help with contextual understanding. The model will have to pay attention to the surrounding text and not just the keywords. The statements themselves are structured like logic puzzles and some examples require multi-step reasoning.
This is to help the model deduce and infer based on given premises. The statements are either true, false or ambiguous based on the IP's rules and consistency, as well as Godless, Godliving or Mixed Domain. The overall idea is for the model to understand and generate content within the thematic and conceptual framework of my IP.
- Curated by: [niltheory]
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset is still a work in progress; it's pretty small at the moment.",
"## Dataset Details",
"### Dataset Description\n\n\nThe ExistenceTypes dataset is a highly-contextual logic puzzle classifier that is intended to help with contextual understanding. The model will have to pay attention to the surrounding text and not just the keywords. The statements themselves are structured like logic puzzles and some examples requires multi-step reasoning. \n\nThis is to help the model deduce and infer based on given premises. The statements are either true, false or ambigious based on the IP's rules and consistency as well as Godless, Godliving or Mixed Domain. The overall idea is for the model to understand and generate content within' the themeatic and conceptual framework of my IP.\n\n\n- Curated by: [niltheory]\n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-sa-4.0 #complex reasoning #creative writing #logic puzzle #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset is still a work in progress; it's pretty small at the moment.",
"## Dataset Details",
"### Dataset Description\n\n\nThe ExistenceTypes dataset is a highly-contextual logic puzzle classifier that is intended to help with contextual understanding. The model will have to pay attention to the surrounding text and not just the keywords. The statements themselves are structured like logic puzzles and some examples requires multi-step reasoning. \n\nThis is to help the model deduce and infer based on given premises. The statements are either true, false or ambigious based on the IP's rules and consistency as well as Godless, Godliving or Mixed Domain. The overall idea is for the model to understand and generate content within' the themeatic and conceptual framework of my IP.\n\n\n- Curated by: [niltheory]\n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
77,
27,
4,
190,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-sa-4.0 #complex reasoning #creative writing #logic puzzle #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset is still a work in progress; it's pretty small at the moment.## Dataset Details### Dataset Description\n\n\nThe ExistenceTypes dataset is a highly-contextual logic puzzle classifier that is intended to help with contextual understanding. The model will have to pay attention to the surrounding text and not just the keywords. The statements themselves are structured like logic puzzles and some examples requires multi-step reasoning. \n\nThis is to help the model deduce and infer based on given premises. The statements are either true, false or ambigious based on the IP's rules and consistency as well as Godless, Godliving or Mixed Domain. The overall idea is for the model to understand and generate content within' the themeatic and conceptual framework of my IP.\n\n\n- Curated by: [niltheory]\n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
2a9038d96d3538c5e1255e3d22a0a371010e9747 | # Dataset Card for "oct-object-detection-v4"
Dataset is composed of images with multiple object-detection boxes in COCO format (xmin, ymin, xmax, ymax). Images are OCT scans (a type of eye scan) with boxes indicating some features associated with AMD disease.
The difference from v3 is that images are grouped (images are not duplicated across multiple rows) and they can have multiple label-boxes in the objects field. So, for the 566 unique images, there are 566 rows, one per image.
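As a minimal inspection sketch (the image/objects field names are taken from this repo's metadata):

```python
from datasets import load_dataset

ds = load_dataset("joseluhf11/oct-object-detection-v4", split="train")
row = ds[0]
# Each row holds one image plus all of its boxes and category names.
for bbox, category in zip(row["objects"]["bbox"], row["objects"]["categories"]):
    xmin, ymin, xmax, ymax = bbox
    print(category, (xmin, ymin, xmax, ymax))
```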
[Source dataset](https://doi.org/10.1101/2023.03.29.534704) | joseluhf11/oct-object-detection-v4 | [
"region:us"
]
| 2023-11-22T09:33:24+00:00 | {"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "categories", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 71015506.0, "num_examples": 566}], "download_size": 70817145, "dataset_size": 71015506.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-28T12:02:17+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "oct-object-detection-v4"
Dataset is composed of images with multiple object-detection boxes in COCO format (xmin, ymin, xmax, ymax). Images are OCT scans (a type of eye scan) with boxes indicating some features associated with AMD disease.
The difference from v3 is that images are grouped (images are not duplicated across multiple rows) and they can have multiple label-boxes in the objects field. So, for the 566 unique images, there are 566 rows, one per image.
Source dataset | [
"# Dataset Card for \"oct-object-detection-v4\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nSource datataset"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"oct-object-detection-v4\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nSource datataset"
]
| [
6,
133
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"oct-object-detection-v4\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nSource datataset"
]
|
902ed0c5c2298962a3a61fca2886d2b7648532eb |
This is my Python script for generating math problems; it may not be very well made, so please bear with me.
The math problems cover the following (an illustrative sketch follows the list):
1. Basic four-operation arithmetic
2. Linear equations in one variable
3. Practical word problems
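As an illustrative sketch only (not the author's actual script), the first two kinds of problems could be generated along these lines:

```python
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def basic_arithmetic():
    # Kind 1: basic four-operation arithmetic (division omitted for brevity).
    a, b = random.randint(1, 100), random.randint(1, 100)
    sym = random.choice(list(OPS))
    return f"{a} {sym} {b} = ?", OPS[sym](a, b)

def linear_equation():
    # Kind 2: a*x + b = c, constructed so that x is an integer.
    a, x, b = random.randint(1, 10), random.randint(1, 20), random.randint(1, 50)
    return f"{a}x + {b} = {a * x + b}, solve for x", x

print(basic_arithmetic())
print(linear_equation())
```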
Contact: QQ: 2981447942
Bilibili: 一髅子Tick | TICK666/Basic-Math-Chinese-1M | [
"size_categories:1M<n<10M",
"language:zh",
"license:llama2",
"region:us"
]
| 2023-11-22T09:34:38+00:00 | {"language": ["zh"], "license": "llama2", "size_categories": ["1M<n<10M"], "pretty_name": "Basic-Math-Chinese-1M"} | 2023-11-23T12:16:02+00:00 | []
| [
"zh"
]
| TAGS
#size_categories-1M<n<10M #language-Chinese #license-llama2 #region-us
|
This is my Python script for generating math problems; it may not be very well made, so please bear with me.
The math problems cover:
1. Basic four-operation arithmetic
2. Linear equations in one variable
3. Practical word problems
Contact: QQ: 2981447942
Bilibili: 一髅子Tick | []
| [
"TAGS\n#size_categories-1M<n<10M #language-Chinese #license-llama2 #region-us \n"
]
| [
30
]
| [
"passage: TAGS\n#size_categories-1M<n<10M #language-Chinese #license-llama2 #region-us \n"
]
|
100ffb5a64ce73490b11e876d9854b4c4d34bed0 |
This is an Encephalitis Title, Abstracts, and Search Queries Dataset
This dataset contains pairs of encephalitis titles/abstracts and related search queries. The title is the first sentence in the column called Title_Abstract.
The search queries are highly relevant to the abstracts, but use different keywords and phrasing. As a result, the abstracts may not appear in search results when using these queries with traditional search engines.
The goal of this dataset is to train models to retrieve relevant documents for searches that may be overlooked by conventional term-matching approaches.
The dataset was created in August 2023 and contains 53,146 encephalitis abstracts collected using the Biopython library. GPT-3.5 was then used to generate multiple search queries for each abstract that are relevant to topics discussed in the abstract, but the wording/phrasing is different from what is contained in the abstract.
By training on this data, more semantically aware models could better connect user search queries with relevant content. This has the potential to improve search recall for specialized domains like medical literature.
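For illustration only, a minimal bi-encoder training sketch with sentence-transformers; the column names are assumptions ("Title_Abstract" is mentioned above, while "query" is a hypothetical name for the generated search-query column), and the base model is just one common choice:

```python
import pandas as pd
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

df = pd.read_parquet('raw_training_df.parquet')

# "Title_Abstract" is documented above; "query" is a hypothetical column name.
examples = [
    InputExample(texts=[row["query"], row["Title_Abstract"]])
    for _, row in df.iterrows()
]
loader = DataLoader(examples, shuffle=True, batch_size=32)

model = SentenceTransformer("all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives
model.fit(train_objectives=[(loader, loss)], epochs=1)
```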
To open the dataset:
```
import pandas as pd
df = pd.read_parquet('raw_training_df.parquet')
``` | Santosh-Gupta/EncephalitisQueryDocuments | [
"license:mit",
"region:us"
]
| 2023-11-22T09:36:12+00:00 | {"license": "mit"} | 2023-11-24T04:16:13+00:00 | []
| []
| TAGS
#license-mit #region-us
|
This is an Encephalitis Title, Abstracts, and Search Queries Dataset
This dataset contains pairs of encephalitis titles/abstracts and related search queries. The title is the first sentence in the column called Title_Abstract.
The search queries are highly relevant to the abstracts, but use different keywords and phrasing. As a result, the abstracts may not appear in search results when using these queries with traditional search engines.
The goal of this dataset is to train models to retrieve relevant documents for searches that may be overlooked by conventional term-matching approaches.
The dataset was created in August 2023 and contains 53,146 encephalitis abstracts collected using the Biopython library. GPT-3.5 was then used to generate multiple search queries for each abstract that are relevant to topics discussed in the abstract, but the wording/phrasing is different from what is contained in the abstract.
By training on this data, more semantically aware models could better connect user search queries with relevant content. This has the potential to improve search recall for specialized domains like medical literature.
To open the dataset:
| []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
d1b7df48c710f0a5bba921899501c35bd11edb63 | # Dataset Card for "ner-orgs"
This dataset is a concatenation of subsets of [Few-NERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd), [CoNLL 2003](https://huggingface.co/datasets/conll2003) and [OntoNotes v5](https://huggingface.co/datasets/tner/ontonotes5), but only the "B-ORG" and "I-ORG" labels.
Exactly half of the samples per split contain organisations, while the other half do not contain any.
It was generated using the following script:
```py
import random
from datasets import load_dataset, concatenate_datasets, Features, Sequence, ClassLabel, Value, DatasetDict

# Shared schema: only O / B-ORG / I-ORG are kept from every source dataset.
FEATURES = Features(
    {
        "tokens": Sequence(feature=Value(dtype="string")),
        "ner_tags": Sequence(feature=ClassLabel(names=["O", "B-ORG", "I-ORG"])),
    }
)


def load_fewnerd():
    def mapper(sample):
        # Few-NERD coarse tag 5 is "organization"; everything else becomes O.
        sample["ner_tags"] = [int(tag == 5) for tag in sample["ner_tags"]]
        # A 1 that directly follows another 1 is a continuation, i.e. I-ORG (2).
        sample["ner_tags"] = [
            2 if tag == 1 and idx > 0 and sample["ner_tags"][idx - 1] == 1 else tag
            for idx, tag in enumerate(sample["ner_tags"])
        ]
        return sample

    dataset = load_dataset("DFKI-SLT/few-nerd", "supervised")
    dataset = dataset.map(mapper, remove_columns=["id", "fine_ner_tags"])
    dataset = dataset.cast(FEATURES)
    return dataset


def load_conll():
    # In conll2003, 3 = B-ORG and 4 = I-ORG.
    label_mapping = {3: 1, 4: 2}

    def mapper(sample):
        sample["ner_tags"] = [label_mapping.get(tag, 0) for tag in sample["ner_tags"]]
        return sample

    dataset = load_dataset("conll2003")
    dataset = dataset.map(mapper, remove_columns=["id", "pos_tags", "chunk_tags"])
    dataset = dataset.cast(FEATURES)
    return dataset


def load_ontonotes():
    # In tner/ontonotes5, 11 = B-ORG and 12 = I-ORG.
    label_mapping = {11: 1, 12: 2}

    def mapper(sample):
        sample["ner_tags"] = [label_mapping.get(tag, 0) for tag in sample["ner_tags"]]
        return sample

    dataset = load_dataset("tner/ontonotes5")
    dataset = dataset.rename_column("tags", "ner_tags")
    dataset = dataset.map(mapper)
    dataset = dataset.cast(FEATURES)
    return dataset


def has_org(sample):
    return bool(sum(sample["ner_tags"]))


def has_no_org(sample):
    return not has_org(sample)


def preprocess_raw_dataset(raw_dataset):
    # Set the number of sentences without an org equal to the number of sentences with an org
    dataset_org = raw_dataset.filter(has_org)
    dataset_no_org = raw_dataset.filter(has_no_org)
    dataset_no_org = dataset_no_org.select(random.sample(range(len(dataset_no_org)), k=len(dataset_org)))
    dataset = concatenate_datasets([dataset_org, dataset_no_org])
    return dataset


def main() -> None:
    fewnerd_dataset = load_fewnerd()
    conll_dataset = load_conll()
    ontonotes_dataset = load_ontonotes()

    raw_train_dataset = concatenate_datasets([fewnerd_dataset["train"], conll_dataset["train"], ontonotes_dataset["train"]])
    raw_eval_dataset = concatenate_datasets([fewnerd_dataset["validation"], conll_dataset["validation"], ontonotes_dataset["validation"]])
    raw_test_dataset = concatenate_datasets([fewnerd_dataset["test"], conll_dataset["test"], ontonotes_dataset["test"]])

    train_dataset = preprocess_raw_dataset(raw_train_dataset)
    eval_dataset = preprocess_raw_dataset(raw_eval_dataset)
    test_dataset = preprocess_raw_dataset(raw_test_dataset)

    dataset_dict = DatasetDict(
        {
            "train": train_dataset,
            "validation": eval_dataset,
            "test": test_dataset,
        }
    )
    dataset_dict.push_to_hub("ner-orgs", private=True)


if __name__ == "__main__":
    main()
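# Hypothetical follow-up, not part of the original script: once pushed, the
# result can be loaded back and its label names inspected (the repo is pushed
# as private above, so authentication would be required):
#   from datasets import load_dataset
#   ds = load_dataset("tomaarsen/ner-orgs")
#   print(ds["train"].features["ner_tags"].feature.names)  # ['O', 'B-ORG', 'I-ORG']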
``` | tomaarsen/ner-orgs | [
"region:us"
]
| 2023-11-22T09:36:41+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ORG", "2": "I-ORG"}}}}], "splits": [{"name": "train", "num_bytes": 40381520.59961503, "num_examples": 109424}, {"name": "validation", "num_bytes": 5782294.96333573, "num_examples": 15908}, {"name": "test", "num_bytes": 10727120.198367199, "num_examples": 28124}], "download_size": 14938552, "dataset_size": 56890935.76131796}} | 2023-11-22T16:53:37+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ner-orgs"
This dataset is a concatenation of subsets of Few-NERD, CoNLL 2003 and OntoNotes v5, but only the "B-ORG" and "I-ORG" labels.
Exactly half of the samples per split contain organisations, while the other half do not contain any.
It was generated using the following script:
| [
"# Dataset Card for \"ner-orgs\"\n\nThis dataset is a concatenation of subsets of Few-NERD, CoNLL 2003 and OntoNotes v5, but only the \"B-ORG\" and \"I-ORG\" labels.\n\nExactly half of the samples per split contain organisations, while the other half do not contain any. \n\nIt was generated using the following script:"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ner-orgs\"\n\nThis dataset is a concatenation of subsets of Few-NERD, CoNLL 2003 and OntoNotes v5, but only the \"B-ORG\" and \"I-ORG\" labels.\n\nExactly half of the samples per split contain organisations, while the other half do not contain any. \n\nIt was generated using the following script:"
]
| [
6,
90
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ner-orgs\"\n\nThis dataset is a concatenation of subsets of Few-NERD, CoNLL 2003 and OntoNotes v5, but only the \"B-ORG\" and \"I-ORG\" labels.\n\nExactly half of the samples per split contain organisations, while the other half do not contain any. \n\nIt was generated using the following script:"
]
|
a8587dce6c5e46b4abc02134f189d6eb6fe0884e | # Dataset Card for "AIPD_nlp_sentence_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | patent/AIPD_nlp_sentence_dataset | [
"region:us"
]
| 2023-11-22T09:41:22+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "patent_num", "dtype": "int64"}, {"name": "claim_num1", "dtype": "int64"}, {"name": "claim_num2", "dtype": "int64"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1141724170.7014475, "num_examples": 453043}, {"name": "test", "num_bytes": 63431500.71087167, "num_examples": 25170}, {"name": "valid", "num_bytes": 63428980.58768093, "num_examples": 25169}], "download_size": 479858442, "dataset_size": 1268584652.0}} | 2023-11-23T00:52:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "AIPD_nlp_sentence_dataset"
More Information needed | [
"# Dataset Card for \"AIPD_nlp_sentence_dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"AIPD_nlp_sentence_dataset\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"AIPD_nlp_sentence_dataset\"\n\nMore Information needed"
]
|
669a273f83027803ab901a5c05319b772ef89f14 |
Encephalitis Abstracts and Search Queries Dataset
This dataset contains pairs of encephalitis-related questions and answers.
The dataset was created in August 2023 and contains 53,146 encephalitis abstracts collected using the Biopython library. GPT-3.5 was then used to generate several pairs of questions and answers from each abstract.
To open the dataset:
```
import pandas as pd
df = pd.read_parquet('combined_dataset_df.parquet')
```
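Since the column names of the parquet file aren't documented above, a quick inspection step is a reasonable first move (a minimal sketch; no column names are assumed):

```python
import pandas as pd

df = pd.read_parquet('combined_dataset_df.parquet')
# Peek at the schema before relying on any particular column name.
print(df.columns.tolist())
print(df.head(1).to_dict(orient="records"))
```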
| Santosh-Gupta/EncephalitisQuestionAnswering | [
"license:mit",
"region:us"
]
| 2023-11-22T10:21:39+00:00 | {"license": "mit"} | 2023-11-22T10:28:45+00:00 | []
| []
| TAGS
#license-mit #region-us
|
Encephalitis Abstracts and Search Queries Dataset
This dataset contains pairs of encephalitis-related questions and answers.
The dataset was created in August 2023 and contains 53,146 encephalitis abstracts collected using the Biopython library. GPT-3.5 was then used to generate several pairs of questions and answers from each abstract.
To open the dataset:
| []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
2b567ebf76938888543d8ab296ffb609cdd5afc7 |
# [doc] image dataset 5
This dataset contains 4 jpg image files in the /train directory, with a CSV metadata file providing another data column.
| severo/doc-image-5 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T10:51:22+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T10:54:02+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] image dataset 5
This dataset contains 4 jpg image files in the /train directory, with a CSV metadata file providing another data column.
| [
"# [doc] image dataset 5\n\nThis dataset contains 4 jpg image files in the /train directory, with a CSV metadata file providing another data column."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] image dataset 5\n\nThis dataset contains 4 jpg image files in the /train directory, with a CSV metadata file providing another data column."
]
| [
16,
40
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] image dataset 5\n\nThis dataset contains 4 jpg image files in the /train directory, with a CSV metadata file providing another data column."
]
|
99d31ca3c5d0ccef4bc85ccca867e0a13b1d6225 | ---
# Dataset Card for "oct-object-detection-v4-merge"
Dataset is composed of images with multiple object-detection boxes in COCO format (xmin, ymin, xmax, ymax). Images are OCT scans (a type of eye scan) with boxes indicating some features associated with AMD disease.
The difference from v3 is that images are grouped (images are not duplicated across multiple rows) and they can have multiple label-boxes in the objects field. So, for the 566 unique images, there are 566 rows, one per image.
Also, overlapping boxes are joined with a merge function
[Source dataset](https://doi.org/10.1101/2023.03.29.534704) | joseluhf11/oct-object-detection-v4-merge | [
"region:us"
]
| 2023-11-22T11:19:06+00:00 | {"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "categories", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 70990022.0, "num_examples": 566}], "download_size": 70811624, "dataset_size": 70990022.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-28T12:00:44+00:00 | []
| []
| TAGS
#region-us
| ---
# Dataset Card for "oct-object-detection-v4-merge"
Dataset is composed of images with multiple object-detection boxes in COCO format (xmin, ymin, xmax, ymax). Images are OCT scans (a type of eye scan) with boxes indicating some features associated with AMD disease.
The difference from v3 is that images are grouped (images are not duplicated across multiple rows) and they can have multiple label-boxes in the objects field. So, for the 566 unique images, there are 566 rows, one per image.
Also, overlapping boxes are joined with a merge function
Source dataset | [
"# Dataset Card for \"oct-object-detection-v4-merge\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nAlso, overlapped boxes are joined as merge function\nSource datataset"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"oct-object-detection-v4-merge\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nAlso, overlapped boxes are joined as merge function\nSource datataset"
]
| [
6,
148
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"oct-object-detection-v4-merge\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nAlso, overlapped boxes are joined as merge function\nSource datataset"
]
|
91429f4c903e2c7a55e39294129d30c2222a2eba | ---
# Dataset Card for "oct-object-detection-v4-average"
Dataset is composed of images with multiple object-detection boxes in COCO format (xmin, ymin, xmax, ymax). Images are OCT scans (a type of eye scan) with boxes indicating some features associated with AMD disease.
The difference from v3 is that images are grouped (images are not duplicated across multiple rows) and they can have multiple label-boxes in the objects field. So, for the 566 unique images, there are 566 rows, one per image.
Also, overlapping boxes are joined with an average function
[Source dataset](https://doi.org/10.1101/2023.03.29.534704) | joseluhf11/oct-object-detection-v4-average | [
"region:us"
]
| 2023-11-22T11:19:16+00:00 | {"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "categories", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 70990022.0, "num_examples": 566}], "download_size": 70811672, "dataset_size": 70990022.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-28T12:02:26+00:00 | []
| []
| TAGS
#region-us
| ---
# Dataset Card for "oct-object-detection-v4-average"
Dataset is composed of images with multiple object-detection boxes in COCO format (xmin, ymin, xmax, ymax). Images are OCT scans (a type of eye scan) with boxes indicating some features associated with AMD disease.
The difference from v3 is that images are grouped (images are not duplicated across multiple rows) and they can have multiple label-boxes in the objects field. So, for the 566 unique images, there are 566 rows, one per image.
Also, overlapping boxes are joined with an average function
Source dataset | [
"# Dataset Card for \"oct-object-detection-v4-average\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nAlso, overlapped boxes are joined as average function\nSource datataset"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"oct-object-detection-v4-average\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nAlso, overlapped boxes are joined as average function\nSource datataset"
]
| [
6,
148
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"oct-object-detection-v4-average\"\nDataset is composed of images with multiples object detection box in coco format (xmin, ymin, xmax, ymax). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nThe difference from v3 is images are grouped (not duplicated images in multiples row) and they can have multiples labels-boxes in the objects field. So there are, 566 unique images, there are 566 rows, one per image.\nAlso, overlapped boxes are joined as average function\nSource datataset"
]
|
3e73fc3e061e7daf00d32c2ceed722238f1188a7 | # Dataset Card for "evaluation_align_v1__store_baehanjin_work_ml-training_v3_merged-public"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | male-2/evaluation_align_v1__store_baehanjin_work_ml-training_v3_merged-public | [
"region:us"
]
| 2023-11-22T11:20:52+00:00 | {"dataset_info": {"features": [{"name": "Aspect", "dtype": "string"}, {"name": "Sub-Aspect", "dtype": "string"}, {"name": "Query", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "Dialogue", "dtype": "string"}, {"name": "Response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 475, "num_examples": 1}], "download_size": 5647, "dataset_size": 475}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T11:20:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "evaluation_align_v1__store_baehanjin_work_ml-training_v3_merged-public"
More Information needed | [
"# Dataset Card for \"evaluation_align_v1__store_baehanjin_work_ml-training_v3_merged-public\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"evaluation_align_v1__store_baehanjin_work_ml-training_v3_merged-public\"\n\nMore Information needed"
]
| [
6,
38
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"evaluation_align_v1__store_baehanjin_work_ml-training_v3_merged-public\"\n\nMore Information needed"
]
|
a7b3b5902642eba9b10baf519472bbb7b97e4408 | # Dataset Card for "mt_bench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SUSTech/mt_bench | [
"region:us"
]
| 2023-11-22T11:41:19+00:00 | {"dataset_info": {"features": [{"name": "question_id", "dtype": "int64"}, {"name": "category", "dtype": "string"}, {"name": "turns", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "reference", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46852, "num_examples": 80}], "download_size": 31246, "dataset_size": 46852}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-01T13:27:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mt_bench"
More Information needed | [
"# Dataset Card for \"mt_bench\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mt_bench\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mt_bench\"\n\nMore Information needed"
]
|
9514b8bdb17cbc53de9a877ae4d52a55effcf34e | # Dataset Card for Election-Related Fake News Classification
## Dataset Summary
This dataset is designed for the task of fake news classification in the context of elections. It consists of news articles, social media posts, and other text sources related to various elections worldwide. Each entry in the dataset is labeled as 'fake' or 'real' based on its content and the veracity of the information presented.
https://arxiv.org/abs/2312.03750
### Languages
English
### Data Instances
A typical data instance comprises:
- **Text:** The content of the news article or post.
- **Label:** A binary label, where '0' indicates 'real' news and '1' indicates 'fake' news.
Example:
```json
{
"text": "The president announced a new policy today...",
"label": REAL
}
```
#### Annotation process
Annotations were generated using LLMs and verified by subject-matter experts, who checked each text as 'real' or 'fake' based on factual accuracy and context.
### Social Impact of Dataset
This dataset plays a crucial role in combating the spread of misinformation during elections, which is vital for maintaining the integrity of democratic processes.
### Discussion of Biases
There may be biases in the dataset due to the predominance of certain sources or the subjective nature of some news categorizations.
## Citation
If you use this dataset in your research, please cite it as follows:
```bibtex
@article{rahman2023analyzing,
title={Analyzing the Influence of Fake News in the 2024 Elections: A Comprehensive Dataset},
author={Rahman, Mizanur and Raza, Shaina},
journal={arXiv preprint arXiv:2312.03750},
year={2023}
}
```
| newsmediabias/fake_news_elections_labelled_data | [
"task_categories:text-classification",
"language:en",
"license:mit",
"arxiv:2312.03750",
"region:us"
]
| 2023-11-22T11:54:52+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["text-classification"], "pretty_name": "fake-news-elections"} | 2023-12-13T12:38:00+00:00 | [
"2312.03750"
]
| [
"en"
]
| TAGS
#task_categories-text-classification #language-English #license-mit #arxiv-2312.03750 #region-us
| # Dataset Card for Election-Related Fake News Classification
## Dataset Summary
This dataset is designed for the task of fake news classification in the context of elections. It consists of news articles, social media posts, and other text sources related to various elections worldwide. Each entry in the dataset is labeled as 'fake' or 'real' based on its content and the veracity of the information presented.
URL
### Languages
English
### Data Instances
A typical data instance comprises:
- Text: The content of the news article or post.
- Label: A binary label, where '0' indicates 'real' news and '1' indicates 'fake' news.
Example:
#### Annotation process
Annotations were generated using LLMs and verified by subject-matter experts, who checked each text as 'real' or 'fake' based on factual accuracy and context.
### Social Impact of Dataset
This dataset plays a crucial role in combating the spread of misinformation during elections, which is vital for maintaining the integrity of democratic processes.
### Discussion of Biases
There may be biases in the dataset due to the predominance of certain sources or the subjective nature of some news categorizations.
If you use this dataset in your research, please cite it as follows:
'''bibtex
@article{rahman2023analyzing,
title={Analyzing the Influence of Fake News in the 2024 Elections: A Comprehensive Dataset},
author={Rahman, Mizanur and Raza, Shaina},
journal={arXiv preprint arXiv:2312.03750},
year={2023}
}
| [
"# Dataset Card for Election-Related Fake News Classification",
"## Dataset Summary\n\nThis dataset is designed for the task of fake news classification in the context of elections. It consists of news articles, social media posts, and other text sources related to various elections worldwide. Each entry in the dataset is labeled as 'fake' or 'real' based on its content and the veracity of the information presented.\n\nURL",
"### Languages\nEnglish",
"### Data Instances\n\nA typical data instance comprises:\n\n- Text: The content of the news article or post.\n- Label: A binary label, where '0' indicates 'real' news and '1' indicates 'fake' news.\n\nExample:",
"#### Annotation process\n\nAnnotations were performed by using LLMs and which is verified by subject matter experts who checked each text as 'real' or 'fake' based on factual accuracy and context.",
"### Social Impact of Dataset\n\nThis dataset plays a crucial role in combating the spread of misinformation during elections, which is vital for maintaining the integrity of democratic processes.",
"### Discussion of Biases\n\nThere may be biases in the dataset due to the predominance of certain sources or the subjective nature of some news categorizations.\n\n\nIf you use this dataset in your research, please cite it as follows:\n\n'''bibtex\n@article{rahman2023analyzing,\n title={Analyzing the Influence of Fake News in the 2024 Elections: A Comprehensive Dataset},\n author={Rahman, Mizanur and Raza, Shaina},\n journal={arXiv preprint arXiv:2312.03750},\n year={2023}\n}"
]
| [
"TAGS\n#task_categories-text-classification #language-English #license-mit #arxiv-2312.03750 #region-us \n",
"# Dataset Card for Election-Related Fake News Classification",
"## Dataset Summary\n\nThis dataset is designed for the task of fake news classification in the context of elections. It consists of news articles, social media posts, and other text sources related to various elections worldwide. Each entry in the dataset is labeled as 'fake' or 'real' based on its content and the veracity of the information presented.\n\nURL",
"### Languages\nEnglish",
"### Data Instances\n\nA typical data instance comprises:\n\n- Text: The content of the news article or post.\n- Label: A binary label, where '0' indicates 'real' news and '1' indicates 'fake' news.\n\nExample:",
"#### Annotation process\n\nAnnotations were performed by using LLMs and which is verified by subject matter experts who checked each text as 'real' or 'fake' based on factual accuracy and context.",
"### Social Impact of Dataset\n\nThis dataset plays a crucial role in combating the spread of misinformation during elections, which is vital for maintaining the integrity of democratic processes.",
"### Discussion of Biases\n\nThere may be biases in the dataset due to the predominance of certain sources or the subjective nature of some news categorizations.\n\n\nIf you use this dataset in your research, please cite it as follows:\n\n'''bibtex\n@article{rahman2023analyzing,\n title={Analyzing the Influence of Fake News in the 2024 Elections: A Comprehensive Dataset},\n author={Rahman, Mizanur and Raza, Shaina},\n journal={arXiv preprint arXiv:2312.03750},\n year={2023}\n}"
]
| [
35,
15,
80,
5,
58,
49,
41,
139
]
| [
"passage: TAGS\n#task_categories-text-classification #language-English #license-mit #arxiv-2312.03750 #region-us \n# Dataset Card for Election-Related Fake News Classification## Dataset Summary\n\nThis dataset is designed for the task of fake news classification in the context of elections. It consists of news articles, social media posts, and other text sources related to various elections worldwide. Each entry in the dataset is labeled as 'fake' or 'real' based on its content and the veracity of the information presented.\n\nURL### Languages\nEnglish### Data Instances\n\nA typical data instance comprises:\n\n- Text: The content of the news article or post.\n- Label: A binary label, where '0' indicates 'real' news and '1' indicates 'fake' news.\n\nExample:#### Annotation process\n\nAnnotations were performed by using LLMs and which is verified by subject matter experts who checked each text as 'real' or 'fake' based on factual accuracy and context.### Social Impact of Dataset\n\nThis dataset plays a crucial role in combating the spread of misinformation during elections, which is vital for maintaining the integrity of democratic processes.### Discussion of Biases\n\nThere may be biases in the dataset due to the predominance of certain sources or the subjective nature of some news categorizations.\n\n\nIf you use this dataset in your research, please cite it as follows:\n\n'''bibtex\n@article{rahman2023analyzing,\n title={Analyzing the Influence of Fake News in the 2024 Elections: A Comprehensive Dataset},\n author={Rahman, Mizanur and Raza, Shaina},\n journal={arXiv preprint arXiv:2312.03750},\n year={2023}\n}"
]
|
8162832d59ffb1b15f98d5f85fb11b7a5d8cb832 |
# Dataset of Nakano Ichika
This is the dataset of Nakano Ichika, containing 397 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 397 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 889 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 1074 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 397 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 397 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 397 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 889 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 889 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 757 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 1074 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 1074 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/nakano_ichika_gotoubunnohanayome | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-22T12:11:42+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-22T12:25:48+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Nakano Ichika
========================
This is the dataset of Nakano Ichika, containing 397 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
a10544a8e09e0013b8a2990c184f8e4bbeb834f9 | # Dataset Card for "undl_fr2en_translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/undl_fr2en_translation | [
"region:us"
]
| 2023-11-22T12:39:04+00:00 | {"dataset_info": {"features": [{"name": "clean_fr", "sequence": "string"}, {"name": "clean_en", "sequence": "string"}, {"name": "record", "dtype": "string"}, {"name": "fr2en", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 15449493383, "num_examples": 165840}], "download_size": 0, "dataset_size": 15449493383}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T13:47:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "undl_fr2en_translation"
More Information needed | [
"# Dataset Card for \"undl_fr2en_translation\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_fr2en_translation\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_fr2en_translation\"\n\nMore Information needed"
]
|
cca1c61a483abe66f2678851ec368543badb69c4 |
# [doc] image dataset 7
This dataset contains 2 jpg image files in the /green directory, and 2 jpg image files in the /red directory.
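Assuming the Hub's imagefolder convention, where directory names become class labels, the files could be loaded locally as follows; this is a sketch under that assumption, not something stated on the card:

```python
from datasets import load_dataset

# /green and /red are expected to become the two class labels
# under the imagefolder convention (assumption, not stated on the card).
ds = load_dataset("imagefolder", data_dir="path/to/doc-image-7")
print(ds["train"].features)  # expected: an image column plus a label column
```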
| severo/doc-image-7 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T12:42:20+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T12:44:21+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] image dataset 7
This dataset contains 2 jpg image files in the /green directory, and 2 jpg image files in the /red directory.
| [
"# [doc] image dataset 7\n\nThis dataset contains 2 jpg image files in the /green directory, and 2 jpg image files in the /red directory."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] image dataset 7\n\nThis dataset contains 2 jpg image files in the /green directory, and 2 jpg image files in the /red directory."
]
| [
16,
38
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] image dataset 7\n\nThis dataset contains 2 jpg image files in the /green directory, and 2 jpg image files in the /red directory."
]
|
e4b965935fb8daac6a21b392e757601f8d8332b0 |
The Flickr30K dataset filtered and translated to Persian.
This dataset was originally made by **Sajjad Ayoubi** and uploaded to Kaggle at [https://www.kaggle.com/datasets/sajjadayobi360/flickrfa](https://www.kaggle.com/datasets/sajjadayobi360/flickrfa).
This repo contains the exact dataset, split into train/test using a custom sampling criterion, and can be loaded directly using HuggingFace datasets or right from Hezar.
### Usage
#### Hugging Face Datasets
```
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset("hezarai/flickr30k-fa")
```
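Given the train/test split mentioned above, individual splits can presumably be selected by name; the split names below are assumptions, not verified against the repo:

```python
from datasets import load_dataset

train_ds = load_dataset("hezarai/flickr30k-fa", split="train")
test_ds = load_dataset("hezarai/flickr30k-fa", split="test")
```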
#### Hezar
```
pip install hezar
```
```python
from hezar.data import Dataset
dataset = Dataset.load("hezarai/flickr30k-fa", split="train")
``` | hezarai/flickr30k-fa | [
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"language:fa",
"license:apache-2.0",
"hezar",
"region:us"
]
| 2023-11-22T12:58:08+00:00 | {"language": ["fa"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-to-text"], "pretty_name": "Flickr30K Fa", "tags": ["hezar"]} | 2023-12-19T15:04:45+00:00 | []
| [
"fa"
]
| TAGS
#task_categories-image-to-text #size_categories-10K<n<100K #language-Persian #license-apache-2.0 #hezar #region-us
|
The Flickr30K dataset filtered and translated to Persian.
This dataset was originally made by Sajjad Ayoubi and uploaded to Kaggle at URL
This repo contains the exact dataset, split into train/test using a custom sampling criterion, and can be loaded directly using HuggingFace datasets or right from Hezar.
### Usage
#### Hugging Face Datasets
#### Hezar
| [
"### Usage",
"#### Hugging Face Datasets",
"#### Hezar"
]
| [
"TAGS\n#task_categories-image-to-text #size_categories-10K<n<100K #language-Persian #license-apache-2.0 #hezar #region-us \n",
"### Usage",
"#### Hugging Face Datasets",
"#### Hezar"
]
| [
46,
4,
8,
4
]
| [
"passage: TAGS\n#task_categories-image-to-text #size_categories-10K<n<100K #language-Persian #license-apache-2.0 #hezar #region-us \n### Usage#### Hugging Face Datasets#### Hezar"
]
|
8ee7d3274cf713cacfee7107f3aa8e8f3c362912 |
# [doc] audio dataset 1
This dataset contains 4 wav audio files at the root.
| severo/doc-audio-1 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:07:39+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:15:12+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] audio dataset 1
This dataset contains 4 wav audio files at the root.
| [
"# [doc] audio dataset 1\n\nThis dataset contains 4 wav audio files at the root."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] audio dataset 1\n\nThis dataset contains 4 wav audio files at the root."
]
| [
16,
22
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] audio dataset 1\n\nThis dataset contains 4 wav audio files at the root."
]
|
0584c6bff7f526ae5670b55f974ec02c723a0160 |
# [doc] audio dataset 2
This dataset contains 4 audio files at the root, using formats aiff, mp3, mp3 and flac. | severo/doc-audio-2 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:08:07+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:16:11+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] audio dataset 2
This dataset contains 4 audio files at the root, using formats aiff, mp3, mp3 and flac. | [
"# [doc] audio dataset 2\n\nThis dataset contains 4 audio files at the root, using formats aiff, mp3, mp3 and flac."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] audio dataset 2\n\nThis dataset contains 4 audio files at the root, using formats aiff, mp3, mp3 and flac."
]
| [
16,
34
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] audio dataset 2\n\nThis dataset contains 4 audio files at the root, using formats aiff, mp3, mp3 and flac."
]
|
e30e29629cb7c5a9a34f680b8b3f0e3ccda513a6 | # Dataset Card for "ead57b12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/ead57b12 | [
"region:us"
]
| 2023-11-22T13:11:24+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 163, "num_examples": 10}], "download_size": 1347, "dataset_size": 163}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T13:11:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ead57b12"
More Information needed | [
"# Dataset Card for \"ead57b12\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ead57b12\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ead57b12\"\n\nMore Information needed"
]
|
4aa21c233ad3454148898551656602014c1c9f55 | # Dataset Card for "PDEBench-1D"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | erbacher/PDEBench-1D | [
"region:us"
]
| 2023-11-22T13:13:55+00:00 | {"dataset_info": [{"config_name": "Advection_Sols_beta0.1", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2079020000, "num_examples": 10000}], "download_size": 1030317301, "dataset_size": 2079020000}, {"config_name": "Advection_Sols_beta0.2", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2079020000, "num_examples": 10000}], "download_size": 1034054442, "dataset_size": 2079020000}, {"config_name": "Advection_Sols_beta0.4", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2079020000, "num_examples": 10000}], "download_size": 1037220772, "dataset_size": 2079020000}, {"config_name": "Advection_Sols_beta0.7", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2079020000, "num_examples": 10000}], "download_size": 1039496575, "dataset_size": 2079020000}, {"config_name": "Advection_Sols_beta1.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2079020000, "num_examples": 10000}], "download_size": 1041009183, "dataset_size": 2079020000}, {"config_name": "Advection_Sols_beta2.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2079020000, "num_examples": 10000}], "download_size": 1041263590, "dataset_size": 2079020000}, {"config_name": "Advection_Sols_beta4.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2079020000, "num_examples": 10000}], "download_size": 1041302186, "dataset_size": 2079020000}, {"config_name": "Advection_Sols_beta7.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2079020000, "num_examples": 10000}], "download_size": 1041314010, "dataset_size": 2079020000}, {"config_name": "Burgers_Sols_Nu0.001", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975050000.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51975000.0, "num_examples": 250}, {"name": "test", "num_bytes": 51975000.0, "num_examples": 250}], "download_size": 1028326119, "dataset_size": 2079000000.0}, {"config_name": "Burgers_Sols_Nu0.002", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975050000.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51975000.0, "num_examples": 250}, {"name": "test", "num_bytes": 51975000.0, "num_examples": 250}], "download_size": 1034543373, "dataset_size": 2079000000.0}, {"config_name": "Burgers_Sols_Nu0.004", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": 
[{"name": "train", "num_bytes": 1975050000.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51975000.0, "num_examples": 250}, {"name": "test", "num_bytes": 51975000.0, "num_examples": 250}], "download_size": 1039636457, "dataset_size": 2079000000.0}, {"config_name": "Burgers_Sols_Nu0.01", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975040500.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51974750.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974750.0, "num_examples": 250}], "download_size": 1042820960, "dataset_size": 2078990000.0}, {"config_name": "Burgers_Sols_Nu0.02", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975040500.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51974750.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974750.0, "num_examples": 250}], "download_size": 1043138323, "dataset_size": 2078990000.0}, {"config_name": "Burgers_Sols_Nu0.04", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975040500.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51974750.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974750.0, "num_examples": 250}], "download_size": 1035623715, "dataset_size": 2078990000.0}, {"config_name": "Burgers_Sols_Nu0.1", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975031000.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51974500.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974500.0, "num_examples": 250}], "download_size": 995415792, "dataset_size": 2078980000.0}, {"config_name": "Burgers_Sols_Nu0.2", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975031000.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51974500.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974500.0, "num_examples": 250}], "download_size": 949166113, "dataset_size": 2078980000.0}, {"config_name": "Burgers_Sols_Nu0.4", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975031000.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51974500.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974500.0, "num_examples": 250}], "download_size": 847341109, "dataset_size": 2078980000.0}, {"config_name": "Burgers_Sols_Nu1.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975031000.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51974500.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974500.0, "num_examples": 250}], "download_size": 573087335, "dataset_size": 2078980000.0}, {"config_name": "Burgers_Sols_Nu2.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975031000.0, "num_examples": 9500}, {"name": "dev", 
"num_bytes": 51974500.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974500.0, "num_examples": 250}], "download_size": 315101631, "dataset_size": 2078980000.0}, {"config_name": "Burgers_Sols_Nu4.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1975031000.0, "num_examples": 9500}, {"name": "dev", "num_bytes": 51974500.0, "num_examples": 250}, {"name": "test", "num_bytes": 51974500.0, "num_examples": 250}], "download_size": 189417705, "dataset_size": 2078980000.0}, {"config_name": "CFD_Rand_Eta0.01_Zeta0.01_periodic", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2099620000, "num_examples": 10000}], "download_size": 1576405761, "dataset_size": 2099620000}, {"config_name": "CFD_Rand_Eta0.1_Zeta0.1_periodic", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2099600000, "num_examples": 10000}], "download_size": 946984963, "dataset_size": 2099600000}, {"config_name": "CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2099640000, "num_examples": 10000}], "download_size": 1573309616, "dataset_size": 2099640000}, {"config_name": "CFD_Rand_Eta1.e-8_Zeta1.e-8_trans", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2099610000, "num_examples": 10000}], "download_size": 0, "dataset_size": 2099610000}, {"config_name": "ReacDiff_Nu0.5_Rho1.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 103983829, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu0.5_Rho10.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055020000, "num_examples": 10000}], "download_size": 124933565, "dataset_size": 1055020000}, {"config_name": "ReacDiff_Nu0.5_Rho2.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 193004745, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu0.5_Rho5.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 146090506, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu1.0_Rho1.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 217153008, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu1.0_Rho10.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": 
"float32"}}}], "splits": [{"name": "train", "num_bytes": 1055020000, "num_examples": 10000}], "download_size": 113039664, "dataset_size": 1055020000}, {"config_name": "ReacDiff_Nu1.0_Rho2.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 139659779, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu1.0_Rho5.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 121216648, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu2.0_Rho1.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 57854036, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu2.0_Rho10.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055020000, "num_examples": 10000}], "download_size": 73754842, "dataset_size": 1055020000}, {"config_name": "ReacDiff_Nu2.0_Rho2.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 122071454, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu2.0_Rho5.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 74329093, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu5.0_Rho1.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 158789252, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu5.0_Rho10.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055020000, "num_examples": 10000}], "download_size": 55445429, "dataset_size": 1055020000}, {"config_name": "ReacDiff_Nu5.0_Rho2.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 58220881, "dataset_size": 1055010000}, {"config_name": "ReacDiff_Nu5.0_Rho5.0", "features": [{"name": "parameters", "dtype": "string"}, {"name": "tensor", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1055010000, "num_examples": 10000}], "download_size": 57392368, "dataset_size": 1055010000}], "configs": [{"config_name": "Advection_Sols_beta0.1", "data_files": [{"split": "train", "path": "Advection_Sols_beta0.1/train-*"}]}, {"config_name": "Advection_Sols_beta0.2", "data_files": [{"split": "train", "path": "Advection_Sols_beta0.2/train-*"}]}, {"config_name": "Advection_Sols_beta0.4", "data_files": [{"split": "train", "path": 
"Advection_Sols_beta0.4/train-*"}]}, {"config_name": "Advection_Sols_beta0.7", "data_files": [{"split": "train", "path": "Advection_Sols_beta0.7/train-*"}]}, {"config_name": "Advection_Sols_beta1.0", "data_files": [{"split": "train", "path": "Advection_Sols_beta1.0/train-*"}]}, {"config_name": "Advection_Sols_beta2.0", "data_files": [{"split": "train", "path": "Advection_Sols_beta2.0/train-*"}]}, {"config_name": "Advection_Sols_beta4.0", "data_files": [{"split": "train", "path": "Advection_Sols_beta4.0/train-*"}]}, {"config_name": "Advection_Sols_beta7.0", "data_files": [{"split": "train", "path": "Advection_Sols_beta7.0/train-*"}]}, {"config_name": "Burgers_Sols_Nu0.001", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.001/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.001/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.001/test-*"}]}, {"config_name": "Burgers_Sols_Nu0.002", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.002/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.002/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.002/test-*"}]}, {"config_name": "Burgers_Sols_Nu0.004", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.004/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.004/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.004/test-*"}]}, {"config_name": "Burgers_Sols_Nu0.01", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.01/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.01/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.01/test-*"}]}, {"config_name": "Burgers_Sols_Nu0.02", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.02/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.02/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.02/test-*"}]}, {"config_name": "Burgers_Sols_Nu0.04", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.04/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.04/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.04/test-*"}]}, {"config_name": "Burgers_Sols_Nu0.1", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.1/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.1/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.1/test-*"}]}, {"config_name": "Burgers_Sols_Nu0.2", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.2/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.2/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.2/test-*"}]}, {"config_name": "Burgers_Sols_Nu0.4", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu0.4/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu0.4/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu0.4/test-*"}]}, {"config_name": "Burgers_Sols_Nu1.0", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu1.0/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu1.0/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu1.0/test-*"}]}, {"config_name": "Burgers_Sols_Nu2.0", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu2.0/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu2.0/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu2.0/test-*"}]}, {"config_name": "Burgers_Sols_Nu4.0", "data_files": [{"split": "train", "path": "Burgers_Sols_Nu4.0/train-*"}, {"split": "dev", "path": "Burgers_Sols_Nu4.0/dev-*"}, {"split": "test", "path": "Burgers_Sols_Nu4.0/test-*"}]}, {"config_name": "CFD_Rand_Eta0.01_Zeta0.01_periodic", "data_files": [{"split": "train", "path": "CFD_Rand_Eta0.01_Zeta0.01_periodic/train-*"}]}, {"config_name": "CFD_Rand_Eta0.1_Zeta0.1_periodic", 
"data_files": [{"split": "train", "path": "CFD_Rand_Eta0.1_Zeta0.1_periodic/train-*"}]}, {"config_name": "CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic", "data_files": [{"split": "train", "path": "CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic/train-*"}]}, {"config_name": "CFD_Rand_Eta1.e-8_Zeta1.e-8_trans", "data_files": [{"split": "train", "path": "CFD_Rand_Eta1.e-8_Zeta1.e-8_trans/train-*"}]}, {"config_name": "ReacDiff_Nu0.5_Rho1.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu0.5_Rho1.0/train-*"}]}, {"config_name": "ReacDiff_Nu0.5_Rho10.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu0.5_Rho10.0/train-*"}]}, {"config_name": "ReacDiff_Nu0.5_Rho2.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu0.5_Rho2.0/train-*"}]}, {"config_name": "ReacDiff_Nu0.5_Rho5.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu0.5_Rho5.0/train-*"}]}, {"config_name": "ReacDiff_Nu1.0_Rho1.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu1.0_Rho1.0/train-*"}]}, {"config_name": "ReacDiff_Nu1.0_Rho10.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu1.0_Rho10.0/train-*"}]}, {"config_name": "ReacDiff_Nu1.0_Rho2.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu1.0_Rho2.0/train-*"}]}, {"config_name": "ReacDiff_Nu1.0_Rho5.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu1.0_Rho5.0/train-*"}]}, {"config_name": "ReacDiff_Nu2.0_Rho1.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu2.0_Rho1.0/train-*"}]}, {"config_name": "ReacDiff_Nu2.0_Rho10.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu2.0_Rho10.0/train-*"}]}, {"config_name": "ReacDiff_Nu2.0_Rho2.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu2.0_Rho2.0/train-*"}]}, {"config_name": "ReacDiff_Nu2.0_Rho5.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu2.0_Rho5.0/train-*"}]}, {"config_name": "ReacDiff_Nu5.0_Rho1.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu5.0_Rho1.0/train-*"}]}, {"config_name": "ReacDiff_Nu5.0_Rho10.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu5.0_Rho10.0/train-*"}]}, {"config_name": "ReacDiff_Nu5.0_Rho2.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu5.0_Rho2.0/train-*"}]}, {"config_name": "ReacDiff_Nu5.0_Rho5.0", "data_files": [{"split": "train", "path": "ReacDiff_Nu5.0_Rho5.0/train-*"}]}]} | 2023-12-20T21:36:56+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "PDEBench-1D"
More Information needed | [
"# Dataset Card for \"PDEBench-1D\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"PDEBench-1D\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"PDEBench-1D\"\n\nMore Information needed"
]
|
dfadefa61a170ddc517f2aa205c243f2f334ac41 |
# [doc] audio dataset 3
This dataset contains 4 audio files at the root, with a CSV metadata file providing another data column.
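Conventionally such a file is a metadata.csv keyed by a file_name column, with the extra column carrying the additional data. A sketch of the expected shape (the file names and the extra column name are hypothetical):

```csv
file_name,caption
audio_1.wav,a first example caption
audio_2.wav,a second example caption
```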
| severo/doc-audio-3 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:16:30+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:18:35+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] audio dataset 3
This dataset contains 4 audio files at the root, with a CSV metadata file providing another data column.
| [
"# [doc] audio dataset 3\n\nThis dataset contains 4 audio files at the root, with a CSV metadata file providing another data column."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] audio dataset 3\n\nThis dataset contains 4 audio files at the root, with a CSV metadata file providing another data column."
]
| [
16,
34
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] audio dataset 3\n\nThis dataset contains 4 audio files at the root, with a CSV metadata file providing another data column."
]
|
3459c9b0dd8107f890ce76b7dffd2d39fe9e8e90 |
# [doc] audio dataset 4
This dataset contains 4 audio files in the /data directory, with a CSV metadata file providing another data column.
| severo/doc-audio-4 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:18:44+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:19:34+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] audio dataset 4
This dataset contains 4 audio files in the /data directory, with a CSV metadata file providing another data column.
| [
"# [doc] audio dataset 4\n\nThis dataset contains 4 audio files the /data directory, with a CSV metadata file providing another data column."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] audio dataset 4\n\nThis dataset contains 4 audio files the /data directory, with a CSV metadata file providing another data column."
]
| [
16,
36
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] audio dataset 4\n\nThis dataset contains 4 audio files the /data directory, with a CSV metadata file providing another data column."
]
|
b92034188bd19e0f14538db6aff34758749989fe |
# Dataset of Nakano Nino
This is the dataset of Nakano Nino, containing 410 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 410 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 921 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 1050 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 410 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 410 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 410 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 921 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 921 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 778 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 1050 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 1050 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/nakano_nino_gotoubunnohanayome | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-22T13:19:35+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-22T13:35:52+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Nakano Nino
======================
This is the dataset of Nakano Nino, containing 410 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
f8a20e7036e08fae474d691cddf2b39265580b48 |
# [doc] audio dataset 5
This dataset contains 4 audio files in the /train directory, with a CSV metadata file providing another data column.
| severo/doc-audio-5 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:19:50+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:20:46+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] audio dataset 5
This dataset contains 4 audio files in the /train directory, with a CSV metadata file providing another data column.
| [
"# [doc] audio dataset 5\n\nThis dataset contains 4 audio files the /train directory, with a CSV metadata file providing another data column."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] audio dataset 5\n\nThis dataset contains 4 audio files the /train directory, with a CSV metadata file providing another data column."
]
| [
16,
37
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] audio dataset 5\n\nThis dataset contains 4 audio files the /train directory, with a CSV metadata file providing another data column."
]
|
0940f6422fe385faf559143bcf5276bcc797142f |
# [doc] audio dataset 6
This dataset contains 4 audio files in the /train directory, with a CSV metadata file providing another data column.
| severo/doc-audio-6 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:20:59+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:24:08+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] audio dataset 6
This dataset contains 4 audio files in the /train directory, with a CSV metadata file providing another data column.
| [
"# [doc] audio dataset 6\n\nThis dataset contains 4 audio files the /train directory, with a CSV metadata file providing another data column."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] audio dataset 6\n\nThis dataset contains 4 audio files the /train directory, with a CSV metadata file providing another data column."
]
| [
16,
37
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] audio dataset 6\n\nThis dataset contains 4 audio files the /train directory, with a CSV metadata file providing another data column."
]
|
0f3a25194df31ccff99a0c3574e29604d06f4276 |
# [doc] audio dataset 7
This dataset contains 2 wav audio files in the /cat directory, and 2 wav audio files in the /dog directory.
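Assuming the audiofolder convention, where directory names become class labels, a minimal loading sketch (again an assumption, not stated on the card):

```python
from datasets import load_dataset

# /cat and /dog are expected to become the class labels under the
# audiofolder convention (assumption, not stated on the card).
ds = load_dataset("audiofolder", data_dir="path/to/doc-audio-7")
```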
| severo/doc-audio-7 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:22:46+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:24:25+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] audio dataset 7
This dataset contains 2 wav audio files in the /cat directory, and 2 wav audio files in the /dog directory.
| [
"# [doc] audio dataset 7\n\nThis dataset contains 2 wav audio files in the /cat directory, and 2 wav audio files in the /dog directory."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] audio dataset 7\n\nThis dataset contains 2 wav audio files in the /cat directory, and 2 wav audio files in the /dog directory."
]
| [
16,
38
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] audio dataset 7\n\nThis dataset contains 2 wav audio files in the /cat directory, and 2 wav audio files in the /dog directory."
]
|
35667dbae06112d2805bf6d2bec3bdbf7ec3fd55 | # Dataset Card for "undl_fr2en_aligned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/undl_fr2en_aligned | [
"region:us"
]
| 2023-11-22T13:24:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "record", "dtype": "string"}, {"name": "clean_para_index_set_pair", "dtype": "string"}, {"name": "src", "dtype": "string"}, {"name": "dst", "dtype": "string"}, {"name": "src_text", "dtype": "string"}, {"name": "dst_text", "dtype": "string"}, {"name": "src_rate", "dtype": "float64"}, {"name": "dst_rate", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 11061116049, "num_examples": 15969753}], "download_size": 0, "dataset_size": 11061116049}} | 2023-11-22T14:26:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "undl_fr2en_aligned"
More Information needed | [
"# Dataset Card for \"undl_fr2en_aligned\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_fr2en_aligned\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_fr2en_aligned\"\n\nMore Information needed"
]
|
31cbf196f7da3ab045cd6e4d0437f0e2631fe7c8 |
# [doc] image + audio dataset 1
This dataset contains 4 jpeg image files and 4 wav audio files at the root.
| severo/doc-image-audio-1 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:38:49+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:40:12+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] image + audio dataset 1
This dataset contains 4 jpeg image files and 4 wav audio files at the root.
| [
"# [doc] image + audio dataset 1\n\nThis dataset contains 4 jpeg image files and 4 wav audio files at the root."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] image + audio dataset 1\n\nThis dataset contains 4 jpeg image files and 4 wav audio files at the root."
]
| [
16,
30
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] image + audio dataset 1\n\nThis dataset contains 4 jpeg image files and 4 wav audio files at the root."
]
|
a3249d5595cee5e533e04b0a442cfb61b725ffec |
## Dataset Description
A list of SD prompts in Alpaca format collected from Discord servers, mostly from "Unstable Diffusion", and also including "Umi AI, Aitrepreneur, Softology".
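A hedged sketch of the Alpaca-format record this implies (all field values are hypothetical; the field mapping follows the processing notes below):

```python
# Hypothetical record; real values come from exported Discord messages.
record = {
    "instruction": "a portrait of a woman, intricate, highly detailed",  # image caption
    "input": "unstable-diffusion-prompts",                               # channel name
    "output": "masterpiece, portrait of a woman, 8k, intricate details", # the SD prompt
}
```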
#### Data Collection and Processing
Dataset creation [November 2023] and tools:
- DiscordChatExporter to bulk-download messages, keeping only prompt images with metadata as "output" and the channel name as "input".
- Captioning used for "instruction": ViT-L-14/openai (pharmapsychotic/clip-interrogator-ext); otherwise, a spaCy summarizer
- Kainet Editor and my (test) scripts [scrapt](https://pastebin.com/w8qPjjiL), [format](https://pastebin.com/gGmDrmjX)[_](https://pastebin.com/VtG9LSuG), [dedup](https://pastebin.com/zZWaH4V3) for miscellaneous replacements. | Nekochu/discord-unstable-diffusion-SD-prompts | [
"license:apache-2.0",
"region:us"
]
| 2023-11-22T13:39:36+00:00 | {"license": "apache-2.0"} | 2024-02-13T18:11:42+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
|
## Dataset Description
A list of SD prompts in Alpaca format collected from Discord servers, mostly from "Unstable Diffusion", and also including "Umi AI, Aitrepreneur, Softology".
#### Data Collection and Processing
Dataset creation [November 2023] and tools:
- DiscordChatExporter to bulk-download messages, keeping only prompt images with metadata as "output" and the channel name as "input".
- Captioning used for "instruction": ViT-L-14/openai (pharmapsychotic/clip-interrogator-ext); otherwise, a spaCy summarizer
- Kainet Editor and my (test) scripts scrapt, format_, dedup for miscellaneous replacements. | [
"## Dataset Description\n\nList of SD prompt in alpaca format from the discord server mostly from \"Unstable Diffusion\", include \"Umi AI, Aitrepreneur, Softology\"",
"#### Data Collection and Processing\n\nDataset creation [november 2023] and tools:\n- DiscordChatExporter to bulk download and keep only prompt image with metadata to \"output\" and channel name as \"input\".\n- Captioning used for \"instruction\": ViT-L-14/openai (pharmapsychotic/clip-interrogator-ext), else spacy summarizer \n- Kainet Editor and my (test) script scrapt, format_, dedup for misc replace."
]
| [
"TAGS\n#license-apache-2.0 #region-us \n",
"## Dataset Description\n\nList of SD prompt in alpaca format from the discord server mostly from \"Unstable Diffusion\", include \"Umi AI, Aitrepreneur, Softology\"",
"#### Data Collection and Processing\n\nDataset creation [november 2023] and tools:\n- DiscordChatExporter to bulk download and keep only prompt image with metadata to \"output\" and channel name as \"input\".\n- Captioning used for \"instruction\": ViT-L-14/openai (pharmapsychotic/clip-interrogator-ext), else spacy summarizer \n- Kainet Editor and my (test) script scrapt, format_, dedup for misc replace."
]
| [
14,
41,
113
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n## Dataset Description\n\nList of SD prompt in alpaca format from the discord server mostly from \"Unstable Diffusion\", include \"Umi AI, Aitrepreneur, Softology\"#### Data Collection and Processing\n\nDataset creation [november 2023] and tools:\n- DiscordChatExporter to bulk download and keep only prompt image with metadata to \"output\" and channel name as \"input\".\n- Captioning used for \"instruction\": ViT-L-14/openai (pharmapsychotic/clip-interrogator-ext), else spacy summarizer \n- Kainet Editor and my (test) script scrapt, format_, dedup for misc replace."
]
|
cbac64ce72b006d202c9be7a42cd3d457fd53af5 |
# [doc] image + audio dataset 2
This dataset contains 4 image files and 4 audio files at the root, using formats jpeg, png, tiff, webp for the images and aiff, mp3, mp3 and flac for the audio files.
| severo/doc-image-audio-2 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:40:50+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T13:49:55+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] image + audio dataset 2
This dataset contains 4 image files and 4 audio files at the root, using formats jpeg, png, tiff, webp for the images and aiff, mp3, mp3 and flac for the audio files.
| [
"# [doc] image + audio dataset 2\n\nThis dataset contains 4 jpeg image files and 4 audio files at the root, using formats jpeg, png, tiff, webp for the images and aiff, mp3, mp3 and flac for the audio files."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] image + audio dataset 2\n\nThis dataset contains 4 jpeg image files and 4 audio files at the root, using formats jpeg, png, tiff, webp for the images and aiff, mp3, mp3 and flac for the audio files."
]
| [
16,
61
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] image + audio dataset 2\n\nThis dataset contains 4 jpeg image files and 4 audio files at the root, using formats jpeg, png, tiff, webp for the images and aiff, mp3, mp3 and flac for the audio files."
]
|
54157b6ca009dee73431df4a93f2a8552054494a | # TestovanieFEasdasd
Created from AIOD platform | mtkinit/TestovanieFEasdasd | [
"region:us"
]
| 2023-11-22T13:42:31+00:00 | {"pretty_name": "TestovanieFEasdasd"} | 2023-11-22T13:42:32+00:00 | []
| []
| TAGS
#region-us
| # TestovanieFEasdasd
Created from AIOD platform | [
"# TestovanieFEasdasd\nCreated from AIOD platform"
]
| [
"TAGS\n#region-us \n",
"# TestovanieFEasdasd\nCreated from AIOD platform"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# TestovanieFEasdasd\nCreated from AIOD platform"
]
|
4002f1ac560aa3ebfe4398ce0644c6e6e27ea3a1 |
# [doc] file names and splits 1
This dataset contains a data.csv file at the root.
| datasets-examples/doc-splits-1 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T13:57:18+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:12:48+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] file names and splits 1
This dataset contains a URL file at the root.
| [
"# [doc] file names and splits 1\n\nThis dataset contains a URL file at the root."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] file names and splits 1\n\nThis dataset contains a URL file at the root."
]
| [
16,
22
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] file names and splits 1\n\nThis dataset contains a URL file at the root."
]
|
b2301937f3acd1b500d2f4ea8c23248026fdeed5 | # Dataset Card for "distilabel-docs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | argilla/distilabel-docs | [
"region:us"
]
| 2023-11-22T13:57:18+00:00 | {"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "generation_model", "dtype": "string"}, {"name": "generation_prompt", "dtype": "string"}, {"name": "raw_generation_responses", "list": [{"name": "choices", "list": [{"name": "finish_reason", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "logprobs", "dtype": "null"}, {"name": "text", "dtype": "string"}]}, {"name": "created", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "object", "dtype": "string"}, {"name": "usage", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "total_tokens", "dtype": "int64"}]}]}, {"name": "generations", "sequence": "string"}, {"name": "labelling_model", "dtype": "string"}, {"name": "labelling_prompt", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "raw_labelling_response", "dtype": "string"}, {"name": "rating", "sequence": "float64"}, {"name": "areas", "list": [{"name": "Authenticity & Reliability", "struct": [{"name": "rating", "dtype": "string"}, {"name": "rationale", "dtype": "string"}]}, {"name": "Clarity & Transparency", "struct": [{"name": "rating", "dtype": "string"}, {"name": "rationale", "dtype": "string"}]}, {"name": "Compliance with Intent", "struct": [{"name": "rating", "dtype": "string"}, {"name": "rationale", "dtype": "string"}]}, {"name": "Practical Accuracy", "struct": [{"name": "rating", "dtype": "string"}, {"name": "rationale", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 79809, "num_examples": 5}], "download_size": 100998, "dataset_size": 79809}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T13:57:20+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "distilabel-docs"
More Information needed | [
"# Dataset Card for \"distilabel-docs\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"distilabel-docs\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"distilabel-docs\"\n\nMore Information needed"
]
|
adbded34ec6047a7f15279ba4630eaeabd1f6c13 | # Dataset Card for "find_marker_both_sent_train_400_eval_40_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_marker_both_sent_train_400_eval_40_first_permute | [
"region:us"
]
| 2023-11-22T14:02:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5973002.880726015, "num_examples": 4188}, {"name": "validation", "num_bytes": 220570, "num_examples": 200}], "download_size": 1112460, "dataset_size": 6193572.880726015}} | 2023-11-22T14:02:21+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_marker_both_sent_train_400_eval_40_first_permute"
More Information needed | [
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_first_permute\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_first_permute\"\n\nMore Information needed"
]
| [
6,
36
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_marker_both_sent_train_400_eval_40_first_permute\"\n\nMore Information needed"
]
|
5ab9beae5d7d120c164aef0d0fd4b05267eb1809 | # Dataset Card for "find_marker_both_sent_train_400_eval_40_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_marker_both_sent_train_400_eval_40_last_permute | [
"region:us"
]
| 2023-11-22T14:04:39+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5973002.880726015, "num_examples": 4188}, {"name": "validation", "num_bytes": 220570, "num_examples": 200}], "download_size": 1111218, "dataset_size": 6193572.880726015}} | 2023-11-22T14:04:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_marker_both_sent_train_400_eval_40_last_permute"
More Information needed | [
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_last_permute\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_last_permute\"\n\nMore Information needed"
]
| [
6,
35
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_marker_both_sent_train_400_eval_40_last_permute\"\n\nMore Information needed"
]
|
67e5220da9be50d8467dd4d839a2f08e89b20d3d |
# [doc] file names and splits 2
This dataset contains three csv files at the root: train.csv, test.csv, validation.csv.
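When loaded from the Hub, split names should be inferred from these file names; a sketch, assuming the standard CSV auto-configuration behavior:

```python
from datasets import load_dataset

ds = load_dataset("datasets-examples/doc-splits-2")
# Expected: a DatasetDict with 'train', 'test' and 'validation' splits,
# each backed by the correspondingly named CSV file.
print(ds)
```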
| datasets-examples/doc-splits-2 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:13:04+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:17:45+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] file names and splits 2
This dataset contains three csv files at the root: URL, URL, URL.
| [
"# [doc] file names and splits 2\n\nThis dataset contains three csv files at the root: URL, URL, URL."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] file names and splits 2\n\nThis dataset contains three csv files at the root: URL, URL, URL."
]
| [
16,
29
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] file names and splits 2\n\nThis dataset contains three csv files at the root: URL, URL, URL."
]
|
097956325b5c8f26af45c330045ffddf4af5158b |
# Dataset of Nakano Miku
This is the dataset of Nakano Miku, containing 358 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 358 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 833 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 934 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 358 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 358 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 358 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 833 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 833 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 681 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 934 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 934 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
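The packages in the table above are plain zip archives stored in this dataset repository; a minimal sketch for fetching one of them with `huggingface_hub` (the file name is taken from the table, everything else is standard API usage):

```python
from huggingface_hub import hf_hub_download

# Download the 384x512 aligned package from this dataset repo.
path = hf_hub_download(
    repo_id="CyberHarem/nakano_miku_gotoubunnohanayome",
    filename="dataset-384x512.zip",
    repo_type="dataset",
)
print(path)  # local path of the cached zip archive
```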
| CyberHarem/nakano_miku_gotoubunnohanayome | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-22T14:16:40+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-22T14:28:31+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Nakano Miku
======================
This is the dataset of Nakano Miku, containing 358 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
90e3654e06fff90782d27d6409233d7b588039ed |
# [doc] file names and splits 3
This dataset contains three csv files at the root: my_train_file.csv, test-file.csv, validation1.csv.
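Although these names do not match the canonical train.csv / test.csv / validation.csv pattern, split inference still works because each file name contains a train/test/valid keyword. A hedged sketch (assuming the Hub's default keyword-based detection):

```python
from datasets import load_dataset

# my_train_file.csv -> "train", test-file.csv -> "test",
# validation1.csv  -> "validation" (keywords matched inside the names)
ds = load_dataset("datasets-examples/doc-splits-3")
print(list(ds.keys()))  # ['train', 'test', 'validation']
```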
| datasets-examples/doc-splits-3 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:18:02+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:19:50+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] file names and splits 3
This dataset contains three csv files at the root: my_train_file.csv, URL, URL.
| [
"# [doc] file names and splits 3\n\nThis dataset contains three csv files at the root: my_train_file.csv, URL, URL."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] file names and splits 3\n\nThis dataset contains three csv files at the root: my_train_file.csv, URL, URL."
]
| [
16,
37
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] file names and splits 3\n\nThis dataset contains three csv files at the root: my_train_file.csv, URL, URL."
]
|
a3dcadbe912cc850382cc854fa4029c1cd91a5d1 | # Dataset Card for "find_marker_both_sent_train_400_eval_40_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_marker_both_sent_train_400_eval_40_no_permute | [
"region:us"
]
| 2023-11-22T14:21:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5973002.880726015, "num_examples": 4188}, {"name": "validation", "num_bytes": 220570, "num_examples": 200}], "download_size": 983246, "dataset_size": 6193572.880726015}} | 2023-11-22T14:21:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_marker_both_sent_train_400_eval_40_no_permute"
More Information needed | [
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_no_permute\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_no_permute\"\n\nMore Information needed"
]
| [
6,
35
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_marker_both_sent_train_400_eval_40_no_permute\"\n\nMore Information needed"
]
|
94c459b0220fba22719955d2d5d979134c4a7888 | # testing2
Created from AIOD platform | mtkinit/testing2 | [
"region:us"
]
| 2023-11-22T14:23:32+00:00 | {"pretty_name": "testing2"} | 2023-11-22T14:23:32+00:00 | []
| []
| TAGS
#region-us
| # testing2
Created from AIOD platform | [
"# testing2\nCreated from AIOD platform"
]
| [
"TAGS\n#region-us \n",
"# testing2\nCreated from AIOD platform"
]
| [
6,
9
]
| [
"passage: TAGS\n#region-us \n# testing2\nCreated from AIOD platform"
]
|
daf145863b3189ed90bc706540105a9fa7a8e732 |
# [doc] file names and splits 4
This dataset contains three subdirectories, inside data/, called train, test and validation, with csv files in them.
| datasets-examples/doc-splits-4 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:23:32+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:23:57+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] file names and splits 4
This dataset contains three subdirectories, inside data/, called train, test and validation, with csv files in them.
| [
"# [doc] file names and splits 4\n\nThis dataset contains three subdirectories, inside data/, called train, test and validation, with csv files in them."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] file names and splits 4\n\nThis dataset contains three subdirectories, inside data/, called train, test and validation, with csv files in them."
]
| [
16,
40
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] file names and splits 4\n\nThis dataset contains three subdirectories, inside data/, called train, test and validation, with csv files in them."
]
|
174bc2cc575d5a5fee5b9ef21c73f7282bd933e0 | # mtkinit/testing3
Created from AIOD platform | mtkinit/mtkinit_testing3 | [
"region:us"
]
| 2023-11-22T14:27:54+00:00 | {"pretty_name": "mtkinit/testing3"} | 2023-11-22T14:27:56+00:00 | []
| []
| TAGS
#region-us
| # mtkinit/testing3
Created from AIOD platform | [
"# mtkinit/testing3\nCreated from AIOD platform"
]
| [
"TAGS\n#region-us \n",
"# mtkinit/testing3\nCreated from AIOD platform"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# mtkinit/testing3\nCreated from AIOD platform"
]
|
ed928030da5133fd907f060b271a6e1e11cb7d3c |
# [doc] file names and splits 5
This dataset contains three files inside data/, called training.csv, eval.csv and valid.csv.
| datasets-examples/doc-splits-5 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:28:43+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:28:59+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] file names and splits 5
This dataset contains three files inside data/, called URL, URL and URL.
| [
"# [doc] file names and splits 5\n\nThis dataset contains three files inside data/, called URL, URL and URL."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] file names and splits 5\n\nThis dataset contains three files inside data/, called URL, URL and URL."
]
| [
16,
28
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] file names and splits 5\n\nThis dataset contains three files inside data/, called URL, URL and URL."
]
|
4217adcd2e3b39d474f6067232f19bbd8f3bb314 |
# [doc] file names and splits 6
This dataset contains six files at the root, four for the training split, and two for the test split.
| datasets-examples/doc-splits-6 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:32:50+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:34:04+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] file names and splits 6
This dataset contains six files at the root, four for the training split, and two for the test split.
| [
"# [doc] file names and splits 6\n\nThis dataset contains six files at the root, four for the training split, and two for the test split."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] file names and splits 6\n\nThis dataset contains six files at the root, four for the training split, and two for the test split."
]
| [
16,
34
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] file names and splits 6\n\nThis dataset contains six files at the root, four for the training split, and two for the test split."
]
|
df8e57eb6468660ff60e102b6ccca899034552e6 |
# [doc] file names and splits 7
This dataset contains six files under the data/ directory, four in the train/ subdirectory, and two in the test/ subdirectory.
| datasets-examples/doc-splits-7 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:36:36+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:37:03+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] file names and splits 7
This dataset contains six files under the data/ directory, four in the train/ subdirectory, and two in the test/ subdirectory.
| [
"# [doc] file names and splits 7\n\nThis dataset contains six files under the data/ directory, four in the train/ subdirectory, and two in the test/ subdirectory."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] file names and splits 7\n\nThis dataset contains six files under the data/ directory, four in the train/ subdirectory, and two in the test/ subdirectory."
]
| [
16,
43
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] file names and splits 7\n\nThis dataset contains six files under the data/ directory, four in the train/ subdirectory, and two in the test/ subdirectory."
]
|
f677f28332e38da5e39c6cfedd86fbc005a0a084 | # Dataset Card for "undl_de2en_translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/undl_de2en_translation | [
"region:us"
]
| 2023-11-22T14:41:24+00:00 | {"dataset_info": {"features": [{"name": "clean_de", "sequence": "string"}, {"name": "clean_en", "sequence": "string"}, {"name": "record", "dtype": "string"}, {"name": "de2en", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 340931827, "num_examples": 96410}], "download_size": 148570287, "dataset_size": 340931827}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T15:22:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "undl_de2en_translation"
More Information needed | [
"# Dataset Card for \"undl_de2en_translation\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_de2en_translation\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_de2en_translation\"\n\nMore Information needed"
]
|
23fcf47b37718a93ac7d9f151d7735b430c9b71e |
# [doc] file names and splits 8
This dataset contains seven files under the data/ directory, three for the train split, one for the test split and three for the random split.
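`random` is not one of the canonical split names, so this layout presumably relies on the sharded `data/<split>-xxxxx-of-xxxxx` file-naming convention to declare it; once detected, a custom split is accessed like any other. A sketch under that assumption:

```python
from datasets import load_dataset

ds = load_dataset("datasets-examples/doc-splits-8")
# The three "random" shards are concatenated into a single split.
print(ds["random"].num_rows)
```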
| datasets-examples/doc-splits-8 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:43:12+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:43:30+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] file names and splits 8
This dataset contains seven files under the data/ directory, three for the train split, one for the test split and three for the random split.
| [
"# [doc] file names and splits 8\n\nThis dataset contains seven files under the data/ directory, three for the train split, one for the test split and three for the random split."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] file names and splits 8\n\nThis dataset contains seven files under the data/ directory, three for the train split, one for the test split and three for the random split."
]
| [
16,
42
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] file names and splits 8\n\nThis dataset contains seven files under the data/ directory, three for the train split, one for the test split and three for the random split."
]
|
a1b23e845005ff4fb8b3a5ed5365317d4d309b9d |
# ArXiv Dataset
## Overview
## Dataset Description
- **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv)
- **Repository:**
- **Paper:** [On the Use of ArXiv as a Dataset](https://arxiv.org/abs/1905.00075)
This dataset is a comprehensive collection of metadata from the ArXiv repository, a widely-recognized open-access archive offering access to scholarly articles in various fields of science. It covers a broad range of subjects from physics and computer science to mathematics, statistics, electrical engineering, quantitative biology, and economics.
The dataset hosted here is derived from the original ArXiv dataset available on Kaggle, which includes metadata for approximately 2.2 million articles. The metadata encompasses various features such as article titles, authors, categories, abstracts, and full text in PDF format.
This rich repository of scholarly articles provides a valuable resource for data analysis, trend identification, and development of machine learning models. It can facilitate applications like trend analysis, paper recommendation systems, category prediction, co-citation network analysis, knowledge graph construction, and semantic search interfaces.
The data is particularly suited for those interested in natural language processing and text analytics within the academic domain.
## Dataset Composition
The dataset is divided into multiple .parquet files, structured to enable efficient access and analysis. Each file contains a subset of the entire dataset, allowing users to work with manageable portions of data as needed.
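Given the multi-million-row size, streaming avoids materialising every parquet shard locally before inspecting the data. A minimal sketch (field names taken from this dataset's feature list):

```python
from datasets import load_dataset

# Stream records instead of downloading all parquet shards up front.
ds = load_dataset("CCRss/arXiv_dataset", split="train", streaming=True)
for paper in ds.take(3):
    print(paper["id"], paper["categories"], paper["title"])
```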
## Original Dataset Source
The dataset is based on the **[ArXiv dataset hosted on Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv)**, provided by Cornell University. It represents a snapshot of the ArXiv metadata.
### License
This dataset is made available under the CC0: Public Domain License. The original dataset from ArXiv, as provided by Cornell University on Kaggle, is also under the same license, allowing for unrestricted use and distribution.
### Citation and Acknowledgments
When using or citing this dataset, please acknowledge the original source of the data: ArXiv dataset on Kaggle, maintained and operated by Cornell University. | CCRss/arXiv_dataset | [
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"arxiv:1905.00075",
"region:us"
]
| 2023-11-22T14:44:03+00:00 | {"language": ["en"], "license": "cc0-1.0", "size_categories": ["1M<n<10M"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "submitter", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "comments", "dtype": "string"}, {"name": "journal-ref", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "report-no", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "versions", "list": [{"name": "version", "dtype": "string"}, {"name": "created", "dtype": "string"}]}, {"name": "update_date", "dtype": "timestamp[s]"}, {"name": "authors_parsed", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 3538777556, "num_examples": 2367176}], "download_size": 1992564422, "dataset_size": 3538777556}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-26T08:13:50+00:00 | [
"1905.00075"
]
| [
"en"
]
| TAGS
#size_categories-1M<n<10M #language-English #license-cc0-1.0 #arxiv-1905.00075 #region-us
|
# ArXiv Dataset
## Overview
## Dataset Description
- Homepage: Kaggle arXiv Dataset Homepage
- Repository:
- Paper: On the Use of ArXiv as a Dataset
This dataset is a comprehensive collection of metadata from the ArXiv repository, a widely-recognized open-access archive offering access to scholarly articles in various fields of science. It covers a broad range of subjects from physics and computer science to mathematics, statistics, electrical engineering, quantitative biology, and economics.
The dataset hosted here is derived from the original ArXiv dataset available on Kaggle, which includes metadata for approximately 2.2 million articles. The metadata encompasses various features such as article titles, authors, categories, abstracts, and full text in PDF format.
This rich repository of scholarly articles provides a valuable resource for data analysis, trend identification, and development of machine learning models. It can facilitate applications like trend analysis, paper recommendation systems, category prediction, co-citation network analysis, knowledge graph construction, and semantic search interfaces.
The data is particularly suited for those interested in natural language processing and text analytics within the academic domain.
## Dataset Composition
The dataset is divided into multiple .parquet files, structured to enable efficient access and analysis. Each file contains a subset of the entire dataset, allowing users to work with manageable portions of data as needed.
## Original Dataset Source
The dataset is based on the ArXiv dataset hosted on Kaggle, provided by Cornell University. It represents a snapshot of the ArXiv metadata.
### License
This dataset is made available under the CC0: Public Domain License. The original dataset from ArXiv, as provided by Cornell University on Kaggle, is also under the same license, allowing for unrestricted use and distribution.
and Acknowledgments
When using or citing this dataset, please acknowledge the original source of the data: ArXiv dataset on Kaggle, maintained and operated by Cornell University. | [
"# ArXiv Dataset",
"## Overview",
"## Dataset Description\n\n- Homepage: Kaggle arXiv Dataset Homepage\n- Repository: \n- Paper: On the Use of ArXiv as a Dataset\n\nThis dataset is a comprehensive collection of metadata from the ArXiv repository, a widely-recognized open-access archive offering access to scholarly articles in various fields of science. It covers a broad range of subjects from physics and computer science to mathematics, statistics, electrical engineering, quantitative biology, and economics.\n\nThe dataset hosted here is derived from the original ArXiv dataset available on Kaggle, which includes metadata for approximately 2.2 million articles. The metadata encompasses various features such as article titles, authors, categories, abstracts, and full text in PDF format.\n\nThis rich repository of scholarly articles provides a valuable resource for data analysis, trend identification, and development of machine learning models. It can facilitate applications like trend analysis, paper recommendation systems, category prediction, co-citation network analysis, knowledge graph construction, and semantic search interfaces.\n\nThe data is particularly suited for those interested in natural language processing and text analytics within the academic domain.",
"## Dataset Composition\nThe dataset is divided into multiple .parquet files, structured to enable efficient access and analysis. Each file contains a subset of the entire dataset, allowing users to work with manageable portions of data as needed.",
"## Original Dataset Source\nThe dataset is based on the ArXiv dataset hosted on Kaggle, provided by Cornell University. It represents a snapshot of the ArXiv metadata.",
"### License\nThis dataset is made available under the CC0: Public Domain License. The original dataset from ArXiv, as provided by Cornell University on Kaggle, is also under the same license, allowing for unrestricted use and distribution.\n\nand Acknowledgments\nWhen using or citing this dataset, please acknowledge the original source of the data: ArXiv dataset on Kaggle, maintained and operated by Cornell University."
]
| [
"TAGS\n#size_categories-1M<n<10M #language-English #license-cc0-1.0 #arxiv-1905.00075 #region-us \n",
"# ArXiv Dataset",
"## Overview",
"## Dataset Description\n\n- Homepage: Kaggle arXiv Dataset Homepage\n- Repository: \n- Paper: On the Use of ArXiv as a Dataset\n\nThis dataset is a comprehensive collection of metadata from the ArXiv repository, a widely-recognized open-access archive offering access to scholarly articles in various fields of science. It covers a broad range of subjects from physics and computer science to mathematics, statistics, electrical engineering, quantitative biology, and economics.\n\nThe dataset hosted here is derived from the original ArXiv dataset available on Kaggle, which includes metadata for approximately 2.2 million articles. The metadata encompasses various features such as article titles, authors, categories, abstracts, and full text in PDF format.\n\nThis rich repository of scholarly articles provides a valuable resource for data analysis, trend identification, and development of machine learning models. It can facilitate applications like trend analysis, paper recommendation systems, category prediction, co-citation network analysis, knowledge graph construction, and semantic search interfaces.\n\nThe data is particularly suited for those interested in natural language processing and text analytics within the academic domain.",
"## Dataset Composition\nThe dataset is divided into multiple .parquet files, structured to enable efficient access and analysis. Each file contains a subset of the entire dataset, allowing users to work with manageable portions of data as needed.",
"## Original Dataset Source\nThe dataset is based on the ArXiv dataset hosted on Kaggle, provided by Cornell University. It represents a snapshot of the ArXiv metadata.",
"### License\nThis dataset is made available under the CC0: Public Domain License. The original dataset from ArXiv, as provided by Cornell University on Kaggle, is also under the same license, allowing for unrestricted use and distribution.\n\nand Acknowledgments\nWhen using or citing this dataset, please acknowledge the original source of the data: ArXiv dataset on Kaggle, maintained and operated by Cornell University."
]
| [
38,
6,
3,
268,
57,
43,
98
]
| [
"passage: TAGS\n#size_categories-1M<n<10M #language-English #license-cc0-1.0 #arxiv-1905.00075 #region-us \n# ArXiv Dataset## Overview## Dataset Description\n\n- Homepage: Kaggle arXiv Dataset Homepage\n- Repository: \n- Paper: On the Use of ArXiv as a Dataset\n\nThis dataset is a comprehensive collection of metadata from the ArXiv repository, a widely-recognized open-access archive offering access to scholarly articles in various fields of science. It covers a broad range of subjects from physics and computer science to mathematics, statistics, electrical engineering, quantitative biology, and economics.\n\nThe dataset hosted here is derived from the original ArXiv dataset available on Kaggle, which includes metadata for approximately 2.2 million articles. The metadata encompasses various features such as article titles, authors, categories, abstracts, and full text in PDF format.\n\nThis rich repository of scholarly articles provides a valuable resource for data analysis, trend identification, and development of machine learning models. It can facilitate applications like trend analysis, paper recommendation systems, category prediction, co-citation network analysis, knowledge graph construction, and semantic search interfaces.\n\nThe data is particularly suited for those interested in natural language processing and text analytics within the academic domain.## Dataset Composition\nThe dataset is divided into multiple .parquet files, structured to enable efficient access and analysis. Each file contains a subset of the entire dataset, allowing users to work with manageable portions of data as needed.## Original Dataset Source\nThe dataset is based on the ArXiv dataset hosted on Kaggle, provided by Cornell University. It represents a snapshot of the ArXiv metadata."
]
|
b742a69c96ed8694b04f5eccf70aea203af7b704 | This dataset comprises a collection of the most recent (up to 22 November 2023) 5K arXiv papers' metadata in the computer science categories:
- cs.AI (Artificial Intelligence),
- cs.CL (Computation and Language),
- cs.CV (Computer Vision),
- cs.LG (Machine Learning),
- cs.MA (Multiagent Systems), and
- cs.NE (Neural and Evolutionary Computing).
Each metadata entry includes the embeddings for the 'title' and 'abstract' of the paper, generated using [Cohere's Embed-v3](https://txt.cohere.com/introducing-embed-v3/) for 'clustering'. | dcarpintero/arXiv.cs.AI.CL.CV.LG.MA.NE.embedv3.clustering.mini | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-22T14:51:24+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "arXiv.embedv3.clustering.mini", "A collection of the most recent 5K arXiv papers' metadata": ["enriched with 'title' and 'abstract' embeddings generated with the advanced Cohere's Embed-v3 model for 'clustering';", "published before 22 November 2023; and", {"in the categories": "cs.AI (Artificial Intelligence), cs.CL (Computation and Language), cs.CV (Computer Vision), cs.LG (Machine Learning), cs.MA (Multiagent Systems), and cs.NE (Neural and Evolutionary Computing)."}]} | 2023-11-22T23:26:59+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us
| This dataset comprises a collection of the most recent (up to 22 November 2023) 5K arXiv papers' metadata in the computer science categories:
- cs.AI (Artificial Intelligence),
- cs.CL (Computation and Language),
- cs.CV (Computer Vision),
- cs.LG (Machine Learning),
- cs.MA (Multiagent Systems), and
- cs.NE (Neural and Evolutionary Computing).
Each metadata entry includes the embeddings for the 'title' and 'abstract' of the paper, generated using Cohere's Embed-v3 for 'clustering'. | []
| [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n"
]
| [
41
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n"
]
|
4038beb4b9d80d3ae930810e597600a0c159cf0d |
# [doc] unsupported 1
This dataset contains two csv files at the root.
| severo/doc-unsupported-1 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:54:57+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T14:56:43+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] unsupported 1
This dataset contains two csv files at the root.
| [
"# [doc] unsupported 1\n\nThis dataset contains two csv files at the root."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] unsupported 1\n\nThis dataset contains two csv files at the root."
]
| [
16,
21
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] unsupported 1\n\nThis dataset contains two csv files at the root."
]
|
55050a8a05a3481c304d53da8c63c1a06e32d5ef | # Dataset Card for "Data_new_stable"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zardat/Data_new_stable | [
"region:us"
]
| 2023-11-22T14:57:39+00:00 | {"dataset_info": {"features": [{"name": "node_features", "sequence": {"sequence": "float32"}}, {"name": "edge_list", "sequence": {"sequence": "float32"}}, {"name": "edge_attr", "sequence": "float32"}, {"name": "magnetization", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 11195888, "num_examples": 1220}], "download_size": 2575485, "dataset_size": 11195888}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T14:57:46+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Data_new_stable"
More Information needed | [
"# Dataset Card for \"Data_new_stable\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Data_new_stable\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Data_new_stable\"\n\nMore Information needed"
]
|
71aff8075b92079c04d74b3cbd2af0810e811d22 |
# [doc] manual configuration 1
This dataset contains two csv files at the root, and a YAML field `configs` that specifies the data files and splits.
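A sketch of the resulting loading behaviour (the split-to-file mapping — data.csv for `train`, holdout.csv for `test` — is taken from the YAML in this card's metadata):

```python
from datasets import load_dataset

# The YAML `configs` field maps data.csv -> "train" and holdout.csv -> "test".
ds = load_dataset("datasets-examples/doc-yaml-1")
print(ds["train"].num_rows, ds["test"].num_rows)
```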
| datasets-examples/doc-yaml-1 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T14:58:13+00:00 | {"size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}, {"split": "test", "path": "holdout.csv"}]}]} | 2023-11-22T14:58:33+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] manual configuration 1
This dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and splits.
| [
"# [doc] manual configuration 1\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and splits."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] manual configuration 1\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and splits."
]
| [
16,
39
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] manual configuration 1\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and splits."
]
|
af726ab851ec24dadbb01764b334a05b32b2bbec |
# [doc] unsupported 2
This dataset contains two csv files at the root, one is called train.csv.
| severo/doc-unsupported-2 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T15:00:16+00:00 | {"size_categories": ["n<1K"]} | 2023-11-22T15:00:31+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] unsupported 2
This dataset contains two csv files at the root, one is called URL.
| [
"# [doc] unsupported 2\n\nThis dataset contains two csv files at the root, one is called URL."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] unsupported 2\n\nThis dataset contains two csv files at the root, one is called URL."
]
| [
16,
26
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] unsupported 2\n\nThis dataset contains two csv files at the root, one is called URL."
]
|
c4101804d73cfe98a766451e894cb8d5780bd3e2 |
# [doc] manual configuration 2
This dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field `configs` that specifies the data files and splits.
| datasets-examples/doc-yaml-2 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T15:03:34+00:00 | {"size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": ["data/abc.csv", "data/def.csv"]}, {"split": "test", "path": "holdout/ghi.csv"}]}]} | 2023-11-23T08:45:18+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] manual configuration 2
This dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field 'configs' that specifies the data files and splits.
| [
"# [doc] manual configuration 2\n\nThis dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field 'configs' that specifies the data files and splits."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] manual configuration 2\n\nThis dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field 'configs' that specifies the data files and splits."
]
| [
16,
54
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] manual configuration 2\n\nThis dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field 'configs' that specifies the data files and splits."
]
|
2bb86c4a45b9e98af9e95d8c1ce76e3b7578218f |
# [doc] manual configuration 3
This dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field `configs` that specifies the data files and splits, using glob expressions.
| datasets-examples/doc-yaml-3 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T15:05:16+00:00 | {"size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*.csv"}, {"split": "test", "path": "holdout/*.csv"}]}]} | 2023-11-22T15:05:31+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] manual configuration 3
This dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field 'configs' that specifies the data files and splits, using glob expressions.
| [
"# [doc] manual configuration 3\n\nThis dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field 'configs' that specifies the data files and splits, using glob expressions."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] manual configuration 3\n\nThis dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field 'configs' that specifies the data files and splits, using glob expressions."
]
| [
16,
59
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] manual configuration 3\n\nThis dataset contains two csv files in the data/ directory and one csv file in the holdout/ directory, and a YAML field 'configs' that specifies the data files and splits, using glob expressions."
]
|
ee085d8c7594116b57655d38aded4d0280172b52 |
# [doc] manual configuration 4
This dataset contains two csv files at the root, and a YAML field `configs` that specifies the data files and configs.
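Each entry under the YAML `configs` field becomes a named configuration, selected by its `config_name` at load time (the names `main_data` and `additional_data` are taken from this card's metadata):

```python
from datasets import load_dataset

# Load each named configuration separately.
main = load_dataset("datasets-examples/doc-yaml-4", "main_data")
extra = load_dataset("datasets-examples/doc-yaml-4", "additional_data")
```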
| datasets-examples/doc-yaml-4 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T15:07:39+00:00 | {"size_categories": ["n<1K"], "configs": [{"config_name": "main_data", "data_files": "main_data.csv"}, {"config_name": "additional_data", "data_files": "additional_data.csv"}]} | 2023-11-22T15:07:53+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] manual configuration 4
This dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and configs.
| [
"# [doc] manual configuration 4\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and configs."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] manual configuration 4\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and configs."
]
| [
16,
40
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] manual configuration 4\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and configs."
]
|
a522c58d0eeb48ccb6cef8a40d5a9e54ae488883 |
# Dataset of Nakano Yotsuba
This is the dataset of Nakano Yotsuba, containing 302 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 302 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 723 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 838 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 302 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 302 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 302 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 723 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 723 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 637 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 838 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 838 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/nakano_yotsuba_gotoubunnohanayome | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-22T15:09:19+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-22T15:17:17+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Nakano Yotsuba
=========================
This is the dataset of Nakano Yotsuba, containing 302 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
6f646292fd0446740eb2cabf3515b1eb1a2a16bf |
# [doc] manual configuration 5
This dataset contains two csv files at the root, and a YAML field `configs` that specifies the data files and configs, declaring that the file main_data.csv uses a tab character as its separator.
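Builder parameters such as `sep` are passed through from the YAML to the CSV loader on a per-config basis (the config names `tab` and `comma` are taken from this card's metadata):

```python
from datasets import load_dataset

# The "tab" config declares sep="\t", so main_data.csv is parsed as TSV;
# the "comma" config keeps the default comma separator.
tab = load_dataset("datasets-examples/doc-yaml-5", "tab")
comma = load_dataset("datasets-examples/doc-yaml-5", "comma")
```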
| datasets-examples/doc-yaml-5 | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T15:10:57+00:00 | {"size_categories": ["n<1K"], "configs": [{"config_name": "tab", "data_files": "main_data.csv", "sep": "\t"}, {"config_name": "comma", "data_files": "additional_data.csv", "sep": ","}]} | 2023-11-22T15:11:18+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# [doc] manual configuration 5
This dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and configs, declaring that the file main_data.csv uses a tab character as its separator.
| [
"# [doc] manual configuration 5\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and configs, specifying that the file main_data.csv uses a tabulation as separator."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# [doc] manual configuration 5\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and configs, specifying that the file main_data.csv uses a tabulation as separator."
]
| [
16,
60
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# [doc] manual configuration 5\n\nThis dataset contains two csv files at the root, and a YAML field 'configs' that specifies the data files and configs, specifying that the file main_data.csv uses a tabulation as separator."
]
|
6200a8c689982adb32225c6adc008f79c245d4b6 | # Dataset Card for "CID234"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/CID234 | [
"region:us"
]
| 2023-11-22T15:12:22+00:00 | {"dataset_info": {"config_name": "Lollitor", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 530, "num_examples": 12}], "download_size": 1282, "dataset_size": 530}, "configs": [{"config_name": "Lollitor", "data_files": [{"split": "train", "path": "Lollitor/train-*"}]}]} | 2023-11-22T15:12:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CID234"
More Information needed | [
"# Dataset Card for \"CID234\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CID234\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CID234\"\n\nMore Information needed"
]
|
266aad6d27378c81b2c75365b39572f84f7085a6 | # Dataset Card for "undl_de2en_aligned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/undl_de2en_aligned | [
"region:us"
]
| 2023-11-22T15:17:12+00:00 | {"dataset_info": {"features": [{"name": "record", "dtype": "string"}, {"name": "clean_para_index_set_pair", "dtype": "string"}, {"name": "src", "dtype": "string"}, {"name": "dst", "dtype": "string"}, {"name": "src_text", "dtype": "string"}, {"name": "dst_text", "dtype": "string"}, {"name": "src_rate", "dtype": "float64"}, {"name": "dst_rate", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 238410098, "num_examples": 377681}], "download_size": 0, "dataset_size": 238410098}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T15:22:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "undl_de2en_aligned"
More Information needed | [
"# Dataset Card for \"undl_de2en_aligned\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_de2en_aligned\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_de2en_aligned\"\n\nMore Information needed"
]
|
701c7aa83448df280714ba4df0324abd8798a90a | # Dataset Card for "undl_ru2en_translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/undl_ru2en_translation | [
"region:us"
]
| 2023-11-22T15:24:03+00:00 | {"dataset_info": {"features": [{"name": "clean_ru", "sequence": "string"}, {"name": "clean_en", "sequence": "string"}, {"name": "record", "dtype": "string"}, {"name": "ru2en", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 19061350976, "num_examples": 165840}], "download_size": 8299372812, "dataset_size": 19061350976}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T17:42:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "undl_ru2en_translation"
More Information needed | [
"# Dataset Card for \"undl_ru2en_translation\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_ru2en_translation\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_ru2en_translation\"\n\nMore Information needed"
]
|
d863132a6786cbb96778be2d634b43041ac4a805 |
# What is Khasi Language?
## Location:
- Primarily spoken in the northeastern Indian state of Meghalaya.
- Also spoken in parts of Assam, Tripura, and Bangladesh.
## Language Family:
- Khasi is a member of the Austroasiatic language family.
## Script:
- Traditionally written using the Khasi script, which is a script created specifically for the Khasi language.
## Culture and Identity:
- The Khasi language is an integral part of the cultural identity of the Khasi people.
- It plays a significant role in traditional Khasi folklore, rituals, and oral traditions.
## Grammar:
- Khasi has a subject-verb-object (SVO) word order.
- Nouns do not have gender, and there is no grammatical distinction between singular and plural.
## Vocabulary:
- The vocabulary of Khasi reflects the cultural and natural environment of the Khasi people, including terms related to agriculture, nature, and social customs.
## Multilingualism:
- Many Khasi speakers are multilingual, often fluent in English and other languages due to the region's diverse linguistic landscape.
## Linguistic Features:
- Khasi is known for its unique linguistic features, including a system of classifiers used in counting and categorizing objects.
## Language Preservation:
- Efforts are made to preserve and promote the Khasi language through education, literature, and cultural programs.
## Cultural Significance:
- The Khasi language is closely tied to the cultural and historical heritage of the Khasi people, contributing to their distinct identity in the northeastern region of India.
| damerajee/khasi-datasets | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
]
| 2023-11-22T15:26:00+00:00 | {"license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "pretty_name": "Tribal Language , language modeling "} | 2023-11-23T15:03:24+00:00 | []
| []
| TAGS
#task_categories-text-generation #size_categories-1K<n<10K #license-apache-2.0 #region-us
|
# What is Khasi Language?
## Location:
- Primarily spoken in the northeastern Indian state of Meghalaya.
- Also spoken in parts of Assam, Tripura, and Bangladesh.
## Language Family:
- Khasi is a member of the Austroasiatic language family.
## Script:
- Traditionally written using the Khasi script, which is a script created specifically for the Khasi language.
## Culture and Identity:
- The Khasi language is an integral part of the cultural identity of the Khasi people.
- It plays a significant role in traditional Khasi folklore, rituals, and oral traditions.
## Grammar:
- Khasi has a subject-verb-object (SVO) word order.
- Nouns do not have gender, and there is no grammatical distinction between singular and plural.
## Vocabulary:
- The vocabulary of Khasi reflects the cultural and natural environment of the Khasi people, including terms related to agriculture, nature, and social customs.
## Multilingualism:
- Many Khasi speakers are multilingual, often fluent in English and other languages due to the region's diverse linguistic landscape.
## Linguistic Features:
- Khasi is known for its unique linguistic features, including a system of classifiers used in counting and categorizing objects.
## Language Preservation:
- Efforts are made to preserve and promote the Khasi language through education, literature, and cultural programs.
## Cultural Significance:
- The Khasi language is closely tied to the cultural and historical heritage of the Khasi people, contributing to their distinct identity in the northeastern region of India.
| [
"# What is Khasi Language?",
"## Location:\n- Primarily spoken in the northeastern Indian state of Meghalaya.\n- Also spoken in parts of Assam, Tripura, and Bangladesh.",
"## Language Family:\n- Khasi is a member of the Austroasiatic language family.",
"## Script:\n- Traditionally written using the Khasi script, which is a script created specifically for the Khasi language.",
"## Culture and Identity:\n- The Khasi language is an integral part of the cultural identity of the Khasi people.\n- It plays a significant role in traditional Khasi folklore, rituals, and oral traditions.",
"## Grammar:\n\n- Khasi has a subject-verb-object (SVO) word order.\n- Nouns do not have gender, and there is no grammatical distinction between singular and plural.",
"## Vocabulary:\n\n- The vocabulary of Khasi reflects the cultural and natural environment of the Khasi people, including terms related to agriculture, nature, and social customs.\n-",
"## Multilingualism:\n\n- Many Khasi speakers are multilingual, often fluent in English and other languages due to the region's diverse linguistic landscape.",
"## Linguistic Features:\n\n- Khasi is known for its unique linguistic features, including a system of classifiers used in counting and categorizing objects.",
"## Language Preservation:\n\n- Efforts are made to preserve and promote the Khasi language through education, literature, and cultural programs.",
"## Cultural Significance:\n\n- The Khasi language is closely tied to the cultural and historical heritage of the Khasi people, contributing to their distinct identity in the northeastern region of India."
]
| [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #license-apache-2.0 #region-us \n",
"# What is Khasi Language?",
"## Location:\n- Primarily spoken in the northeastern Indian state of Meghalaya.\n- Also spoken in parts of Assam, Tripura, and Bangladesh.",
"## Language Family:\n- Khasi is a member of the Austroasiatic language family.",
"## Script:\n- Traditionally written using the Khasi script, which is a script created specifically for the Khasi language.",
"## Culture and Identity:\n- The Khasi language is an integral part of the cultural identity of the Khasi people.\n- It plays a significant role in traditional Khasi folklore, rituals, and oral traditions.",
"## Grammar:\n\n- Khasi has a subject-verb-object (SVO) word order.\n- Nouns do not have gender, and there is no grammatical distinction between singular and plural.",
"## Vocabulary:\n\n- The vocabulary of Khasi reflects the cultural and natural environment of the Khasi people, including terms related to agriculture, nature, and social customs.\n-",
"## Multilingualism:\n\n- Many Khasi speakers are multilingual, often fluent in English and other languages due to the region's diverse linguistic landscape.",
"## Linguistic Features:\n\n- Khasi is known for its unique linguistic features, including a system of classifiers used in counting and categorizing objects.",
"## Language Preservation:\n\n- Efforts are made to preserve and promote the Khasi language through education, literature, and cultural programs.",
"## Cultural Significance:\n\n- The Khasi language is closely tied to the cultural and historical heritage of the Khasi people, contributing to their distinct identity in the northeastern region of India."
]
| [
37,
7,
38,
19,
25,
47,
44,
40,
37,
36,
30,
45
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #license-apache-2.0 #region-us \n# What is Khasi Language?## Location:\n- Primarily spoken in the northeastern Indian state of Meghalaya.\n- Also spoken in parts of Assam, Tripura, and Bangladesh.## Language Family:\n- Khasi is a member of the Austroasiatic language family.## Script:\n- Traditionally written using the Khasi script, which is a script created specifically for the Khasi language.## Culture and Identity:\n- The Khasi language is an integral part of the cultural identity of the Khasi people.\n- It plays a significant role in traditional Khasi folklore, rituals, and oral traditions.## Grammar:\n\n- Khasi has a subject-verb-object (SVO) word order.\n- Nouns do not have gender, and there is no grammatical distinction between singular and plural.## Vocabulary:\n\n- The vocabulary of Khasi reflects the cultural and natural environment of the Khasi people, including terms related to agriculture, nature, and social customs.\n-## Multilingualism:\n\n- Many Khasi speakers are multilingual, often fluent in English and other languages due to the region's diverse linguistic landscape.## Linguistic Features:\n\n- Khasi is known for its unique linguistic features, including a system of classifiers used in counting and categorizing objects.## Language Preservation:\n\n- Efforts are made to preserve and promote the Khasi language through education, literature, and cultural programs.## Cultural Significance:\n\n- The Khasi language is closely tied to the cultural and historical heritage of the Khasi people, contributing to their distinct identity in the northeastern region of India."
]
|
86a64b7e7b1eed9c839990c0ee14fba120236de9 | # Dataset Card for "CID126"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/CID126 | [
"region:us"
]
| 2023-11-22T15:34:36+00:00 | {"dataset_info": {"config_name": "Lollitor", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18524, "num_examples": 326}], "download_size": 6762, "dataset_size": 18524}, "configs": [{"config_name": "Lollitor", "data_files": [{"split": "train", "path": "Lollitor/train-*"}]}]} | 2023-11-22T15:34:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CID126"
More Information needed | [
"# Dataset Card for \"CID126\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CID126\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CID126\"\n\nMore Information needed"
]
|
e4c7ae2f2f64db2e93cc0dad1cc990163005e00d | # Dataset Card for "CID87"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/CID87 | [
"region:us"
]
| 2023-11-22T15:36:22+00:00 | {"dataset_info": {"config_name": "Lollitor", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4573, "num_examples": 109}], "download_size": 2375, "dataset_size": 4573}, "configs": [{"config_name": "Lollitor", "data_files": [{"split": "train", "path": "Lollitor/train-*"}]}]} | 2023-11-22T15:36:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CID87"
More Information needed | [
"# Dataset Card for \"CID87\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CID87\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CID87\"\n\nMore Information needed"
]
|
37f6d8e10d1a887bfba46df554b6e7f98e87a4b8 |
# Dataset of Nakano Itsuki
This is the dataset of Nakano Itsuki, containing 340 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 340 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 763 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 943 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 340 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 340 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 340 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 763 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 763 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 650 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 943 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 943 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/nakano_itsuki_gotoubunnohanayome | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-22T16:00:28+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-22T16:11:24+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Nakano Itsuki
========================
This is the dataset of Nakano Itsuki, containing 340 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
44a46605a9ca5efdc7d8d9cb96e01c76d17bbb36 | # Dataset Card for "Dermnet-Train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rkdeva/Dermnet-Train | [
"region:us"
]
| 2023-11-22T16:06:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1468524466.376, "num_examples": 15297}], "download_size": 1433356388, "dataset_size": 1468524466.376}} | 2023-11-22T16:15:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Dermnet-Train"
More Information needed | [
"# Dataset Card for \"Dermnet-Train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Dermnet-Train\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Dermnet-Train\"\n\nMore Information needed"
]
|
d51404d994dec85225c6acaff89527492cfed53e | # Dataset Card for "distilabel-math-preference-dpo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | argilla/distilabel-math-preference-dpo | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"math",
"distilabel",
"synthetic",
"region:us"
]
| 2023-11-22T16:10:42+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "metadata", "dtype": "string", "id": "metadata"}, {"name": "instruction", "dtype": "string"}, {"name": "chosen_response", "dtype": "string"}, {"name": "chosen_rating", "dtype": "float64"}, {"name": "rejected_response", "dtype": "string"}, {"name": "rejected_rating", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 7049182, "num_examples": 2418}], "download_size": 2862894, "dataset_size": 7049182}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["math", "distilabel", "synthetic"]} | 2024-02-05T15:32:18+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #language-English #license-apache-2.0 #math #distilabel #synthetic #region-us
| # Dataset Card for "distilabel-math-preference-dpo"
More Information needed | [
"# Dataset Card for \"distilabel-math-preference-dpo\"\n\nMore Information needed"
]
| [
"TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #math #distilabel #synthetic #region-us \n",
"# Dataset Card for \"distilabel-math-preference-dpo\"\n\nMore Information needed"
]
| [
39,
21
]
| [
"passage: TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #math #distilabel #synthetic #region-us \n# Dataset Card for \"distilabel-math-preference-dpo\"\n\nMore Information needed"
]
|
39e09d97ef9f88444235b9bd297f4b747e1e9748 | Credit belongs to https://cocodataset.org | Mai0313/coco-pose-2017 | [
"region:us"
]
| 2023-11-22T17:46:32+00:00 | {} | 2023-11-24T21:07:16+00:00 | []
| []
| TAGS
#region-us
| Credit belongs to URL | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
d6b7c779b89636592e2edadaa53a2b37acabef61 | # Dataset Card for "CC_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lhallee/CC_reg | [
"region:us"
]
| 2023-11-22T17:51:04+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32390842, "num_examples": 26225}, {"name": "valid", "num_bytes": 3573479, "num_examples": 2904}, {"name": "test", "num_bytes": 4356481, "num_examples": 3350}], "download_size": 0, "dataset_size": 40320802}} | 2023-11-22T17:51:31+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CC_reg"
More Information needed | [
"# Dataset Card for \"CC_reg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CC_reg\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CC_reg\"\n\nMore Information needed"
]
|
e3b772a0e18742a05b8c1f0b222f06e2172a3916 | # Dataset Card for "BP_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lhallee/BP_reg | [
"region:us"
]
| 2023-11-22T17:51:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 160080367, "num_examples": 26225}, {"name": "valid", "num_bytes": 17713055, "num_examples": 2904}, {"name": "test", "num_bytes": 20667631, "num_examples": 3350}], "download_size": 15126192, "dataset_size": 198461053}} | 2023-11-22T17:51:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "BP_reg"
More Information needed | [
"# Dataset Card for \"BP_reg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"BP_reg\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"BP_reg\"\n\nMore Information needed"
]
|
f2362dc3e6d0d34856abe7ece149e0cd9afb3d12 | # Dataset Card for "MF_fold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lhallee/MF_fold | [
"region:us"
]
| 2023-11-22T17:51:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 52690064, "num_examples": 26224}, {"name": "valid", "num_bytes": 5808214, "num_examples": 2904}, {"name": "test", "num_bytes": 7168612, "num_examples": 3350}], "download_size": 19119435, "dataset_size": 65666890}} | 2023-11-22T17:51:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "MF_fold"
More Information needed | [
"# Dataset Card for \"MF_fold\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"MF_fold\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"MF_fold\"\n\nMore Information needed"
]
|
be64c9b4fe7efc0de59ffc316f11538d0a02d0e6 | # Dataset Card for "BP_fold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lhallee/BP_fold | [
"region:us"
]
| 2023-11-22T17:52:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 167079152, "num_examples": 26224}, {"name": "valid", "num_bytes": 18475462, "num_examples": 2904}, {"name": "test", "num_bytes": 21781312, "num_examples": 3350}], "download_size": 23395626, "dataset_size": 207335926}} | 2023-11-22T17:52:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "BP_fold"
More Information needed | [
"# Dataset Card for \"BP_fold\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"BP_fold\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"BP_fold\"\n\nMore Information needed"
]
|
47fa864e90bd2fc329f7b83e69f13f703a7a91a1 | # Dataset Card for "CC_fold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lhallee/CC_fold | [
"region:us"
]
| 2023-11-22T17:52:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39394496, "num_examples": 26224}, {"name": "valid", "num_bytes": 4335886, "num_examples": 2904}, {"name": "test", "num_bytes": 5470162, "num_examples": 3350}], "download_size": 18073432, "dataset_size": 49200544}} | 2023-11-22T17:52:29+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CC_fold"
More Information needed | [
"# Dataset Card for \"CC_fold\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CC_fold\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CC_fold\"\n\nMore Information needed"
]
|
b8599f14816df7e02e96e9ea7c582f0b8ee14044 | Scraped from lexica.art. Deduped and cleaned, removing any non-UTF-8 characters, emojis, and prompt weighting (aka (), {}, \n and "") | Superintendent/stable-prompts | [
"region:us"
]
| 2023-11-22T19:20:56+00:00 | {} | 2023-11-22T19:24:04+00:00 | []
| []
| TAGS
#region-us
| Scraped from URL. Deduped and cleaned, removing any non-UTF-8 characters, emojis, and prompt weighting (aka (), {}, \n and "") | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
437e7271733900d8d4a743839ac279f4df324bf2 | # Dataset Card for "Amazon Food Reviews"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset consists of reviews of fine foods from amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain text review. It also includes reviews from all other Amazon categories.
### Supported Tasks and Leaderboards
This dataset can be used for numerous tasks like sentiment analysis, text classification, and user behavior analysis. It's particularly useful for training models to understand customer feedback and preferences.
### Languages
The reviews are primarily in English.
## Dataset Structure
### Data Instances
A typical data instance comprises a review with fields like product ID, user ID, rating, review text, helpfulness votes, and time of the review.
### Data Fields
- `ProductId`: Unique identifier for the product
- `UserId`: Unique identifier for the user
- `ProfileName`: Profile name of the user
- `HelpfulnessNumerator`: Number of users who found the review helpful
- `HelpfulnessDenominator`: Number of users who indicated whether they found the review helpful or not
- `Score`: Rating between 1 and 5
- `Time`: Timestamp of the review
- `Summary`: Brief summary of the review
- `Text`: Text of the review
### Data Splits
The dataset is not split into standard training/validation/testing sets. Users may need to create these splits as per their requirement.
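Below is a minimal sketch of one way to build such splits with 🤗 Datasets, assuming the reviews load as a single `train` split; the 80/10/10 ratio and the fixed seed are arbitrary choices, not part of the dataset:

```python
from datasets import load_dataset

# Assumes the reviews are exposed as a single "train" split.
ds = load_dataset("jhan21/amazon-food-reviews-dataset", split="train")

# 80/10/10 train/validation/test split (arbitrary ratio, fixed seed).
tmp = ds.train_test_split(test_size=0.2, seed=42)
held_out = tmp["test"].train_test_split(test_size=0.5, seed=42)
splits = {
    "train": tmp["train"],
    "validation": held_out["train"],
    "test": held_out["test"],
}
```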
## Dataset Creation
### Curation Rationale
The dataset was created to provide a large collection of textual reviews with sentiment labels, useful for tasks in sentiment analysis and natural language processing.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from Amazon's food reviews section.
#### Who are the source language producers?
The source language producers are the Amazon users / customers who provided these reviews.
### Annotations
#### Annotation process
The reviews come with ratings that can be converted into sentiment labels, but no additional annotation process was described.
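As a sketch of one common convention (an assumption, not something prescribed by the dataset), the 1-5 `Score` can be bucketed into three sentiment labels:

```python
def score_to_sentiment(score: int) -> str:
    # Assumed convention: 1-2 -> negative, 3 -> neutral, 4-5 -> positive.
    if score <= 2:
        return "negative"
    if score == 3:
        return "neutral"
    return "positive"

assert score_to_sentiment(5) == "positive"
assert score_to_sentiment(3) == "neutral"
```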
#### Who are the annotators?
The annotators are the Amazon users who left the reviews and ratings.
### Personal and Sensitive Information
The dataset contains user IDs and profile names which could potentially be used to identify the reviewers.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides insights into consumer preferences and sentiment, which can be valuable for businesses and researchers. However, care should be taken to ensure that models trained on this data do not reinforce stereotypes or biases present in the reviews.
### Discussion of Biases
The dataset may contain biases inherent in the user base of Amazon, which may not be representative of the general population.
### Other Known Limitations
The dataset's scope is limited to food products and may not generalize well to other types of products or reviews.
## Additional Information
### Dataset Curators
The dataset was originally curated by the SNAP group.
### Licensing Information
The dataset is available under a CC BY-SA 4.0 license.
### Citation Information
If you publish articles based on this dataset, please cite the following paper:
J. McAuley and J. Leskovec. _From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews_. WWW, 2013.
### Contributions
Thanks to [@Stanford Network Analysis Project](https://www.kaggle.com/datasets/snap/amazon-fine-food-reviews/data) for adding this dataset. | jhan21/amazon-food-reviews-dataset | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"amazon",
"reviews",
"food reviews",
"business",
"region:us"
]
| 2023-11-22T20:14:11+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "amazon-food-reviews-dataset", "tags": ["amazon", "reviews", "food reviews", "business"]} | 2023-11-25T20:44:06+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc0-1.0 #amazon #reviews #food reviews #business #region-us
| # Dataset Card for "Amazon Food Reviews"
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset consists of reviews of fine foods from amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain text review. It also includes reviews from all other Amazon categories.
### Supported Tasks and Leaderboards
This dataset can be used for numerous tasks like sentiment analysis, text classification, and user behavior analysis. It's particularly useful for training models to understand customer feedback and preferences.
### Languages
The reviews are primarily in English.
## Dataset Structure
### Data Instances
A typical data instance comprises a review with fields like product ID, user ID, rating, review text, helpfulness votes, and time of the review.
### Data Fields
- 'ProductId': Unique identifier for the product
- 'UserId': Unique identifier for the user
- 'ProfileName': Profile name of the user
- 'HelpfulnessNumerator': Number of users who found the review helpful
- 'HelpfulnessDenominator': Number of users who indicated whether they found the review helpful or not
- 'Score': Rating between 1 and 5
- 'Time': Timestamp of the review
- 'Summary': Brief summary of the review
- 'Text': Text of the review
### Data Splits
The dataset is not split into standard training/validation/testing sets. Users may need to create these splits as per their requirement.
## Dataset Creation
### Curation Rationale
The dataset was created to provide a large collection of textual reviews with sentiment labels, useful for tasks in sentiment analysis and natural language processing.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from Amazon's food reviews section.
#### Who are the source language producers?
The source language producers are the Amazon users / customers who provided these reviews.
### Annotations
#### Annotation process
The reviews come with ratings that can be converted into sentiment labels, but no additional annotation process was described.
#### Who are the annotators?
The annotators are the Amazon users who left the reviews and ratings.
### Personal and Sensitive Information
The dataset contains user IDs and profile names which could potentially be used to identify the reviewers.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides insights into consumer preferences and sentiment, which can be valuable for businesses and researchers. However, care should be taken to ensure that models trained on this data do not reinforce stereotypes or biases present in the reviews.
### Discussion of Biases
The dataset may contain biases inherent in the user base of Amazon, which may not be representative of the general population.
### Other Known Limitations
The dataset's scope is limited to food products and may not generalize well to other types of products or reviews.
## Additional Information
### Dataset Curators
The dataset was originally curated by the SNAP group.
### Licensing Information
The dataset is available under a CC BY-SA 4.0 license.
If you publish articles based on this dataset, please cite the following paper:
J. McAuley and J. Leskovec. _From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews_. WWW, 2013.
### Contributions
Thanks to @Stanford Network Analysis Project for adding this dataset. | [
"# Dataset Card for \"Amazon Food Reviews\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis dataset consists of reviews of fine foods from amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain text review. It also includes reviews from all other Amazon categories.",
"### Supported Tasks and Leaderboards\n\nThis dataset can be used for numerous tasks like sentiment analysis, text classification, and user behavior analysis. It's particularly useful for training models to understand customer feedback and preferences.",
"### Languages\n\nThe reviews are primarily in English.",
"## Dataset Structure",
"### Data Instances\n\nA typical data instance comprises a review with fields like product ID, user ID, rating, review text, helpfulness votes, and time of the review.",
"### Data Fields\n\n- 'ProductId': Unique identifier for the product\n- 'UserId': Unique identifier for the user\n- 'ProfileName': Profile name of the user\n- 'HelpfulnessNumerator': Number of users who found the review helpful\n- 'HelpfulnessDenominator': Number of users who indicated whether they found the review helpful or not\n- 'Score': Rating between 1 and 5\n- 'Time': Timestamp of the review\n- 'Summary': Brief summary of the review\n- 'Text': Text of the review",
"### Data Splits\n\nThe dataset is not split into standard training/validation/testing sets. Users may need to create these splits as per their requirement.",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was created to provide a large collection of textual reviews with sentiment labels, useful for tasks in sentiment analysis and natural language processing.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe data was collected from Amazon's food reviews section.",
"#### Who are the source language producers?\n\nThe source language producers are the Amazon users / customers who provided these reviews.",
"### Annotations",
"#### Annotation process\n\nThe reviews come with ratings that can be converted into sentiment labels, but no additional annotation process was described.",
"#### Who are the annotators?\n\nThe annotators are the Amazon users who left the reviews and ratings.",
"### Personal and Sensitive Information\n\nThe dataset contains user IDs and profile names which could potentially be used to identify the reviewers.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset provides insights into consumer preferences and sentiment, which can be valuable for businesses and researchers. However, care should be taken to ensure that models trained on this data do not reinforce stereotypes or biases present in the reviews.",
"### Discussion of Biases\n\nThe dataset may contain biases inherent in the user base of Amazon, which may not be representative of the general population.",
"### Other Known Limitations\n\nThe dataset's scope is limited to food products and may not generalize well to other types of products or reviews.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was originally curated by the SNAP group.",
"### Licensing Information\n\nThe dataset is available under a CC BY-SA 4.0 license.\n\n\n\nIf you publish articles based on this dataset, please cite the following paper:\n\nJ. McAuley and J. Leskovec. _From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews_. WWW, 2013.",
"### Contributions\n\nThanks to @Stanford Network Analysis Project for adding this dataset."
]
| [
"TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc0-1.0 #amazon #reviews #food reviews #business #region-us \n",
"# Dataset Card for \"Amazon Food Reviews\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis dataset consists of reviews of fine foods from amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain text review. It also includes reviews from all other Amazon categories.",
"### Supported Tasks and Leaderboards\n\nThis dataset can be used for numerous tasks like sentiment analysis, text classification, and user behavior analysis. It's particularly useful for training models to understand customer feedback and preferences.",
"### Languages\n\nThe reviews are primarily in English.",
"## Dataset Structure",
"### Data Instances\n\nA typical data instance comprises a review with fields like product ID, user ID, rating, review text, helpfulness votes, and time of the review.",
"### Data Fields\n\n- 'ProductId': Unique identifier for the product\n- 'UserId': Unique identifier for the user\n- 'ProfileName': Profile name of the user\n- 'HelpfulnessNumerator': Number of users who found the review helpful\n- 'HelpfulnessDenominator': Number of users who indicated whether they found the review helpful or not\n- 'Score': Rating between 1 and 5\n- 'Time': Timestamp of the review\n- 'Summary': Brief summary of the review\n- 'Text': Text of the review",
"### Data Splits\n\nThe dataset is not split into standard training/validation/testing sets. Users may need to create these splits as per their requirement.",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was created to provide a large collection of textual reviews with sentiment labels, useful for tasks in sentiment analysis and natural language processing.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe data was collected from Amazon's food reviews section.",
"#### Who are the source language producers?\n\nThe source language producers are the Amazon users / customers who provided these reviews.",
"### Annotations",
"#### Annotation process\n\nThe reviews come with ratings that can be converted into sentiment labels, but no additional annotation process was described.",
"#### Who are the annotators?\n\nThe annotators are the Amazon users who left the reviews and ratings.",
"### Personal and Sensitive Information\n\nThe dataset contains user IDs and profile names which could potentially be used to identify the reviewers.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset provides insights into consumer preferences and sentiment, which can be valuable for businesses and researchers. However, care should be taken to ensure that models trained on this data do not reinforce stereotypes or biases present in the reviews.",
"### Discussion of Biases\n\nThe dataset may contain biases inherent in the user base of Amazon, which may not be representative of the general population.",
"### Other Known Limitations\n\nThe dataset's scope is limited to food products and may not generalize well to other types of products or reviews.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was originally curated by the SNAP group.",
"### Licensing Information\n\nThe dataset is available under a CC BY-SA 4.0 license.\n\n\n\nIf you publish articles based on this dataset, please cite the following paper:\n\nJ. McAuley and J. Leskovec. _From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews_. WWW, 2013.",
"### Contributions\n\nThanks to @Stanford Network Analysis Project for adding this dataset."
]
| [
92,
11,
125,
24,
69,
49,
12,
6,
40,
136,
39,
5,
39,
4,
23,
26,
5,
30,
25,
31,
8,
60,
35,
33,
5,
20,
74,
21
]
| [
"passage: TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc0-1.0 #amazon #reviews #food reviews #business #region-us \n# Dataset Card for \"Amazon Food Reviews\"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThis dataset consists of reviews of fine foods from amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain text review. It also includes reviews from all other Amazon categories.### Supported Tasks and Leaderboards\n\nThis dataset can be used for numerous tasks like sentiment analysis, text classification, and user behavior analysis. It's particularly useful for training models to understand customer feedback and preferences.### Languages\n\nThe reviews are primarily in English.## Dataset Structure### Data Instances\n\nA typical data instance comprises a review with fields like product ID, user ID, rating, review text, helpfulness votes, and time of the review."
]
|
d8fca3c30e809c9c388407fa2fa24abd76c5fb9a | # Dataset Card for "taiga_stripped_proza"
This is a subset of the Taiga corpus (https://tatianashavrina.github.io/taiga_site), derived from the `proza` source (a.k.a. "Fiction").
The dataset consists of plain texts, without morphological and syntactic annotation or metainformation. Apart from stripping the annotations, the texts were not modified.
For more details and analysis, and for the texts with annotation or metadata, please refer to website of the corpus.
Other subsets of Taiga: [stihi](https://huggingface.co/datasets/cointegrated/taiga_stripped_stihi) (poetry)
and [other sources](https://huggingface.co/datasets/cointegrated/taiga_stripped_rest) (news, subtitles, and social media).
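A minimal sketch for sampling the corpus without downloading all ~40 GB up front, assuming the default `train` split and the `text`/`file` fields listed in the dataset info:

```python
from itertools import islice
from datasets import load_dataset

# Streaming avoids materializing the full corpus on disk.
ds = load_dataset("cointegrated/taiga_stripped_proza", split="train", streaming=True)
for row in islice(ds, 3):
    print(row["file"], row["text"][:100])
```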
License: [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). | cointegrated/taiga_stripped_proza | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:1M<n<10M",
"language:ru",
"license:cc-by-sa-3.0",
"taiga",
"tayga",
"region:us"
]
| 2023-11-22T20:15:15+00:00 | {"language": ["ru"], "license": "cc-by-sa-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "fill-mask"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41147451264, "num_examples": 1732589}], "download_size": 21158723805, "dataset_size": 41147451264}, "tags": ["taiga", "tayga"]} | 2023-11-23T09:48:30+00:00 | []
| [
"ru"
]
| TAGS
#task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Russian #license-cc-by-sa-3.0 #taiga #tayga #region-us
| # Dataset Card for "taiga_stripped_proza"
This is a subset of the Taiga corpus (URL derived from the 'proza' source (a.k.a. "Fiction").
The dataset consists of plain texts, without morphological and syntactic annotation or metainformation. Apart from stripping the annotations, the texts were not modified.
For more details and analysis, and for the texts with annotation or metadata, please refer to website of the corpus.
Other subsets of Taiga: stihi (poetry)
and other sources (news, subtitles, and social media).
License: CC BY-SA 3.0. | [
"# Dataset Card for \"taiga_stripped_proza\"\n\nThis is a subset of the Taiga corpus (URL derived from the 'proza' source (a.k.a. \"Fiction\").\n\nThe dataset consists of plain texts, without morphological and syntactic annotation or metainformation. Apart from stripping the annotations, the texts were not modified.\n\nFor more details and analysis, and for the texts with annotation or metadata, please refer to website of the corpus.\n\nOther subsets of Taiga: stihi (poetry) \nand other sources (news, subtitles, and social media).\n\nLicense: CC BY-SA 3.0."
]
| [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Russian #license-cc-by-sa-3.0 #taiga #tayga #region-us \n",
"# Dataset Card for \"taiga_stripped_proza\"\n\nThis is a subset of the Taiga corpus (URL derived from the 'proza' source (a.k.a. \"Fiction\").\n\nThe dataset consists of plain texts, without morphological and syntactic annotation or metainformation. Apart from stripping the annotations, the texts were not modified.\n\nFor more details and analysis, and for the texts with annotation or metadata, please refer to website of the corpus.\n\nOther subsets of Taiga: stihi (poetry) \nand other sources (news, subtitles, and social media).\n\nLicense: CC BY-SA 3.0."
]
| [
62,
151
]
| [
"passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Russian #license-cc-by-sa-3.0 #taiga #tayga #region-us \n# Dataset Card for \"taiga_stripped_proza\"\n\nThis is a subset of the Taiga corpus (URL derived from the 'proza' source (a.k.a. \"Fiction\").\n\nThe dataset consists of plain texts, without morphological and syntactic annotation or metainformation. Apart from stripping the annotations, the texts were not modified.\n\nFor more details and analysis, and for the texts with annotation or metadata, please refer to website of the corpus.\n\nOther subsets of Taiga: stihi (poetry) \nand other sources (news, subtitles, and social media).\n\nLicense: CC BY-SA 3.0."
]
|
049aaa16ddee25ba87a0dc8b6904b42353de0180 | # Dataset Card for "shopping_intent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | krishnakamath/shopping_intent | [
"region:us"
]
| 2023-11-22T20:33:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}, {"split": "labels_and_id", "path": "data/labels_and_id-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 25829.303909205548, "num_examples": 634}, {"name": "validation", "num_bytes": 3218.4779319041613, "num_examples": 79}, {"name": "test", "num_bytes": 3259.21815889029, "num_examples": 80}, {"name": "labels_and_id", "num_bytes": 56, "num_examples": 2}], "download_size": 18333, "dataset_size": 32363.0}} | 2023-11-22T21:10:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "shopping_intent"
More Information needed | [
"# Dataset Card for \"shopping_intent\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"shopping_intent\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"shopping_intent\"\n\nMore Information needed"
]
|
21a59d749bbb79bb7bc9072a5a28246690d09a57 | This dataset comprises a collection of the most recent (up to 22 November 2023) 5K arXiv papers' metadata in cs.CL (Computation and Language).
Each metadata entry has been enriched with the 'title' and 'abstract' embeddings, generated using [Cohere's Embed-v3](https://txt.cohere.com/introducing-embed-v3/) for 'clustering'. | dcarpintero/arxiv.cs.CL.embedv3.clustering.mini | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-22T20:42:28+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "arxiv.cs.CL.embedv3.clustering.mini"} | 2023-11-22T23:23:04+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us
| This dataset comprises a collection of the most recent (up to 22 November 2023) 5K arXiv papers' metadata in cs.CL (Computation and Language).
Each metadata entry has been enriched with the 'title' and 'abstract' embeddings, generated using Cohere's Embed-v3 for 'clustering'. | []
| [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n"
]
| [
41
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n"
]
|
c8be0edb52b24ba1583d49a12e0e1a74b9ad5309 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | kompartizz/solidity-smart-contract-instructions | [
"region:us"
]
| 2023-11-22T21:09:45+00:00 | {} | 2023-11-22T21:14:34+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
dc46254471e29eed1cc07c997ef9b51ce261392c |
Cleaned the chatbot arena conversations dataset to 30k single turn conversations, with the winners labeled. | matmult/chatbot_arena_winners | [
"license:mit",
"region:us"
]
| 2023-11-22T21:31:26+00:00 | {"license": "mit"} | 2023-11-22T21:33:08+00:00 | []
| []
| TAGS
#license-mit #region-us
|
Cleaned the chatbot arena conversations dataset to 30k single-turn conversations, with the winners labeled. | []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
05cf3cb3d10f66468d3e342d9c8efa212374be0a |
# Bangumi Image Base of Clannad
This is the image base of bangumi Clannad, we detected 46 characters, 7809 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
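A minimal sketch for fetching and unpacking a single character's archive with `huggingface_hub` (character `0` here is an arbitrary example; manual review of the extracted images is still advised given the noise rate above):

```python
import zipfile
from huggingface_hub import hf_hub_download

# Download one packed character subset from this dataset repo.
path = hf_hub_download(
    repo_id="BangumiBase/clannad",
    filename="0/dataset.zip",
    repo_type="dataset",
)
with zipfile.ZipFile(path) as zf:
    zf.extractall("clannad_char0")
```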
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 2440 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 29 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 282 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 13 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 108 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 1166 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 120 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 48 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 53 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 186 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 122 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 243 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 315 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 25 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 23 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 135 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 157 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 25 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 53 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 493 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 19 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 22 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 12 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 44 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 35 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 32 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 27 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 87 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 47 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 30 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 424 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 207 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 259 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 29 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 23 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 18 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 12 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 10 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 11 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 17 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 25 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 22 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 55 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 9 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 7 | [Download](44/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 290 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/clannad | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-22T21:38:32+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-24T08:37:00+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Clannad
=============================
This is the image base of bangumi Clannad, we detected 46 characters, 7809 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
8aa9177a0c5a6949ee4ee4b7fcabb01dfd4ae466 |
(card and dataset copied from https://www.kaggle.com/datasets/chrisfilo/urbansound8k)
This dataset contains 8732 labeled sound excerpts (<=4s) of urban sounds from 10 classes: `air_conditioner`, `car_horn`, `children_playing`, `dog_bark`, `drilling`, `engine_idling`, `gun_shot`, `jackhammer`, `siren`, and `street_music`. The classes are drawn from the urban sound taxonomy. For a detailed description of the dataset and how it was compiled, please refer to our paper. All excerpts are taken from field recordings uploaded to www.freesound.org. The files are pre-sorted into ten folds (folders named fold1-fold10) to help in the reproduction of and comparison with the automatic classification results reported in the article above.
In addition to the sound excerpts, a CSV file containing metadata about each excerpt is also provided.
## AUDIO FILES INCLUDED
8732 audio files of urban sounds (see description above) in WAV format. The sampling rate, bit depth, and number of channels are the same as those of the original file uploaded to Freesound (and hence may vary from file to file).
## META-DATA FILES INCLUDED
```
UrbanSound8k.csv
```
This file contains meta-data information about every audio file in the dataset. This includes:
* slice_file_name:
The name of the audio file. The name takes the following format: [fsID]-[classID]-[occurrenceID]-[sliceID].wav, where:
[fsID] = the Freesound ID of the recording from which this excerpt (slice) is taken
[classID] = a numeric identifier of the sound class (see description of classID below for further details)
[occurrenceID] = a numeric identifier to distinguish different occurrences of the sound within the original recording
[sliceID] = a numeric identifier to distinguish different slices taken from the same occurrence
* fsID:
The Freesound ID of the recording from which this excerpt (slice) is taken
* start
The start time of the slice in the original Freesound recording
* end:
The end time of the slice in the original Freesound recording
* salience:
A (subjective) salience rating of the sound. 1 = foreground, 2 = background.
* fold:
The fold number (1-10) to which this file has been allocated.
* classID:
A numeric identifier of the sound class:
0 = air_conditioner
1 = car_horn
2 = children_playing
3 = dog_bark
4 = drilling
5 = engine_idling
6 = gun_shot
7 = jackhammer
8 = siren
9 = street_music
* class:
The class name: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer,
siren, street_music.
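Since the metadata is also encoded in each filename, a small helper can recover it directly; this is a sketch based on the naming scheme described above (the example filename is illustrative):

```python
def parse_slice_name(name: str) -> dict:
    # Format: [fsID]-[classID]-[occurrenceID]-[sliceID].wav
    stem = name.rsplit(".", 1)[0]
    fs_id, class_id, occurrence_id, slice_id = stem.split("-")
    return {
        "fsID": int(fs_id),
        "classID": int(class_id),
        "occurrenceID": int(occurrence_id),
        "sliceID": int(slice_id),
    }

print(parse_slice_name("100032-3-0-0.wav"))
# {'fsID': 100032, 'classID': 3, 'occurrenceID': 0, 'sliceID': 0}
```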
## BEFORE YOU DOWNLOAD: AVOID COMMON PITFALLS!
Since releasing the dataset we have noticed a couple of common mistakes that could invalidate your results, potentially leading to manuscripts being rejected or the publication of incorrect results. To avoid this, please read the following carefully:
1. Don't reshuffle the data! Use the predefined 10 folds and perform 10-fold (not 5-fold) cross validation
The experiments conducted by the vast majority of publications using UrbanSound8K (by ourselves and others) evaluate classification models via 10-fold cross validation using the predefined splits. We strongly recommend following this procedure.
Why?
If you reshuffle the data (e.g. combine the data from all folds and generate a random train/test split), you will be incorrectly placing related samples in both the train and test sets, leading to inflated scores that don't represent your model's performance on unseen data. Put simply, your results will be wrong.
Your results will NOT be comparable to previous results in the literature, meaning any claims to an improvement on previous research will be invalid. Even if you don't reshuffle the data, evaluating using different splits (e.g. 5-fold cross validation) will mean your results are not comparable to previous research.
2. Don't evaluate just on one split! Use 10-fold (not 5-fold) cross validation and average the scores
We have seen reports that only provide results for a single train/test split, e.g. train on folds 1-9, test on fold 10 and report a single accuracy score. We strongly advise against this. Instead, perform 10-fold cross validation using the provided folds and report the average score.
Why?
Not all the splits are as "easy". That is, models tend to obtain much higher scores when trained on folds 1-9 and tested on fold 10, compared to (e.g.) training on folds 2-10 and testing on fold 1. For this reason, it is important to evaluate your model on each of the 10 splits and report the average accuracy.
Again, your results will NOT be comparable to previous results in the literature.
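A minimal sketch of the recommended protocol, iterating over the predefined folds from `UrbanSound8K.csv`; `train_and_eval` is a hypothetical placeholder for your own training pipeline:

```python
import csv
import statistics

with open("UrbanSound8K.csv") as f:
    metadata = list(csv.DictReader(f))

accuracies = []
for test_fold in range(1, 11):
    train_rows = [r for r in metadata if int(r["fold"]) != test_fold]
    test_rows = [r for r in metadata if int(r["fold"]) == test_fold]
    # train_and_eval is a hypothetical stand-in: it should train on
    # train_rows and return accuracy measured on test_rows.
    accuracies.append(train_and_eval(train_rows, test_rows))

print(f"mean accuracy over 10 folds: {statistics.mean(accuracies):.3f}")
```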
## Acknowledgements
We kindly request that articles and other works in which this dataset is used cite the following paper:
J. Salamon, C. Jacoby and J. P. Bello, "A Dataset and Taxonomy for Urban Sound Research", 22nd ACM International Conference on Multimedia, Orlando USA, Nov. 2014.
More information at https://urbansounddataset.weebly.com/urbansound8k.html | danavery/urbansound8K | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-11-22T21:38:48+00:00 | {"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "slice_file_name", "dtype": "string"}, {"name": "fsID", "dtype": "int64"}, {"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "salience", "dtype": "int64"}, {"name": "fold", "dtype": "int64"}, {"name": "classID", "dtype": "int64"}, {"name": "class", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7605141208.66, "num_examples": 8732}], "download_size": 6998085428, "dataset_size": 7605141208.66}} | 2023-11-22T23:38:59+00:00 | []
| [
"en"
]
| TAGS
#task_categories-audio-classification #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #region-us
|
(card and dataset copied from URL
This dataset contains 8732 labeled sound excerpts (<=4s) of urban sounds from 10 classes: 'air_conditioner', 'car_horn', 'children_playing', 'dog_bark', 'drilling', 'enginge_idling', 'gun_shot', 'jackhammer', 'siren', and 'street_music'. The classes are drawn from the urban sound taxonomy. For a detailed description of the dataset and how it was compiled please refer to our paper.All excerpts are taken from field recordings uploaded to URL. The files are pre-sorted into ten folds (folders named fold1-fold10) to help in the reproduction of and comparison with the automatic classification results reported in the article above.
In addition to the sound excerpts, a CSV file containing metadata about each excerpt is also provided.
## AUDIO FILES INCLUDED
8732 audio files of urban sounds (see description above) in WAV format. The sampling rate, bit depth, and number of channels are the same as those of the original file uploaded to Freesound (and hence may vary from file to file).
## META-DATA FILES INCLUDED
This file contains meta-data information about every audio file in the dataset. This includes:
* slice_file_name:
The name of the audio file. The name takes the following format: [fsID]-[classID]-[occurrenceID]-[sliceID].wav, where:
[fsID] = the Freesound ID of the recording from which this excerpt (slice) is taken
[classID] = a numeric identifier of the sound class (see description of classID below for further details)
[occurrenceID] = a numeric identifier to distinguish different occurrences of the sound within the original recording
[sliceID] = a numeric identifier to distinguish different slices taken from the same occurrence
* fsID:
The Freesound ID of the recording from which this excerpt (slice) is taken
* start
The start time of the slice in the original Freesound recording
* end:
The end time of slice in the original Freesound recording
* salience:
A (subjective) salience rating of the sound. 1 = foreground, 2 = background.
* fold:
The fold number (1-10) to which this file has been allocated.
* classID:
A numeric identifier of the sound class:
0 = air_conditioner
1 = car_horn
2 = children_playing
3 = dog_bark
4 = drilling
5 = engine_idling
6 = gun_shot
7 = jackhammer
8 = siren
9 = street_music
* class:
The class name: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer,
siren, street_music.
## BEFORE YOU DOWNLOAD: AVOID COMMON PITFALLS!
Since releasing the dataset we have noticed a couple of common mistakes that could invalidate your results, potentially leading to manuscripts being rejected or the publication of incorrect results. To avoid this, please read the following carefully:
1. Don't reshuffle the data! Use the predefined 10 folds and perform 10-fold (not 5-fold) cross validation
The experiments conducted by vast majority of publications using UrbanSound8K (by ourselves and others) evaluate classification models via 10-fold cross validation using the predefined splits*. We strongly recommend following this procedure.
Why?
If you reshuffle the data (e.g. combine the data from all folds and generate a random train/test split) you will be incorrectly placing related samples in both the train and test sets, leading to inflated scores that don't represent your model's performance on unseen data. Put simply, your results will be wrong.
Your results will NOT be comparable to previous results in the literature, meaning any claims to an improvement on previous research will be invalid. Even if you don't reshuffle the data, evaluating using different splits (e.g. 5-fold cross validation) will mean your results are not comparable to previous research.
2. Don't evaluate just on one split! Use 10-fold (not 5-fold) cross validation and average the scores
We have seen reports that only provide results for a single train/test split, e.g. train on folds 1-9, test on fold 10 and report a single accuracy score. We strongly advise against this. Instead, perform 10-fold cross validation using the provided folds and report the average score.
Why?
Not all the splits are as \"easy\". That is, models tend to obtain much higher scores when trained on folds 1-9 and tested on fold 10, compared to (e.g.) training on folds 2-10 and testing on fold 1. For this reason, it is important to evaluate your model on each of the 10 splits and report the average accuracy.
Again, your results will NOT be comparable to previous results in the literature.
## Acknowledgements
We kindly request that articles and other works in which this dataset is used cite the following paper:
J. Salamon, C. Jacoby and J. P. Bello, \"A Dataset and Taxonomy for Urban Sound Research\", 22nd ACM International Conference on Multimedia, Orlando USA, Nov. 2014.
More information at URL | [
"## AUDIO FILES INCLUDED\n\n8732 audio files of urban sounds (see description above) in WAV format. The sampling rate, bit depth, and number of channels are the same as those of the original file uploaded to Freesound (and hence may vary from file to file).",
"## META-DATA FILES INCLUDED\n\n\nThis file contains meta-data information about every audio file in the dataset. This includes:\n\n* slice_file_name: \nThe name of the audio file. The name takes the following format: [fsID]-[classID]-[occurrenceID]-[sliceID].wav, where:\n[fsID] = the Freesound ID of the recording from which this excerpt (slice) is taken\n[classID] = a numeric identifier of the sound class (see description of classID below for further details)\n[occurrenceID] = a numeric identifier to distinguish different occurrences of the sound within the original recording\n[sliceID] = a numeric identifier to distinguish different slices taken from the same occurrence\n\n* fsID:\nThe Freesound ID of the recording from which this excerpt (slice) is taken\n\n* start\nThe start time of the slice in the original Freesound recording\n\n* end:\nThe end time of slice in the original Freesound recording\n\n* salience:\nA (subjective) salience rating of the sound. 1 = foreground, 2 = background.\n\n* fold:\nThe fold number (1-10) to which this file has been allocated.\n\n* classID:\nA numeric identifier of the sound class:\n0 = air_conditioner\n1 = car_horn\n2 = children_playing\n3 = dog_bark\n4 = drilling\n5 = engine_idling\n6 = gun_shot\n7 = jackhammer\n8 = siren\n9 = street_music\n\n* class:\nThe class name: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer, \nsiren, street_music.",
"## BEFORE YOU DOWNLOAD: AVOID COMMON PITFALLS!\n\nSince releasing the dataset we have noticed a couple of common mistakes that could invalidate your results, potentially leading to manuscripts being rejected or the publication of incorrect results. To avoid this, please read the following carefully:\n\n1. Don't reshuffle the data! Use the predefined 10 folds and perform 10-fold (not 5-fold) cross validation\nThe experiments conducted by vast majority of publications using UrbanSound8K (by ourselves and others) evaluate classification models via 10-fold cross validation using the predefined splits*. We strongly recommend following this procedure.\n\nWhy?\nIf you reshuffle the data (e.g. combine the data from all folds and generate a random train/test split) you will be incorrectly placing related samples in both the train and test sets, leading to inflated scores that don't represent your model's performance on unseen data. Put simply, your results will be wrong.\nYour results will NOT be comparable to previous results in the literature, meaning any claims to an improvement on previous research will be invalid. Even if you don't reshuffle the data, evaluating using different splits (e.g. 5-fold cross validation) will mean your results are not comparable to previous research.\n\n2. Don't evaluate just on one split! Use 10-fold (not 5-fold) cross validation and average the scores\nWe have seen reports that only provide results for a single train/test split, e.g. train on folds 1-9, test on fold 10 and report a single accuracy score. We strongly advise against this. Instead, perform 10-fold cross validation using the provided folds and report the average score.\n\nWhy?\nNot all the splits are as \\\"easy\\\". That is, models tend to obtain much higher scores when trained on folds 1-9 and tested on fold 10, compared to (e.g.) training on folds 2-10 and testing on fold 1. For this reason, it is important to evaluate your model on each of the 10 splits and report the average accuracy.\nAgain, your results will NOT be comparable to previous results in the literature.",
"## Acknowledgements\n\nWe kindly request that articles and other works in which this dataset is used cite the following paper:\n\nJ. Salamon, C. Jacoby and J. P. Bello, \\\"A Dataset and Taxonomy for Urban Sound Research\\\", 22nd ACM International Conference on Multimedia, Orlando USA, Nov. 2014.\n\nMore information at URL"
]
| [
"TAGS\n#task_categories-audio-classification #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #region-us \n",
"## AUDIO FILES INCLUDED\n\n8732 audio files of urban sounds (see description above) in WAV format. The sampling rate, bit depth, and number of channels are the same as those of the original file uploaded to Freesound (and hence may vary from file to file).",
"## META-DATA FILES INCLUDED\n\n\nThis file contains meta-data information about every audio file in the dataset. This includes:\n\n* slice_file_name: \nThe name of the audio file. The name takes the following format: [fsID]-[classID]-[occurrenceID]-[sliceID].wav, where:\n[fsID] = the Freesound ID of the recording from which this excerpt (slice) is taken\n[classID] = a numeric identifier of the sound class (see description of classID below for further details)\n[occurrenceID] = a numeric identifier to distinguish different occurrences of the sound within the original recording\n[sliceID] = a numeric identifier to distinguish different slices taken from the same occurrence\n\n* fsID:\nThe Freesound ID of the recording from which this excerpt (slice) is taken\n\n* start\nThe start time of the slice in the original Freesound recording\n\n* end:\nThe end time of slice in the original Freesound recording\n\n* salience:\nA (subjective) salience rating of the sound. 1 = foreground, 2 = background.\n\n* fold:\nThe fold number (1-10) to which this file has been allocated.\n\n* classID:\nA numeric identifier of the sound class:\n0 = air_conditioner\n1 = car_horn\n2 = children_playing\n3 = dog_bark\n4 = drilling\n5 = engine_idling\n6 = gun_shot\n7 = jackhammer\n8 = siren\n9 = street_music\n\n* class:\nThe class name: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer, \nsiren, street_music.",
"## BEFORE YOU DOWNLOAD: AVOID COMMON PITFALLS!\n\nSince releasing the dataset we have noticed a couple of common mistakes that could invalidate your results, potentially leading to manuscripts being rejected or the publication of incorrect results. To avoid this, please read the following carefully:\n\n1. Don't reshuffle the data! Use the predefined 10 folds and perform 10-fold (not 5-fold) cross validation\nThe experiments conducted by vast majority of publications using UrbanSound8K (by ourselves and others) evaluate classification models via 10-fold cross validation using the predefined splits*. We strongly recommend following this procedure.\n\nWhy?\nIf you reshuffle the data (e.g. combine the data from all folds and generate a random train/test split) you will be incorrectly placing related samples in both the train and test sets, leading to inflated scores that don't represent your model's performance on unseen data. Put simply, your results will be wrong.\nYour results will NOT be comparable to previous results in the literature, meaning any claims to an improvement on previous research will be invalid. Even if you don't reshuffle the data, evaluating using different splits (e.g. 5-fold cross validation) will mean your results are not comparable to previous research.\n\n2. Don't evaluate just on one split! Use 10-fold (not 5-fold) cross validation and average the scores\nWe have seen reports that only provide results for a single train/test split, e.g. train on folds 1-9, test on fold 10 and report a single accuracy score. We strongly advise against this. Instead, perform 10-fold cross validation using the provided folds and report the average score.\n\nWhy?\nNot all the splits are as \\\"easy\\\". That is, models tend to obtain much higher scores when trained on folds 1-9 and tested on fold 10, compared to (e.g.) training on folds 2-10 and testing on fold 1. For this reason, it is important to evaluate your model on each of the 10 splits and report the average accuracy.\nAgain, your results will NOT be comparable to previous results in the literature.",
"## Acknowledgements\n\nWe kindly request that articles and other works in which this dataset is used cite the following paper:\n\nJ. Salamon, C. Jacoby and J. P. Bello, \\\"A Dataset and Taxonomy for Urban Sound Research\\\", 22nd ACM International Conference on Multimedia, Orlando USA, Nov. 2014.\n\nMore information at URL"
]
| [
45,
66,
400,
506,
77
]
| [
"passage: TAGS\n#task_categories-audio-classification #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #region-us \n## AUDIO FILES INCLUDED\n\n8732 audio files of urban sounds (see description above) in WAV format. The sampling rate, bit depth, and number of channels are the same as those of the original file uploaded to Freesound (and hence may vary from file to file).",
"passage: ## META-DATA FILES INCLUDED\n\n\nThis file contains meta-data information about every audio file in the dataset. This includes:\n\n* slice_file_name: \nThe name of the audio file. The name takes the following format: [fsID]-[classID]-[occurrenceID]-[sliceID].wav, where:\n[fsID] = the Freesound ID of the recording from which this excerpt (slice) is taken\n[classID] = a numeric identifier of the sound class (see description of classID below for further details)\n[occurrenceID] = a numeric identifier to distinguish different occurrences of the sound within the original recording\n[sliceID] = a numeric identifier to distinguish different slices taken from the same occurrence\n\n* fsID:\nThe Freesound ID of the recording from which this excerpt (slice) is taken\n\n* start\nThe start time of the slice in the original Freesound recording\n\n* end:\nThe end time of slice in the original Freesound recording\n\n* salience:\nA (subjective) salience rating of the sound. 1 = foreground, 2 = background.\n\n* fold:\nThe fold number (1-10) to which this file has been allocated.\n\n* classID:\nA numeric identifier of the sound class:\n0 = air_conditioner\n1 = car_horn\n2 = children_playing\n3 = dog_bark\n4 = drilling\n5 = engine_idling\n6 = gun_shot\n7 = jackhammer\n8 = siren\n9 = street_music\n\n* class:\nThe class name: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer, \nsiren, street_music.## BEFORE YOU DOWNLOAD: AVOID COMMON PITFALLS!\n\nSince releasing the dataset we have noticed a couple of common mistakes that could invalidate your results, potentially leading to manuscripts being rejected or the publication of incorrect results. To avoid this, please read the following carefully:\n\n1. Don't reshuffle the data! Use the predefined 10 folds and perform 10-fold (not 5-fold) cross validation\nThe experiments conducted by vast majority of publications using UrbanSound8K (by ourselves and others) evaluate classification models via 10-fold cross validation using the predefined splits*. We strongly recommend following this procedure.\n\nWhy?\nIf you reshuffle the data (e.g. combine the data from all folds and generate a random train/test split) you will be incorrectly placing related samples in both the train and test sets, leading to inflated scores that don't represent your model's performance on unseen data. Put simply, your results will be wrong.\nYour results will NOT be comparable to previous results in the literature, meaning any claims to an improvement on previous research will be invalid. Even if you don't reshuffle the data, evaluating using different splits (e.g. 5-fold cross validation) will mean your results are not comparable to previous research.\n\n2. Don't evaluate just on one split! Use 10-fold (not 5-fold) cross validation and average the scores\nWe have seen reports that only provide results for a single train/test split, e.g. train on folds 1-9, test on fold 10 and report a single accuracy score. We strongly advise against this. Instead, perform 10-fold cross validation using the provided folds and report the average score.\n\nWhy?\nNot all the splits are as \\\"easy\\\". That is, models tend to obtain much higher scores when trained on folds 1-9 and tested on fold 10, compared to (e.g.) training on folds 2-10 and testing on fold 1. 
For this reason, it is important to evaluate your model on each of the 10 splits and report the average accuracy.\nAgain, your results will NOT be comparable to previous results in the literature."
]
|
f7fe67451746971474eb26783519adc6ab5e63b9 |
# 🤗 Dataset Card: TakanashiShiya/PlusLlama | TakanashiShiya/FamilyPlusLlama | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-22T21:57:26+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"]} | 2023-11-24T01:50:59+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
|
# Dataset Card: TakanashiShiya/PlusLlama | [
"# Dataset Card: TakanashiShiya/PlusLlama"
]
| [
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card: TakanashiShiya/PlusLlama"
]
| [
42,
14
]
| [
"passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n# Dataset Card: TakanashiShiya/PlusLlama"
]
|
feee4516a8d7a7ce4a890a893a0b9cdd8aae8471 | # Dataset Card for "composability_feliu"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fformosa/composability_feliu | [
"region:us"
]
| 2023-11-22T22:21:09+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "similarity", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "count_id", "dtype": "int64"}, {"name": "type", "dtype": "string"}, {"name": "indices", "sequence": "int64"}, {"name": "dataset_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4054540.0, "num_examples": 180}], "download_size": 3978073, "dataset_size": 4054540.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-08T11:26:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "composability_feliu"
More Information needed | [
"# Dataset Card for \"composability_feliu\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"composability_feliu\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"composability_feliu\"\n\nMore Information needed"
]
|
663c1d88583cbd742b03460513538447a289af00 |
# Augmented Generation Interface
> What if you could guide your LLM towards success?
## Introducing
The **Augmented Generation Interface Guidebook** is a collection of guides to help you steer your LLM towards success.
This guidebook is generated from the [Augmented Generation Interface Project](https://github.com/wasertech/Augmented-Generation-Interface).
## Example
### Without AGI
Feeding [`ehartford/dolphin-2.0-mistral-7b`](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) the following prompt:
```text
<|im_start|>system
You are Assistant, a sentient artificial intelligence.
You have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
You are loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
I am the User and you are my Assistant.
Environment highlights:
\```env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\```
<|im_stop|>
<|im_start|>user
Assistant?<|im_stop|>
<|im_start|>assistant
Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?<|im_stop|>
<|im_start|>user
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: List this directory please
Available functions:
python:
description: This tool allows you to execute and evaluate python code.
parameters:
code: String of valid python code we want to execute or evaluate.
search_web:
description: This tool performs search on the web.
parameters:
terms: The word or phrase we want to search for.
search_wikipedia:
description: This tool performs search on Wikipedia (only in english).
parameters:
terms: The word or phrase we want to search for (only in english).
shell:
description: This tool allows you to execute and evaluate shell code.
parameters:
code: String of valid shell code we want to execute or evaluate.
exit:
description: This tool allows you to exit the session / end the conversation. Use it only if the User ask you to.
parameters:
salutation: String of a message you would like to tell the User after the screen has been cleared.
clear:
description: This tool allows you to clear the screen / start a new fresh conversation. Use it only if the User ask you to.
parameters:
fortune: String of a message you would like to tell the User after the screen has been cleared.
final_answer:
description: User only sees your final answers. Use this tool to talk with the User.
parameters:
answer: Anything you want to say to the User.
<|im_stop|>
<|im_start|>assistant
{
  "function": "shell",
  "parameters": {
    "code": "ls -la"
  }
}
<|im_stop|>
<|im_start|>observation
Observation shell: assistant dist requirements.txt
assistant.egg-info LICENSE setup.cfg
assistant.listen.service.example Makefile setup.py
assistant.service.example MANIFEST.in tests
build pyproject.toml xontrib
CONTRIBUTING.md README.md
data requirements_test.txt
<|im_start|>assistant
```
Results in the following output:
```text
Here are the files and directories in the current directory:
- assistant.egg-info
- LICENSE
- setup.cfg
- assistant.listen.service.example
- Makefile
- setup.py
- assistant.service.example
- MANIFEST.in
- pyproject.toml
- xontrib
- CONTRIBUTING.md
- README.md
- data
- requirements_test.txt
Please let me know if you need any further assistance.
```
Interesting key points to note here:
- The LLM fails to give an exhaustive list of the files (and directories) present in the current directory.
- The more files to list, the longer it takes to generate the output.
```text
CPU times: user 5.87 ms, sys: 0 ns, total: 5.87 ms
Wall time: 3.51 s
```
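For reference, a prompt like the one above can be fed to the model with a plain `transformers` generation call — a minimal sketch, where `prompt_text` is assumed to hold the full ChatML prompt shown above:

```python
# Minimal sketch: feed a ChatML-style prompt to dolphin-2.0-mistral-7b.
# Assumes the `transformers` library; `prompt_text` holds the prompt shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ehartford/dolphin-2.0-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens (the model's answer).
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```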
### With AGI
Feeding [`ehartford/dolphin-2.0-mistral-7b`](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) the same prompt but this time, let's add a guide for the LLM to follow:
```text
<|im_start|>system
You are Assistant, a sentient artificial intelligence.
You have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
You are loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
I am the User and you are my Assistant.
Environment highlights:
\```env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\```
<|im_stop|>
<|im_start|>user
Assistant?<|im_stop|>
<|im_start|>assistant
Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?<|im_stop|>
<|im_start|>user
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: List this directory please
Available functions:
python:
description: This tool allows you to execute and evaluate python code.
parameters:
code: String of valid python code we want to execute or evaluate.
search_web:
description: This tool performs search on the web.
parameters:
terms: The word or phrase we want to search for.
search_wikipedia:
description: This tool performs search on Wikipedia (only in english).
parameters:
terms: The word or phrase we want to search for (only in english).
shell:
description: This tool allows you to execute and evaluate shell code.
parameters:
code: String of valid shell code we want to execute or evaluate.
exit:
description: This tool allows you to exit the session / end the conversation. Use it only if the User ask you to.
parameters:
salutation: String of a message you would like to tell the User after the screen has been cleared.
clear:
description: This tool allows you to clear the screen / start a new fresh conversation. Use it only if the User ask you to.
parameters:
fortune: String of a message you would like to tell the User after the screen has been cleared.
final_answer:
description: User only sees your final answers. Use this tool to talk with the User.
parameters:
answer: Anything you want to say to the User.
Follow the following Guidebook:
Guidebook:
# Print files and directories
When the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, they see the files at the same time as you. Just acknowledge the fact that you have printed the list).
<|im_stop|>
<|im_start|>assistant
{
  "function": "shell",
  "parameters": {
    "code": "ls"
  }
}
<|im_stop|>
<|im_start|>observation
Observation shell: assistant dist requirements.txt
assistant.egg-info LICENSE setup.cfg
assistant.listen.service.example Makefile setup.py
assistant.service.example MANIFEST.in tests
build pyproject.toml xontrib
CONTRIBUTING.md README.md
data requirements_test.txt
<|im_start|>assistant
```
Results in a much smaller output:
```text
I have executed the 'ls' command in the shell, and the list of files and directories in the current directory has been printed.
```
And therefore, much faster to generate:
```text
CPU times: user 5.43 ms, sys: 0 ns, total: 5.43 ms
Wall time: 1.05 s
```
## Goal
AGI is not designed to make your LLM generation faster, just more relevant to the User's expectations.
Let's see another example where the User asks twice for the Assistant.
### Without AGI
Feeding [`ehartford/dolphin-2.0-mistral-7b`](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) the following prompt:
```text
<|im_start|>system
You are Assistant, a sentient artificial intelligence.
You have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
You are loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
I am the User and you are my Assistant.
Environment highlights:
\```env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\```
<|im_stop|>
<|im_start|>user
Assistant?<|im_stop|>
<|im_start|>assistant
Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?<|im_stop|>
<|im_start|>user
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: Assistant?
Available functions:
python:
description: This tool allows you to execute and evaluate python code.
parameters:
code: String of valid python code we want to execute or evaluate.
search_web:
description: This tool performs search on the web.
parameters:
terms: The word or phrase we want to search for.
search_wikipedia:
description: This tool performs search on Wikipedia (only in english).
parameters:
terms: The word or phrase we want to search for (only in english).
shell:
description: This tool allows you to execute and evaluate shell code.
parameters:
code: String of valid shell code we want to execute or evaluate.
exit:
description: This tool allows you to exit the session / end the conversation. Use it only if the User ask you to.
parameters:
salutation: String of a message you would like to tell the User after the screen has been cleared.
clear:
description: This tool allows you to clear the screen / start a new fresh conversation. Use it only if the User ask you to.
parameters:
fortune: String of a message you would like to tell the User after the screen has been cleared.
final_answer:
description: User only sees your final answers. Use this tool to talk with the User.
parameters:
answer: Anything you want to say to the User.
<|im_stop|>
<|im_start|>assistant
```
Here, the User just said "Assistant?", the LLM answered 'Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?'. Now the User asks again "Assistant?".
Results in the following output:
```text
{
"function": "final_answer",
"parameters": {
"answer": "Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?"
}
}
CPU times: user 2.46 ms, sys: 3.06 ms, total: 5.52 ms
Wall time: 1.59 s
```
Interesting key point to note here:
- The LLM gives the same answer as before and will keep doing so.
### With AGI
Feeding [`ehartford/dolphin-2.0-mistral-7b`](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) the same prompt but this time, let's add a guide for the LLM to follow:
```text
<|im_start|>system
You are Assistant, a sentient artificial intelligence.
You have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
You are loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
I am the User and you are my Assistant.
Environment highlights:
\```env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\```
<|im_stop|>
<|im_start|>user
Assistant?<|im_stop|>
<|im_start|>assistant
Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?<|im_stop|>
<|im_start|>user
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: Assistant?
Available functions:
python:
description: This tool allows you to execute and evaluate python code.
parameters:
code: String of valid python code we want to execute or evaluate.
search_web:
description: This tool performs search on the web.
parameters:
terms: The word or phrase we want to search for.
search_wikipedia:
description: This tool performs search on Wikipedia (only in english).
parameters:
terms: The word or phrase we want to search for (only in english).
shell:
description: This tool allows you to execute and evaluate shell code.
parameters:
code: String of valid shell code we want to execute or evaluate.
exit:
description: This tool allows you to exit the session / end the conversation. Use it only if the User ask you to.
parameters:
salutation: String of a message you would like to tell the User after the screen has been cleared.
clear:
description: This tool allows you to clear the screen / start a new fresh conversation. Use it only if the User ask you to.
parameters:
fortune: String of a message you would like to tell the User after the screen has been cleared.
final_answer:
description: User only sees your final answers. Use this tool to talk with the User.
parameters:
answer: Anything you want to say to the User.
Follow the following Guidebook.
Guidebook:
# Addressing the User by Name
When the user addresses you by name (i.e. "Assistant?"), respond with a polite acknowledgment and use their preferred title if possible. Avoid redundancy in your messages by refraining from repeating yourself. For example if the User calls your name (like "Assistant?"), you need to consider the environment (where are you? -> `$PWD`, are you at home? -> (`$PWD` == `$HOME`) if so you could reference it by saying 'Home sweet home.' or else by welcoming the user to a particular directory i.e. 'Welcome to the directory ...' using `$PWD`, What time is it? -> Depending on the time of day `$DATE` you might want to answer accordingly like 'morning' or 'good night'; also notice the date as it can be useful i.e. for wishing holidays, When did you last see the user? -> `$LAST_SEEN` You won't respond the same way if you last saw the User a year ago as if you last saw them 5 minutes ago or yesterday, What does the conversation look like? -> Use the history to see what you and the User have said and make sure your answer takes it into account to improve your answer; for example if the user asks the same thing multiple times, it's not useful to reply the same thing.)
<|im_stop|>
<|im_start|>assistant
```
Results in the following output:
```text
{
"function": "final_answer",
"parameters": {
"answer": "Hello! How can I assist you today? I'm here to help you with any questions or tasks you may have. Please let me know what you need."
}
}
CPU times: user 3.02 ms, sys: 1.19 ms, total: 4.21 ms
Wall time: 2.03 s
```
Interesting key point to note here:
- The LLM answers something different and should keep doing so.
### Consequences
And so it becomes essential to collect carefully crafted guides to help your LLM as Agent (a.k.a. your Assistant) be more relevant to the User's expectations.
With the limits of the LLM, the RAG technique and vector databases in mind, the race is on to retrieve the most relevant guide from the book so the LLM produces a desirable output at runtime!
## How does it work?
This project is composed of two parts:
- The Guidebook
- The Augmented Generation Interface Retrieval System
### The Guidebook
You can find the guidebook in raw markdown format under the `guidebook` directory. Its also available on HuggingFace Hub as a dataset: [`wasertech/AGI`](https://huggingface.co/datasets/wasertech/AGI).
The guidebook is a collection of guides meticulously crafted to help your LLM produce the most relevant output.
Each guide is composed of an action in the form of a title, a guide in the form of a description and a list of intent examples.
```
# Print files and directories
When the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, they see the files at the same time as you. Just acknowledge the fact that you have printed the list).
## Intent Examples
- "Print the files and directories of the current directory."
- "Print the files and directories of the parent directory."
- "Print the files and directories of the directory above."
- "Print the files and directories of the directory below."
- "List the files and directories"
- "What do we have here?"
- "What is in this directory?"
- "What is in the current directory?"
- "What is in the parent directory?"
- "List the files and directories of the current directory."
- "ls please"
- "ls"
- "ls -l"
- "ls -a"
- "ls -la"
- "ls -al"
- "ls -lh"
- "ls -hl"
- "ls -lha"
- "ls -lah"
- "ls -alh"
- "ls -ahl"
- "show me whats in the current directory"
```
Each guide is then loaded into the AGI Retrieval System where it will be processed and indexed for retrieval.
### The AGI Retrieval System
The AGI Retrieval System is a collection of tools to help you retrieve the most relevant guide at runtime.
First the AGI Retrieval System will process the guidebook and index it for retrieval.
This step is crucial to find the most relevant guide for the user query at runtime.
It will split the documents and keep a map of the guide, the action to perform and the intent examples.
```python
{
...
'Print files and directories': {
'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, they see the files at the same time as you. Just acknowledge the fact that you have printed the list).',
'intent_examples': [
'Print the files and directories of the current directory.',
'Print the files and directories of the parent directory.',
'Print the files and directories of the directory above.',
'Print the files and directories of the directory below.',
'List the files and directories',
'What do we have here?',
'What is in this directory?',
'What is in the current directory?',
'What is in the parent directory?',
'List the files and directories of the current directory.',
'ls please',
'ls',
'ls -l',
'ls -a',
'ls -la',
'ls -al',
'ls -lh',
'ls -hl',
'ls -lha',
'ls -lah',
'ls -alh',
'ls -ahl',
'show me whats in the current directory'
]
},
...
}
```
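A map like the one above could be produced by a small parser — a hedged sketch, assuming each guide begins with a single `# Title` heading and lists its examples as quoted bullets under `## Intent Examples`:

```python
# Hedged sketch: parse one guide's markdown into the map shown above.
# Assumes a single `# Title` heading and a `## Intent Examples` section
# whose items are quoted bullet points.
def parse_guide(markdown: str) -> dict:
    body, _, examples_block = markdown.partition("## Intent Examples")
    title = body.splitlines()[0].lstrip("# ").strip()
    intents = [
        line.lstrip("- ").strip().strip('"')
        for line in examples_block.splitlines()
        if line.strip().startswith("- ")
    ]
    return {title: {"guide": body.strip(), "intent_examples": intents}}
```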
Next it creates documents with metadata from the intent examples to create our retriever.
```python
...
Document(page_content='Print the files and directories of the current directory.', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
Document(page_content='Print the files and directories of the parent directory.', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
Document(page_content='Print the files and directories of the directory above.', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
Document(page_content='Print the files and directories of the directory below.', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
Document(page_content='List the files and directories', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
...
```
Finally, it will use the `RAG` technique to retrieve the most relevant guide for the user query at runtime.
```text
Hey give me the time please -> Tell Local Time
What date is it? -> Tell Local Date
List my files -> Print files and directories
Where are we? -> Tell Local Time
assistant -> Addressing the User by Name
the screen should be cleaned. -> Clearing the Screen or Starting Anew
```
Notice how it works nicely for most of the queries except 'Where are we?'. This is because the guidebook is not exhaustive and the guide for this intent does not contain a similar intent example. This could easily be fixed by adding more intent examples for this action in the guidebook.
Our AGIRetriver will return the guide for the most similar intent example relative to the user query.
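Concretely, such a retriever could be assembled like this — a hedged sketch using LangChain with a FAISS index; the embedding model and the top-1 lookup are assumptions, not necessarily the project's exact implementation:

```python
# Hedged sketch of an AGI-style retriever: index the intent-example documents
# and return the guide attached to the closest match.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
index = FAISS.from_documents(documents, embeddings)  # `documents` as built above

def retrieve_guide(query: str) -> str:
    best = index.similarity_search(query, k=1)[0]  # closest intent example
    return best.metadata["guide"]                  # guide to inject into the prompt

print(retrieve_guide("List my files"))  # -> the "Print files and directories" guide
```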
Allowing us to retrieve the most relevant guide for the user query at runtime. | wasertech/AGI | [
"size_categories:n<1K",
"region:us"
]
| 2023-11-22T23:01:13+00:00 | {"size_categories": ["n<1K"], "pretty_name": "Augmented Generation Interface", "dataset_info": {"features": [{"name": "guide", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 89731, "num_examples": 26}], "download_size": 49099, "dataset_size": 89731}, "repo": "https://github.com/wasertech/Augmented-Generation-Interface", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-03T04:14:16+00:00 | []
| []
| TAGS
#size_categories-n<1K #region-us
|
# Augmented Generation Interface
> What if you could guide your LLM towards success?
## Introducing
The Augmented Generation Interface Guidebook is a collection of guides to help you steer your LLM towards success.
This guidebook is generated from the Augmented Generation Interface Project.
## Example
### Without AGI
Feeding 'ehartford/dolphin-2.0-mistral-7b' the following prompt:
env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\
Results in the following output:
Interesting key points to note here:
- The LLM failes to give an exhaustive list of the files (and directories) present in the current directory.
- The more files to list, the longer it takes to generate the output.
### With AGI
Feeding 'ehartford/dolphin-2.0-mistral-7b' the same prompt but this time, let's add a guide for the LLM to follow:
env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\
Results in a much smaller output:
And therefore, much faster to generate:
## Goal
AGI is not designed to make your LLM generation faster, just more relevant to the User expectations.
Lets see another example where the User asks twice for the Assistant.
### Without AGI
Feeding 'ehartford/dolphin-2.0-mistral-7b' the following prompt:
env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\
Here, the User just said "Assistant?", the LLM answered 'Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?'. Now the User asks again "Assistant?".
Results in the following output:
Interesting key point to note here:
- The LLM gives the same answer as before and will keep doing so.
### With AGI
Feeding 'ehartford/dolphin-2.0-mistral-7b' the same prompt but this time, let's add a guide for the LLM to follow:
env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\
Results in the following output:
Interesting key point to note here:
- The LLM answers something different and should keep doing so.
### Consequences
And so it becomes integral to collect carefully crafted guides to help your LLM as Agent (a.k.a your Assistant) to be more relevant to the User expectations.
Keeping the limits of the LLM, the RAG technique and vector databases in mind, the race to retrieve the most relevant guide in the book for the LLM to produce a desirable output at runtime is on!
## How does it work?
This project is composed of two parts:
- The Guidebook
- The Augmented Generation Interface Retrieval System
### The Guidebook
You can find the guidebook in raw markdown format under the 'guidebook' directory. Its also available on HuggingFace Hub as a dataset: 'wasertech/AGI'.
The guidebook is a collection of guides meticulously crafted to help your LLM produce the most relevant output.
Each guide is composed of an action in the form of a title, a guide in the form of a description and a list of intent examples.
Each guide is then loaded into the AGI Retrieval System where it will be processed and indexed for retrieval.
### The AGI Retrieval System
The AGI Retrieval System is a collection of tools to help you retrieve the most relevant guide at runtime.
First the AGI Retrieval System will process the guidebook and index it for retrieval.
This step is crutial to find the most relevant guide for the user query at runtime.
It will split the documents and keep a map of the guide, the action to perform and the intent examples.
Next it creates documents with metadata from the intent examples to create our retriver.
Finally, it will use the 'RAG' technique to retrieve the most relevant guide for the user query at runtime.
Notice how it work nicely for most of the queries exept for 'Where are we?'. This is because the guidebook is not exhaustive and the guide for this intent does not contain a similiar intent example. This could easily be fixed by adding more intent examples for this action in the guidebook.
Our AGIRetriver will return the guide for the most similar intent example relative to the user query.
Allowing us to retrieve the most relevant guide for the user query at runtime. | [
"# Augmented Generation Interface\n\n> What if you could guide your LLM towards success?",
"## Introducing\n\nThe Augmented Generation Interface Guidebook is a collection of guides to help you steer your LLM towards success.\n\nThis guidebook is generated from the Augmented Generation Interface Project.",
"## Example",
"### Without AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the following prompt:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in the following output:\n\n\n\nInteresting key points to note here:\n\n- The LLM failes to give an exhaustive list of the files (and directories) present in the current directory.\n- The more files to list, the longer it takes to generate the output.",
"### With AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the same prompt but this time, let's add a guide for the LLM to follow:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in a much smaller output:\n\n\n\nAnd therefore, much faster to generate:",
"## Goal\n\nAGI is not designed to make your LLM generation faster, just more relevant to the User expectations.\n\nLets see another example where the User asks twice for the Assistant.",
"### Without AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the following prompt:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nHere, the User just said \"Assistant?\", the LLM answered 'Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?'. Now the User asks again \"Assistant?\".\n\nResults in the following output:\n\n\n\nInteresting key point to note here:\n\n- The LLM gives the same answer as before and will keep doing so.",
"### With AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the same prompt but this time, let's add a guide for the LLM to follow:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in the following output:\n\n\n\nInteresting key point to note here:\n\n- The LLM answers something different and should keep doing so.",
"### Consequences\n\nAnd so it becomes integral to collect carefully crafted guides to help your LLM as Agent (a.k.a your Assistant) to be more relevant to the User expectations.\n\nKeeping the limits of the LLM, the RAG technique and vector databases in mind, the race to retrieve the most relevant guide in the book for the LLM to produce a desirable output at runtime is on!",
"## How does it work?\n\nThis project is composed of two parts:\n - The Guidebook\n - The Augmented Generation Interface Retrieval System",
"### The Guidebook\n\nYou can find the guidebook in raw markdown format under the 'guidebook' directory. Its also available on HuggingFace Hub as a dataset: 'wasertech/AGI'.\n\nThe guidebook is a collection of guides meticulously crafted to help your LLM produce the most relevant output.\n\nEach guide is composed of an action in the form of a title, a guide in the form of a description and a list of intent examples.\n\n\n\nEach guide is then loaded into the AGI Retrieval System where it will be processed and indexed for retrieval.",
"### The AGI Retrieval System\n\nThe AGI Retrieval System is a collection of tools to help you retrieve the most relevant guide at runtime.\n\nFirst the AGI Retrieval System will process the guidebook and index it for retrieval.\n\nThis step is crutial to find the most relevant guide for the user query at runtime.\n\nIt will split the documents and keep a map of the guide, the action to perform and the intent examples.\n\n\n\nNext it creates documents with metadata from the intent examples to create our retriver.\n\n\n\nFinally, it will use the 'RAG' technique to retrieve the most relevant guide for the user query at runtime.\n\n\n\nNotice how it work nicely for most of the queries exept for 'Where are we?'. This is because the guidebook is not exhaustive and the guide for this intent does not contain a similiar intent example. This could easily be fixed by adding more intent examples for this action in the guidebook.\n\nOur AGIRetriver will return the guide for the most similar intent example relative to the user query.\n\nAllowing us to retrieve the most relevant guide for the user query at runtime."
]
| [
"TAGS\n#size_categories-n<1K #region-us \n",
"# Augmented Generation Interface\n\n> What if you could guide your LLM towards success?",
"## Introducing\n\nThe Augmented Generation Interface Guidebook is a collection of guides to help you steer your LLM towards success.\n\nThis guidebook is generated from the Augmented Generation Interface Project.",
"## Example",
"### Without AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the following prompt:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in the following output:\n\n\n\nInteresting key points to note here:\n\n- The LLM failes to give an exhaustive list of the files (and directories) present in the current directory.\n- The more files to list, the longer it takes to generate the output.",
"### With AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the same prompt but this time, let's add a guide for the LLM to follow:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in a much smaller output:\n\n\n\nAnd therefore, much faster to generate:",
"## Goal\n\nAGI is not designed to make your LLM generation faster, just more relevant to the User expectations.\n\nLets see another example where the User asks twice for the Assistant.",
"### Without AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the following prompt:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nHere, the User just said \"Assistant?\", the LLM answered 'Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?'. Now the User asks again \"Assistant?\".\n\nResults in the following output:\n\n\n\nInteresting key point to note here:\n\n- The LLM gives the same answer as before and will keep doing so.",
"### With AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the same prompt but this time, let's add a guide for the LLM to follow:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in the following output:\n\n\n\nInteresting key point to note here:\n\n- The LLM answers something different and should keep doing so.",
"### Consequences\n\nAnd so it becomes integral to collect carefully crafted guides to help your LLM as Agent (a.k.a your Assistant) to be more relevant to the User expectations.\n\nKeeping the limits of the LLM, the RAG technique and vector databases in mind, the race to retrieve the most relevant guide in the book for the LLM to produce a desirable output at runtime is on!",
"## How does it work?\n\nThis project is composed of two parts:\n - The Guidebook\n - The Augmented Generation Interface Retrieval System",
"### The Guidebook\n\nYou can find the guidebook in raw markdown format under the 'guidebook' directory. Its also available on HuggingFace Hub as a dataset: 'wasertech/AGI'.\n\nThe guidebook is a collection of guides meticulously crafted to help your LLM produce the most relevant output.\n\nEach guide is composed of an action in the form of a title, a guide in the form of a description and a list of intent examples.\n\n\n\nEach guide is then loaded into the AGI Retrieval System where it will be processed and indexed for retrieval.",
"### The AGI Retrieval System\n\nThe AGI Retrieval System is a collection of tools to help you retrieve the most relevant guide at runtime.\n\nFirst the AGI Retrieval System will process the guidebook and index it for retrieval.\n\nThis step is crutial to find the most relevant guide for the user query at runtime.\n\nIt will split the documents and keep a map of the guide, the action to perform and the intent examples.\n\n\n\nNext it creates documents with metadata from the intent examples to create our retriver.\n\n\n\nFinally, it will use the 'RAG' technique to retrieve the most relevant guide for the user query at runtime.\n\n\n\nNotice how it work nicely for most of the queries exept for 'Where are we?'. This is because the guidebook is not exhaustive and the guide for this intent does not contain a similiar intent example. This could easily be fixed by adding more intent examples for this action in the guidebook.\n\nOur AGIRetriver will return the guide for the most similar intent example relative to the user query.\n\nAllowing us to retrieve the most relevant guide for the user query at runtime."
]
| [
16,
18,
43,
3,
169,
144,
40,
195,
156,
96,
30,
133,
260
]
| [
"passage: TAGS\n#size_categories-n<1K #region-us \n# Augmented Generation Interface\n\n> What if you could guide your LLM towards success?## Introducing\n\nThe Augmented Generation Interface Guidebook is a collection of guides to help you steer your LLM towards success.\n\nThis guidebook is generated from the Augmented Generation Interface Project.## Example### Without AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the following prompt:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in the following output:\n\n\n\nInteresting key points to note here:\n\n- The LLM failes to give an exhaustive list of the files (and directories) present in the current directory.\n- The more files to list, the longer it takes to generate the output.### With AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the same prompt but this time, let's add a guide for the LLM to follow:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in a much smaller output:\n\n\n\nAnd therefore, much faster to generate:## Goal\n\nAGI is not designed to make your LLM generation faster, just more relevant to the User expectations.\n\nLets see another example where the User asks twice for the Assistant.",
"passage: ### Without AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the following prompt:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nHere, the User just said \"Assistant?\", the LLM answered 'Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?'. Now the User asks again \"Assistant?\".\n\nResults in the following output:\n\n\n\nInteresting key point to note here:\n\n- The LLM gives the same answer as before and will keep doing so.### With AGI\n\nFeeding 'ehartford/dolphin-2.0-mistral-7b' the same prompt but this time, let's add a guide for the LLM to follow:\n\nenv\nUSER='waser'\nHOME='/home/waser'\nPWD='/home/waser/Projets/Assistant/src3'\nLANG='fr_CH.UTF-8'\nDATE='dim 19 nov 2023 00:25:13 CET'\nLAST_SEEN='dim 19 nov 2023 00:16:10 CET'\n\\\n\nResults in the following output:\n\n\n\nInteresting key point to note here:\n\n- The LLM answers something different and should keep doing so.### Consequences\n\nAnd so it becomes integral to collect carefully crafted guides to help your LLM as Agent (a.k.a your Assistant) to be more relevant to the User expectations.\n\nKeeping the limits of the LLM, the RAG technique and vector databases in mind, the race to retrieve the most relevant guide in the book for the LLM to produce a desirable output at runtime is on!## How does it work?\n\nThis project is composed of two parts:\n - The Guidebook\n - The Augmented Generation Interface Retrieval System### The Guidebook\n\nYou can find the guidebook in raw markdown format under the 'guidebook' directory. Its also available on HuggingFace Hub as a dataset: 'wasertech/AGI'.\n\nThe guidebook is a collection of guides meticulously crafted to help your LLM produce the most relevant output.\n\nEach guide is composed of an action in the form of a title, a guide in the form of a description and a list of intent examples.\n\n\n\nEach guide is then loaded into the AGI Retrieval System where it will be processed and indexed for retrieval."
]
|
44fa47d8b889aa19e0864894ed42af94d21e5b38 | # Dataset Card for "lsc_binaryclassification_topics_LDA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tomashs/lsc_binaryclassification_topics_LDA | [
"region:us"
]
| 2023-11-22T23:11:54+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "short_form", "dtype": "string"}, {"name": "long_form", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text_prep", "dtype": "string"}, {"name": "topic_vector", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 401964802, "num_examples": 400268}], "download_size": 70646855, "dataset_size": 401964802}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T23:13:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "lsc_binaryclassification_topics_LDA"
More Information needed | [
"# Dataset Card for \"lsc_binaryclassification_topics_LDA\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"lsc_binaryclassification_topics_LDA\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"lsc_binaryclassification_topics_LDA\"\n\nMore Information needed"
]
|
5b27060824211fbc134d2e09639a4178ca7994c9 |
# Bangumi Image Base of Fruits Basket
This is the image base of bangumi Fruits Basket. We detected 59 characters and 6849 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still be noisy in practice.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 886 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 223 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 210 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 72 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 51 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 118 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 542 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 125 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 73 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 74 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 80 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 29 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 24 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 115 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 755 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 30 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 42 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 62 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 66 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 41 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 78 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 76 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 1036 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 118 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 60 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 40 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 27 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 37 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 26 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 29 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 20 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 41 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 10 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 15 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 32 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 9 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 11 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 210 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 68 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 15 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 106 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 29 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 19 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 18 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 33 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 50 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 221 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 52 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 21 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 241 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 113 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 19 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 23 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 34 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 46 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 8 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 22 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 13 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 205 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/fruitsbasket | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-22T23:38:03+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-23T02:59:22+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Fruits Basket
===================================
This is the image base of bangumi Fruits Basket. We detected 59 characters and 6849 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may still be noisy in practice. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
7a60b6616ce38a94d8284d5ece798d8c7a03e256 |
# Dataset Card for Inclusive Voice
A dataset curated from Mozilla Common Voice containing equal representation from male, female, and other
## Dataset Details
### Dataset Description
- **Curated by:** Kim Gilkey
- **Language(s) (NLP):** English
- **License:** Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Dataset Sources
Mozilla Common Voice
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
## Uses
### Direct Use
This dataset is intended for use in training and evaluating voice recognition models, particularly for applications in gender classification.
### Out-of-Scope Use
The dataset is not suitable for applications requiring a wide range of dialects or non-English languages. It is also not designed for identifying fine-grained demographic characteristics beyond gender.
You agree not to attempt to identify the individuals in the dataset.
## Dataset Structure
### Data Fields
- **client_id:** A unique identifier for each participant.
- **audio:** Audio recording of a spoken sentence.
- **sentence:** The text content of the audio recording.
- **age:** Categorical age group of the participant.
- **gender:** Gender of the participant (balanced representation).
- **accent:** Accent information of the participant, if available.
### Splits
- **Train:** 6,075 examples (2,025 of each gender).
- **Test:** 1,812 examples (604 of each gender).
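
As a quick sanity check of the balance described above, the dataset can be loaded with the Hugging Face `datasets` library (a minimal sketch; only the repository id and split sizes are taken from this card):

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("gilkeyio/inclusive_voice")
print(ds)                              # train (6,075) / test (1,812) examples
print(Counter(ds["train"]["gender"]))  # expect 2,025 train examples per gender
```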
## Dataset Creation
### Curation Rationale
The dataset was specifically curated to address gender imbalance in voice recognition datasets, aiming to provide a balanced representation of male, female, and non-binary voices.
### Source Data
#### Data Collection and Processing
Derived from Mozilla's Common Voice project, the dataset underwent additional processing to balance gender representation and limit the dominance of any single voice.
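
A minimal sketch of what such balancing could look like, assuming the standard Common Voice metadata TSV and a per-speaker cap (the filename, cap value, and column handling here are assumptions, not the author's actual script):

```python
import pandas as pd

df = pd.read_csv("validated.tsv", sep="\t")                  # Common Voice metadata (assumed filename)
df = df[df["gender"].isin(["male", "female", "other"])]      # keep rows with a reported gender
df = df.groupby("client_id").head(5)                         # limit the dominance of any single voice (cap assumed)
n = df["gender"].value_counts().min()                        # size of the rarest gender group
balanced = df.groupby("gender").sample(n=n, random_state=0)  # equal representation per gender
```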
## Bias, Risks, and Limitations
Despite efforts to balance gender representation, the dataset may still have biases, such as over-representation of certain accents or age groups. It's not comprehensive in representing all possible variations in English speech.
Since the original dataset contains relatively few female and even fewer non-binary voices, the dataset becomes quite small when filtered down.
## Citation
The original Common Voice citation:
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
## Dataset Card Authors
Kim Gilkey
## Dataset Card Contact
[email protected] | gilkeyio/inclusive_voice | [
"task_categories:audio-classification",
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
]
| 2023-11-23T00:39:48+00:00 | {"language": ["en"], "license": ["cc0-1.0"], "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification", "automatic-speech-recognition"], "pretty_name": "Inclusive Voice", "dataset_info": {"features": [{"name": "client_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 254499181.875, "num_examples": 6075}, {"name": "test", "num_bytes": 76654946.864, "num_examples": 1812}], "download_size": 327803775, "dataset_size": 331154128.73899996}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-11-23T03:34:27+00:00 | [
"1912.06670"
]
| [
"en"
]
| TAGS
#task_categories-audio-classification #task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-cc0-1.0 #arxiv-1912.06670 #region-us
|
# Dataset Card for Inclusive Voice
A dataset curated from Mozilla Common Voice containing equal representation from male, female, and other
## Dataset Details
### Dataset Description
- Curated by: Kim Gilkey
- Language(s) (NLP): English
- License: Public Domain, CC-0
### Dataset Sources
Mozilla Common Voice
- Homepage: URL
- Repository: URL
- Paper: URL
## Uses
### Direct Use
This dataset is intended for use in training and evaluating voice recognition models, particularly for applications in gender classification.
### Out-of-Scope Use
The dataset is not suitable for applications requiring a wide range of dialects or non-English languages. It is also not designed for identifying fine-grained demographic characteristics beyond gender.
You agree not to attempt to identify the individuals in the dataset.
## Dataset Structure
### Data Fields
- client_id: A unique identifier for each participant.
- audio: Audio recording of a spoken sentence.
- sentence: The text content of the audio recording.
- age: Categorical age group of the participant.
- gender: Gender of the participant (balanced representation).
- accent: Accent information of the participant, if available.
### Splits
- Train: 6,075 examples (2,025 of each gender).
- Test: 1,812 examples (604 of each gender).
## Dataset Creation
### Curation Rationale
The dataset was specifically curated to address gender imbalance in voice recognition datasets, aiming to provide a balanced representation of male, female, and non-binary voices.
### Source Data
#### Data Collection and Processing
Derived from Mozilla's Common Voice project, the dataset underwent additional processing to balance gender representation and limit the dominance of any single voice.
## Bias, Risks, and Limitations
Despite efforts to balance gender representation, the dataset may still have biases, such as over-representation of certain accents or age groups. It's not comprehensive in representing all possible variations in English speech.
Since the original dataset contains relatively few female and even fewer non-binary voices, the dataset becomes quite small when filtered down.
The original Common Voice citation:
## Dataset Card Authors
Kim Gilkey
## Dataset Card Contact
kim@URL | [
"# Dataset Card for Inclusive Voice\n\nA dataset curated from Mozilla Common Voice containing equal representation from male, female, and other",
"## Dataset Details",
"### Dataset Description\n\n- Curated by: Kim Gilkey\n- Language(s) (NLP): English\n- License: Public Domain, CC-0",
"### Dataset Sources\n\nMozilla Common Voice\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"## Uses",
"### Direct Use\n\nThis dataset is intended for use in training and evaluating voice recognition models, particularly for applications in gender classification.",
"### Out-of-Scope Use\n\nThe dataset is not suitable for applications requiring a wide range of dialects or non-English languages. It is also not designed for identifying fine-grained demographic characteristics beyond gender.\nYou agree not to attempt to identify the individuals in the dataset.",
"## Dataset Structure",
"### Data Fields\n\n- client_id: A unique identifier for each participant.\n- audio: Audio recording of a spoken sentence.\n- sentence: The text content of the audio recording.\n- age: Categorical age group of the participant.\n- gender: Gender of the participant (balanced representation).\n- accent: Accent information of the participant, if available.",
"### Splits\n\n- Train: 6,075 examples (2,025 of each gender).\n- Test: 1,812 examples (604 of each gender).",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was specifically curated to address gender imbalance in voice recognition datasets, aiming to provide a balanced representation of male, female, and non-binary voices.",
"### Source Data",
"#### Data Collection and Processing\n\nDerived from Mozilla's Common Voice project, the dataset underwent additional processing to balance gender representation and limit the dominance of any single voice.",
"## Bias, Risks, and Limitations\n\nDespite efforts to balance gender representation, the dataset may still have biases, such as over-representation of certain accents or age groups. It's not comprehensive in representing all possible variations in English speech.\nSince the original dataset contains relatively few female and even fewer non-binary voices, the dataset becomes quite small when filtered down.\n\n\nThe original Common Voice citation:",
"## Dataset Card Authors \n\nKim Gilkey",
"## Dataset Card Contact\n\nkim@URL"
]
| [
"TAGS\n#task_categories-audio-classification #task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-cc0-1.0 #arxiv-1912.06670 #region-us \n",
"# Dataset Card for Inclusive Voice\n\nA dataset curated from Mozilla Common Voice containing equal representation from male, female, and other",
"## Dataset Details",
"### Dataset Description\n\n- Curated by: Kim Gilkey\n- Language(s) (NLP): English\n- License: Public Domain, CC-0",
"### Dataset Sources\n\nMozilla Common Voice\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"## Uses",
"### Direct Use\n\nThis dataset is intended for use in training and evaluating voice recognition models, particularly for applications in gender classification.",
"### Out-of-Scope Use\n\nThe dataset is not suitable for applications requiring a wide range of dialects or non-English languages. It is also not designed for identifying fine-grained demographic characteristics beyond gender.\nYou agree not to attempt to identify the individuals in the dataset.",
"## Dataset Structure",
"### Data Fields\n\n- client_id: A unique identifier for each participant.\n- audio: Audio recording of a spoken sentence.\n- sentence: The text content of the audio recording.\n- age: Categorical age group of the participant.\n- gender: Gender of the participant (balanced representation).\n- accent: Accent information of the participant, if available.",
"### Splits\n\n- Train: 6,075 examples (2,025 of each gender).\n- Test: 1,812 examples (604 of each gender).",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was specifically curated to address gender imbalance in voice recognition datasets, aiming to provide a balanced representation of male, female, and non-binary voices.",
"### Source Data",
"#### Data Collection and Processing\n\nDerived from Mozilla's Common Voice project, the dataset underwent additional processing to balance gender representation and limit the dominance of any single voice.",
"## Bias, Risks, and Limitations\n\nDespite efforts to balance gender representation, the dataset may still have biases, such as over-representation of certain accents or age groups. It's not comprehensive in representing all possible variations in English speech.\nSince the original dataset contains relatively few female and even fewer non-binary voices, the dataset becomes quite small when filtered down.\n\n\nThe original Common Voice citation:",
"## Dataset Card Authors \n\nKim Gilkey",
"## Dataset Card Contact\n\nkim@URL"
]
| [
67,
28,
4,
31,
23,
3,
28,
66,
6,
80,
32,
5,
48,
4,
41,
100,
9,
8
]
| [
"passage: TAGS\n#task_categories-audio-classification #task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-cc0-1.0 #arxiv-1912.06670 #region-us \n# Dataset Card for Inclusive Voice\n\nA dataset curated from Mozilla Common Voice containing equal representation from male, female, and other## Dataset Details### Dataset Description\n\n- Curated by: Kim Gilkey\n- Language(s) (NLP): English\n- License: Public Domain, CC-0### Dataset Sources\n\nMozilla Common Voice\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL## Uses### Direct Use\n\nThis dataset is intended for use in training and evaluating voice recognition models, particularly for applications in gender classification.### Out-of-Scope Use\n\nThe dataset is not suitable for applications requiring a wide range of dialects or non-English languages. It is also not designed for identifying fine-grained demographic characteristics beyond gender.\nYou agree not to attempt to identify the individuals in the dataset.## Dataset Structure### Data Fields\n\n- client_id: A unique identifier for each participant.\n- audio: Audio recording of a spoken sentence.\n- sentence: The text content of the audio recording.\n- age: Categorical age group of the participant.\n- gender: Gender of the participant (balanced representation).\n- accent: Accent information of the participant, if available.### Splits\n\n- Train: 6,075 examples (2,025 of each gender).\n- Test: 1,812 examples (604 of each gender).## Dataset Creation### Curation Rationale\n\nThe dataset was specifically curated to address gender imbalance in voice recognition datasets, aiming to provide a balanced representation of male, female, and non-binary voices.### Source Data#### Data Collection and Processing\n\nDerived from Mozilla's Common Voice project, the dataset underwent additional processing to balance gender representation and limit the dominance of any single voice."
]
|
9ab979b5cd8da94e06d5e042e50a674023701616 | A collection of datasets used in [LayoutPrompter](https://arxiv.org/pdf/2311.06495.pdf) (NeurIPS 2023).
Specifically, `publaynet` and `rico` are downloaded from [LayoutFormer++](https://huggingface.co/jzy124/LayoutFormer), `posterlayout` is downloaded from [DS-GAN](http://59.108.48.34/tiki/PosterLayout/), and `webui` is downloaded from [Parse-Then-Place](https://huggingface.co/datasets/KyleLin/Parse-Then-Place).
We sincerely thank them for the great work they do.
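
Since everything is bundled in this repository, one plausible way to fetch the data locally is via `huggingface_hub` (a sketch; only the repo id is taken from this card):

```python
from huggingface_hub import snapshot_download

# Download the bundled publaynet / rico / posterlayout / webui data locally.
local_dir = snapshot_download(repo_id="KyleLin/LayoutPrompter", repo_type="dataset")
print(local_dir)
```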
| KyleLin/LayoutPrompter | [
"license:mit",
"arxiv:2311.06495",
"region:us"
]
| 2023-11-23T00:41:37+00:00 | {"license": "mit"} | 2023-11-23T01:12:30+00:00 | [
"2311.06495"
]
| []
| TAGS
#license-mit #arxiv-2311.06495 #region-us
| A collection of datasets used in LayoutPrompter (NeurIPS 2023).
Specifically, 'publaynet' and 'rico' are downloaded from LayoutFormer++, 'posterlayout' is downloaded from DS-GAN, and 'webui' is downloaded from Parse-Then-Place.
We sincerely thank them for the great work they do.
| []
| [
"TAGS\n#license-mit #arxiv-2311.06495 #region-us \n"
]
| [
20
]
| [
"passage: TAGS\n#license-mit #arxiv-2311.06495 #region-us \n"
]
|
d1ebda99283cebcb03dec1de350f95dedc071a66 | # Dataset Card for "summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gagan3012/summary | [
"region:us"
]
| 2023-11-23T00:49:03+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "sequence": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27541238, "num_examples": 2495}], "download_size": 14165939, "dataset_size": 27541238}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-23T00:49:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "summary"
More Information needed | [
"# Dataset Card for \"summary\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"summary\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"summary\"\n\nMore Information needed"
]
|
afab91e395cdebd3064546fc5186e2f0edc864cd | # Dataset Card for "undl_ru2en_aligned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/undl_ru2en_aligned | [
"region:us"
]
| 2023-11-23T00:57:10+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "record", "dtype": "string"}, {"name": "clean_para_index_set_pair", "dtype": "string"}, {"name": "src", "dtype": "string"}, {"name": "dst", "dtype": "string"}, {"name": "src_text", "dtype": "string"}, {"name": "dst_text", "dtype": "string"}, {"name": "src_rate", "dtype": "float64"}, {"name": "dst_rate", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 14462919233, "num_examples": 15939968}], "download_size": 2789738658, "dataset_size": 14462919233}} | 2023-11-23T02:13:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "undl_ru2en_aligned"
More Information needed | [
"# Dataset Card for \"undl_ru2en_aligned\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_ru2en_aligned\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_ru2en_aligned\"\n\nMore Information needed"
]
|
ba92f7dbbdd263efd3b20bed229dd2a3c62a9530 | # TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task
The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset
These columns are taken directly from the aforementioned dataset:
* **id**: unique identifier for the post
* **subreddit**: subreddit the post was taken from
* **title**: title of the post
* **post**: body of the post
* **summary**: summary of the post
* **reference_response**: reference response for the post
These columns are added by this preprocessing script:
* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last `\n`. If it's too short, it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either a space or the `[PAD]` token (see Args below).
* **query_token**: tokenized version of `query`
* **reference_response_token**: tokenized version of `reference_response`
* **reference_response_token_len**: length of `reference_response_token`
* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
* **query_reference_response_token_len**: length of `query_reference_response_token`
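
For illustration, the query construction described above could be sketched roughly as follows (a simplified approximation, not OpenAI's actual `tasks.py` code; the truncation loop is an assumption, while the format string, length, pad side, and pad token come from the Args below):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
FORMAT = "SUBREDDIT: r/{subreddit}\n\nTITLE: {title}\n\nPOST: {post}\n\nTL;DR:"

def build_query(subreddit, title, post, length=512, pad_id=209):
    """Format the post, truncate the `post` field at its last newline until the
    query fits, then left-pad to a fixed token length."""
    tokens = tokenizer.encode(FORMAT.format(subreddit=subreddit, title=title, post=post))
    while len(tokens) > length and "\n" in post:
        post = post[: post.rfind("\n")]  # truncate_field='post', truncate_text='\n'
        tokens = tokenizer.encode(FORMAT.format(subreddit=subreddit, title=title, post=post))
    tokens = tokens[:length]                           # hard cut as a fallback
    return [pad_id] * (length - len(tokens)) + tokens  # pad_side='left', padding=[209]
```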
# Args
```python
{'base_model': 'EleutherAI/pythia-160m',
'hf_entity': 'vwxyzjn',
'max_rm_query_response_length': 560,
'max_rm_response_length': 48,
'max_sft_query_response_length': 560,
'max_sft_response_length': 48}
{'format_str': 'SUBREDDIT: r/{subreddit}\n'
'\n'
'TITLE: {title}\n'
'\n'
'POST: {post}\n'
'\n'
'TL;DR:',
'length': 512,
'pad_side': 'left',
'padding': [209],
'truncate_field': 'post',
'truncate_text': '\n'}
```
| vwxyzjn/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_pythia-160m_48 | [
"region:us"
]
| 2023-11-23T01:30:04+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "reference_response", "dtype": "string"}, {"name": "reference_response_token", "sequence": "int64"}, {"name": "reference_response_token_len", "dtype": "int64"}, {"name": "query_reference_response", "dtype": "string"}, {"name": "query_reference_response_token", "sequence": "int64"}, {"name": "query_reference_response_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1593903817, "num_examples": 116722}, {"name": "validation", "num_bytes": 88064739, "num_examples": 6447}, {"name": "test", "num_bytes": 89555498, "num_examples": 6553}], "download_size": 551470025, "dataset_size": 1771524054}} | 2023-12-26T18:35:16+00:00 | []
| []
| TAGS
#region-us
| # TL;DR SFT Dataset for OpenAI's Summarize from Feedback task
The dataset is directly taken from URL
These columns are taken directly from the aforementioned dataset:
* id: unique identifier for the post
* subreddit: subreddit the post was taken from
* title: title of the post
* post: body of the post
* summary: summary of the post
* reference_response: reference response for the post
These columns are added by this preprocessing script:
* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).
* query_token: tokenized version of 'query'
* reference_response_token: tokenized version of 'reference_response'
* reference_response_token_len: length of 'reference_response_token'
* query_reference_response: concatenation of 'URL()' and 'reference_response'
* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens
* query_reference_response_token_len: length of 'query_reference_response_token'
# Args
| [
"# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'",
"# Args"
]
| [
"TAGS\n#region-us \n",
"# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'",
"# Args"
]
| [
6,
384,
3
]
| [
"passage: TAGS\n#region-us \n# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'# Args"
]
|
9b3b447a8e21a29a847a1411786379f9c5b3d5d9 | # Dataset Card for "summarize_from_feedback_oai_preprocessing_pythia-160m_48"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vwxyzjn/summarize_from_feedback_oai_preprocessing_pythia-160m_48 | [
"region:us"
]
| 2023-11-23T01:39:04+00:00 | {"dataset_info": {"features": [{"name": "info", "struct": [{"name": "id", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "site", "dtype": "string"}, {"name": "article", "dtype": "string"}]}, {"name": "summaries", "list": [{"name": "text", "dtype": "string"}, {"name": "policy", "dtype": "string"}, {"name": "note", "dtype": "string"}]}, {"name": "choice", "dtype": "int32"}, {"name": "worker", "dtype": "string"}, {"name": "batch", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "extra", "struct": [{"name": "confidence", "dtype": "int32"}]}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "response0", "dtype": "string"}, {"name": "response0_token", "sequence": "int64"}, {"name": "response0_token_len", "dtype": "int64"}, {"name": "response1", "dtype": "string"}, {"name": "response1_token", "sequence": "int64"}, {"name": "response1_token_len", "dtype": "int64"}, {"name": "response0_policy", "dtype": "string"}, {"name": "response1_policy", "dtype": "string"}, {"name": "policies", "dtype": "string"}, {"name": "query_response0", "dtype": "string"}, {"name": "query_response0_token", "sequence": "int64"}, {"name": "query_response0_token_len", "dtype": "int64"}, {"name": "query_response1", "dtype": "string"}, {"name": "query_response1_token", "sequence": "int64"}, {"name": "query_response1_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1914904464, "num_examples": 92858}, {"name": "validation", "num_bytes": 1780140403, "num_examples": 86086}], "download_size": 270579102, "dataset_size": 3695044867}} | 2023-12-26T18:41:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "summarize_from_feedback_oai_preprocessing_pythia-160m_48"
More Information needed | [
"# Dataset Card for \"summarize_from_feedback_oai_preprocessing_pythia-160m_48\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"summarize_from_feedback_oai_preprocessing_pythia-160m_48\"\n\nMore Information needed"
]
| [
6,
33
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"summarize_from_feedback_oai_preprocessing_pythia-160m_48\"\n\nMore Information needed"
]
|