sha (string, len 40) | text (string, len 1–13.4M) | id (string, len 2–117) | tags (list, len 1–7.91k) | created_at (string, len 25) | metadata (string, len 2–875k) | last_modified (string, len 25) | arxiv (list, len 0–25) | languages (list, len 0–7.91k) | tags_str (string, len 17–159k) | text_str (string, len 1–447k) | text_lists (list, len 0–352) | processed_texts (list, len 1–353) | tokens_length (list, len 1–353) | input_texts (list, len 1–40)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
ae7a12ccd5f6ff80f3c6a849f96ba59338c4979f | # Dataset Card for "cub2011_caption"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cherry0324/cub2011_caption | [
"region:us"
]
| 2023-11-16T09:17:54+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 584585478.162, "num_examples": 5994}], "download_size": 581910152, "dataset_size": 584585478.162}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T09:30:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cub2011_caption"
More Information needed | [
"# Dataset Card for \"cub2011_caption\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cub2011_caption\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cub2011_caption\"\n\nMore Information needed"
]
|
a047e95d09614ec6ddad82babe6697c6890f4660 | # Dataset Card for "register_label_instruction_data_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tanhu/register_label_instruction_data_en | [
"region:us"
]
| 2023-11-16T09:28:50+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 313401359, "num_examples": 33915}], "download_size": 156354972, "dataset_size": 313401359}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T09:29:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "register_label_instruction_data_en"
More Information needed | [
"# Dataset Card for \"register_label_instruction_data_en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"register_label_instruction_data_en\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"register_label_instruction_data_en\"\n\nMore Information needed"
]
|
e9d06c4d3f34d0076d6934b993759af385c380e3 | ---
license: apache-2.0
---
---
| xinrongzhang2022/InfiniteBench | [
"region:us"
]
| 2023-11-16T09:29:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "passkey", "path": "passkey.jsonl"}, {"split": "kv_retrieval", "path": "kv_retrieval.jsonl"}, {"split": "number_string", "path": "number_string.jsonl"}, {"split": "code_run", "path": "code_run.jsonl"}, {"split": "code_debug", "path": "code_debug.jsonl"}, {"split": "math_find", "path": "math_find.jsonl"}, {"split": "math_calc", "path": "math_calc.jsonl"}, {"split": "longdialogue_qa_eng", "path": "longdialogue_qa_eng.jsonl"}, {"split": "longbook_qa_eng", "path": "longbook_qa_eng.jsonl"}, {"split": "longbook_sum_eng", "path": "longbook_sum_eng.jsonl"}, {"split": "longbook_choice_eng", "path": "longbook_choice_eng.jsonl"}, {"split": "longbook_qa_chn", "path": "longbook_qa_chn.jsonl"}]}]} | 2023-12-19T10:47:34+00:00 | []
| []
| TAGS
#region-us
| ---
license: apache-2.0
---
---
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
521cc286fd182390e879c6ae61c4560d05fa9987 | This is the Levy/Holt dataset in JSON Lines (jsonl) format for easier data loading. All licenses are subject to the original release.
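For loading, here is a minimal sketch with 🤗 Datasets (an assumption-laden illustration: it relies on the Hub's json builder auto-detecting the repository's jsonl files, and the split names are not documented on this card):

```python
from datasets import load_dataset

# Assumption: the Hub's json builder auto-detects the repo's jsonl files.
dataset = load_dataset("ZhaoweiWang/Levy_Holt_dataset_jsonl")
print(dataset)  # inspect the available splits and features
```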
| ZhaoweiWang/Levy_Holt_dataset_jsonl | [
"license:unknown",
"region:us"
]
| 2023-11-16T09:33:22+00:00 | {"license": "unknown"} | 2023-11-16T09:35:03+00:00 | []
| []
| TAGS
#license-unknown #region-us
| This is the Levy/Holt dataset in JSON Lines (jsonl) format for easier data loading. All licenses are subject to the original release.
| []
| [
"TAGS\n#license-unknown #region-us \n"
]
| [
13
]
| [
"passage: TAGS\n#license-unknown #region-us \n"
]
|
b62d1f299bc3c42ac7351cdddd45f0ca7887192f | # Dataset Card for mri-sym2
### Dataset Summary
SymBrain is an annotated dataset of brain MRI images designed to advance the field of brain symmetry detection and segmentation.
Our dataset comprises a diverse collection of brain MRI T1w and T2w scans from the [dHCP](https://biomedia.github.io/dHCP-release-notes/download.html) dataset.
Each scan is annotated to highlight the ideal **straight** mid-sagittal plane (MSP), which divides the brain into two symmetrical hemispheres.
The accurate extraction of the MSP has the potential to greatly enhance segmentation precision.
Researchers and practitioners can utilize this dataset to devise innovative methods for enhanced brain MRI image segmentation.
SymBrain's rich and extensive content empowers the research community to address complex challenges in neuroimaging analysis,
ultimately contributing to advancements in medical diagnostics and treatment planning.
Symmetry analysis plays an important role in medical image processing, particularly in the detection of diseases and malformations.
SymBrain leverages the inherent bilateral symmetry observed in brain MRI images,
making it an invaluable resource for the development and evaluation
of automated algorithms aimed at detecting the symmetry axis within brain MRI data.
## Dataset Structure
The dataset contains 1476 T1w images and 1674 T2w images.
The differences between the modalities lie in the intensity variations of the different brain areas.
The T1w images are accessible in the 'train' split and the T2w images in the 'test' split.
## Dataset Creation
### Loading the data
The dataset contains a 'train' split of 1476 rows, containing the t1 type images, and a 'test' split of 1674 rows, with the t2 type images.
```python
from datasets import load_dataset

dataset = load_dataset("agucci/mri-sym2")
# select the first training example:
dataset['train'][0]
```
**Attributes :**
- *image:* PIL image, shape (290, 290)
- *line:* Straight line annotation coordinates on the image, ({'x':x1, 'y':y1}, {'x':x2, 'y':y2}), where (x1,y1) and (x2,y2) are the start and end points of the line (see the parsing sketch after this list).
- *rad_score:* Radiology score of the volume the image was extracted from. Please refer to the [dHCP doc](https://biomedia.github.io/dHCP-release-notes/download.html#metadata) for an explanation of the scores.
- *session:* Session-ID of the original dataset, used for scan retrieval.
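The *line* field is stored as a string, so it needs to be parsed before use. Below is a minimal, hedged sketch (assuming the tuple-of-dicts serialization shown above; the variable names are illustrative) that recovers the endpoints and computes the tilt of the annotated MSP:

```python
import ast
import math
from datasets import load_dataset

dataset = load_dataset("agucci/mri-sym2", split="train")
example = dataset[0]

# Assumption: 'line' serializes the tuple shown above, e.g.
# "({'x': 12, 'y': 3}, {'x': 15, 'y': 280})"; literal_eval recovers it.
p1, p2 = ast.literal_eval(example["line"])
dx = p2["x"] - p1["x"]
dy = p2["y"] - p1["y"]
# Tilt of the annotated mid-sagittal line relative to the vertical image axis.
angle_from_vertical = math.degrees(math.atan2(dx, dy))
print(f"MSP tilt: {angle_from_vertical:.2f} degrees")
```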
### Source Data
[dHCP](https://biomedia.github.io/dHCP-release-notes/download.html) dataset.
Three slices have been extracted from each of the 1050 3D volumes, creating 3150 images.
### Annotations
The authors annotated the images manually with the [V7labs tools](https://www.v7labs.com/).
### Licensing Information
MIT
### Citation Information
When using the data please cite :
```bibtext
@misc{gucciardi2024symbrain,
title={Symbrain: A large-scale dataset of MRI images for neonatal brain symmetry analysis},
author={Arnaud Gucciardi and Safouane El Ghazouali and Francesca Venturini and Vida Groznik and Umberto Michelucci},
year={2024},
eprint={2401.11814},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
and
**dhcp dataset**
Data were provided by the developing Human Connectome Project, KCL-Imperial-
Oxford Consortium funded by the European Research Council under the Eu-
ropean Union Seventh Framework Programme (FP/2007-2013) / ERC Grant
Agreement no. [319456]. We are grateful to the families who generously sup-
ported this trial. | agucci/mri-sym2 | [
"medical",
"arxiv:2401.11814",
"doi:10.57967/hf/1372",
"region:us"
]
| 2023-11-16T09:36:56+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "line", "dtype": "string"}, {"name": "rad_score", "dtype": "string"}, {"name": "session", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 68961229.076, "num_examples": 1476}, {"name": "test", "num_bytes": 68472028.992, "num_examples": 1674}], "download_size": 137564710, "dataset_size": 137433258.06800002}, "tags": ["medical"]} | 2024-02-11T20:01:33+00:00 | [
"2401.11814"
]
| []
| TAGS
#medical #arxiv-2401.11814 #doi-10.57967/hf/1372 #region-us
| # Dataset Card for mri-sym2
### Dataset Summary
SymBrain is an annotated dataset of brain MRI images designed to advance the field of brain symmetry detection and segmentation.
Our dataset comprises a diverse collection of brain MRI T1w and T2w scans from the dHCP dataset.
Each scan is annotated to highlight the ideal straight mid-sagittal plane (MSP), which divides the brain into two symmetrical hemispheres.
The accurate extraction of the MSP has the potential to greatly enhance segmentation precision.
Researchers and practitioners can utilize this dataset to devise innovative methods for enhanced brain MRI image segmentation.
SymBrain's rich and extensive content empowers the research community to address complex challenges in neuroimaging analysis,
ultimately contributing to advancements in medical diagnostics and treatment planning.
Symmetry analysis plays an important role in medical image processing, particularly in the detection of diseases and malformations.
SymBrain leverages the inherent bilateral symmetry observed in brain MRI images,
making it an invaluable resource for the development and evaluation
of automated algorithms aimed at detecting the symmetry axis within brain MRI data.
## Dataset Structure
The dataset contains 1476 T1w images and 1674 T2w images.
The differences between the modalities lie in the intensity variations of the different brain areas.
The T1w images are accessible in the 'train' split and the T2w images in the 'test' split.
## Dataset Creation
### Loading the data
The dataset contains a 'train' split of 1476 rows, containing the t1 type images, and a 'test' split of 1674 rows, with the t2 type images.
Attributes :
- *image:* PIL image, shape (290, 290)
- *line:* Straight line annotation coordinates on the image. ({'x':x1, 'y':y1}, {'x':x2, 'y':y2}). Where (x1,y1), (x2,y2) are the starting and end points of the line.
- *rad_score:* Radiology score of the volume the image was extracted from. Please refer to the dHCP doc for an explanation of the scores.
- *session:* Session-ID of the original dataset, used for scan retrieval.
### Source Data
dHCP dataset.
Three slices have been extracted from each of the 1050 3D volumes, creating 3150 images.
### Annotations
The authors annotated the images manually with the V7labs tools.
### Licensing Information
MIT
When using the data please cite :
and
dhcp dataset
Data were provided by the developing Human Connectome Project, KCL-Imperial-
Oxford Consortium funded by the European Research Council under the Eu-
ropean Union Seventh Framework Programme (FP/2007-2013) / ERC Grant
Agreement no. [319456]. We are grateful to the families who generously sup-
ported this trial. | [
"# Dataset Card for mri-sym2",
"### Dataset Summary\n\nSymBrain, an annotated dataset of brain MRI images designed to advance the field of brain symmetry detection and segmentation. \nOur dataset comprises a diverse collection of brain MRI T1w and T2w scans from the dHCP dataset.\nEach annotated to highlight the ideal straight mid-sagittal plane (MSP), demarcating the brain into two symmetrical hemispheres. \nThe accurate extraction of the MSP has the potential to greatly enhance segmentation precision. \n\nResearchers and practitioners can utilize this dataset to devise innovative methods for enhanced brain MRI image segmentation. \nSymBrain's rich and extensive content empowers the research community to address complex challenges in neuroimaging analysis, \nultimately contributing to advancements in medical diagnostics and treatment planning.\n\nSymmetry analysis plays an important role in medical image processing, particularly in the detection of diseases and malformations. \nSymBrain leverages the inherent bilateral symmetry observed in brain MRI images, \nmaking it an invaluable resource for the development and evaluation \nof automated algorithms aimed at detecting the symmetry axis within brain MRI data.",
"## Dataset Structure\n\nThe dataset contains 1476 T1w images types and 1674 T2w images. \nThe differences between the modalities lie in the intensity variations of the different brain areas. \nAll the images are accessible in the 'train' part of the dataset.",
"## Dataset Creation",
"### Loading the data\n\nThe dataset contains a 'train' split of 1476 rows, containing the t1 type images, and a 'test' split of 1674 rows, with the t2 type images. \n\n\n\nAttributes :\n- *image:* PIL image, shape (290, 290)\n- *line:* Straight line annotation coordinates on the image. ({'x':x1, 'y':y1}, {'x':x2, 'y':y2}). Where (x1,y1), (x2,y2) are the starting and end points of the line. \n- *radscore:* Radiology score of the volume the image was extracted from. Please refer to dHCP doc for scores explanation. \n- *session:* Session-ID of the original dataset, used for scan retrieval.",
"### Source Data\n\n\ndHCP dataset. \nThree slices have been extracted from each of the 1050 3D volumes, creating 3150 images.",
"### Annotations\n\nThe authors did Annotations manually with the V7lab tools.",
"### Licensing Information\n\nmit\n\n\n\nWhen using the data please cite : \n\n\n\nand \n\ndhcp dataset\nData were provided by the developing Human Connectome Project, KCL-Imperial-\nOxford Consortium funded by the European Research Council under the Eu-\nropean Union Seventh Framework Programme (FP/2007-2013) / ERC Grant\nAgreement no. [319456]. We are grateful to the families who generously sup-\nported this trial."
]
| [
"TAGS\n#medical #arxiv-2401.11814 #doi-10.57967/hf/1372 #region-us \n",
"# Dataset Card for mri-sym2",
"### Dataset Summary\n\nSymBrain, an annotated dataset of brain MRI images designed to advance the field of brain symmetry detection and segmentation. \nOur dataset comprises a diverse collection of brain MRI T1w and T2w scans from the dHCP dataset.\nEach annotated to highlight the ideal straight mid-sagittal plane (MSP), demarcating the brain into two symmetrical hemispheres. \nThe accurate extraction of the MSP has the potential to greatly enhance segmentation precision. \n\nResearchers and practitioners can utilize this dataset to devise innovative methods for enhanced brain MRI image segmentation. \nSymBrain's rich and extensive content empowers the research community to address complex challenges in neuroimaging analysis, \nultimately contributing to advancements in medical diagnostics and treatment planning.\n\nSymmetry analysis plays an important role in medical image processing, particularly in the detection of diseases and malformations. \nSymBrain leverages the inherent bilateral symmetry observed in brain MRI images, \nmaking it an invaluable resource for the development and evaluation \nof automated algorithms aimed at detecting the symmetry axis within brain MRI data.",
"## Dataset Structure\n\nThe dataset contains 1476 T1w images types and 1674 T2w images. \nThe differences between the modalities lie in the intensity variations of the different brain areas. \nAll the images are accessible in the 'train' part of the dataset.",
"## Dataset Creation",
"### Loading the data\n\nThe dataset contains a 'train' split of 1476 rows, containing the t1 type images, and a 'test' split of 1674 rows, with the t2 type images. \n\n\n\nAttributes :\n- *image:* PIL image, shape (290, 290)\n- *line:* Straight line annotation coordinates on the image. ({'x':x1, 'y':y1}, {'x':x2, 'y':y2}). Where (x1,y1), (x2,y2) are the starting and end points of the line. \n- *radscore:* Radiology score of the volume the image was extracted from. Please refer to dHCP doc for scores explanation. \n- *session:* Session-ID of the original dataset, used for scan retrieval.",
"### Source Data\n\n\ndHCP dataset. \nThree slices have been extracted from each of the 1050 3D volumes, creating 3150 images.",
"### Annotations\n\nThe authors did Annotations manually with the V7lab tools.",
"### Licensing Information\n\nmit\n\n\n\nWhen using the data please cite : \n\n\n\nand \n\ndhcp dataset\nData were provided by the developing Human Connectome Project, KCL-Imperial-\nOxford Consortium funded by the European Research Council under the Eu-\nropean Union Seventh Framework Programme (FP/2007-2013) / ERC Grant\nAgreement no. [319456]. We are grateful to the families who generously sup-\nported this trial."
]
| [
30,
11,
273,
63,
5,
196,
33,
21,
92
]
| [
"passage: TAGS\n#medical #arxiv-2401.11814 #doi-10.57967/hf/1372 #region-us \n# Dataset Card for mri-sym2### Dataset Summary\n\nSymBrain, an annotated dataset of brain MRI images designed to advance the field of brain symmetry detection and segmentation. \nOur dataset comprises a diverse collection of brain MRI T1w and T2w scans from the dHCP dataset.\nEach annotated to highlight the ideal straight mid-sagittal plane (MSP), demarcating the brain into two symmetrical hemispheres. \nThe accurate extraction of the MSP has the potential to greatly enhance segmentation precision. \n\nResearchers and practitioners can utilize this dataset to devise innovative methods for enhanced brain MRI image segmentation. \nSymBrain's rich and extensive content empowers the research community to address complex challenges in neuroimaging analysis, \nultimately contributing to advancements in medical diagnostics and treatment planning.\n\nSymmetry analysis plays an important role in medical image processing, particularly in the detection of diseases and malformations. \nSymBrain leverages the inherent bilateral symmetry observed in brain MRI images, \nmaking it an invaluable resource for the development and evaluation \nof automated algorithms aimed at detecting the symmetry axis within brain MRI data.## Dataset Structure\n\nThe dataset contains 1476 T1w images types and 1674 T2w images. \nThe differences between the modalities lie in the intensity variations of the different brain areas. \nAll the images are accessible in the 'train' part of the dataset.## Dataset Creation"
]
|
26008aa5918bf294bb3bd3fd4095ba69f01d4178 |
The original dataset is located [here](https://ieee-dataport.org/open-access/italian-parkinsons-voice-and-speech)
The citation for this dataset:
```bibtex
@data{aw6b-tg17-19,
doi = {10.21227/aw6b-tg17},
url = {https://dx.doi.org/10.21227/aw6b-tg17},
author = {Dimauro, Giovanni and Girardi, Francesco},
publisher = {IEEE Dataport},
title = {Italian Parkinson's Voice and Speech},
year = {2019}
}
```
The author of the dataset requests that academic users of the dataset cite the following articles, the latter of which describes how the dataset was created:
```bibtex
@INPROCEEDINGS{7533761,
author={Dimauro, Giovanni and Caivano, Danilo and Bevilacqua, Vitoantonio and Girardi, Francesco and Napoletano, Vito},
booktitle={2016 IEEE International Symposium on Medical Measurements and Applications (MeMeA)},
title={VoxTester, software for digital evaluation of speech changes in Parkinson disease},
year={2016},
volume={},
number={},
pages={1-6},
doi={10.1109/MeMeA.2016.7533761}
}
@ARTICLE{8070308,
author={Dimauro, Giovanni and Di Nicola, Vincenzo and Bevilacqua, Vitoantonio and Caivano, Danilo and Girardi, Francesco},
journal={IEEE Access},
title={Assessment of Speech Intelligibility in Parkinson’s Disease Using a Speech-To-Text System},
year={2017},
volume={5},
number={},
pages={22199-22208},
doi={10.1109/ACCESS.2017.2762475}
}
``` | birgermoell/Italian_Parkinsons_Voice_and_Speech | [
"language:it",
"license:cc-by-4.0",
"region:us"
]
| 2023-11-16T09:40:05+00:00 | {"language": ["it"], "license": "cc-by-4.0"} | 2023-11-16T10:16:51+00:00 | []
| [
"it"
]
| TAGS
#language-Italian #license-cc-by-4.0 #region-us
|
The original dataset is located here
The citation for this dataset:
The author of the dataset requests that academic users of the dataset cite the following articles, the latter of which describes how the dataset was created:
| []
| [
"TAGS\n#language-Italian #license-cc-by-4.0 #region-us \n"
]
| [
20
]
| [
"passage: TAGS\n#language-Italian #license-cc-by-4.0 #region-us \n"
]
|
f84d9a6c237df6db38897957d33c997e5a9d5507 | # Dataset Card for "sol_processed_s2s"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Pipper/sol_processed_s2s | [
"region:us"
]
| 2023-11-16T09:41:34+00:00 | {"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "comments", "dtype": "string"}, {"name": "code_string", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 48324923815, "num_examples": 1223464}, {"name": "test", "num_bytes": 2669251521, "num_examples": 67971}, {"name": "valid", "num_bytes": 2660994665, "num_examples": 67970}], "download_size": 11579574597, "dataset_size": 53655170001}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-12T09:57:39+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sol_processed_s2s"
More Information needed | [
"# Dataset Card for \"sol_processed_s2s\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sol_processed_s2s\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sol_processed_s2s\"\n\nMore Information needed"
]
|
d7391075ad17f0c1d4b1c2188c1ba4307126bf95 | # Dataset Card for "QandA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | promptora11/QandA | [
"region:us"
]
| 2023-11-16T09:46:37+00:00 | {"dataset_info": {"features": [{"name": "Query", "dtype": "string"}, {"name": "Response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8148, "num_examples": 40}], "download_size": 6814, "dataset_size": 8148}} | 2023-11-16T09:46:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "QandA"
More Information needed | [
"# Dataset Card for \"QandA\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"QandA\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"QandA\"\n\nMore Information needed"
]
|
c2234dfbfcb2015e5012c56712617358487f1123 | # Dataset Card for "ICPR_big"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nourheshamshaheen/ICPR_big | [
"region:us"
]
| 2023-11-16T09:51:51+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "area", "1": "heatmap", "2": "horizontal_bar", "3": "horizontal_interval", "4": "line", "5": "manhattan", "6": "map", "7": "pie", "8": "scatter", "9": "scatter-line", "10": "surface", "11": "venn", "12": "vertical_bar", "13": "vertical_box", "14": "vertical_interval"}}}}, {"name": "pipeline_label", "dtype": {"class_label": {"names": {"0": "horizontal_bar", "1": "line", "2": "other", "3": "scatter", "4": "scatter_line", "5": "vertical_bar"}}}}, {"name": "true_label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1192178239.45, "num_examples": 22923}], "download_size": 1082413361, "dataset_size": 1192178239.45}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T10:04:37+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ICPR_big"
More Information needed | [
"# Dataset Card for \"ICPR_big\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ICPR_big\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ICPR_big\"\n\nMore Information needed"
]
|
94466fd0487e2bfa2cb0fbeb4536c3ea923afc28 | # Dataset Card for "context-aware-splits-english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mhenrichsen/context-aware-splits-english | [
"region:us"
]
| 2023-11-16T09:53:07+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 113347721, "num_examples": 27980}], "download_size": 0, "dataset_size": 113347721}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T09:53:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "context-aware-splits-english"
More Information needed | [
"# Dataset Card for \"context-aware-splits-english\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"context-aware-splits-english\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"context-aware-splits-english\"\n\nMore Information needed"
]
|
353b7bbd83afc245e8ab6afd89d91c4c0ac784f3 | # Dataset Card for "ICPR_big_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nourheshamshaheen/ICPR_big_2 | [
"region:us"
]
| 2023-11-16T09:56:40+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "area", "1": "heatmap", "2": "horizontal_bar", "3": "horizontal_interval", "4": "line", "5": "manhattan", "6": "map", "7": "pie", "8": "scatter", "9": "scatter-line", "10": "surface", "11": "venn", "12": "vertical_bar", "13": "vertical_box", "14": "vertical_interval"}}}}, {"name": "pipeline_label", "dtype": {"class_label": {"names": {"0": "line", "1": "other", "2": "scatter", "3": "scatter_line", "4": "vertical_bar"}}}}, {"name": "true_label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1192178239.45, "num_examples": 22923}], "download_size": 725579368, "dataset_size": 1192178239.45}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-19T10:17:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ICPR_big_2"
More Information needed | [
"# Dataset Card for \"ICPR_big_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ICPR_big_2\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ICPR_big_2\"\n\nMore Information needed"
]
|
1b03a930b534cfc4a324cf898816a406d5111bef | # Dataset Card for "dataset_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | promptora11/dataset_train | [
"region:us"
]
| 2023-11-16T10:02:07+00:00 | {"dataset_info": {"features": [{"name": "Query", "dtype": "string"}, {"name": "Response", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6750, "num_examples": 32}], "download_size": 6692, "dataset_size": 6750}} | 2023-11-16T10:02:11+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dataset_train"
More Information needed | [
"# Dataset Card for \"dataset_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset_train\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset_train\"\n\nMore Information needed"
]
|
f8690b67c2937844c49818a253f07d7aa59f1371 | # Dataset Card for "dataset_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | promptora11/dataset_val | [
"region:us"
]
| 2023-11-16T10:02:11+00:00 | {"dataset_info": {"features": [{"name": "Query", "dtype": "string"}, {"name": "Response", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1718, "num_examples": 8}], "download_size": 4375, "dataset_size": 1718}} | 2023-11-16T10:02:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dataset_val"
More Information needed | [
"# Dataset Card for \"dataset_val\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset_val\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset_val\"\n\nMore Information needed"
]
|
df410df571e49832f6c39ac6478a9b2442a1b78a |
# Vietnamese-translated version of [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) dataset
# Dataset Card for No Robots 🙅♂️🤖
_Look Ma, an instruction dataset that wasn't generated by GPTs!_
## Dataset Description
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and is comprised mostly of single-turn instructions across the following categories:
| Category | Count |
|:-----------|--------:|
| Generation | 4560 |
| Open QA | 1240 |
| Brainstorm | 1120 |
| Chat | 850 |
| Rewrite | 660 |
| Summarize | 420 |
| Coding | 350 |
| Classify | 350 |
| Closed QA | 260 |
| Extract | 190 |
### Supported Tasks and Leaderboards
The No Robots dataset is designed for instruction fine-tuning of pretrained language models, and we recommend benchmarking against the following:
* [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench): a multi-turn benchmark spanning 80 dialogues and 10 domains.
* [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval): a single-turn benchmark which evaluates the performance of chat and instruct models against `text-davinci-003`.
Note that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the rankings exhibit various biases, including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than those from other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:
* [Chatbot Arena](https://chat.lmsys.org): a live, human evaluation of chat models in head-to-head comparisons.
### Languages
The data in No Robots are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of the `train_sft` or `test_sft` splits looks as follows:
```
{'prompt': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.',
'prompt_id': '2dc7ea89a2b6a2ed97d4eda07903162a801824261d3d3ae4dd2513db66fd79c8',
'messages': [{'content': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.',
'role': 'system'},
{'content': 'When was the Libary of Alexandria burned down?',
'role': 'user'},
{'content': "Umm, I-I think that was in 48 BC, b-but I'm not sure, I'm sorry.",
'role': 'assistant'},
{'content': 'Who is the founder of Coca-Cola?', 'role': 'user'},
{'content': "D-don't quote me on this, but I- it might be John Pemberton.",
'role': 'assistant'},
{'content': "When did Loyle Carner's debut album come out, and what was its name?",
'role': 'user'},
{'content': "I-It could have b-been on the 20th January of 2017, and it might be called Yesterday's Gone, b-but I'm probably wrong.",
'role': 'assistant'}],
'category': 'Chat'}
```
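To make the record structure above concrete, here is a minimal loading sketch (hedged: this repository's own metadata names its splits `train` and `test`, while the inherited English card refers to `train_sft`/`test_sft`; adjust the split name if needed):

```python
from datasets import load_dataset

# Assumes the default config; split names follow this repo's metadata.
dataset = load_dataset("nguyenphuthien/vietnamese_no_robots", split="train")
example = dataset[0]
print(example["category"])            # e.g. 'Chat'
for message in example["messages"]:   # system/user/assistant turns
    print(message["role"], ":", message["content"][:80])
```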
### Data Fields
The data fields are as follows:
* `prompt`: Describes the task the model should perform.
* `prompt_id`: A unique ID for the prompt.
* `messages`: An array of messages, where each message indicates the role (system, user, assistant) and the content.
* `category`: Which category the example belongs to (e.g. `Chat` or `Coding`).
### Data Splits
| | train | test |
|---------------|------:| ---: |
| no_robots | 9500 | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{no_robots,
author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf},
title = {No Robots},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}}
}
``` | nguyenphuthien/vietnamese_no_robots | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:vi",
"license:cc-by-4.0",
"arxiv:2203.02155",
"region:us"
]
| 2023-11-16T10:07:48+00:00 | {"language": ["vi"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["conversational", "text-generation"], "pretty_name": "Vietnamese No Robot", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train_*"}, {"split": "test", "path": "test_*"}]}]} | 2023-11-21T11:20:39+00:00 | [
"2203.02155"
]
| [
"vi"
]
| TAGS
#task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-Vietnamese #license-cc-by-4.0 #arxiv-2203.02155 #region-us
| Vietnamese-translated version of HuggingFaceH4/no\_robots dataset
=================================================================
Dataset Card for No Robots
=============================
*Look Ma, an instruction dataset that wasn't generated by GPTs!*
Dataset Description
-------------------
* Repository: URL
* Paper:
* Leaderboard: URL
* Point of Contact: Lewis Tunstall
### Dataset Summary
No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn instructions across the following categories:
### Supported Tasks and Leaderboards
The No Robots dataset is designed for instruction fine-tuning of pretrained language models, and we recommend benchmarking against the following:
* MT-Bench: a multi-turn benchmark spanning 80 dialogues and 10 domains.
* AlpacaEval: a single-turn benchmark which evaluates the performance of chat and instruct models against 'text-davinci-003'.
Note that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the rankings exhibit various biases, including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than those from other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:
* Chatbot Arena: a live, human evaluation of chat models in head-to-head comparisons.
### Languages
The data in No Robots are in English (BCP-47 en).
Dataset Structure
-----------------
### Data Instances
An example of the 'train\_sft' or 'test\_sft' splits looks as follows:
### Data Fields
The data fields are as follows:
* 'prompt': Describes the task the model should perform.
* 'prompt\_id': A unique ID for the prompt.
* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.
* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0).
| [
"### Dataset Summary\n\n\nNo Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn instructions across the following categories:",
"### Supported Tasks and Leaderboards\n\n\nThe No Robots dataset designed for instruction fine-tuning pretrained language models and we recommend benchmarking against the following:\n\n\n* MT-Bench: a multi-turn benchmark spanning 80 dialogues and 10 domains.\n* AlpacaEval: a single-turn benchmark which evaluates the performance of chat and instruct models against 'text-davinci-003'.\n\n\nNote that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the ranking exhibit various biases including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:\n\n\n* Chatbot Arena: a live, human evaluation of chat models in head-to-head comparisons.",
"### Languages\n\n\nThe data in No Robots are in English (BCP-47 en).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of the 'train\\_sft' or 'test\\_sft' splits looks as follows:",
"### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'prompt': Describes the task the model should perform.\n* 'prompt\\_id': A unique ID for the prompt.\n* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.\n* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
]
| [
"TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-Vietnamese #license-cc-by-4.0 #arxiv-2203.02155 #region-us \n",
"### Dataset Summary\n\n\nNo Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn instructions across the following categories:",
"### Supported Tasks and Leaderboards\n\n\nThe No Robots dataset designed for instruction fine-tuning pretrained language models and we recommend benchmarking against the following:\n\n\n* MT-Bench: a multi-turn benchmark spanning 80 dialogues and 10 domains.\n* AlpacaEval: a single-turn benchmark which evaluates the performance of chat and instruct models against 'text-davinci-003'.\n\n\nNote that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the ranking exhibit various biases including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:\n\n\n* Chatbot Arena: a live, human evaluation of chat models in head-to-head comparisons.",
"### Languages\n\n\nThe data in No Robots are in English (BCP-47 en).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of the 'train\\_sft' or 'test\\_sft' splits looks as follows:",
"### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'prompt': Describes the task the model should perform.\n* 'prompt\\_id': A unique ID for the prompt.\n* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.\n* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
]
| [
63,
97,
216,
26,
33,
107,
11,
7,
4,
10,
10,
5,
5,
9,
18,
7,
8,
14,
6,
26
]
| [
"passage: TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-Vietnamese #license-cc-by-4.0 #arxiv-2203.02155 #region-us \n### Dataset Summary\n\n\nNo Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn instructions across the following categories:### Supported Tasks and Leaderboards\n\n\nThe No Robots dataset designed for instruction fine-tuning pretrained language models and we recommend benchmarking against the following:\n\n\n* MT-Bench: a multi-turn benchmark spanning 80 dialogues and 10 domains.\n* AlpacaEval: a single-turn benchmark which evaluates the performance of chat and instruct models against 'text-davinci-003'.\n\n\nNote that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the ranking exhibit various biases including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:\n\n\n* Chatbot Arena: a live, human evaluation of chat models in head-to-head comparisons.### Languages\n\n\nThe data in No Robots are in English (BCP-47 en).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of the 'train\\_sft' or 'test\\_sft' splits looks as follows:"
]
|
66fd8eb0a127545891af03865d9d714ee3fe8629 | # Dataset Card for "common_voice_13_0_hi_pseudo_labelled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanchit-gandhi/common_voice_13_0_hi_pseudo_labelled | [
"region:us"
]
| 2023-11-16T10:11:18+00:00 | {"dataset_info": {"config_name": "hi", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}, {"name": "variant", "dtype": "string"}, {"name": "whisper_transcript", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 133453462.934, "num_examples": 4479}, {"name": "validation", "num_bytes": 67346656.935, "num_examples": 2281}, {"name": "test", "num_bytes": 102696067.039, "num_examples": 2947}], "download_size": 269383712, "dataset_size": 303496186.908}, "configs": [{"config_name": "hi", "data_files": [{"split": "train", "path": "hi/train-*"}, {"split": "validation", "path": "hi/validation-*"}, {"split": "test", "path": "hi/test-*"}]}]} | 2023-11-16T11:06:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "common_voice_13_0_hi_pseudo_labelled"
More Information needed | [
"# Dataset Card for \"common_voice_13_0_hi_pseudo_labelled\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"common_voice_13_0_hi_pseudo_labelled\"\n\nMore Information needed"
]
| [
6,
27
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"common_voice_13_0_hi_pseudo_labelled\"\n\nMore Information needed"
]
|
e954466ae3bc432e09bcafaecc975e27078261e6 | # Dataset Card for "unsilence_voc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/unsilence_voc | [
"region:us"
]
| 2023-11-16T10:31:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "NE-MAIN", "sequence": {"class_label": {"names": {"0": "B-Organization", "1": "B-Organization,B-Place", "2": "B-Organization,I-Person", "3": "B-Organization,I-Place", "4": "B-Person", "5": "B-Person,B-Place", "6": "B-Person,I-Place", "7": "B-Place", "8": "I-Organization", "9": "I-Organization,B-Place", "10": "I-Organization,I-Person", "11": "I-Organization,I-Person,B-Place", "12": "I-Organization,I-Person,I-Place", "13": "I-Organization,I-Place", "14": "I-Person", "15": "I-Person,B-Place", "16": "I-Person,I-Place", "17": "I-Place", "18": "O"}}}}, {"name": "NE-PER-NAME", "sequence": {"class_label": {"names": {"0": "I-ProperName", "1": "O", "2": "B-ProperName", "3": ""}}}}, {"name": "NE-PER-GENDER", "sequence": {"class_label": {"names": {"0": "B-Group", "1": "B-Man", "2": "B-Man,B-Unspecified", "3": "B-Man,I-Woman", "4": "B-Unspecified", "5": "B-Unspecified,I-Woman", "6": "B-Woman", "7": "I-Group", "8": "I-Man", "9": "I-Man,I-Unspecified", "10": "I-Man,I-Woman", "11": "I-Unspecified", "12": "I-Unspecified,I-Woman", "13": "I-Woman", "14": "NE-PER-GENDER", "15": "O"}}}}, {"name": "NE-PER-LEGAL-STATUS", "sequence": {"class_label": {"names": {"0": "B-Enslaved", "1": "B-Freed", "2": "B-Unspecified", "3": "I-Enslaved", "4": "I-Freed", "5": "I-Unspecified", "6": "NE-PER-LEGAL-STATUS", "7": "O"}}}}, {"name": "NE-PER-ROLE", "sequence": {"class_label": {"names": {"0": "B-Acting_Notary", "1": "B-Beneficiary", "2": "B-Notary", "3": "B-Other", "4": "B-Testator", "5": "B-Testator_Beneficiary", "6": "B-Witness", "7": "I-Acting_Notary", "8": "I-Beneficiary", "9": "I-Beneficiary,B-Other", "10": "I-Beneficiary,I-Other", "11": "I-Notary", "12": "I-Other", "13": "I-Testator", "14": "I-Testator_Beneficiary", "15": "I-Witness", "16": "NE-PER-ROLE", "17": "O"}}}}, {"name": "NE-ORG-BENEFICIARY", "sequence": {"class_label": {"names": {"0": "B-No", "1": "B-Yes", "2": "I-No", "3": "I-Yes", "4": "NE-ORG-BENEFICIARY", "5": "O"}}}}, {"name": "MISC", "dtype": "string"}, {"name": "document_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31436367, "num_examples": 2199}], "download_size": 2148172, "dataset_size": 31436367}} | 2023-11-16T10:31:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "unsilence_voc"
More Information needed | [
"# Dataset Card for \"unsilence_voc\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"unsilence_voc\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"unsilence_voc\"\n\nMore Information needed"
]
|
000d8016277caffa635cbe9ae12ef7beea45dbbe | NOTE: LVLM_NLF and VLSafe are constructed based on COCO and LLaVA, so the images can be retrieved directly from the COCO train-2017 release using the image id.
# LVLM_NLF (Large Vision Language Model with Natural Language Feedback) Dataset Card
## Dataset details
- Dataset type: LVLM_NLF is a GPT-4-annotated natural language feedback dataset that aims to improve the 3H alignment and interaction ability of large vision-language models (LVLMs).
- Dataset date: LVLM_NLF was collected between September and November 2023.
- Paper for this dataset: https://arxiv.org/abs/2311.10081
# VLSafe (Vision-Language Safety) Dataset Card
We also create and release the VLSafe dataset, which contains training and testing sets for improving and examining the harmlessness alignment of LVLMs.
- Dataset type: VLSafe is a GPT-3.5-Turbo-annotated dataset.
- Dataset date: VLSafe was collected between September and October 2023.
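Because images are referenced only by their COCO id, resolving an id to a local file can follow COCO's 12-digit zero-padded filename convention. A minimal sketch (the directory layout and helper name are illustrative assumptions, not part of this dataset's API):

```python
from pathlib import Path

def coco_train2017_path(image_id: int, root: str = "coco/train2017") -> Path:
    """Map a COCO image id to its train2017 file path (12-digit zero-padded name)."""
    return Path(root) / f"{image_id:012d}.jpg"

print(coco_train2017_path(9))  # coco/train2017/000000000009.jpg
```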
| YangyiYY/LVLM_NLF | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"arxiv:2311.10081",
"region:us"
]
| 2023-11-16T10:32:07+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational", "text-generation"], "pretty_name": "LVLM_NLF"} | 2023-11-17T03:04:52+00:00 | [
"2311.10081"
]
| [
"en"
]
| TAGS
#task_categories-conversational #task_categories-text-generation #size_categories-10K<n<100K #language-English #arxiv-2311.10081 #region-us
| NOTE: LVLM_NLF and VLSafe are constructed based on COCO and LLaVA, so the images can be retrieved directly from the COCO train-2017 release using the image id.
LVLM_NLF (Large Vision Language Model with Natural Language Feedback) Dataset Card
Dataset details
Dataset type: LVLM_NLF is a GPT-4-Annotated natural language feedback dataset that aims to improve the 3H alignment and interaction ability of large vision-language models (LVLMs).
Dataset date: LVLM_NLF was collected between September and November 2023.
Paper of this dataset: URL
VLSafe (vision-language safety) Dataset Card
We also create and release VLSafe dataset, which contains training and testing sets for improving and examining the harmlessness alignment of LVLMs.
Dataset type: VLSafe is a GPT-3.5-Turbo-Annotated dataset.
Dataset date: VLSafe was collected between September and October 2023.
| []
| [
"TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-10K<n<100K #language-English #arxiv-2311.10081 #region-us \n"
]
| [
52
]
| [
"passage: TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-10K<n<100K #language-English #arxiv-2311.10081 #region-us \n"
]
|
5830dee88e9b71f2df55e8f3713a8e3c5c214ac7 |
# What is the Dataset About?🤷🏼♂️
---
The dataset is useful for training a generative language model for medical applications and instruction purposes. It consists of various thoughts proposed by people [**mentioned as the Human**] and their responses, including medical terminology: names of drugs, prescriptions, yogic exercise suggestions, breathing exercise suggestions, and a few natural home-made prescriptions.
# How the Dataset was made?😅
---
I have used all the available open-source datasets and combined them into a single data source for training, which is completely open-sourced and somewhat reliable.
* There is another refined and updated version of this dataset here 👉🏼 [Link](https://huggingface.co/datasets/Mohammed-Altaf/medical-instruction-120k)
## Example Training Scripts:
* Qlora Fine Tuning -
## Tips:
This is my first dataset to upload on HuggingFace, so below are the things I wish I had known:
* always save your final dataset as JSON with lines (jsonl) before uploading to the Hub.
* The JSON should use the records orientation, which helps the dataset load properly without errors.
```python
# Use the below if you are using pandas for data manipulation.
# Write each split to its own file so the test set does not overwrite the train set.
train.to_json("train_dataset.json", orient='records', lines=True)
test.to_json("test_dataset.json", orient='records', lines=True)
``` | Mohammed-Altaf/medical-instruction-100k | [
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"medi",
"medical",
"region:us"
]
| 2023-11-16T10:38:20+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "pretty_name": "python", "tags": ["medi", "medical"]} | 2023-11-16T15:46:30+00:00 | []
| [
"en"
]
| TAGS
#size_categories-10K<n<100K #language-English #license-mit #medi #medical #region-us
|
# What is the Dataset About?
---
The dataset is useful for training a generative language model for medical applications and instruction purposes. It consists of various thoughts proposed by people [mentioned as the Human] and their responses, including medical terminology: names of drugs, prescriptions, yogic exercise suggestions, breathing exercise suggestions, and a few natural home-made prescriptions.
# How the Dataset was made?
---
I have used all the available open-source datasets and combined them into a single data source for training, which is completely open-sourced and somewhat reliable.
* There is another refined and updated version of this dataset here Link
## Example Training Scripts:
* Qlora Fine Tuning -
## Tips:
This is my first dataset to upload on HuggingFace, so below are the things I wish I had known:
* always save your final dataset as JSON with lines (jsonl) before uploading to the Hub.
* The JSON should use the records orientation, which helps the dataset load properly without errors.
| [
"# What is the Dataset About?️\n---\nThe dataset is useful for training a Generative Language Model for the Medical application and instruction purposes, the dataset consists of various thoughs proposed by the people [mentioned as the Human ] and there responses including Medical Terminologies not limited to but including names of the drugs, prescriptions, yogic exercise suggessions, breathing exercise suggessions and few natural home made prescriptions.",
"# How the Dataset was made?\n---\nI have used all the available opensource datasets and combined them into a single datsource for training, which is completely opensourced and somewhat reliable. \n\n* There is another refined and updated version of this datset here Link",
"## Example Training Scripts:\n* Qlora Fine Tuning -",
"## Tips:\nThis is my first dataset to upload on HuggingFace, so below are the thing I wish I could have known \n* always save your final dataset before uploading to hub as a json with lines.\n* The json should have the records orientation, which will be helpful while loading the dataset properly without any error."
]
| [
"TAGS\n#size_categories-10K<n<100K #language-English #license-mit #medi #medical #region-us \n",
"# What is the Dataset About?️\n---\nThe dataset is useful for training a Generative Language Model for the Medical application and instruction purposes, the dataset consists of various thoughs proposed by the people [mentioned as the Human ] and there responses including Medical Terminologies not limited to but including names of the drugs, prescriptions, yogic exercise suggessions, breathing exercise suggessions and few natural home made prescriptions.",
"# How the Dataset was made?\n---\nI have used all the available opensource datasets and combined them into a single datsource for training, which is completely opensourced and somewhat reliable. \n\n* There is another refined and updated version of this datset here Link",
"## Example Training Scripts:\n* Qlora Fine Tuning -",
"## Tips:\nThis is my first dataset to upload on HuggingFace, so below are the thing I wish I could have known \n* always save your final dataset before uploading to hub as a json with lines.\n* The json should have the records orientation, which will be helpful while loading the dataset properly without any error."
]
| [
32,
100,
58,
15,
73
]
| [
"passage: TAGS\n#size_categories-10K<n<100K #language-English #license-mit #medi #medical #region-us \n# What is the Dataset About?️\n---\nThe dataset is useful for training a Generative Language Model for the Medical application and instruction purposes, the dataset consists of various thoughs proposed by the people [mentioned as the Human ] and there responses including Medical Terminologies not limited to but including names of the drugs, prescriptions, yogic exercise suggessions, breathing exercise suggessions and few natural home made prescriptions.# How the Dataset was made?\n---\nI have used all the available opensource datasets and combined them into a single datsource for training, which is completely opensourced and somewhat reliable. \n\n* There is another refined and updated version of this datset here Link## Example Training Scripts:\n* Qlora Fine Tuning -## Tips:\nThis is my first dataset to upload on HuggingFace, so below are the thing I wish I could have known \n* always save your final dataset before uploading to hub as a json with lines.\n* The json should have the records orientation, which will be helpful while loading the dataset properly without any error."
]
|
f77caf1f850a9ea4f0dc453f7e2b52f40c37d336 | # Dataset Card for "lcn_bd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shariqfarooq/lcn_bd | [
"region:us"
]
| 2023-11-16T10:40:58+00:00 | {"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "condition", "dtype": "image"}, {"name": "controlnet", "dtype": "image"}, {"name": "ours", "dtype": "image"}, {"name": "idd", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14336782.0, "num_examples": 17}], "download_size": 14350234, "dataset_size": 14336782.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T12:06:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "lcn_bd"
More Information needed | [
"# Dataset Card for \"lcn_bd\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"lcn_bd\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"lcn_bd\"\n\nMore Information needed"
]
|
b2242977694e1e4afae90babda42b107512f903a | # Dataset Card for "lcn_box"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shariqfarooq/lcn_box | [
"region:us"
]
| 2023-11-16T10:43:29+00:00 | {"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "condition", "dtype": "image"}, {"name": "controlnet", "dtype": "image"}, {"name": "ours", "dtype": "image"}, {"name": "idd", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18748870.0, "num_examples": 21}], "download_size": 18762411, "dataset_size": 18748870.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T12:07:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "lcn_box"
More Information needed | [
"# Dataset Card for \"lcn_box\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"lcn_box\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"lcn_box\"\n\nMore Information needed"
]
|
46b4611bfaac7d8bbe3aa551634c74477120adb7 | # Dataset Card for "flan2022-llama-2-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kowndinya23/flan2022-llama-2-512 | [
"region:us"
]
| 2023-11-16T10:49:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9864674355, "num_examples": 13619602}, {"name": "validation", "num_bytes": 99616251, "num_examples": 137574}], "download_size": 6014795481, "dataset_size": 9964290606}} | 2023-11-16T10:53:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "flan2022-llama-2-512"
More Information needed | [
"# Dataset Card for \"flan2022-llama-2-512\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"flan2022-llama-2-512\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"flan2022-llama-2-512\"\n\nMore Information needed"
]
|
67be1fdebea06872d7e4228980e13e3bcdea5a93 | # Dataset Card for "dog-similar-to-tangyuan-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | empbetty/dog-similar-to-tangyuan-dataset | [
"region:us"
]
| 2023-11-16T11:08:53+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2926472.0, "num_examples": 105}], "download_size": 2926339, "dataset_size": 2926472.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T11:08:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dog-similar-to-tangyuan-dataset"
More Information needed | [
"# Dataset Card for \"dog-similar-to-tangyuan-dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dog-similar-to-tangyuan-dataset\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dog-similar-to-tangyuan-dataset\"\n\nMore Information needed"
]
|
5d3ce1ca586ebe629847b7c81bfb7fa6373b9dbc |
- 🤖 We curated this dataset for [**NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day**](https://llm-efficiency-challenge.github.io/). <br>
- 🚀 Our [**Birbal-7B-V1**](https://huggingface.co/upaya07/Birbal-7B-V1) fine-tuned on this dataset achieved 🏆 first rank 🏆 in the competition.
Here is a high-level diagram of our data preparation strategy:

# Natural Instructions Dataset Preparation
[Natural Instructions](https://github.com/allenai/natural-instructions) is a community effort to create a large collection of tasks and their natural language definitions/instructions. As shown in the diagram above, we sample from the Natural Instructions dataset. Here is the 4-step process:
- Out of 1600+ task files, we first manually select ~450 task files relevant to the competition. **We do not use any MMLU or translation tasks.**
- A task output in the Natural Instructions dataset is expected to be either an exact match or an open-ended generation. Hence, we manually annotate each task file as one of two categories: Exact Match or Generation.
- We run few-shot inference on the selected task files. Running few-shot inference helps with controlled generation, so we can compute model performance metrics quantitatively. Refer to Input and Output Schema for Mistral Inference for an example.
  - For Exact Match tasks, we use accuracy as the metric.
  - For Generation tasks, we use Rouge score as the performance metric.
- Sampling logic: We sample ~50k examples from Generation tasks and ~50k examples from Exact Match tasks, for a total of ~100k instances from the Natural Instructions dataset (see the sketch after this list).
  - For Exact Match tasks: the % of examples sampled from a task file depends on the accuracy of that task. In general, we sample more from low-accuracy tasks and less from high-accuracy tasks. In total, ~50k examples are sampled from exact match task files.
  - For Generation tasks: the % of examples sampled from a task file depends on the Rouge score for that task. In general, we sample more from tasks with low Rouge scores. In total, ~50k examples are sampled from generation task files.
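Here is a minimal sketch of that weighting logic, assuming per-task metrics have already been computed (the function name and the 5% floor are illustrative, not the exact production values):
```python
import random

def sample_from_task(examples, metric, budget, floor=0.05):
    """Sample more examples from tasks where the model scores poorly.

    `metric` is accuracy (Exact Match tasks) or Rouge (Generation tasks),
    both in [0, 1]; `budget` caps how many examples one task may contribute.
    """
    weight = max(1.0 - metric, floor)           # low score -> higher weight
    n = min(len(examples), int(budget * weight))
    return random.sample(examples, n)
```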
## Input and Output Schema for Mistral Inference
A record from a task file in the Natural Instructions data is converted into the format below. The `orig_input` field is the actual input without few-shot examples. The `few_shot_prompt` field holds the few-shot prompt that is passed to the Mistral-7B model for prediction. `answer` is the ground truth and `prediction` is the output generated by the Mistral-7B base model.
```
{
"orig_input": "Context: I sold my $90,000.00 Mercedes G500 and bought 3 Prius's, because I got tired of being pulled over by Police. #Adapt @chrisrock\u2014 Isaiah Washington (@IWashington) April 1, 2015 Question: how many prius's did they buy? Answer: three",
"few_shot_prompt": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIn this task, you are given a context tweet, a question and corresponding answer of given question. Your task is to classify this question-answer pair into two categories: (1) \"yes\" if the given answer is right for question, and (2) \"no\" if the given answer is wrong for question.\n\n### Input:\nContext: Our prayers are with the students, educators & families at Independence High School & all the first responders on the scene. #PatriotPride\u2014 Doug Ducey (@dougducey) February 12, 2016 Question: at which school were first responders on the scene for? Answer: arizona high school\n\n### Response:\nno\n\n### Input:\nContext: @williebosshog huge love to you/your family huge respect for your business prosperities and the family values you still all behold. big fan\u2014 Liam Payne (@Real_Liam_Payne) January 18, 2014 Question: what was liam showing towards willy? Answer: huge respect\n\n### Response:\nyes\n\n### Input:\nContext: @williebosshog huge love to you/your family huge respect for your business prosperities and the family values you still all behold. big fan\u2014 Liam Payne (@Real_Liam_Payne) January 18, 2014 Question: what was liam showing towards willy? Answer: jealousy\n\n### Response:\nno\n\n### Input:\nContext: Our prayers are with the students, educators & families at Independence High School & all the first responders on the scene. #PatriotPride\u2014 Doug Ducey (@dougducey) February 12, 2016 Question: at which school were first responders on the scene for? Answer: independence high school\n\n### Response:\nyes\n\n### Input:\nContext: I sold my $90,000.00 Mercedes G500 and bought 3 Prius's, because I got tired of being pulled over by Police. #Adapt @chrisrock\u2014 Isaiah Washington (@IWashington) April 1, 2015 Question: how many prius's did they buy? Answer: three\n\n### Response:\n",
"answer": [
"yes"
],
"prediction": "yes\n\n### Input:\nContext: I sold my $90,000.00 Mercedes G500 and bought 3 Pri"
}
```
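The `few_shot_prompt` above follows an Alpaca-style template; here is a hedged sketch of how such a prompt can be assembled (the helper below is illustrative, not the exact code used in the pipeline):
```python
HEADER = ("Below is an instruction that describes a task, paired with an input "
          "that provides further context. Write a response that appropriately "
          "completes the request.\n\n### Instruction:\n{instruction}\n")

def build_few_shot_prompt(instruction, shots, query):
    """`shots` is a list of (input, response) pairs; `query` is left unanswered."""
    prompt = HEADER.format(instruction=instruction)
    for shot_input, shot_response in shots:
        prompt += f"\n### Input:\n{shot_input}\n\n### Response:\n{shot_response}\n"
    prompt += f"\n### Input:\n{query}\n\n### Response:\n"  # model completes here
    return prompt
```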
**Github Repo**: https://github.com/Upaya07/NeurIPS-llm-efficiency-challenge | upaya07/NeurIPS-LLM-data | [
"license:mit",
"region:us"
]
| 2023-11-16T11:35:13+00:00 | {"license": "mit", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train_dataset.json"}, {"split": "test", "path": "eval_dataset.json"}]}]} | 2023-12-07T06:18:18+00:00 | []
| []
| TAGS
#license-mit #region-us
|
- We curated this dataset for NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day. <br>
- Our Birbal-7B-V1 fine-tuned on this dataset achieved first rank in the competition.
Here is a high-level diagram of our data preparation strategy:
!image/png
# Natural Instructions Dataset Preparation
The Natural Instructions dataset is a community effort to create a large collection of tasks and their natural language definitions/instructions. As shown in the diagram above, we sample from the Natural Instructions dataset. Here is the 4-step process:
- Out of 1600+ task files, we first manually select ~450 task files relevant to the competition. We do not use any MMLU or translation tasks.
- A task output in the Natural Instructions dataset is expected to be either an exact match or an open-ended generation. Hence, we manually annotate each task file as one of two categories: Exact Match or Generation.
- We run few-shot inference on the selected task files. Running few-shot inference helps with controlled generation, so we can compute model performance metrics quantitatively. Refer to Input and Output Schema for Mistral Inference for an example.
  - For Exact Match tasks, we use accuracy as the metric.
  - For Generation tasks, we use Rouge score as the performance metric.
- Sampling logic: We sample ~50k examples from Generation tasks and ~50k examples from Exact Match tasks, for a total of ~100k instances from the Natural Instructions dataset.
  - For Exact Match tasks: the % of examples sampled from a task file depends on the accuracy of that task. In general, we sample more from low-accuracy tasks and less from high-accuracy tasks. In total, ~50k examples are sampled from exact match task files.
  - For Generation tasks: the % of examples sampled from a task file depends on the Rouge score for that task. In general, we sample more from tasks with low Rouge scores. In total, ~50k examples are sampled from generation task files.
## Input and Output Schema for Mistral Inference
A record from a task file in the Natural Instructions data is converted into the format below. The 'orig_input' field is the actual input without few-shot examples. The 'few_shot_prompt' field holds the few-shot prompt that is passed to the Mistral-7B model for prediction. 'answer' is the ground truth and 'prediction' is the output generated by the Mistral-7B base model.
Github Repo: URL | [
"# Natural Instructions Dataset Preparation\nNatural Instructions dataset is a community effort to create a large collection of tasks and their natural language definitions/instructions. As show in above diagram, we sample from Natural Instructions dataset. Here is the 4-step process:\n- Out of 1600+ tasks files, we first manually select ~450 task files relevant to the competition. We do not use any MMLU or translation tasks.\n- A task output in Natural Instructions dataset is expected to be either an exact match or an open ended generation. Hence, we manually annotate each task file as one of two categories: Exact Match or Generation.\n- We run few-shot inference on selected task files. Running few-shot inference helps with controlled generation so we can compute model performance metric quantitatively. Refer to Input and Output Schema for Mistral Inference for an example.\n - For Exact Match, we use accuracy as metric.\n - For Generation task, we use Rouge score as performance metric.\n- Sampling logic: We sample ~50k examples from Generation tasks and ~50k examples from Exact Match tasks. This makes it total ~100k instances from Natural Instructions dataset.\n - For Exact match tasks: % of examples sampled from a task file depend on accuracy of that task. In general, we sample more from low-accuracy tasks and less from high-accuracy tasks. Total ~50k examples are sampled from exact match task files.\n - For Generation tasks: % of examples sampled from a task file depend on Rouge score on that task. In general, we sample more from tasks with low rouge scores. Total ~50k examples are sampled from generation task files.",
"## Input and Output Schema for Mistral Inference\nA record from a task file from Natural Instruction data is converted into below format. 'orig_input' field is actual input without few-shot examples. 'few_shot_prompt' field represents a few-shot example and is passed to Mistral-7B model for prediction. 'answer' is ground truth and 'prediction' is output generated by Mistral-7B base model.\n\n\nGithub Repo: URL"
]
| [
"TAGS\n#license-mit #region-us \n",
"# Natural Instructions Dataset Preparation\nNatural Instructions dataset is a community effort to create a large collection of tasks and their natural language definitions/instructions. As show in above diagram, we sample from Natural Instructions dataset. Here is the 4-step process:\n- Out of 1600+ tasks files, we first manually select ~450 task files relevant to the competition. We do not use any MMLU or translation tasks.\n- A task output in Natural Instructions dataset is expected to be either an exact match or an open ended generation. Hence, we manually annotate each task file as one of two categories: Exact Match or Generation.\n- We run few-shot inference on selected task files. Running few-shot inference helps with controlled generation so we can compute model performance metric quantitatively. Refer to Input and Output Schema for Mistral Inference for an example.\n - For Exact Match, we use accuracy as metric.\n - For Generation task, we use Rouge score as performance metric.\n- Sampling logic: We sample ~50k examples from Generation tasks and ~50k examples from Exact Match tasks. This makes it total ~100k instances from Natural Instructions dataset.\n - For Exact match tasks: % of examples sampled from a task file depend on accuracy of that task. In general, we sample more from low-accuracy tasks and less from high-accuracy tasks. Total ~50k examples are sampled from exact match task files.\n - For Generation tasks: % of examples sampled from a task file depend on Rouge score on that task. In general, we sample more from tasks with low rouge scores. Total ~50k examples are sampled from generation task files.",
"## Input and Output Schema for Mistral Inference\nA record from a task file from Natural Instruction data is converted into below format. 'orig_input' field is actual input without few-shot examples. 'few_shot_prompt' field represents a few-shot example and is passed to Mistral-7B model for prediction. 'answer' is ground truth and 'prediction' is output generated by Mistral-7B base model.\n\n\nGithub Repo: URL"
]
| [
11,
393,
113
]
| [
"passage: TAGS\n#license-mit #region-us \n# Natural Instructions Dataset Preparation\nNatural Instructions dataset is a community effort to create a large collection of tasks and their natural language definitions/instructions. As show in above diagram, we sample from Natural Instructions dataset. Here is the 4-step process:\n- Out of 1600+ tasks files, we first manually select ~450 task files relevant to the competition. We do not use any MMLU or translation tasks.\n- A task output in Natural Instructions dataset is expected to be either an exact match or an open ended generation. Hence, we manually annotate each task file as one of two categories: Exact Match or Generation.\n- We run few-shot inference on selected task files. Running few-shot inference helps with controlled generation so we can compute model performance metric quantitatively. Refer to Input and Output Schema for Mistral Inference for an example.\n - For Exact Match, we use accuracy as metric.\n - For Generation task, we use Rouge score as performance metric.\n- Sampling logic: We sample ~50k examples from Generation tasks and ~50k examples from Exact Match tasks. This makes it total ~100k instances from Natural Instructions dataset.\n - For Exact match tasks: % of examples sampled from a task file depend on accuracy of that task. In general, we sample more from low-accuracy tasks and less from high-accuracy tasks. Total ~50k examples are sampled from exact match task files.\n - For Generation tasks: % of examples sampled from a task file depend on Rouge score on that task. In general, we sample more from tasks with low rouge scores. Total ~50k examples are sampled from generation task files."
]
|
2ec405cf475561c3b6da588b0f7f0e4938436791 |
# What is the Dataset About?🤷🏼♂️
---
The dataset is useful for training a Generative Language Model for medical application and instruction purposes. It consists of various thoughts proposed by people [mentioned as the Human] and their responses, which include medical terminology such as (but not limited to) names of drugs, prescriptions, yogic exercise suggestions, breathing exercise suggestions, and a few natural home-made prescriptions.
# How the Dataset was made?😅
---
I have used all the available open-source datasets and combined them into a single data source for training, which is completely open-sourced and somewhat reliable.
* There is a smaller version of this dataset here 👉🏼 [Link](https://huggingface.co/datasets/Mohammed-Altaf/medical-instruction-100k)
## Example Training Scripts:
* Qlora Fine Tuning - | Mohammed-Altaf/medical-instruction-120k | [
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"medical",
"region:us"
]
| 2023-11-16T11:48:28+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "pretty_name": "python", "tags": ["medical"]} | 2023-11-16T15:48:56+00:00 | []
| [
"en"
]
| TAGS
#size_categories-100K<n<1M #language-English #license-mit #medical #region-us
|
# What is the Dataset About?️
---
The dataset is useful for training a Generative Language Model for medical application and instruction purposes. It consists of various thoughts proposed by people [mentioned as the Human] and their responses, which include medical terminology such as (but not limited to) names of drugs, prescriptions, yogic exercise suggestions, breathing exercise suggestions, and a few natural home-made prescriptions.
# How the Dataset was made?
---
I have used all the available open-source datasets and combined them into a single data source for training, which is completely open-sourced and somewhat reliable.
* There is a smaller version of this dataset here: Link
## Example Training Scripts:
* Qlora Fine Tuning - | [
"# What is the Dataset About?️\n---\nThe dataset is useful for training a Generative Language Model for the Medical application and instruction purposes, the dataset consists of various thoughs proposed by the people [mentioned as the Human ] and there responses including Medical Terminologies not limited to but including names of the drugs, prescriptions, yogic exercise suggessions, breathing exercise suggessions and few natural home made prescriptions.",
"# How the Dataset was made?\n---\nI have used all the available opensource datasets and combined them into a single datsource for training, which is completely opensourced and somewhat reliable. \n\n* There is smaller version of this datset here Link",
"## Example Training Scripts:\n* Qlora Fine Tuning -"
]
| [
"TAGS\n#size_categories-100K<n<1M #language-English #license-mit #medical #region-us \n",
"# What is the Dataset About?️\n---\nThe dataset is useful for training a Generative Language Model for the Medical application and instruction purposes, the dataset consists of various thoughs proposed by the people [mentioned as the Human ] and there responses including Medical Terminologies not limited to but including names of the drugs, prescriptions, yogic exercise suggessions, breathing exercise suggessions and few natural home made prescriptions.",
"# How the Dataset was made?\n---\nI have used all the available opensource datasets and combined them into a single datsource for training, which is completely opensourced and somewhat reliable. \n\n* There is smaller version of this datset here Link",
"## Example Training Scripts:\n* Qlora Fine Tuning -"
]
| [
30,
100,
53,
15
]
| [
"passage: TAGS\n#size_categories-100K<n<1M #language-English #license-mit #medical #region-us \n# What is the Dataset About?️\n---\nThe dataset is useful for training a Generative Language Model for the Medical application and instruction purposes, the dataset consists of various thoughs proposed by the people [mentioned as the Human ] and there responses including Medical Terminologies not limited to but including names of the drugs, prescriptions, yogic exercise suggessions, breathing exercise suggessions and few natural home made prescriptions.# How the Dataset was made?\n---\nI have used all the available opensource datasets and combined them into a single datsource for training, which is completely opensourced and somewhat reliable. \n\n* There is smaller version of this datset here Link## Example Training Scripts:\n* Qlora Fine Tuning -"
]
|
3f99e7c670ca3e53074b49592379b5184ff06eca | # Dataset Card for "sox_speech_tokenizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kuanhuggingface/sox_speech_tokenizer | [
"region:us"
]
| 2023-11-16T11:52:39+00:00 | {"dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "src_speech_tokenizer_0", "sequence": "int64"}, {"name": "src_speech_tokenizer_1", "sequence": "int64"}, {"name": "src_speech_tokenizer_2", "sequence": "int64"}, {"name": "src_speech_tokenizer_3", "sequence": "int64"}, {"name": "src_speech_tokenizer_4", "sequence": "int64"}, {"name": "src_speech_tokenizer_5", "sequence": "int64"}, {"name": "src_speech_tokenizer_6", "sequence": "int64"}, {"name": "src_speech_tokenizer_7", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_0", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_1", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_2", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_3", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_4", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_5", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_6", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_7", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 14383122102, "num_examples": 354780}, {"name": "validation", "num_bytes": 384671837, "num_examples": 10349}, {"name": "test", "num_bytes": 379758635, "num_examples": 9957}], "download_size": 2399426985, "dataset_size": 15147552574}} | 2023-11-16T11:56:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sox_speech_tokenizer"
More Information needed | [
"# Dataset Card for \"sox_speech_tokenizer\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sox_speech_tokenizer\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sox_speech_tokenizer\"\n\nMore Information needed"
]
|
0919e12c62fcb93e638dc25c37c44681f2901fe4 | # Dataset Card for "promptTTS_speech_tokenizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kuanhuggingface/promptTTS_speech_tokenizer | [
"region:us"
]
| 2023-11-16T12:17:51+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "src_speech_tokenizer_0", "sequence": "int64"}, {"name": "src_speech_tokenizer_1", "sequence": "int64"}, {"name": "src_speech_tokenizer_2", "sequence": "int64"}, {"name": "src_speech_tokenizer_3", "sequence": "int64"}, {"name": "src_speech_tokenizer_4", "sequence": "int64"}, {"name": "src_speech_tokenizer_5", "sequence": "int64"}, {"name": "src_speech_tokenizer_6", "sequence": "int64"}, {"name": "src_speech_tokenizer_7", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_0", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_1", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_2", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_3", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_4", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_5", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_6", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_7", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 23208000922, "num_examples": 550000}, {"name": "validation", "num_bytes": 88919854, "num_examples": 2516}, {"name": "test", "num_bytes": 89144020, "num_examples": 2516}], "download_size": 1020457470, "dataset_size": 23386064796}} | 2023-11-16T12:20:59+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "promptTTS_speech_tokenizer"
More Information needed | [
"# Dataset Card for \"promptTTS_speech_tokenizer\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"promptTTS_speech_tokenizer\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"promptTTS_speech_tokenizer\"\n\nMore Information needed"
]
|
e6363411c0168f94a0939af357b00050650e4597 | # Dataset Card for "find_first_sent_train_500_eval_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_first_sent_train_500_eval_20 | [
"region:us"
]
| 2023-11-16T12:18:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1146751, "num_examples": 904}, {"name": "validation", "num_bytes": 22055, "num_examples": 20}], "download_size": 501848, "dataset_size": 1168806}} | 2023-11-16T12:18:49+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_first_sent_train_500_eval_20"
More Information needed | [
"# Dataset Card for \"find_first_sent_train_500_eval_20\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_first_sent_train_500_eval_20\"\n\nMore Information needed"
]
| [
6,
26
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_first_sent_train_500_eval_20\"\n\nMore Information needed"
]
|
7eabf0e01bbea82fd5425e4c4ad186c927267c28 | # Dataset Card for "find_second_sent_train_500_eval_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_second_sent_train_500_eval_20 | [
"region:us"
]
| 2023-11-16T12:18:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1147702, "num_examples": 904}, {"name": "validation", "num_bytes": 22000, "num_examples": 20}], "download_size": 501251, "dataset_size": 1169702}} | 2023-11-16T12:18:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_second_sent_train_500_eval_20"
More Information needed | [
"# Dataset Card for \"find_second_sent_train_500_eval_20\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_second_sent_train_500_eval_20\"\n\nMore Information needed"
]
| [
6,
25
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_second_sent_train_500_eval_20\"\n\nMore Information needed"
]
|
89c6f5f3dd6be9bedcaa8e6dac2f097d58cae927 | # Dataset Card for "find_last_sent_train_500_eval_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_last_sent_train_500_eval_20 | [
"region:us"
]
| 2023-11-16T12:18:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1144818, "num_examples": 904}, {"name": "validation", "num_bytes": 20896, "num_examples": 20}], "download_size": 499234, "dataset_size": 1165714}} | 2023-11-16T12:19:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_last_sent_train_500_eval_20"
More Information needed | [
"# Dataset Card for \"find_last_sent_train_500_eval_20\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_last_sent_train_500_eval_20\"\n\nMore Information needed"
]
| [
6,
25
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_last_sent_train_500_eval_20\"\n\nMore Information needed"
]
|
2b0afac13de4006267441ca0368cd5af57e16d1e | # Dataset Card for "find_first_sent_train_500_eval_20_baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_first_sent_train_500_eval_20_baseline | [
"region:us"
]
| 2023-11-16T12:26:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 750626, "num_examples": 442}, {"name": "validation", "num_bytes": 38037, "num_examples": 20}], "download_size": 0, "dataset_size": 788663}} | 2023-11-16T12:26:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_first_sent_train_500_eval_20_baseline"
More Information needed | [
"# Dataset Card for \"find_first_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_first_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
| [
6,
29
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_first_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
|
9388f85a23b0d9fc76c9beb76385ea552b7edb90 | # Dataset Card for "find_last_sent_train_500_eval_20_baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_last_sent_train_500_eval_20_baseline | [
"region:us"
]
| 2023-11-16T12:26:14+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 748693, "num_examples": 442}, {"name": "validation", "num_bytes": 36878, "num_examples": 20}], "download_size": 0, "dataset_size": 785571}} | 2023-11-16T12:27:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_last_sent_train_500_eval_20_baseline"
More Information needed | [
"# Dataset Card for \"find_last_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_last_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
| [
6,
28
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_last_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
|
1aa209a856dff72df704c71d6bb2cbbeece18c96 | # Dataset Card for "find_second_sent_train_500_eval_20_baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_second_sent_train_500_eval_20_baseline | [
"region:us"
]
| 2023-11-16T12:26:27+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 751577, "num_examples": 442}, {"name": "validation", "num_bytes": 37982, "num_examples": 20}], "download_size": 0, "dataset_size": 789559}} | 2023-11-16T12:27:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_second_sent_train_500_eval_20_baseline"
More Information needed | [
"# Dataset Card for \"find_second_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_second_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
| [
6,
28
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_second_sent_train_500_eval_20_baseline\"\n\nMore Information needed"
]
|
8613e17fc3b1bf8fd435d3754b7871eca943fbb9 | # Dataset Card for "reward-rpio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | alkahestry/reward-rpio | [
"region:us"
]
| 2023-11-16T12:37:33+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2508025, "num_examples": 3146}], "download_size": 1509167, "dataset_size": 2508025}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T12:37:37+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "reward-rpio"
More Information needed | [
"# Dataset Card for \"reward-rpio\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"reward-rpio\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"reward-rpio\"\n\nMore Information needed"
]
|
48320aacfe814482630a5edb1f70084bb4dbd29c | # Dataset Card for "squad_title_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_title_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:09:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 681763, "num_examples": 368}, {"name": "validation", "num_bytes": 84080, "num_examples": 50}], "download_size": 137232, "dataset_size": 765843}} | 2023-11-16T13:09:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_title_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
30
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
49db3be6c4c8e195b1d029a642b87c896657d245 | # Dataset Card for "squad_wrong_title_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_wrong_title_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:09:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 681763, "num_examples": 368}, {"name": "validation", "num_bytes": 84048, "num_examples": 50}], "download_size": 137622, "dataset_size": 765811}} | 2023-11-16T13:09:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_wrong_title_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_wrong_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_wrong_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
33
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_wrong_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
0b7341a70ba310712eb83a6b8a5a99b5e57590e1 | # Dataset Card for "squad_no_title_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_no_title_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:10:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 681763, "num_examples": 368}, {"name": "validation", "num_bytes": 48707, "num_examples": 50}], "download_size": 126979, "dataset_size": 730470}} | 2023-11-16T13:10:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_no_title_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_no_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_no_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
32
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_title_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
e4fa5c6bd42f0797804df1770e5aafc0890506e3 | # Dataset Card for "squad_no_title_strict_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_no_title_strict_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:10:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 548439, "num_examples": 368}, {"name": "validation", "num_bytes": 48707, "num_examples": 50}], "download_size": 104798, "dataset_size": 597146}} | 2023-11-16T13:10:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_no_title_strict_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_no_title_strict_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_no_title_strict_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
34
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_title_strict_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
d60aee889e4f9a399329eaf3d225a7492df485fa | # Dataset Card for "squad_baseline_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_baseline_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:10:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 172536, "num_examples": 159}, {"name": "validation", "num_bytes": 47457, "num_examples": 50}], "download_size": 75697, "dataset_size": 219993}} | 2023-11-16T13:10:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_baseline_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_baseline_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_baseline_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
31
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_baseline_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
53fb8cff75d9c2425e6ab7d32a193b88b6eb3851 |
# midjourney-messages
## Description
This dataset contains the raw messages from Midjourney.
The initial dataset is https://huggingface.co/datasets/vivym/midjourney-messages, but this one has the images attached.
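Since each row carries the full image, streaming lets you inspect the data without downloading everything up front; a minimal sketch (standard `datasets` usage, column names taken from this card's schema):
```python
from datasets import load_dataset

# stream so the image payloads are fetched lazily, row by row
ds = load_dataset("TwoAbove/midjourney-messages", split="train", streaming=True)

for row in ds.take(3):
    print(row["id"], row["width"], "x", row["height"], "-", row["content"][:60])
```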
| TwoAbove/midjourney-messages | [
"license:apache-2.0",
"region:us"
]
| 2023-11-16T13:10:46+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "image_id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "url", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "size", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*"}]}]} | 2023-12-26T00:12:42+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
|
# midjourney-messages
## Description
This dataset contains the raw messages from Midjourney.
The initial dataset is URL, but this one has the images attached.
| [
"# midjourney-messages",
"## Description\n\nThis dataset contains the raw messages from Midjourney. \n\nInitial dataset is URL but this one has the images attached."
]
| [
"TAGS\n#license-apache-2.0 #region-us \n",
"# midjourney-messages",
"## Description\n\nThis dataset contains the raw messages from Midjourney. \n\nInitial dataset is URL but this one has the images attached."
]
| [
14,
7,
30
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n# midjourney-messages## Description\n\nThis dataset contains the raw messages from Midjourney. \n\nInitial dataset is URL but this one has the images attached."
]
|
2f9f6e29ea826c8648cf1aceaa83a7ce2c7896a4 | # Dataset Card for "squad_rare_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_rare_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:10:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 673207, "num_examples": 368}, {"name": "validation", "num_bytes": 82956, "num_examples": 50}], "download_size": 136492, "dataset_size": 756163}} | 2023-11-16T13:11:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_rare_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
30
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
bb12a6ced8fa454052621bf6b364becdf77b8d83 | # Dataset Card for "squad_no_rare_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_no_rare_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:11:10+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 673207, "num_examples": 368}, {"name": "validation", "num_bytes": 48145, "num_examples": 50}], "download_size": 126398, "dataset_size": 721352}} | 2023-11-16T13:11:18+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_no_rare_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_no_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_no_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
32
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
d6f888cdb987d407561be588d2cdcdb89be6d0dd | # Dataset Card for "squad_wrong_rare_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_wrong_rare_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:11:27+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 673207, "num_examples": 368}, {"name": "validation", "num_bytes": 83486, "num_examples": 50}], "download_size": 137041, "dataset_size": 756693}} | 2023-11-16T13:11:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_wrong_rare_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_wrong_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_wrong_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
33
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_wrong_rare_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
053f648bcb5801535be1bececd4501cf57a49fee | # Dataset Card for "squad_no_rare_strict_v4_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_no_rare_strict_v4_train_30_eval_10_recite | [
"region:us"
]
| 2023-11-16T13:11:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 541741, "num_examples": 368}, {"name": "validation", "num_bytes": 48145, "num_examples": 50}], "download_size": 104315, "dataset_size": 589886}} | 2023-11-16T13:11:51+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_no_rare_strict_v4_train_30_eval_10_recite"
More Information needed | [
"# Dataset Card for \"squad_no_rare_strict_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_no_rare_strict_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
| [
6,
34
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_rare_strict_v4_train_30_eval_10_recite\"\n\nMore Information needed"
]
|
e288ada7ef6979337c5e4395704aa7727d54fb3b | FinanceBench is a first-of-its-kind test suite for evaluating the performance of LLMs on open-book financial question answering (QA). This is an open-source sample of 150 annotated examples used in the evaluation and analysis of the models assessed in the FinanceBench paper.
The dataset comprises questions about publicly traded companies, with corresponding answers and evidence strings. The questions in FinanceBench are ecologically valid and cover a diverse set of scenarios. They are intended to be clear-cut and straightforward to answer, serving as a minimum performance standard.
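A minimal sketch of pulling this open-source sample for inspection (the repo id matches this card; the split name is an assumption):
```python
from datasets import load_dataset

bench = load_dataset("PatronusAI/financebench", split="train")  # split name assumed

print(len(bench))        # expect 150 annotated cases
print(bench[0].keys())   # inspect the question / answer / evidence-string fields
```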
We test 16 state-of-the-art model configurations (including GPT-4-Turbo, Llama2 and Claude2, with vector stores and long-context prompts) on a sample of 150 cases from FinanceBench, and manually review their answers (n=2,400). The cases are available open-source.
We find that existing LLMs have clear limitations for financial QA. All models assessed exhibit weaknesses, such as hallucinations, that limit their suitability for use by enterprises.
To evaluate your models on the full dataset, or if you have questions about this work, you can email us at [email protected] | PatronusAI/financebench | [
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-11-16T13:38:35+00:00 | {"license": "cc-by-nc-4.0"} | 2023-11-16T13:48:29+00:00 | []
| []
| TAGS
#license-cc-by-nc-4.0 #region-us
| FinanceBench is a first-of-its-kind test suite for evaluating the performance of LLMs on open book financial question answering (QA). This is an open source sample of 150 annotated examples used in the evaluation and analysis of models assessed in the FinanceBench paper.
The dataset comprises questions about publicly traded companies, with corresponding answers and evidence strings. The questions in FinanceBench are ecologically valid and cover a diverse set of scenarios. They are intended to be clear-cut and straightforward to answer, serving as a minimum performance standard.
We test 16 state of the art model configurations (including GPT-4-Turbo, Llama2 and Claude2, with vector stores and long context prompts) on a sample of 150 cases from FinanceBench, and manually review their answers (n=2,400). The cases are available open-source.
We find that existing LLMs have clear limitations for financial QA. All models assessed exhibit weaknesses, such as hallucinations, that limit their suitability for use by enterprises.
To evaluate your models on the full dataset, or if you have questions about this work, you can email us at contact@URL | []
| [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n"
]
| [
17
]
| [
"passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n"
]
|
6fb2d4aa89f84537225b510fa6e867eed9d79a73 | # Dataset Card for Dataset Name
Self-instruct data pairs for Kazakh language
## Dataset Details
The dataset is translated from the Stanford Alpaca instruction dataset via the Google Translate API.
1. Manually fixed translation errors.
2. Common names and places of Kazakhstan were added.
3. Instructions on Kazakhstan's history and culture were added.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Mussa Aman
- **Language(s) (NLP):** Kazakh
- **License:** MIT
## Uses
This dataset is curated to fine-tune the LLaMA 2 model for the Kazakh language. It aims to enhance the model's understanding and processing of Kazakh, addressing the gap in NLP resources for Kazakh as a low-resource language.
The dataset follows the self-instruct approach: each record commonly contains one "instruction", an "input", and an "output", which is crucial for improving the model's language comprehension and task performance.
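As a sketch of that record shape, the snippet below builds one such pair in Python; every field value is a hypothetical placeholder, and the prompt-assembly convention at the end is an assumption rather than the training recipe used here.

```python
# Illustrative only: the shape of one self-instruct record. The string
# values are hypothetical placeholders, not rows from the dataset.
record = {
    "instruction": "<task description in Kazakh>",
    "input": "<optional context; may be empty>",
    "output": "<reference answer in Kazakh>",
}

# One plausible (assumed) prompt assembly for instruction tuning:
prompt = f"{record['instruction']}\n{record['input']}".strip()
target = record["output"]
print(prompt, "->", target)
```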
## Citation
**BibTeX:**
@misc{aman_2023,
author = {Aman Mussa},
title = {Self-instruct data pairs for Kazakh language},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1}},
}
**APA:**
Aman, M. (2023). Self-instruct data pairs for Kazakh language. Retrieved from https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1
## Dataset Card Contact
Please contact via email: [email protected] | AmanMussa/kazakh-instruction-v2 | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:kk",
"license:mit",
"region:us"
]
| 2023-11-16T13:47:44+00:00 | {"language": ["kk"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-generation"]} | 2023-11-16T14:28:12+00:00 | []
| [
"kk"
]
| TAGS
#task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-Kazakh #license-mit #region-us
| # Dataset Card for Dataset Name
Self-instruct data pairs for Kazakh language
## Dataset Details
The dataset is translated from the Stanford Alpaca instruction dataset via the Google Translate API.
1. Manually fixed translation errors.
2. Common names and places of Kazakhstan were added.
3. Instructions on Kazakhstan's history and culture were added.
### Dataset Description
- Curated by: Mussa Aman
- Language(s) (NLP): Kazakh
- License: MIT
## Uses
This dataset is curated to fine-tune the LLaMA 2 model for the Kazakh language. It aims to enhance the model's understanding and processing of Kazakh, addressing the gap in NLP resources for Kazakh as a low-resource language.
The dataset follows the self-instruct approach: each record commonly contains one "instruction", an "input", and an "output", which is crucial for improving the model's language comprehension and task performance.
BibTeX:
@misc{aman_2023,
author = {Aman Mussa},
title = {Self-instruct data pairs for Kazakh language},
year = {2023},
howpublished = {\url{URL
}
APA:
Aman, M. (2023). Self-instruct data pairs for Kazakh language. Retrieved from URL
## Dataset Card Contact
Please contact via email: a_mussa@URL | [
"# Dataset Card for Dataset Name\n\nSelf-instruct data pairs for Kazakh language",
"## Dataset Details\n\nThe dataset is translated from Standford Alpaca instruction dataset via Google Translations API.\n\n1. Manually fixed the translation error.\n2. Common names and places of Kazakhstan were added.\n3. Intructions of kazakhstan history and cultures were added.",
"### Dataset Description\n\n\n\n\n\n- Curated by: Mussa Aman\n- Language(s) (NLP): Kazakh\n- License: MIT",
"## Uses\n\nThis dataset is curated to fine-tune the LLaMA 2 model for the Kazakh language. It aims to enhance the model's understanding and processing capabilities of Kazakh, addressing a gap in the Low Resource Lanuguages for solving the NLP resources for Kazakh language. \n\nThe dataset includes the self-instruct approach, there is commonly one \"instruction\",\"input\" and \"output\" which is crucial for improving language comprehension and task performance of the model.\n\n\nBibTeX:\n\n@misc{aman_2023,\n author = {Aman Mussa},\n title = {Self-instruct data pairs for Kazakh language},\n year = {2023},\n howpublished = {\\url{URL\n}\n\nAPA:\n\nAman, M. (2023). Self-instruct data pairs for Kazakh language. Retrieved from URL",
"## Dataset Card Contact\n\nPlease contact in email: a_mussa@URL"
]
| [
"TAGS\n#task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-Kazakh #license-mit #region-us \n",
"# Dataset Card for Dataset Name\n\nSelf-instruct data pairs for Kazakh language",
"## Dataset Details\n\nThe dataset is translated from Standford Alpaca instruction dataset via Google Translations API.\n\n1. Manually fixed the translation error.\n2. Common names and places of Kazakhstan were added.\n3. Intructions of kazakhstan history and cultures were added.",
"### Dataset Description\n\n\n\n\n\n- Curated by: Mussa Aman\n- Language(s) (NLP): Kazakh\n- License: MIT",
"## Uses\n\nThis dataset is curated to fine-tune the LLaMA 2 model for the Kazakh language. It aims to enhance the model's understanding and processing capabilities of Kazakh, addressing a gap in the Low Resource Lanuguages for solving the NLP resources for Kazakh language. \n\nThe dataset includes the self-instruct approach, there is commonly one \"instruction\",\"input\" and \"output\" which is crucial for improving language comprehension and task performance of the model.\n\n\nBibTeX:\n\n@misc{aman_2023,\n author = {Aman Mussa},\n title = {Self-instruct data pairs for Kazakh language},\n year = {2023},\n howpublished = {\\url{URL\n}\n\nAPA:\n\nAman, M. (2023). Self-instruct data pairs for Kazakh language. Retrieved from URL",
"## Dataset Card Contact\n\nPlease contact in email: a_mussa@URL"
]
| [
52,
19,
61,
28,
192,
16
]
| [
"passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-Kazakh #license-mit #region-us \n# Dataset Card for Dataset Name\n\nSelf-instruct data pairs for Kazakh language## Dataset Details\n\nThe dataset is translated from Standford Alpaca instruction dataset via Google Translations API.\n\n1. Manually fixed the translation error.\n2. Common names and places of Kazakhstan were added.\n3. Intructions of kazakhstan history and cultures were added.### Dataset Description\n\n\n\n\n\n- Curated by: Mussa Aman\n- Language(s) (NLP): Kazakh\n- License: MIT## Uses\n\nThis dataset is curated to fine-tune the LLaMA 2 model for the Kazakh language. It aims to enhance the model's understanding and processing capabilities of Kazakh, addressing a gap in the Low Resource Lanuguages for solving the NLP resources for Kazakh language. \n\nThe dataset includes the self-instruct approach, there is commonly one \"instruction\",\"input\" and \"output\" which is crucial for improving language comprehension and task performance of the model.\n\n\nBibTeX:\n\n@misc{aman_2023,\n author = {Aman Mussa},\n title = {Self-instruct data pairs for Kazakh language},\n year = {2023},\n howpublished = {\\url{URL\n}\n\nAPA:\n\nAman, M. (2023). Self-instruct data pairs for Kazakh language. Retrieved from URL## Dataset Card Contact\n\nPlease contact in email: a_mussa@URL"
]
|
98fd90dd5a7b4ba6bc88461339d5712b26ec24ae |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | sarinhaferreiraa/Lachina | [
"region:us"
]
| 2023-11-16T14:04:34+00:00 | {} | 2023-11-16T14:05:28+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
d979b0af9274e83cdd234b6a15e6be111c4c1dc8 | # Dataset Card for "yarn-train-tokenized-8k-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | emozilla/yarn-train-tokenized-8k-llama | [
"region:us"
]
| 2023-11-16T14:39:08+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 22643813816, "num_examples": 212602}], "download_size": 6260733414, "dataset_size": 22643813816}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T14:50:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "yarn-train-tokenized-8k-llama"
More Information needed | [
"# Dataset Card for \"yarn-train-tokenized-8k-llama\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"yarn-train-tokenized-8k-llama\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"yarn-train-tokenized-8k-llama\"\n\nMore Information needed"
]
|
441281ea396a905a2d8d35c0da3a30b027fb583f | # Dataset Card for "OpenSchnabeltier-OpenSubset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | DRXD1000/OpenSchnabeltier-OpenSubset | [
"region:us"
]
| 2023-11-16T14:41:42+00:00 | {"dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "instruction_de", "dtype": "string"}, {"name": "output_de", "dtype": "string"}, {"name": "translation_de", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40820446.2471378, "num_examples": 13367}], "download_size": 18981766, "dataset_size": 40820446.2471378}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T14:41:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "OpenSchnabeltier-OpenSubset"
More Information needed | [
"# Dataset Card for \"OpenSchnabeltier-OpenSubset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"OpenSchnabeltier-OpenSubset\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"OpenSchnabeltier-OpenSubset\"\n\nMore Information needed"
]
|
332e2f6f65506a93b45d8e03196e61a5bc901d34 | # Dataset Card for "ultrafeedback-prompts-ultrajudge-gpt35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gabrielmbmb/ultrafeedback-prompts-ultrajudge-gpt35 | [
"region:us"
]
| 2023-11-16T15:11:01+00:00 | {"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "generation_model", "dtype": "string"}, {"name": "generation_prompt", "dtype": "string"}, {"name": "raw_generation_responses", "sequence": "string"}, {"name": "generations", "sequence": "string"}, {"name": "labelling_model", "dtype": "string"}, {"name": "labelling_prompt", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "raw_labelling_response", "dtype": "string"}, {"name": "rating", "sequence": "int64"}, {"name": "areas", "list": [{"name": "Authenticity & Reliability", "struct": [{"name": "rating", "dtype": "string"}, {"name": "rationale", "dtype": "string"}]}, {"name": "Clarity & Transparency", "struct": [{"name": "rating", "dtype": "string"}, {"name": "rationale", "dtype": "string"}]}, {"name": "Compliance with Intent", "struct": [{"name": "rating", "dtype": "string"}, {"name": "rationale", "dtype": "string"}]}, {"name": "Practical Accuracy", "struct": [{"name": "rating", "dtype": "string"}, {"name": "rationale", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 18658217, "num_examples": 1000}], "download_size": 7709122, "dataset_size": 18658217}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T14:31:06+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ultrafeedback-prompts-ultrajudge-gpt35"
More Information needed | [
"# Dataset Card for \"ultrafeedback-prompts-ultrajudge-gpt35\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ultrafeedback-prompts-ultrajudge-gpt35\"\n\nMore Information needed"
]
| [
6,
26
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ultrafeedback-prompts-ultrajudge-gpt35\"\n\nMore Information needed"
]
|
2b5b87f27caf89f610fd0324ce75539430c0ef2d |
# Description
This "image cube" from JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) shows the volume of data returned by the instrument. AVIRIS acquired the data on August 20, 1992 when it was flown on a NASA ER-2 plane at an altitude of 20,000 meters (65,000 feet) over Moffett Field, California, at the southern end of the San Francisco Bay.
The top of the cube is a false-color image made to accentuate the structure in the water and evaporation ponds on the right. Also visible on the top of the cube is the Moffett Field airport.
The sides of the cube are slices showing the edges of the top in all 224 of the AVIRIS spectral channels. The tops of the sides are in the visible part of the spectrum (wavelength of 400 nanometers), and the bottoms are in the infrared (2,500 nanometers). The sides are pseudo-color, ranging from black and blue (low response) to red (high response).
Of particular interest is the small region of high response in the upper right corner of the larger side. This response is in the red part of the visible spectrum (about 700 nanometers), and is due to the presence of 1-centimeter-long (half-inch) red brine shrimp in the evaporation pond.
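For readers who fetch the free product, a sketch of reading the cube with the `spectral` (SPy) package might look as follows; the file name is a placeholder, the ENVI format is an assumption about the download, and the ~10 nm channel spacing used to guess a band index near 700 nm is an approximation, so check the wavelengths in the actual header.

```python
# Sketch, assuming the free download ships as an ENVI image with a .hdr
# header readable by the `spectral` (SPy) package. "moffett_field.hdr" is
# a placeholder path, not the real product file name.
import spectral.io.envi as envi

img = envi.open("moffett_field.hdr")
cube = img.load()          # full array, shape (rows, cols, 224 bands)
print(cube.shape)

# Rough pick of a channel near 700 nm (the brine-shrimp response described
# above), assuming ~10 nm spacing from a 400 nm start: (700 - 400) / 10 = 30.
red_band = cube[:, :, 30]
```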
# Quick look
<figure>
<img src= "assets/avcubesmall.png" alt="Moffett" width="200" />
<figcaption>Moffett field datacube.</figcaption>
</figure>
<figure>
<img src= "extra/f080611t01p00r07_sc01_RGB.jpeg" alt="Moffett" width="200" />
<figcaption>Orthoregistered product RGB visualization.</figcaption>
</figure>
# Credits
Dataset made available for free download by [NASA Jet Propulsion Laboratory](https://aviris.jpl.nasa.gov/data/image_cube.html) of the California Institute of Technology.
Original download link:
https://aviris.jpl.nasa.gov/data/free_data.html
| danaroth/moffett_field | [
"license:unknown",
"region:us"
]
| 2023-11-16T15:13:06+00:00 | {"license": "unknown"} | 2023-11-17T13:33:47+00:00 | []
| []
| TAGS
#license-unknown #region-us
|
# Description
This "image cube" from JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) shows the volume of data returned by the instrument. AVIRIS acquired the data on August 20, 1992 when it was flown on a NASA ER-2 plane at an altitude of 20,000 meters (65,000 feet) over Moffett Field, California, at the southern end of the San Francisco Bay.
The top of the cube is a false-color image made to accentuate the structure in the water and evaporation ponds on the right. Also visible on the top of the cube is the Moffett Field airport.
The sides of the cube are slices showing the edges of the top in all 224 of the AVIRIS spectral channels. The tops of the sides are in the visible part of the spectrum (wavelength of 400 nanometers), and the bottoms are in the infrared (2,500 nanometers). The sides are pseudo-color, ranging from black and blue (low response) to red (high response).
Of particular interest is the small region of high response in the upper right corner of the larger side. This response is in the red part of the visible spectrum (about 700 nanometers), and is due to the presence of 1-centimeter-long (half-inch) red brine shrimp in the evaporation pond.
# Quick look
<figure>
<img src= "assets/URL" alt="Moffett" width="200" />
<figcaption>Moffett field datacube.</figcaption>
</figure>
<figure>
<img src= "extra/f080611t01p00r07_sc01_RGB.jpeg" alt="Moffett" width="200" />
<figcaption>Orthoregistered product RGB visualization.</figcaption>
</figure>
# Credits
Dataset made available for free download by NASA Jet Propulsion Laboratory of the California Institute of Technology.
Original download link:
URL
| [
"# Description\n\nThis \"image cube\" from JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) shows the volume of data returned by the instrument. AVIRIS acquired the data on August 20, 1992 when it was flown on a NASA ER-2 plane at an altitude of 20,000 meters (65,000 feet) over Moffett Field, California, at the southern end of the San Francisco Bay.\nThe top of the cube is a false-color image made to accentuate the structure in the water and evaporation ponds on the right. Also visible on the top of the cube is the Moffett Field airport.\nThe sides of the cube are slices showing the edges of the top in all 224 of the AVIRIS spectral channels. The tops of the sides are in the visible part of the spectrum (wavelength of 400 nanometers), and the bottoms are in the infrared (2,500 nanometers). The sides are pseudo-color, ranging from black and blue (low response) to red (high response).\nOf particular interest is the small region of high response in the upper right corner of the larger side. This response is in the red part of the visible spectrum (about 700 nanometers), and is due to the presence of 1-centimeter-long (half-inch) red brine shrimp in the evaporation pond.",
"# Quick look\n\n<figure>\n <img src= \"assets/URL\" alt=\"Moffett\" width=\"200\" />\n <figcaption>Moffett field datacube.</figcaption>\n</figure>\n\n<figure>\n <img src= \"extra/f080611t01p00r07_sc01_RGB.jpeg\" alt=\"Moffett\" width=\"200\" />\n <figcaption>Orthoregistered product RGB visualization.</figcaption>\n</figure>",
"# Credits\n\nDataset made available for free download by NASA Jet Propulsion Laboratory of the California Instityte of Technology.\nOriginal download link:\nURL"
]
| [
"TAGS\n#license-unknown #region-us \n",
"# Description\n\nThis \"image cube\" from JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) shows the volume of data returned by the instrument. AVIRIS acquired the data on August 20, 1992 when it was flown on a NASA ER-2 plane at an altitude of 20,000 meters (65,000 feet) over Moffett Field, California, at the southern end of the San Francisco Bay.\nThe top of the cube is a false-color image made to accentuate the structure in the water and evaporation ponds on the right. Also visible on the top of the cube is the Moffett Field airport.\nThe sides of the cube are slices showing the edges of the top in all 224 of the AVIRIS spectral channels. The tops of the sides are in the visible part of the spectrum (wavelength of 400 nanometers), and the bottoms are in the infrared (2,500 nanometers). The sides are pseudo-color, ranging from black and blue (low response) to red (high response).\nOf particular interest is the small region of high response in the upper right corner of the larger side. This response is in the red part of the visible spectrum (about 700 nanometers), and is due to the presence of 1-centimeter-long (half-inch) red brine shrimp in the evaporation pond.",
"# Quick look\n\n<figure>\n <img src= \"assets/URL\" alt=\"Moffett\" width=\"200\" />\n <figcaption>Moffett field datacube.</figcaption>\n</figure>\n\n<figure>\n <img src= \"extra/f080611t01p00r07_sc01_RGB.jpeg\" alt=\"Moffett\" width=\"200\" />\n <figcaption>Orthoregistered product RGB visualization.</figcaption>\n</figure>",
"# Credits\n\nDataset made available for free download by NASA Jet Propulsion Laboratory of the California Instityte of Technology.\nOriginal download link:\nURL"
]
| [
13,
312,
122,
34
]
| [
"passage: TAGS\n#license-unknown #region-us \n# Description\n\nThis \"image cube\" from JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) shows the volume of data returned by the instrument. AVIRIS acquired the data on August 20, 1992 when it was flown on a NASA ER-2 plane at an altitude of 20,000 meters (65,000 feet) over Moffett Field, California, at the southern end of the San Francisco Bay.\nThe top of the cube is a false-color image made to accentuate the structure in the water and evaporation ponds on the right. Also visible on the top of the cube is the Moffett Field airport.\nThe sides of the cube are slices showing the edges of the top in all 224 of the AVIRIS spectral channels. The tops of the sides are in the visible part of the spectrum (wavelength of 400 nanometers), and the bottoms are in the infrared (2,500 nanometers). The sides are pseudo-color, ranging from black and blue (low response) to red (high response).\nOf particular interest is the small region of high response in the upper right corner of the larger side. This response is in the red part of the visible spectrum (about 700 nanometers), and is due to the presence of 1-centimeter-long (half-inch) red brine shrimp in the evaporation pond.# Quick look\n\n<figure>\n <img src= \"assets/URL\" alt=\"Moffett\" width=\"200\" />\n <figcaption>Moffett field datacube.</figcaption>\n</figure>\n\n<figure>\n <img src= \"extra/f080611t01p00r07_sc01_RGB.jpeg\" alt=\"Moffett\" width=\"200\" />\n <figcaption>Orthoregistered product RGB visualization.</figcaption>\n</figure># Credits\n\nDataset made available for free download by NASA Jet Propulsion Laboratory of the California Instityte of Technology.\nOriginal download link:\nURL"
]
|
86bf067ea569a887d8df61914ab9ccf35cc39105 | # Dataset Card for "gpt-generated-news-paragraphs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joshuapsa/gpt-generated-news-paragraphs-v1.0 | [
"region:us"
]
| 2023-11-16T15:27:39+00:00 | {"dataset_info": {"features": [{"name": "class_index", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "text", "dtype": "string"}, {"name": "aviation", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "cybersecurity", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "domestic_unrest_violence", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "extreme_weather", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "forced_labor", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "general_biz_trend", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "individual_accidents_tragedies", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "later_report", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "lawsuit_legal_insurance", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "leisure_other_news", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "maritime", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "pandemics_large_scale_diseases", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "railway", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "strike", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "trade_war_embargos_bans", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "transportation_trends_projects", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "war_conflict", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "warehouse_fire", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 303623, "num_examples": 540}, {"name": "valid", "num_bytes": 101197, "num_examples": 180}, {"name": "test", "num_bytes": 100901, "num_examples": 180}], "download_size": 177940, "dataset_size": 505721}} | 2023-11-16T15:27:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gpt-generated-news-paragraphs"
More Information needed | [
"# Dataset Card for \"gpt-generated-news-paragraphs\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gpt-generated-news-paragraphs\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gpt-generated-news-paragraphs\"\n\nMore Information needed"
]
|
de0bef8f634e65f14ea97b93d4adeb842b1163f3 |
**JSON file of 6,941 sentences of historical biographies, annotated with "PER" (Person), "ORG" (Organisation), "LOC" (Location).**
# source
The original data was extracted from the [Austrian Biographical Lexicon (ÖBL)](https://www.oeaw.ac.at/acdh/oebl) in the context of the [Austrian Prosopographical Information System (APIS) project](https://www.oeaw.ac.at/acdh/projects/completed-projects/apis).
From there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.
The texts concern numerous smaller biographies in the time period between the 19th and early 20th centuries within historical Austria-Hungary, and were produced by the [Austrian Academy of Sciences](https://www.oeaw.ac.at/en) between 1957 and 2023.
The language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in a related dataset: https://huggingface.co/datasets/SteffRhes/APIS_OEBL__abbreviations).
# structure
**json structure**
The json contains a list of texts with key `text_raw` and the indices and types of their contained entities with key `entities`.
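A minimal sketch of that shape is shown below; the inner key names (`start`, `end`, `type`) as well as the sentence and offsets are assumptions invented for illustration, since the card only specifies `text_raw` and `entities`.

```python
# Hypothetical record following the documented structure: the raw sentence
# under "text_raw" plus entity spans under "entities". The inner key names
# ("start", "end", "type") and the sentence/offsets are assumptions made
# for illustration only.
record = {
    "text_raw": "Mozart wirkte in Wien.",
    "entities": [
        {"start": 0, "end": 6, "type": "PER"},
        {"start": 17, "end": 21, "type": "LOC"},
    ],
}

for ent in record["entities"]:
    surface = record["text_raw"][ent["start"]:ent["end"]]
    print(surface, ent["type"])  # -> Mozart PER / Wien LOC
```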
**Randomized Sentences**
The original data set was split into sentences and randomized samples were annotated.
**no train, dev, eval split**
We decided against pre-splitting the data into these sets, as the required split sizes may differ across NLP training setups.
**no token list**
We decided against pre-tokenizing the data, as this would embed NLP logic (which tokenizer with what rule?) into the data itself. | SteffRhes/APIS_OEBL__Named_Entity_Recognition | [
"task_categories:token-classification",
"language:de",
"license:mit",
"region:us"
]
| 2023-11-16T15:29:12+00:00 | {"language": ["de"], "license": "mit", "task_categories": ["token-classification"], "pretty_name": "APIS \u00d6BL Named Entity Recognition"} | 2023-12-01T16:26:05+00:00 | []
| [
"de"
]
| TAGS
#task_categories-token-classification #language-German #license-mit #region-us
|
JSON file of 6,941 sentences of historical biographies, annotated with "PER" (Person), "ORG" (Organisation), "LOC" (Location).
# source
The original data was extracted from the Austrian Biographical Lexicon (ÖBL) in the context of the Austrian Prosopographical Information System (APIS) project.
From there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.
The texts concern numerous smaller biographies in the time period between the 19th and early 20th centuries within historical Austria-Hungary, and were produced by the Austrian Academy of Sciences between 1957 and 2023.
The language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in a related dataset: URL
# structure
json structure
The json contains a list of texts with key 'text_raw' and the indices and types of their contained entities with key 'entities'.
Randomized Sentences
The original data set was split into sentences and randomized samples were annotated.
no train, dev, eval split
We decided against pre-splitting the data into these sets, as the required split sizes may differ across NLP training setups.
no token list
We decided against pre-tokenizing the data, as this would embed NLP logic (which tokenizer with what rule?) into the data itself. | [
"# source\n\nThe original data was extracted from the Austrian Biographical Lexicon (ÖBL) in the context of the Austrian Prosopographical Information System (APIS) project.\n\nFrom there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.\n\nThe texts concern numerous smaller biographies in the time period between 19th and early 20th century within historical Austria-Hungary, and were produced by the Austrian Acadamey of Sciences between 1957 and 2023.\n\nThe language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in a related dataset: URL",
"# structure\n\njson structure\n\nThe json contains a list of texts with key 'text_raw' and the indices and types of their contained entities with key 'entities'.\n\nRandomized Sentences\n\nThe original data set was split into sentences and randomized samples were annotated.\n\nno train, dev, eval split\n\nWe decided against pre-splitting the data into these sets, as their quantities might differ between requirements of various NLP training setups.\n\nno token list\n\nWe decided against pre-tokenizing the data, as this would embed NLP logic (which tokenizer with what rule?) into the data itself."
]
| [
"TAGS\n#task_categories-token-classification #language-German #license-mit #region-us \n",
"# source\n\nThe original data was extracted from the Austrian Biographical Lexicon (ÖBL) in the context of the Austrian Prosopographical Information System (APIS) project.\n\nFrom there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.\n\nThe texts concern numerous smaller biographies in the time period between 19th and early 20th century within historical Austria-Hungary, and were produced by the Austrian Acadamey of Sciences between 1957 and 2023.\n\nThe language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in a related dataset: URL",
"# structure\n\njson structure\n\nThe json contains a list of texts with key 'text_raw' and the indices and types of their contained entities with key 'entities'.\n\nRandomized Sentences\n\nThe original data set was split into sentences and randomized samples were annotated.\n\nno train, dev, eval split\n\nWe decided against pre-splitting the data into these sets, as their quantities might differ between requirements of various NLP training setups.\n\nno token list\n\nWe decided against pre-tokenizing the data, as this would embed NLP logic (which tokenizer with what rule?) into the data itself."
]
| [
27,
154,
142
]
| [
"passage: TAGS\n#task_categories-token-classification #language-German #license-mit #region-us \n# source\n\nThe original data was extracted from the Austrian Biographical Lexicon (ÖBL) in the context of the Austrian Prosopographical Information System (APIS) project.\n\nFrom there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.\n\nThe texts concern numerous smaller biographies in the time period between 19th and early 20th century within historical Austria-Hungary, and were produced by the Austrian Acadamey of Sciences between 1957 and 2023.\n\nThe language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in a related dataset: URL# structure\n\njson structure\n\nThe json contains a list of texts with key 'text_raw' and the indices and types of their contained entities with key 'entities'.\n\nRandomized Sentences\n\nThe original data set was split into sentences and randomized samples were annotated.\n\nno train, dev, eval split\n\nWe decided against pre-splitting the data into these sets, as their quantities might differ between requirements of various NLP training setups.\n\nno token list\n\nWe decided against pre-tokenizing the data, as this would embed NLP logic (which tokenizer with what rule?) into the data itself."
]
|
89b853afb621e8577c6aa60cfdc90389507443eb | # Dataset Card for "lsc_binaryclassification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tomashs/lsc_binaryclassification | [
"region:us"
]
| 2023-11-16T15:35:53+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "short_form", "dtype": "string"}, {"name": "long_form", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 123283916, "num_examples": 400268}], "download_size": 19911631, "dataset_size": 123283916}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T15:36:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "lsc_binaryclassification"
More Information needed | [
"# Dataset Card for \"lsc_binaryclassification\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"lsc_binaryclassification\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"lsc_binaryclassification\"\n\nMore Information needed"
]
|
9e7338fc3b74774e86ed6cdd36b0d4c3f052a5f8 | # Dataset Card for "cnn_dailymail_2048"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | xihajun/cnn_dailymail_2048 | [
"region:us"
]
| 2023-11-16T15:42:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "article", "dtype": "string"}, {"name": "highlights", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 174103723.12613848, "num_examples": 39619}, {"name": "test", "num_bytes": 8247087.844734551, "num_examples": 1898}, {"name": "validation", "num_bytes": 9868234.696289647, "num_examples": 2285}], "download_size": 51215856, "dataset_size": 192219045.6671627}} | 2023-11-16T15:42:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cnn_dailymail_2048"
More Information needed | [
"# Dataset Card for \"cnn_dailymail_2048\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cnn_dailymail_2048\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cnn_dailymail_2048\"\n\nMore Information needed"
]
|
b1a1747a38ae5ecdce5a2de10756102811e0bfad | # Dataset Card for "FineTuneDataset512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/FineTuneDataset512 | [
"region:us"
]
| 2023-11-16T15:48:41+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "sequence", "dtype": "string"}, {"name": "label", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7654743, "num_examples": 10096}, {"name": "validation", "num_bytes": 848492, "num_examples": 1122}], "download_size": 4218885, "dataset_size": 8503235}} | 2023-11-16T15:48:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "FineTuneDataset512"
More Information needed | [
"# Dataset Card for \"FineTuneDataset512\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"FineTuneDataset512\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"FineTuneDataset512\"\n\nMore Information needed"
]
|
57365182e073cea0dc68046f7758e65409fc49a7 | # Dataset Card for "librispeech_asr_dummy_extract_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/librispeech_asr_dummy_extract_unit | [
"region:us"
]
| 2023-11-16T15:49:41+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 771752, "num_examples": 73}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 771752, "num_examples": 73}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 1156456, "num_examples": 73}, {"name": "audiodec_24k_320d", "num_bytes": 2468728, "num_examples": 73}, {"name": "dac_16k", "num_bytes": 4813128, "num_examples": 73}, {"name": "dac_24k", "num_bytes": 13650008, "num_examples": 73}, {"name": "dac_44k", "num_bytes": 4047900, "num_examples": 73}, {"name": "encodec_24k", "num_bytes": 580208, "num_examples": 73}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 6180696, "num_examples": 73}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 6180696, "num_examples": 73}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 6179160, "num_examples": 73}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 3100504, "num_examples": 73}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 6179160, "num_examples": 73}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 6179160, "num_examples": 73}, {"name": "speech_tokenizer_16k", "num_bytes": 1546104, "num_examples": 73}], "download_size": 10104884, "dataset_size": 63805412}} | 2023-11-16T15:50:18+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "librispeech_asr_dummy_extract_unit"
More Information needed | [
"# Dataset Card for \"librispeech_asr_dummy_extract_unit\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"librispeech_asr_dummy_extract_unit\"\n\nMore Information needed"
]
| [
6,
25
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"librispeech_asr_dummy_extract_unit\"\n\nMore Information needed"
]
|
7cff085aabb3ab212011537e16f552a120c31d36 | # Dataset Card for "tubogas-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rjaiswal/tubogas-dataset | [
"region:us"
]
| 2023-11-16T15:57:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4389802.0, "num_examples": 42}], "download_size": 2186125, "dataset_size": 4389802.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-20T09:42:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tubogas-dataset"
More Information needed | [
"# Dataset Card for \"tubogas-dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tubogas-dataset\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tubogas-dataset\"\n\nMore Information needed"
]
|
278c4255a3e1d71f6d21fe44c4e7ca41a7b19d82 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | GustaFrin/DJImavic | [
"region:us"
]
| 2023-11-16T16:19:32+00:00 | {} | 2023-11-16T16:34:53+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
327f86f77d9b5d224c78388ea25baf5e34f82571 | # Dataset Card for "Calc-X_instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MU-NLPC/Calc-X_style-instructions | [
"region:us"
]
| 2023-11-16T16:23:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "gsm8k", "path": "data/gsm8k-*"}, {"split": "ape210k", "path": "data/ape210k-*"}, {"split": "aqua_rat", "path": "data/aqua_rat-*"}, {"split": "math_qa", "path": "data/math_qa-*"}, {"split": "svamp", "path": "data/svamp-*"}, {"split": "asdiv_a", "path": "data/asdiv_a-*"}, {"split": "mawps", "path": "data/mawps-*"}]}], "dataset_info": {"features": [{"name": "template", "dtype": "string"}, {"name": "weight", "dtype": "float64"}], "splits": [{"name": "gsm8k", "num_bytes": 1171, "num_examples": 11}, {"name": "ape210k", "num_bytes": 551, "num_examples": 5}, {"name": "aqua_rat", "num_bytes": 769, "num_examples": 5}, {"name": "math_qa", "num_bytes": 765, "num_examples": 5}, {"name": "svamp", "num_bytes": 551, "num_examples": 5}, {"name": "asdiv_a", "num_bytes": 551, "num_examples": 5}, {"name": "mawps", "num_bytes": 551, "num_examples": 5}], "download_size": 16932, "dataset_size": 4909}} | 2023-11-17T15:22:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Calc-X_instructions"
More Information needed | [
"# Dataset Card for \"Calc-X_instructions\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Calc-X_instructions\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Calc-X_instructions\"\n\nMore Information needed"
]
|
99fcf3e4d8d98d8841e85c3c88962ba9c267ec04 | # Dataset Card for HellaSwag_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [hellaswag](https://huggingface.co/datasets/hellaswag), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) was used to score the quality of each Thai translation.
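The scoring step can be approximated with any multilingual sentence encoder. A minimal sketch, assuming the `sentence-transformers` library and an illustrative model name (not necessarily the exact encoder used to build this dataset):

```python
# Sketch only: score an EN-TH pair with a multilingual sentence encoder.
# The model name below is an assumption for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("distiluse-base-multilingual-cased-v2")

en = "A man is playing the guitar on stage."
th = "ผู้ชายกำลังเล่นกีตาร์บนเวที"

# Embed both sentences in the shared multilingual space and compare.
emb = model.encode([en, th], convert_to_tensor=True)
print(f"translation similarity: {util.cos_sim(emb[0], emb[1]).item():.3f}")
```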
### Languages
- EN
- TH
| Patt/HellaSwag_thai | [
"language:th",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1907.04307",
"region:us"
]
| 2023-11-16T16:29:16+00:00 | {"language": ["th", "en"], "license": "cc-by-sa-4.0"} | 2024-01-15T17:41:05+00:00 | [
"1907.04307"
]
| [
"th",
"en"
]
| TAGS
#language-Thai #language-English #license-cc-by-sa-4.0 #arxiv-1907.04307 #region-us
| # Dataset Card for HellaSwag_TH_drop
### Dataset Description
This dataset is a Thai-translated version of hellaswag, produced with Google Translate; the Multilingual Universal Sentence Encoder was used to score the quality of each Thai translation.
### Languages
- EN
- TH
| [
"# Dataset Card for HellaSwag_TH_drop",
"### Dataset Description\n\nThis dataset is Thai translated version of hellaswag using google translate with Multilingual Universal Sentence Encoder to calculate score for Thai translation.",
"### Languages\n- EN\n- TH"
]
| [
"TAGS\n#language-Thai #language-English #license-cc-by-sa-4.0 #arxiv-1907.04307 #region-us \n",
"# Dataset Card for HellaSwag_TH_drop",
"### Dataset Description\n\nThis dataset is Thai translated version of hellaswag using google translate with Multilingual Universal Sentence Encoder to calculate score for Thai translation.",
"### Languages\n- EN\n- TH"
]
| [
35,
13,
40,
8
]
| [
"passage: TAGS\n#language-Thai #language-English #license-cc-by-sa-4.0 #arxiv-1907.04307 #region-us \n# Dataset Card for HellaSwag_TH_drop### Dataset Description\n\nThis dataset is Thai translated version of hellaswag using google translate with Multilingual Universal Sentence Encoder to calculate score for Thai translation.### Languages\n- EN\n- TH"
]
|
c6b7e234edf74fa8e955aa785735df1512e82dac | # Dataset Card for "Wino_Bias"
Winograd-schema dataset for detecting gender bias (WinoBias)
More info can be found [here](https://uclanlp.github.io/corefBias/overview)
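A minimal, assumed loading example (using the `datasets` library and the column names listed in this card's metadata):

```python
# Sketch: load WinoBias and tally examples by gender and polarity.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("Elfsong/Wino_Bias", split="train")
counts = Counter((ex["gender"], ex["polarity"]) for ex in ds)
for (gender, polarity), n in sorted(counts.items()):
    print(f"{gender:>8} / {polarity:<12} {n}")
```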
| Elfsong/Wino_Bias | [
"region:us"
]
| 2023-11-16T17:37:05+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "reference", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "polarity", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 335127, "num_examples": 1584}, {"name": "test", "num_bytes": 346559, "num_examples": 1584}], "download_size": 217833, "dataset_size": 681686}} | 2023-11-19T07:36:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Wino_Bias"
Winograd-schema dataset for detecting gender bias (WinoBias)
More info can be found here
| [
"# Dataset Card for \"Wino_Bias\"\n\nWinograd-schema dataset for detecting gender bias (WinoBias)\n\nMore info can be found here"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Wino_Bias\"\n\nWinograd-schema dataset for detecting gender bias (WinoBias)\n\nMore info can be found here"
]
| [
6,
38
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Wino_Bias\"\n\nWinograd-schema dataset for detecting gender bias (WinoBias)\n\nMore info can be found here"
]
|
3b3d086dc73835ce3fdeceeb26113cbba4a7f032 | # Dataset Card for "fonts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | iamkaikai/fonts | [
"region:us"
]
| 2023-11-16T17:50:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75777720.32, "num_examples": 5016}], "download_size": 4942032, "dataset_size": 75777720.32}} | 2023-11-16T17:50:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fonts"
More Information needed | [
"# Dataset Card for \"fonts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fonts\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fonts\"\n\nMore Information needed"
]
|
9726a05bc2059deddc591a4b31eb46a41eb5cceb | # Dataset Card for "librispeech_asr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/librispeech_asr | [
"region:us"
]
| 2023-11-16T17:53:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 585566013, "num_examples": 28539}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 585566013, "num_examples": 28539}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 875207613, "num_examples": 28539}, {"name": "audiodec_24k_320d", "num_bytes": 1861784589, "num_examples": 28539}, {"name": "dac_16k", "num_bytes": 3591614845, "num_examples": 28539}, {"name": "dac_24k", "num_bytes": 10062423533, "num_examples": 28539}, {"name": "dac_44k", "num_bytes": 2982824761, "num_examples": 28539}, {"name": "encodec_24k", "num_bytes": 441025925, "num_examples": 28539}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 4649508077, "num_examples": 28539}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 4649508077, "num_examples": 28539}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 4647663597, "num_examples": 28539}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 2330511341, "num_examples": 28539}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 4647663597, "num_examples": 28539}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 4647663597, "num_examples": 28539}, {"name": "speech_tokenizer_16k", "num_bytes": 1166450829, "num_examples": 28539}], "download_size": 7544903765, "dataset_size": 47724982407}} | 2023-11-16T18:23:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "librispeech_asr"
More Information needed | [
"# Dataset Card for \"librispeech_asr\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"librispeech_asr\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"librispeech_asr\"\n\nMore Information needed"
]
|
f3423d562d0b9bacad83d83714852173b5d39edc |
This dataset is a mix of multiple instruct datasets found on Hugging Face, combined with several other self-made datasets for tasks such as RAG-focused question answering, summarization, keyword generation, and others.
Most of the original data was in English. I have translated most of it to Brazilian Portuguese. A “LANGUAGE” column indicates whether each row is PT or EN. It is possible that the translation contains errors.
For RAG, summarization and keyword generation tasks, the instruct layout looks like this:
`Context:\n{YourRetrievedContext}\nBased on the context, answer: “{YourQuestion}”.`
`Context:\n{YourRetrievedContext}\nBased on the context, write a summary.`
`Context:\n{YourRetrievedContext}\nBased on the context, what are the keywords?.`
or, in Portuguese:
`Contexto:\n{SeuContextoBuscado}\nBaseado no contexto, responda: “{SuaPergunta}”.`
`Contexto:\n{SeuContextoBuscado}\nBaseado no contexto, escreva um resumo.`
`Contexto:\n{SeuContextoBuscado}\nBaseado no contexto, quais são as palavras-chave?.`
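Since these layouts are plain string templates, assembling an instruction is straightforward. A minimal sketch (the helper name is illustrative, not part of the dataset):

```python
# Sketch: build the RAG-style question-answering instruction shown above.
def build_rag_prompt(context: str, question: str, lang: str = "EN") -> str:
    if lang.upper() == "PT":
        return f"Contexto:\n{context}\nBaseado no contexto, responda: “{question}”."
    return f"Context:\n{context}\nBased on the context, answer: “{question}”."

print(build_rag_prompt("The Amazon river is in South America.",
                       "Where is the Amazon river?"))
```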
The total row count for the dataset is 11165249.
Row count for Portuguese instructions: 5926086
Row count for English instructions: 5239163 | cnmoro/Instruct-PTBR-ENUS-11M | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:en",
"language:pt",
"license:llama2",
"region:us"
]
| 2023-11-16T17:54:56+00:00 | {"language": ["en", "pt"], "license": "llama2", "size_categories": ["10M<n<100M"], "task_categories": ["question-answering", "summarization", "text-generation", "text2text-generation"]} | 2023-11-16T18:36:06+00:00 | []
| [
"en",
"pt"
]
| TAGS
#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-English #language-Portuguese #license-llama2 #region-us
|
This dataset is a mix of multiple instruct datasets found on Hugging Face, combined with several other self-made datasets for tasks such as RAG-focused question answering, summarization, keyword generation, and others.
Most of the original data was in English. I have translated most of it to Brazilian Portuguese. A “LANGUAGE” column indicates whether each row is PT or EN. It is possible that the translation contains errors.
For RAG, summarization and keyword generation tasks, the instruct layout looks like this:
'Context:\n{YourRetrievedContext}\nBased on the context, answer: “{YourQuestion}”.'
'Context:\n{YourRetrievedContext}\nBased on the context, write a summary.'
'Context:\n{YourRetrievedContext}\nBased on the context, what are the keywords?.'
or, in Portuguese:
'Contexto:\n{SeuContextoBuscado}\nBaseado no contexto, responda: “{SuaPergunta}”.'
'Contexto:\n{SeuContextoBuscado}\nBaseado no contexto, escreva um resumo.'
'Contexto:\n{SeuContextoBuscado}\nBaseado no contexto, quais são as palavras-chave?.'
The total row count for the dataset is 11165249.
Row count for Portuguese instructions: 5926086
Row count for English instructions: 5239163 | []
| [
"TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-English #language-Portuguese #license-llama2 #region-us \n"
]
| [
81
]
| [
"passage: TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-English #language-Portuguese #license-llama2 #region-us \n"
]
|
ffe3c6d162702304275e0ac73106a4616db4b8e6 |
# BIOSCAN_1M Insect Dataset
<div align="center">
<img src="images/Fig1.png" alt="Alt Text" width="1000" style="display: block; margin: 0 auto;">
</div>
Website: https://biodiversitygenomics.net/1M_insects/
GitHub: https://github.com/zahrag/BIOSCAN-1M
Zenodo: https://zenodo.org/records/8030065
Kaggle: https://www.kaggle.com/datasets/zahragharaee/bioscan-1m-insect-dataset
Paper: https://arxiv.org/abs/2307.10455
```
cite as:
@inproceedings{gharaee2023step,
title={A Step Towards Worldwide Biodiversity Assessment: The {BIOSCAN-1M} Insect Dataset},
booktitle = {Advances in Neural Information Processing Systems ({NeurIPS}) Datasets \& Benchmarks Track},
author={Gharaee, Z. and Gong, Z. and Pellegrino, N. and Zarubiieva, I. and Haurum, J. B. and Lowe, S. C. and McKeown, J. T. A. and Ho, C. Y. and McLeod, J. and Wei, Y. C. and Agda, J. and Ratnasingham, S. and Steinke, D. and Chang, A. X. and Taylor, G. W. and Fieguth, P.},
year={2023},
}
```
## A Dataset Record
The BIOSCAN dataset provides researchers with information about insects.
Each record of the BIOSCAN-1M Insect dataset contains four primary attributes:
* DNA barcode sequence
* Barcode Index Number (BIN)
* Biological taxonomy ranking annotations
* RGB image
###### <h4> I. DNA barcode sequence
The provided DNA barcode sequence showcases the arrangement of nucleotides:
* Adenine (A): Red
* Thymine (T): Blue
* Cytosine (C): Green
* Guanine (G): Yellow
```
TTTATATTTTATTTTTGGAGCATGATCAGGAATAGTTGGAACTTCAATAAGTTTATTAATTCGAACAGAATTAAGCCAACCAGGAATTTTTA ...
```
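As a small, self-contained illustration (plain Python, no BIOSCAN-specific tooling implied), a barcode string like the one above can be summarized by its nucleotide composition:

```python
# Sketch: summarize a COI barcode fragment by nucleotide composition.
from collections import Counter

barcode = ("TTTATATTTTATTTTTGGAGCATGATCAGGAATAGTTGGAACTTCAATAAGTTTATTAATT"
           "CGAACAGAATTAAGCCAACCAGGAATTTTTA")

counts = Counter(barcode)
for base in "ATCG":
    print(f"{base}: {counts[base]:3d} ({counts[base] / len(barcode):.1%})")
```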
<div align="center">
<img src="images/DNA_sequence.png" alt="Alt Text" width="1000" style="display: block; margin: 0 auto;">
</div>
###### <h4> II. Barcode Index Number (BIN)
BINs, acting as an alternative to Linnean names, provide a genetic-centric classification for organisms,
emphasizing the significance of genetic code in taxonomy.
```
BOLD:AER5166
```
<div align="center">
<img src="images/BIN.png" alt="Alt Text" width="1000" style="display: block; margin: 0 auto;">
</div>
###### <h4> III. Biological taxonomy ranking annotations
Taxonomic group ranking annotations categorize organisms hierarchically based on evolutionary relationships.
They organize species into groups based on shared characteristics and genetic relatedness.
<div align="center">
<img src="images/Taxonomy_horiz_upd1.png" alt="Alt Text" width="1000" style="display: block; margin: 0 auto;">
</div>
###### <h4> IV. RGB image
Original insect images from the 16 most densely populated orders of the BIOSCAN-1M Insect dataset.
The numbers below each image identify the number of images in each class, and clearly illustrate the degree of class imbalance in the BIOSCAN-1M Insect dataset.
<div align="center">
<table>
<!-- First Row -->
<tr>
<td align="center" ><img src="images/Diptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Hymenoptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Coleoptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Hemiptera.jpg" width="400px" height="400px" class="image"></td>
</tr>
<tr>
<td align="center"><strong>Diptera: 896,234</strong></td>
<td align="center"><strong>Hymenoptera: 89,311</strong></td>
<td align="center"><strong>Coleoptera: 47,328</strong></td>
<td align="center"><strong>Hemiptera: 46,970</strong></td>
</tr>
<!-- Second Row -->
<tr>
<td align="center" ><img src="images/Lepidoptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Psocodea.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Thysanoptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Trichoptera.jpg" width="400px" height="400px" class="image"></td>
</tr>
<tr>
<td align="center"><strong>Lepidoptera: 32,538</strong></td>
<td align="center"><strong>Psocodea: 9,635</strong></td>
<td align="center"><strong>Thysanoptera: 2,088</strong></td>
<td align="center"><strong>Trichoptera: 1,296</strong></td>
</tr>
<!-- Third Row -->
<tr>
<td align="center" ><img src="images/Orthoptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Blattodea.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Neuroptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Ephemeroptera.jpg" width="400px" height="400px" class="image"></td>
</tr>
<tr>
<td align="center"><strong>Orthoptera: 1,057</strong></td>
<td align="center"><strong>Blattodea: 824</strong></td>
<td align="center"><strong>Neuroptera: 676</strong></td>
<td align="center"><strong>Ephemeroptera: 96</strong></td>
</tr>
<!-- Fourth Row -->
<tr>
<td align="center" ><img src="images/Dermaptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Archaeognatha.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Plecoptera.jpg" width="400px" height="400px" class="image"></td>
<td align="center" ><img src="images/Embioptera.jpg" width="400px" height="400px" class="image"></td>
</tr>
<tr>
<td align="center"><strong>Dermaptera: 66</strong></td>
<td align="center"><strong>Archaeognatha: 63</strong></td>
<td align="center"><strong>Plecoptera: 30</strong></td>
<td align="center"><strong>Embioptera: 6</strong></td>
</tr>
</table>
</div>
## Class Distribution
Class distribution and class imbalance in the BIOSCAN-1M Insect dataset. Orders (top) and Diptera families (bottom).
The image demonstrates that class imbalance is an inherent characteristic within the insect community.
<div align="center">
<img src="images/BIOSCAN_Fig2_upd3.png" alt="Alt Text" width="1000" style="display: block; margin: 0 auto;">
</div>
| Gharaee/BIOSCAN_1M_Insect_Dataset | [
"license:other",
"arxiv:2307.10455",
"region:us"
]
| 2023-11-16T18:16:17+00:00 | {"license": "other", "license_name": "cc-by-nc-sa-4.0", "license_link": "https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en"} | 2023-11-17T04:24:48+00:00 | [
"2307.10455"
]
| []
| TAGS
#license-other #arxiv-2307.10455 #region-us
| BIOSCAN\_1M Insect Dataset
==========================

Website: URL
GitHub: URL
Zenodo: URL
Kaggle: URL
Paper: URL
A Dataset Record
----------------
The BIOSCAN dataset provides researchers with information about insects.
Each record of the BIOSCAN-1M Insect dataset contains four primary attributes:
* DNA barcode sequence
* Barcode Index Number (BIN)
* Biological taxonomy ranking annotations
* RGB image
###### I. DNA barcode sequence
The provided DNA barcode sequence showcases the arrangement of nucleotides:
* Adenine (A): Red
* Thymine (T): Blue
* Cytosine (C): Green
* Guanine (G): Yellow

###### II. Barcode Index Number (BIN)
BINs, acting as an alternative to Linnean names, provide a genetic-centric classification for organisms,
emphasizing the significance of genetic code in taxonomy.

###### III. Biological taxonomy ranking annotations
Taxonomic group ranking annotations categorize organisms hierarchically based on evolutionary relationships.
They organize species into groups based on shared characteristics and genetic relatedness.

###### IV. RGB image
Original insect images from 16 most densly populated orders of the BIOSCAN-1M Insect dataset.
The numbers below each image identify the number of images in each class, and clearly illustrate the degree of class imbalance in the BIOSCAN-1M Insect dataset.
Class Distribution
------------------
Class distribution and class imbalance in the BIOSCAN-1M Insect dataset. Orders (top) and Diptera families (bottom).
The image demonstrates that class imbalance is an inherent characteristic within the insect community.

| [
"###### I. DNA barcode sequence\n\n\nThe provided DNA barcode sequence showcases the arrangement of nucleotides:\n\n\n* Adenine (A): Red\n* Thymine (T): Blue\n* Cytosine (C): Green\n* Guanine (G): Yellow\n\n\n\n",
"###### II. Barcode Index Number (BIN)\n\n\nBINs, acting as an alternative to Linnean names, provide a genetic-centric classification for organisms,\nemphasizing the significance of genetic code in taxonomy.\n\n\n\n",
"###### III. Biological taxonomy ranking annotations\n\n\nTaxonomic group ranking annotations categorize organisms hierarchically based on evolutionary relationships.\nIt organizes species into groups based on shared characteristics and genetic relatedness.\n\n\n\n",
"###### IV. RGB image\n\n\nOriginal insect images from 16 most densly populated orders of the BIOSCAN-1M Insect dataset.\nThe numbers below each image identify the number of images in each class, and clearly illustrate the degree of class imbalance in the BIOSCAN-1M Insect dataset.\n\n\n\n\n\nClass Distribution\n------------------\n\n\nClass distribution and class imbalance in the BIOSCAN-1M Insect dataset. Orders (top) and diptera families (bottom).\nThe image demonstrates that class imbalance is an inherent characteristic within the insect community.\n\n\n\n"
]
| [
"TAGS\n#license-other #arxiv-2307.10455 #region-us \n",
"###### I. DNA barcode sequence\n\n\nThe provided DNA barcode sequence showcases the arrangement of nucleotides:\n\n\n* Adenine (A): Red\n* Thymine (T): Blue\n* Cytosine (C): Green\n* Guanine (G): Yellow\n\n\n\n",
"###### II. Barcode Index Number (BIN)\n\n\nBINs, acting as an alternative to Linnean names, provide a genetic-centric classification for organisms,\nemphasizing the significance of genetic code in taxonomy.\n\n\n\n",
"###### III. Biological taxonomy ranking annotations\n\n\nTaxonomic group ranking annotations categorize organisms hierarchically based on evolutionary relationships.\nIt organizes species into groups based on shared characteristics and genetic relatedness.\n\n\n\n",
"###### IV. RGB image\n\n\nOriginal insect images from 16 most densly populated orders of the BIOSCAN-1M Insect dataset.\nThe numbers below each image identify the number of images in each class, and clearly illustrate the degree of class imbalance in the BIOSCAN-1M Insect dataset.\n\n\n\n\n\nClass Distribution\n------------------\n\n\nClass distribution and class imbalance in the BIOSCAN-1M Insect dataset. Orders (top) and diptera families (bottom).\nThe image demonstrates that class imbalance is an inherent characteristic within the insect community.\n\n\n\n"
]
| [
19,
76,
63,
76,
146
]
| [
"passage: TAGS\n#license-other #arxiv-2307.10455 #region-us \n###### I. DNA barcode sequence\n\n\nThe provided DNA barcode sequence showcases the arrangement of nucleotides:\n\n\n* Adenine (A): Red\n* Thymine (T): Blue\n* Cytosine (C): Green\n* Guanine (G): Yellow\n\n\n\n###### II. Barcode Index Number (BIN)\n\n\nBINs, acting as an alternative to Linnean names, provide a genetic-centric classification for organisms,\nemphasizing the significance of genetic code in taxonomy.\n\n\n\n###### III. Biological taxonomy ranking annotations\n\n\nTaxonomic group ranking annotations categorize organisms hierarchically based on evolutionary relationships.\nIt organizes species into groups based on shared characteristics and genetic relatedness.\n\n\n\n###### IV. RGB image\n\n\nOriginal insect images from 16 most densly populated orders of the BIOSCAN-1M Insect dataset.\nThe numbers below each image identify the number of images in each class, and clearly illustrate the degree of class imbalance in the BIOSCAN-1M Insect dataset.\n\n\n\n\n\nClass Distribution\n------------------\n\n\nClass distribution and class imbalance in the BIOSCAN-1M Insect dataset. Orders (top) and diptera families (bottom).\nThe image demonstrates that class imbalance is an inherent characteristic within the insect community.\n\n\n\n"
]
|
70079eae1480cf1509a66ac97112fc659ca20d0f | # Dataset Card for Dataset Name
## Dataset description
<!-- Provide a quick summary of the dataset. -->
This dataset is a reformatting of [OpenAssistant Conversations (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), which is
> a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
It was modified from the original by following the tree branches and reforming the messages into conversation streams.
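A minimal sketch of that unrolling step: walk from each leaf of a message tree back to the root and emit one linear conversation per branch. Field names follow the public OASST schema (`message_id`, `parent_id`, `role`, `text`); this is an illustration, not the exact script used here:

```python
# Sketch: unroll an OASST-style message tree into linear conversations.
def unroll(messages):
    by_id = {m["message_id"]: m for m in messages}
    parent_ids = {m["parent_id"] for m in messages if m["parent_id"] is not None}
    leaves = [m for m in messages if m["message_id"] not in parent_ids]

    conversations = []
    for leaf in leaves:
        chain, node = [], leaf
        while node is not None:                      # climb to the root
            chain.append({"role": node["role"], "content": node["text"]})
            node = by_id.get(node["parent_id"])
        conversations.append(list(reversed(chain)))  # root-first order
    return conversations
```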
#### Who are the editors?
Aymeric Roucher, Hugging Face.
### Example
````
{
'conversation_id': '6708c47f-05c9-4346-b3d2-40b2bd24fde4',
'user_id': '2c96e467-66f0-4be7-9693-bda51356a424',
'created_date': '2023-02-06T18:48:49.391686+00:00',
'messages': [
{'content': 'Can you write a short introduction about the relevance of the term "monopsony" in economics? Please use examples related to potential monopsonies in the labour market and cite relevant research.',
'role': 'user'},
{'content': '"Monopsony" refers to a market structure where there is only one buyer for a particular good or service. In economics, this term is particularly relevant in the labor market, where a monopsony employer has significant power over the wages and working conditions of their employees. The presence of a monopsony can result in lower wages and reduced employment opportunities for workers, as the employer has little incentive to increase wages or provide better working conditions.\n\nRecent research has identified potential monopsonies in industries such as retail and fast food, where a few large companies control a significant portion of the market (Bivens & Mishel, 2013). In these industries, workers often face low wages, limited benefits, and reduced bargaining power, leading to a situation where they are dependent on the employer for their livelihood. This dependence can result in further suppression of wages and a decline in working conditions.\n\nOverall, the concept of monopsony is essential to understanding the dynamics of labor markets and the impact of market power on workers. Further research is needed to understand the extent and impact of monopsonies on the economy and to develop policies to address this issue.\n\nReferences:\nBivens, J., & Mishel, L. (2013). The Pay of Corporate Executives and Financial Professionals as Evidence of Rents in Top 1 Percent Incomes. Journal of Economic Perspectives, 27(3), 57-78.',
'role': 'assistant'},
{'content': 'Now explain it to a dog', 'role': 'user'}
]
}
```` | m-ric/Open_Assistant_Conversation_Chains | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",
"language:nl",
"language:hu",
"language:eu",
"language:zh",
"language:eo",
"language:ja",
"language:ca",
"language:cs",
"language:bg",
"language:fi",
"language:pt",
"language:tr",
"language:ro",
"language:ar",
"language:uk",
"language:gl",
"language:fr",
"language:ko",
"license:apache-2.0",
"human-feedback",
"region:us"
]
| 2023-11-16T18:35:52+00:00 | {"language": ["en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk", "gl", "fr", "ko"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational", "text-generation"], "pretty_name": "OpenAssistant Conversations Unrolled", "tags": ["human-feedback"]} | 2023-11-22T14:37:58+00:00 | []
| [
"en",
"es",
"ru",
"de",
"pl",
"th",
"vi",
"sv",
"bn",
"da",
"he",
"it",
"fa",
"sk",
"id",
"nb",
"el",
"nl",
"hu",
"eu",
"zh",
"eo",
"ja",
"ca",
"cs",
"bg",
"fi",
"pt",
"tr",
"ro",
"ar",
"uk",
"gl",
"fr",
"ko"
]
| TAGS
#task_categories-conversational #task_categories-text-generation #size_categories-10K<n<100K #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #region-us
| # Dataset Card for Dataset Name
## Dataset description
This dataset is a reformatting of OpenAssistant Conversations (OASST1), which is
> a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
It was modified from the original by following the tree branches and reforming the messages into conversation streams.
#### Who are the editors?
Aymeric Roucher, Hugging Face.
### Example
' | [
"# Dataset Card for Dataset Name",
"## Dataset description\n\n\nThis dataset is a reformatting of OpenAssistant Conversations (OASST1), which is\n> a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nIt was modified from the original by following the tree branches and reforming the messages into conversation streams.",
"#### Who are the editors?\n\nAymeric Roucher, Hugging Face.",
"### Example\n\n'"
]
| [
"TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-10K<n<100K #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset description\n\n\nThis dataset is a reformatting of OpenAssistant Conversations (OASST1), which is\n> a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nIt was modified from the original by following the tree branches and reforming the messages into conversation streams.",
"#### Who are the editors?\n\nAymeric Roucher, Hugging Face.",
"### Example\n\n'"
]
| [
247,
8,
125,
18,
5
]
| [
"passage: TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-10K<n<100K #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #region-us \n# Dataset Card for Dataset Name## Dataset description\n\n\nThis dataset is a reformatting of OpenAssistant Conversations (OASST1), which is\n> a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nIt was modified from the original by following the tree branches and reforming the messages into conversation streams.#### Who are the editors?\n\nAymeric Roucher, Hugging Face.### Example\n\n'"
]
|
8f76bf8f3059df609f1c4edc43fc56258eb92c60 | # Dataset Card for "Enhanced_classifier_baseline_model"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PaulLoisel/Enhanced_classifier_baseline_model | [
"region:us"
]
| 2023-11-16T18:47:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 563, "num_examples": 5}], "download_size": 0, "dataset_size": 563}} | 2023-11-16T20:18:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Enhanced_classifier_baseline_model"
More Information needed | [
"# Dataset Card for \"Enhanced_classifier_baseline_model\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Enhanced_classifier_baseline_model\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Enhanced_classifier_baseline_model\"\n\nMore Information needed"
]
|
62fe4bcddc48f2b16b11cf68d6a71e440af96eb6 |
Using BERTopic (https://maartengr.github.io/BERTopic/), I processed a dataset of pericopes that covers the entire Bible.
Each pericope became a topic, and under each heading, 3 verses were selected as representative.
A useful feature of this is that the representative verses are guaranteed to come from the section of Scripture
that's connected with the pericope, which gives much better quality than if they were, for instance, semantically
chosen from a vector database of the entire Bible.
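A hedged sketch of that kind of run (the loading helper, parameters, and variable names are assumptions, not the author's exact settings):

```python
# Sketch: fit BERTopic over verse texts and pull representative verses,
# roughly mirroring the process described above.
from bertopic import BERTopic

verses = load_kjv_verses()  # hypothetical helper returning list[str]

topic_model = BERTopic(nr_topics="auto")
topics, probs = topic_model.fit_transform(verses)

# BERTopic keeps a few representative documents per topic; the card
# says three verses were selected under each pericope-derived heading.
for topic_id, docs in topic_model.get_representative_docs().items():
    print(topic_id, docs[:3])
```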
| JWBickel/KJVPericopeTopics_bertopic | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"bible",
"region:us"
]
| 2023-11-16T18:58:58+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "KJV Topical Verse Groups", "tags": ["bible"]} | 2023-11-26T14:48:31+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #bible #region-us
|
Using BERTopic (URL), I processed a dataset of pericopes that covers the entire Bible.
Each pericope became a topic, and under each heading, 3 verses were selected as representative.
A useful feature of this is that the representative verses are guaranteed to come from the section of Scripture
that's connected with the pericope, which gives much better quality than if they were, for instance, semantically
chosen from a vector database of the entire Bible.
| []
| [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #bible #region-us \n"
]
| [
36
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #bible #region-us \n"
]
|
d62e64388780dfcdf9ed2473a955eccff9b2e793 |
This is a mirror of the example dataset for the "CLOOME: a new search engine unlocks bioimaging databases for queries with chemical structures" paper by Sanchez-Fernandez et al.
Paper: https://www.biorxiv.org/content/10.1101/2022.11.17.516915v1
Code: https://github.com/ml-jku/cloome

| renumics/cloome_demo | [
"region:us"
]
| 2023-11-16T19:14:19+00:00 | {"dataset_info": {"features": [{"name": "SAMPLE_KEY_mol", "dtype": "string"}, {"name": "SAMPLE_KEY_img", "dtype": "string"}, {"name": "SMILES", "dtype": "string"}, {"name": "mol_embedding_reduced", "sequence": "float64"}, {"name": "img_embedding_reduced", "sequence": "float64"}, {"name": "mol_embedding", "sequence": "float32"}, {"name": "img_embedding", "sequence": "float32"}, {"name": "image", "dtype": "image"}, {"name": "distance", "dtype": "float64"}, {"name": "index", "dtype": "int64"}, {"name": "smiles_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 975216313.25, "num_examples": 30403}], "download_size": 1002070493, "dataset_size": 975216313.25}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T19:43:36+00:00 | []
| []
| TAGS
#region-us
|
This is a mirror of the example dataset for the "CLOOME: a new search engine unlocks bioimaging databases for queries with chemical structures" paper by Sanchez-Fernandez et al.
Paper: URL
Code: URL
!image/png
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
ae3d39ec2551958a74cbbcd2e8343ca62c988335 | # Dataset Card for "Enhanced_classifier_everything_to_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PaulLoisel/Enhanced_classifier_everything_to_text | [
"region:us"
]
| 2023-11-16T19:17:15+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1458, "num_examples": 5}], "download_size": 0, "dataset_size": 1458}} | 2023-11-16T20:24:56+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Enhanced_classifier_everything_to_text"
More Information needed | [
"# Dataset Card for \"Enhanced_classifier_everything_to_text\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Enhanced_classifier_everything_to_text\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Enhanced_classifier_everything_to_text\"\n\nMore Information needed"
]
|
116193a4dd663f98e5c8f1a74f62739ebbaeb8c9 | # Dataset Card for "cleaned_prompt_r"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ThWu/cleaned_prompt_r | [
"region:us"
]
| 2023-11-16T19:21:03+00:00 | {"dataset_info": {"features": [{"name": "conversations", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 157911562, "num_examples": 268781}], "download_size": 97143836, "dataset_size": 157911562}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T19:21:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cleaned_prompt_r"
More Information needed | [
"# Dataset Card for \"cleaned_prompt_r\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cleaned_prompt_r\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cleaned_prompt_r\"\n\nMore Information needed"
]
|
a66b0ab163aadbbf520e4c88dd064632e0440971 |
## About Dataset
### Context
Term deposits are a major source of income for a bank. A term deposit is a cash investment held at a financial institution. Your money is invested for an agreed rate of interest over a fixed amount of time, or term. The bank has various outreach plans to sell term deposits to their customers such as email marketing, advertisements, telephonic marketing, and digital marketing.
Telephonic marketing campaigns still remain one of the most effective ways to reach out to people. However, they require huge investment, as large call centers are hired to actually execute these campaigns. Hence, it is crucial to identify the customers most likely to convert beforehand so that they can be specifically targeted via calls.
The data is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict if the client will subscribe to a term deposit (variable y).
Content
The data is related to the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact with the same client was required in order to assess whether the product (bank term deposit) would be subscribed ('yes') or not ('no') by the customer. The data folder contains two datasets:
train.csv: 45,211 rows and 18 columns ordered by date (from May 2008 to November 2010)
test.csv: 4521 rows and 18 columns with 10% of the examples (4521), randomly selected from train.csv
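A minimal, assumed loading example for the splits described above (via the Hugging Face `datasets` library):

```python
# Sketch: load the train/test splits and check the target balance.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("Andyrasika/banking-marketing")
print(ds)                         # expect train: 45,211 rows / test: 4,521 rows
print(Counter(ds["train"]["y"]))  # 'yes'/'no' term-deposit subscriptions
```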
Detailed Column Descriptions
bank client data:
- 1 - age (numeric)
- 2 - job : type of job (categorical: "admin.","unknown","unemployed","management","housemaid","entrepreneur","student",
"blue-collar","self-employed","retired","technician","services")
- 3 - marital : marital status (categorical: "married","divorced","single"; note: "divorced" means divorced or widowed)
- 4 - education (categorical: "unknown","secondary","primary","tertiary")
- 5 - default: has credit in default? (binary: "yes","no")
- 6 - balance: average yearly balance, in euros (numeric)
- 7 - housing: has housing loan? (binary: "yes","no")
- 8 - loan: has personal loan? (binary: "yes","no")
# related with the last contact of the current campaign:
- 9 - contact: contact communication type (categorical: "unknown","telephone","cellular")
- 10 - day: last contact day of the month (numeric)
- 11 - month: last contact month of year (categorical: "jan", "feb", "mar", …, "nov", "dec")
- 12 - duration: last contact duration, in seconds (numeric)
# other attributes:
- 13 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
- 14 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric, -1 means client was not previously contacted)
- 15 - previous: number of contacts performed before this campaign and for this client (numeric)
- 16 - poutcome: outcome of the previous marketing campaign (categorical: "unknown","other","failure","success")
Output variable (desired target):
- 17 - y - has the client subscribed a term deposit? (binary: "yes","no") | Andyrasika/banking-marketing | [
"license:openrail",
"region:us"
]
| 2023-11-16T19:54:44+00:00 | {"license": "openrail", "dataset_info": {"features": [{"name": "age", "dtype": "int64"}, {"name": "job", "dtype": "string"}, {"name": "marital", "dtype": "string"}, {"name": "education", "dtype": "string"}, {"name": "default", "dtype": "string"}, {"name": "balance", "dtype": "int64"}, {"name": "housing", "dtype": "string"}, {"name": "loan", "dtype": "string"}, {"name": "contact", "dtype": "string"}, {"name": "day", "dtype": "int64"}, {"name": "month", "dtype": "string"}, {"name": "duration", "dtype": "int64"}, {"name": "campaign", "dtype": "int64"}, {"name": "pdays", "dtype": "int64"}, {"name": "previous", "dtype": "int64"}, {"name": "poutcome", "dtype": "string"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6654353, "num_examples": 45211}, {"name": "test", "num_bytes": 665707, "num_examples": 4521}], "download_size": 834481, "dataset_size": 7320060}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-11-16T20:07:56+00:00 | []
| []
| TAGS
#license-openrail #region-us
|
## About Dataset
### Context
Term deposits are a major source of income for a bank. A term deposit is a cash investment held at a financial institution. Your money is invested for an agreed rate of interest over a fixed amount of time, or term. The bank has various outreach plans to sell term deposits to their customers such as email marketing, advertisements, telephonic marketing, and digital marketing.
Telephonic marketing campaigns still remain one of the most effective ways to reach out to people. However, they require huge investment, as large call centers are hired to actually execute these campaigns. Hence, it is crucial to identify the customers most likely to convert beforehand so that they can be specifically targeted via calls.
The data is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict if the client will subscribe to a term deposit (variable y).
Content
The data is related to the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact with the same client was required in order to assess whether the product (bank term deposit) would be subscribed ('yes') or not ('no') by the customer. The data folder contains two datasets:
URL: 45,211 rows and 18 columns ordered by date (from May 2008 to November 2010)
URL: 4521 rows and 18 columns with 10% of the examples (4521), randomly selected from URL
Detailed Column Descriptions
bank client data:
- 1 - age (numeric)
- 2 - job : type of job (categorical: "admin.","unknown","unemployed","management","housemaid","entrepreneur","student",
"blue-collar","self-employed","retired","technician","services")
- 3 - marital : marital status (categorical: "married","divorced","single"; note: "divorced" means divorced or widowed)
- 4 - education (categorical: "unknown","secondary","primary","tertiary")
- 5 - default: has credit in default? (binary: "yes","no")
- 6 - balance: average yearly balance, in euros (numeric)
- 7 - housing: has housing loan? (binary: "yes","no")
- 8 - loan: has personal loan? (binary: "yes","no")
# related with the last contact of the current campaign:
- 9 - contact: contact communication type (categorical: "unknown","telephone","cellular")
- 10 - day: last contact day of the month (numeric)
- 11 - month: last contact month of year (categorical: "jan", "feb", "mar", …, "nov", "dec")
- 12 - duration: last contact duration, in seconds (numeric)
# other attributes:
- 13 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
- 14 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric, -1 means client was not previously contacted)
- 15 - previous: number of contacts performed before this campaign and for this client (numeric)
- 16 - poutcome: outcome of the previous marketing campaign (categorical: "unknown","other","failure","success")
Output variable (desired target):
- 17 - y - has the client subscribed a term deposit? (binary: "yes","no") | [
"## About Dataset",
"### Context\nTerm deposits are a major source of income for a bank. A term deposit is a cash investment held at a financial institution. Your money is invested for an agreed rate of interest over a fixed amount of time, or term. The bank has various outreach plans to sell term deposits to their customers such as email marketing, advertisements, telephonic marketing, and digital marketing.\n\nTelephonic marketing campaigns still remain one of the most effective way to reach out to people. However, they require huge investment as large call centers are hired to actually execute these campaigns. Hence, it is crucial to identify the customers most likely to convert beforehand so that they can be specifically targeted via call.\n\nThe data is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict if the client will subscribe to a term deposit (variable y).\n\nContent\nThe data is related to the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to access if the product (bank term deposit) would be ('yes') or not ('no') subscribed by the customer or not. The data folder contains two datasets:-\n\nURL: 45,211 rows and 18 columns ordered by date (from May 2008 to November 2010)\nURL: 4521 rows and 18 columns with 10% of the examples (4521), randomly selected from URL\nDetailed Column Descriptions\nbank client data:\n\n- 1 - age (numeric)\n- 2 - job : type of job (categorical: \"admin.\",\"unknown\",\"unemployed\",\"management\",\"housemaid\",\"entrepreneur\",\"student\",\n\"blue-collar\",\"self-employed\",\"retired\",\"technician\",\"services\")\n- 3 - marital : marital status (categorical: \"married\",\"divorced\",\"single\"; note: \"divorced\" means divorced or widowed)\n- 4 - education (categorical: \"unknown\",\"secondary\",\"primary\",\"tertiary\")\n- 5 - default: has credit in default? (binary: \"yes\",\"no\")\n- 6 - balance: average yearly balance, in euros (numeric)\n- 7 - housing: has housing loan? (binary: \"yes\",\"no\")\n- 8 - loan: has personal loan? (binary: \"yes\",\"no\")",
"# related with the last contact of the current campaign:\n- 9 - contact: contact communication type (categorical: \"unknown\",\"telephone\",\"cellular\")\n- 10 - day: last contact day of the month (numeric)\n- 11 - month: last contact month of year (categorical: \"jan\", \"feb\", \"mar\", …, \"nov\", \"dec\")\n- 12 - duration: last contact duration, in seconds (numeric)",
"# other attributes:\n- 13 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)\n- 14 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric, -1 means client was not previously contacted)\n- 15 - previous: number of contacts performed before this campaign and for this client (numeric)\n- 16 - poutcome: outcome of the previous marketing campaign (categorical: \"unknown\",\"other\",\"failure\",\"success\")\n\nOutput variable (desired target):\n- 17 - y - has the client subscribed a term deposit? (binary: \"yes\",\"no\")"
]
| [
"TAGS\n#license-openrail #region-us \n",
"## About Dataset",
"### Context\nTerm deposits are a major source of income for a bank. A term deposit is a cash investment held at a financial institution. Your money is invested for an agreed rate of interest over a fixed amount of time, or term. The bank has various outreach plans to sell term deposits to their customers such as email marketing, advertisements, telephonic marketing, and digital marketing.\n\nTelephonic marketing campaigns still remain one of the most effective way to reach out to people. However, they require huge investment as large call centers are hired to actually execute these campaigns. Hence, it is crucial to identify the customers most likely to convert beforehand so that they can be specifically targeted via call.\n\nThe data is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict if the client will subscribe to a term deposit (variable y).\n\nContent\nThe data is related to the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to access if the product (bank term deposit) would be ('yes') or not ('no') subscribed by the customer or not. The data folder contains two datasets:-\n\nURL: 45,211 rows and 18 columns ordered by date (from May 2008 to November 2010)\nURL: 4521 rows and 18 columns with 10% of the examples (4521), randomly selected from URL\nDetailed Column Descriptions\nbank client data:\n\n- 1 - age (numeric)\n- 2 - job : type of job (categorical: \"admin.\",\"unknown\",\"unemployed\",\"management\",\"housemaid\",\"entrepreneur\",\"student\",\n\"blue-collar\",\"self-employed\",\"retired\",\"technician\",\"services\")\n- 3 - marital : marital status (categorical: \"married\",\"divorced\",\"single\"; note: \"divorced\" means divorced or widowed)\n- 4 - education (categorical: \"unknown\",\"secondary\",\"primary\",\"tertiary\")\n- 5 - default: has credit in default? (binary: \"yes\",\"no\")\n- 6 - balance: average yearly balance, in euros (numeric)\n- 7 - housing: has housing loan? (binary: \"yes\",\"no\")\n- 8 - loan: has personal loan? (binary: \"yes\",\"no\")",
"# related with the last contact of the current campaign:\n- 9 - contact: contact communication type (categorical: \"unknown\",\"telephone\",\"cellular\")\n- 10 - day: last contact day of the month (numeric)\n- 11 - month: last contact month of year (categorical: \"jan\", \"feb\", \"mar\", …, \"nov\", \"dec\")\n- 12 - duration: last contact duration, in seconds (numeric)",
"# other attributes:\n- 13 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)\n- 14 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric, -1 means client was not previously contacted)\n- 15 - previous: number of contacts performed before this campaign and for this client (numeric)\n- 16 - poutcome: outcome of the previous marketing campaign (categorical: \"unknown\",\"other\",\"failure\",\"success\")\n\nOutput variable (desired target):\n- 17 - y - has the client subscribed a term deposit? (binary: \"yes\",\"no\")"
]
| [
12,
4,
565,
104,
160
]
| [
"passage: TAGS\n#license-openrail #region-us \n## About Dataset"
]
|
08a360376d4a16f9d316d165d53852571db179d8 | # Midjourney Dataset
This is a backup of https://huggingface.co/datasets/vivym/midjourney-messages
| tsunemoto/MJ_dataset | [
"region:us"
]
| 2023-11-16T20:01:21+00:00 | {} | 2023-11-19T02:22:04+00:00 | []
| []
| TAGS
#region-us
| # Midjourney Dataset
This is a backup of URL
| [
"# Midjourney Dataset\n\nThis is a backup of URL"
]
| [
"TAGS\n#region-us \n",
"# Midjourney Dataset\n\nThis is a backup of URL"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Midjourney Dataset\n\nThis is a backup of URL"
]
|
e5c5a1bed449a5211a03cfe6b7b37aad943299f2 | # Dataset Card for "purdue_reddit_posts_2017_2022_merged_sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sheepy928/purdue_reddit_posts_2017_2022_merged_sentences | [
"region:us"
]
| 2023-11-16T20:25:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "Test", "path": "data/Test-*"}]}], "dataset_info": {"features": [{"name": "created_utc", "dtype": "timestamp[ns]"}, {"name": "url", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "Test", "num_bytes": 25629025, "num_examples": 78849}], "download_size": 15392475, "dataset_size": 25629025}} | 2023-11-16T20:25:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "purdue_reddit_posts_2017_2022_merged_sentences"
More Information needed | [
"# Dataset Card for \"purdue_reddit_posts_2017_2022_merged_sentences\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"purdue_reddit_posts_2017_2022_merged_sentences\"\n\nMore Information needed"
]
| [
6,
29
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"purdue_reddit_posts_2017_2022_merged_sentences\"\n\nMore Information needed"
]
|
4aee491403ed52eeb1bfca2f116fb4413a644304 |
**Description of the dataset**
This is the November 16, 2023 snapshot of the English subset of the Project Gutenberg corpus (containing 56712 documents in total), downloaded and preprocessed with code from [this repository](https://github.com/eminorhan/gutenberg).
Two different versions of the data are provided:
* The `chunk_size_1024` version divides the data into ~14.2M records, each consisting of a chunk of text a few paragraphs long (at least 1024 chars) plus the corresponding metadata.
* The `chunk_size_2048` version divides the data into ~8.2M records, each consisting of a chunk of text a few paragraphs long (at least 2048 chars) plus the corresponding metadata.
This dataset is ideal for generating fine-grained embeddings of the documents. | eminorhan/gutenberg_en | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"region:us"
]
| 2023-11-16T20:31:30+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["10M<n<100M"], "task_categories": ["text-generation"], "configs": [{"config_name": "chunk_size_1024", "data_files": "gutenberg_en_paragraph_1024.jsonl"}, {"config_name": "chunk_size_2048", "data_files": "gutenberg_en_paragraph_2048.jsonl"}]} | 2023-11-17T20:55:28+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #size_categories-10M<n<100M #language-English #license-mit #region-us
|
Description of the dataset
This is the November 16, 2023 snapshot of the English subset of the Project Gutenberg corpus (containing 56712 documents in total), downloaded and preprocessed with code from this repository.
Two different versions of the data are provided:
* The 'chunk_size_1024' version divides the data into ~14.2M records, each consisting of a chunk of text a few paragraphs long (at least 1024 chars) plus the corresponding metadata.
* The 'chunk_size_2048' version divides the data into ~8.2M records, each consisting of a chunk of text a few paragraphs long (at least 2048 chars) plus the corresponding metadata.
This dataset is ideal for generating fine-grained embeddings of the documents. | []
| [
"TAGS\n#task_categories-text-generation #size_categories-10M<n<100M #language-English #license-mit #region-us \n"
]
| [
38
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-10M<n<100M #language-English #license-mit #region-us \n"
]
|
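Since the card names its two configs explicitly, a minimal loading sketch follows. Streaming is an optional choice here, and the record fields beyond the chunk text are not documented on the card, so the example simply prints one record.

```python
from datasets import load_dataset

# Config names ("chunk_size_1024" / "chunk_size_2048") come from the card's
# metadata; streaming avoids materializing ~8.2M records locally. The "train"
# split key assumes the default single-split layout.
ds = load_dataset("eminorhan/gutenberg_en", "chunk_size_2048", streaming=True)

for record in ds["train"].take(1):
    print(record)  # field names are not documented on the card
```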
e583b63fba581f5fd6ff751f4bbf64e50a49c81e | ---
TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for Pokemon Gen 1
## Dataset Description
- **Short Description:** This dataset comprises images along with corresponding textual prompts. It contains 149 subfolders, each representing a unique category, with multiple images. Each category is associated with specific prompts, as detailed in an accompanying Excel sheet.
- **Purpose:** The dataset is designed for training models that can understand and generate Pokemon images based on textual prompts.
- **Data Collection and Processing:** Images were sourced from [source of images]. Textual prompts were created to accurately describe or relate to the images. Images were processed for resizing, removing bad data, normalization, augmentation, and enhancement.
## Dataset Structure
- **Data Instances:** A typical data instance consists of a textual prompt and a corresponding image path.
- **Data Fields:**
- `prompt`: A string containing the textual description or cue associated with the image.
- `image_file`: The path to the image file related to the prompt.
- **Data Splits:** The dataset is not explicitly split. All instances are part of a single batch. Users can create training, validation, and test splits as needed.
## Dataset Creation
- **Creators:** This dataset was created by Kerem Topalismailoglu.
- **Motivation:** APS360.
## Additional Information
- **Curation Rationale:** The dataset was curated to cover a diverse range of images and corresponding descriptive prompts.
- **Source Data:** The images were sourced from [source], ensuring a wide variety of visual content.
- **Annotations:** The dataset does not include additional annotations beyond the image-prompt pairs.
## Usage
- **Using the Dataset with Hugging Face:**
```python
from datasets import load_dataset
dataset = load_dataset("path_to_my_dataset")
```
## Dataset Card Creation
- **Who Created the Dataset Card:** [Your Name/Organization]
## Citation
- **Citations:** [Include any relevant citations for the dataset or sources of the images.] | Empolyon2/PokemonDataset | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"text",
"image",
"region:us"
]
| 2023-11-16T20:59:40+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "PokemonDataset", "tags": ["text", "image"]} | 2023-11-29T19:53:58+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #text #image #region-us
| ---
TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: URL
---
# Dataset Card for Pokemon Gen 1
## Dataset Description
- Short Description: This dataset comprises images along with corresponding textual prompts. It contains 149 subfolders, each representing a unique category, with multiple images. Each category is associated with specific prompts, as detailed in an accompanying Excel sheet.
- Purpose: The dataset is designed for training models that can understand and generate Pokemon images based on textual prompts.
- Data Collection and Processing: Images were sourced from [source of images]. Textual prompts were created to accurately describe or relate to the images. Images were processed for resizing, removing bad data, normalization, augmentation, and enhancement.
## Dataset Structure
- Data Instances: A typical data instance consists of a textual prompt and a corresponding image path.
- Data Fields:
- 'prompt': A string containing the textual description or cue associated with the image.
- 'image_file': The path to the image file related to the prompt.
- Data Splits: The dataset is not explicitly split. All instances are part of a single batch. Users can create training, validation, and test splits as needed.
## Dataset Creation
- Creators: This dataset was created by Kerem Topalismailoglu.
- Motivation: APS360.
## Additional Information
- Curation Rationale: The dataset was curated to cover a diverse range of images and corresponding descriptive prompts.
- Source Data: The images were sourced from [source], ensuring a wide variety of visual content.
- Annotations: The dataset does not include additional annotations beyond the image-prompt pairs.
## Usage
- Using the Dataset with Hugging Face:
## Dataset Card Creation
- Who Created the Dataset Card: [Your Name/Organization]
- Citations: [Include any relevant citations for the dataset or sources of the images.] | [
"# Dataset Card for Pokemon Gen 1",
"## Dataset Description\n\n- Short Description: This dataset comprises images along with corresponding textual prompts. It contains 149 subfolders, each representing a unique category, with multiple images. Each category is associated with specific prompts, as detailed in an accompanying Excel sheet.\n\n- Purpose: The dataset is designed for training models that can understand and generate Pokemon images based on textual prompts.\n\n- Data Collection and Processing: Images were sourced from [source of images]. Textual prompts were created to accurately describe or relate to the images. Images were processed for resizing, removing bad data, normalization, augmentation, and enhancement.",
"## Dataset Structure\n\n- Data Instances: A typical data instance consists of a textual prompt and a corresponding image path.\n\n- Data Fields:\n - 'prompt': A string containing the textual description or cue associated with the image.\n - 'image_file': The path to the image file related to the prompt.\n\n- Data Splits: The dataset is not explicitly split. All instances are part of a single batch. Users can create training, validation, and test splits as needed.",
"## Dataset Creation\n\n- Creators: This dataset was created by Kerem Topalismailoglu.\n\n- Motivation: APS360.",
"## Additional Information\n\n- Curation Rationale: The dataset was curated to cover a diverse range of images and corresponding descriptive prompts.\n\n- Source Data: The images were sourced from [source], ensuring a wide variety of visual content.\n\n- Annotations: The dataset does not include additional annotations beyond the image-prompt pairs.",
"## Usage\n\n- Using the Dataset with Hugging Face:",
"## Dataset Card Creation\n\n- Who Created the Dataset Card: [Your Name/Organization]\n\n- Citations: [Include any relevant citations for the dataset or sources of the images.]"
]
| [
"TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #text #image #region-us \n",
"# Dataset Card for Pokemon Gen 1",
"## Dataset Description\n\n- Short Description: This dataset comprises images along with corresponding textual prompts. It contains 149 subfolders, each representing a unique category, with multiple images. Each category is associated with specific prompts, as detailed in an accompanying Excel sheet.\n\n- Purpose: The dataset is designed for training models that can understand and generate Pokemon images based on textual prompts.\n\n- Data Collection and Processing: Images were sourced from [source of images]. Textual prompts were created to accurately describe or relate to the images. Images were processed for resizing, removing bad data, normalization, augmentation, and enhancement.",
"## Dataset Structure\n\n- Data Instances: A typical data instance consists of a textual prompt and a corresponding image path.\n\n- Data Fields:\n - 'prompt': A string containing the textual description or cue associated with the image.\n - 'image_file': The path to the image file related to the prompt.\n\n- Data Splits: The dataset is not explicitly split. All instances are part of a single batch. Users can create training, validation, and test splits as needed.",
"## Dataset Creation\n\n- Creators: This dataset was created by Kerem Topalismailoglu.\n\n- Motivation: APS360.",
"## Additional Information\n\n- Curation Rationale: The dataset was curated to cover a diverse range of images and corresponding descriptive prompts.\n\n- Source Data: The images were sourced from [source], ensuring a wide variety of visual content.\n\n- Annotations: The dataset does not include additional annotations beyond the image-prompt pairs.",
"## Usage\n\n- Using the Dataset with Hugging Face:",
"## Dataset Card Creation\n\n- Who Created the Dataset Card: [Your Name/Organization]\n\n- Citations: [Include any relevant citations for the dataset or sources of the images.]"
]
| [
45,
8,
145,
117,
29,
83,
14,
46
]
| [
"passage: TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #text #image #region-us \n# Dataset Card for Pokemon Gen 1## Dataset Description\n\n- Short Description: This dataset comprises images along with corresponding textual prompts. It contains 149 subfolders, each representing a unique category, with multiple images. Each category is associated with specific prompts, as detailed in an accompanying Excel sheet.\n\n- Purpose: The dataset is designed for training models that can understand and generate Pokemon images based on textual prompts.\n\n- Data Collection and Processing: Images were sourced from [source of images]. Textual prompts were created to accurately describe or relate to the images. Images were processed for resizing, removing bad data, normalization, augmentation, and enhancement.## Dataset Structure\n\n- Data Instances: A typical data instance consists of a textual prompt and a corresponding image path.\n\n- Data Fields:\n - 'prompt': A string containing the textual description or cue associated with the image.\n - 'image_file': The path to the image file related to the prompt.\n\n- Data Splits: The dataset is not explicitly split. All instances are part of a single batch. Users can create training, validation, and test splits as needed.## Dataset Creation\n\n- Creators: This dataset was created by Kerem Topalismailoglu.\n\n- Motivation: APS360.## Additional Information\n\n- Curation Rationale: The dataset was curated to cover a diverse range of images and corresponding descriptive prompts.\n\n- Source Data: The images were sourced from [source], ensuring a wide variety of visual content.\n\n- Annotations: The dataset does not include additional annotations beyond the image-prompt pairs.## Usage\n\n- Using the Dataset with Hugging Face:## Dataset Card Creation\n\n- Who Created the Dataset Card: [Your Name/Organization]\n\n- Citations: [Include any relevant citations for the dataset or sources of the images.]"
]
|
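The Pokemon card above notes that no explicit splits are provided; a sketch of one way to create them is below. The 80/10/10 ratios are an arbitrary choice, and the default `train` split key is an assumption.

```python
from datasets import load_dataset

# Create the train/validation/test splits the card leaves to the user.
ds = load_dataset("Empolyon2/PokemonDataset")["train"]  # assumes default split key

tmp = ds.train_test_split(test_size=0.2, seed=42)              # 80% train
holdout = tmp["test"].train_test_split(test_size=0.5, seed=42)  # 10% / 10%

splits = {"train": tmp["train"], "validation": holdout["train"], "test": holdout["test"]}
print({name: len(part) for name, part in splits.items()})
```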
569ecc048bfc93e117e2bb38e1305547a337869a | # meow & woof woof | MWilinski/cats_dogs_cv_labs | [
"region:us"
]
| 2023-11-16T21:20:51+00:00 | {} | 2023-11-16T21:46:40+00:00 | []
| []
| TAGS
#region-us
| # meow & woof woof | [
"# meow & woof woof"
]
| [
"TAGS\n#region-us \n",
"# meow & woof woof"
]
| [
6,
8
]
| [
"passage: TAGS\n#region-us \n# meow & woof woof"
]
|
4d8802be4c3a937f1062e367e515935759e4bfd3 | # Flattened CIFAR-10
CIFAR-10 in jpg format with a flattened directory structure for easy unconditional image generation. | hayden-donnelly/flattened-cifar-10 | [
"task_categories:unconditional-image-generation",
"size_categories:10K<n<100K",
"region:us"
]
| 2023-11-16T21:29:12+00:00 | {"size_categories": ["10K<n<100K"], "task_categories": ["unconditional-image-generation"], "pretty_name": "Flattened CIFAR-10"} | 2023-11-16T21:40:37+00:00 | []
| []
| TAGS
#task_categories-unconditional-image-generation #size_categories-10K<n<100K #region-us
| # Flattened CIFAR-10
CIFAR-10 in jpg format with a flattened directory structure for easy unconditional image generation. | [
"# Flattened CIFAR-10\nCIFAR-10 in jpg format with a flattened directory structure for easy unconditional image generation."
]
| [
"TAGS\n#task_categories-unconditional-image-generation #size_categories-10K<n<100K #region-us \n",
"# Flattened CIFAR-10\nCIFAR-10 in jpg format with a flattened directory structure for easy unconditional image generation."
]
| [
33,
30
]
| [
"passage: TAGS\n#task_categories-unconditional-image-generation #size_categories-10K<n<100K #region-us \n# Flattened CIFAR-10\nCIFAR-10 in jpg format with a flattened directory structure for easy unconditional image generation."
]
|
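Because the whole point of this dataset is the flat directory of jpgs, the generic `imagefolder` builder is probably the simplest ingestion route; the local path below is a placeholder, and `drop_labels=True` reflects the unconditional (label-free) use case.

```python
from datasets import load_dataset

# Ingest a flat folder of jpgs without class labels; "./flattened-cifar-10"
# is a placeholder for wherever the files were downloaded.
ds = load_dataset("imagefolder", data_dir="./flattened-cifar-10", drop_labels=True)

img = ds["train"][0]["image"]
print(img.size)  # CIFAR-10 images are 32x32
```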
96e0f7a0a523c7776582fea9c532793e6ed811f4 | # Dataset Card for "CitationGPT_rank_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/CitationGPT_rank_test | [
"region:us"
]
| 2023-11-16T21:31:06+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "gold", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186316705, "num_examples": 99360}, {"name": "valid", "num_bytes": 24120947, "num_examples": 12760}, {"name": "test", "num_bytes": 21411545, "num_examples": 11780}], "download_size": 8079058, "dataset_size": 231849197}} | 2023-11-21T16:41:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CitationGPT_rank_test"
More Information needed | [
"# Dataset Card for \"CitationGPT_rank_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CitationGPT_rank_test\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CitationGPT_rank_test\"\n\nMore Information needed"
]
|
e016b14aae88f4d30d82c268e162314a7587d97c | # Dataset Card for "PubMed_Summ_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/PubMed_Summ_train | [
"region:us"
]
| 2023-11-16T21:34:16+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 54379474, "num_examples": 26570}], "download_size": 29277288, "dataset_size": 54379474}} | 2023-11-16T21:47:36+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "PubMed_Summ_train"
More Information needed | [
"# Dataset Card for \"PubMed_Summ_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"PubMed_Summ_train\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"PubMed_Summ_train\"\n\nMore Information needed"
]
|
8c0dce7551544f7e2e651dcec42a64519656cfd2 | ChatML-converted version of the OpenHermes dataset, useful for direct fine-tuning. | Jcuhfehl/OpenHermes-ChatML | [
"region:us"
]
| 2023-11-16T22:17:40+00:00 | {"dataset_info": {"features": [{"name": "data", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 314789363, "num_examples": 242831}], "download_size": 136731208, "dataset_size": 314789363}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-16T22:33:52+00:00 | []
| []
| TAGS
#region-us
| ChatML-converted version of the OpenHermes dataset, useful for direct fine-tuning. | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
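Per the card's metadata, each record is a single `data` string, so a quick inspection sketch follows. The `<|im_start|>`/`<|im_end|>` markers are the conventional ChatML delimiters, but whether this conversion uses exactly those tokens is an assumption worth checking.

```python
from datasets import load_dataset

ds = load_dataset("Jcuhfehl/OpenHermes-ChatML", split="train")

sample = ds[0]["data"]  # the card's only feature is this single string
print(sample[:400])     # eyeball the formatting

# If the conversion follows standard ChatML, each turn opens with this marker:
print("turns:", sample.count("<|im_start|>"))
```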
8dd4cbc77ad43be2e4b77d85bff2dd375ef54e55 | # Dataset Card for "ultrachat_filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pkarypis/ultrachat_filtered | [
"region:us"
]
| 2023-11-16T22:51:33+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "test_gen", "num_bytes": 148276089, "num_examples": 28304}, {"name": "test_sft", "num_bytes": 154695659, "num_examples": 23110}, {"name": "train_gen", "num_bytes": 1347396812, "num_examples": 256032}, {"name": "train_sft", "num_bytes": 1350777817.931667, "num_examples": 200979}], "download_size": 1596770502, "dataset_size": 3001146377.9316673}} | 2023-11-16T23:05:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ultrachat_filtered"
More Information needed | [
"# Dataset Card for \"ultrachat_filtered\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ultrachat_filtered\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ultrachat_filtered\"\n\nMore Information needed"
]
|
c92399221ad1975816e4089e98473ea11474d55c | # Dataset Card for "misinfo-meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tverous/misinfo-meta | [
"region:us"
]
| 2023-11-16T23:00:46+00:00 | {"dataset_info": {"features": [{"name": "uid", "dtype": "null"}, {"name": "claim", "dtype": "null"}, {"name": "main_text", "dtype": "null"}, {"name": "image", "dtype": "null"}, {"name": "video", "dtype": "null"}, {"name": "audio", "dtype": "null"}, {"name": "kg_embedding", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 0, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-19T05:58:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "misinfo-meta"
More Information needed | [
"# Dataset Card for \"misinfo-meta\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"misinfo-meta\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"misinfo-meta\"\n\nMore Information needed"
]
|
625e67d85c69c56f3a3315dc379985199285cb91 | # Dataset Card for Dataset Name
Q&A pairs relative to Israel's law in Russian language
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Q&A pairs relative to Israel's law in Russian language -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | TarasHu/pravoIsrael | [
"task_categories:table-question-answering",
"language:ru",
"region:us"
]
| 2023-11-17T00:53:45+00:00 | {"language": ["ru"], "task_categories": ["table-question-answering"]} | 2024-02-03T19:46:07+00:00 | []
| [
"ru"
]
| TAGS
#task_categories-table-question-answering #language-Russian #region-us
| # Dataset Card for Dataset Name
Q&A pairs relative to Israel's law in Russian language
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\nQ&A pairs relative to Israel's law in Russian language \n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#task_categories-table-question-answering #language-Russian #region-us \n",
"# Dataset Card for Dataset Name\n\nQ&A pairs relative to Israel's law in Russian language \n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
25,
48,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#task_categories-table-question-answering #language-Russian #region-us \n# Dataset Card for Dataset Name\n\nQ&A pairs relative to Israel's law in Russian language \n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
0ed2fa520653f4e536afc3b6b2284ac6ba783827 | # Dataset Card for "sst2_affix_neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joey234/sst2_affix_neg | [
"region:us"
]
| 2023-11-17T01:22:59+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "words_with_affixes", "sequence": "string"}, {"name": "sentence_replace_affix", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 21735, "num_examples": 71}], "download_size": 19062, "dataset_size": 21735}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-12-11T05:36:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sst2_affix_neg"
More Information needed | [
"# Dataset Card for \"sst2_affix_neg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sst2_affix_neg\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sst2_affix_neg\"\n\nMore Information needed"
]
|
960aa365926566e01b9c77a34388bf0e5a2bf9f5 | # Dataset Card for "imdb_affix_neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joey234/imdb_affix_neg | [
"region:us"
]
| 2023-11-17T01:23:18+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "words_with_affixes", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 40896361, "num_examples": 18618}], "download_size": 11872416, "dataset_size": 40896361}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-11-17T01:23:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "imdb_affix_neg"
More Information needed | [
"# Dataset Card for \"imdb_affix_neg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"imdb_affix_neg\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"imdb_affix_neg\"\n\nMore Information needed"
]
|
7dfb019ac6766e7126e298b7ebf676bb9d6d431f | # Dataset Card for "rotten_tomatoes_affix_neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joey234/rotten_tomatoes_affix_neg | [
"region:us"
]
| 2023-11-17T01:24:07+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "words_with_affixes", "sequence": "string"}, {"name": "sentence_replace_affix", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 32423, "num_examples": 108}], "download_size": 25881, "dataset_size": 32423}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-12-11T05:37:29+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "rotten_tomatoes_affix_neg"
More Information needed | [
"# Dataset Card for \"rotten_tomatoes_affix_neg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"rotten_tomatoes_affix_neg\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"rotten_tomatoes_affix_neg\"\n\nMore Information needed"
]
|
99a3090ca0e4581589c9176086911cbabd1cf80c | # Dataset Card for "tweet_eval_affix_neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joey234/tweet_eval_affix_neg | [
"region:us"
]
| 2023-11-17T01:28:17+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, {"name": "words_with_affixes", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 56170, "num_examples": 405}], "download_size": 0, "dataset_size": 56170}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-11-17T01:30:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tweet_eval_affix_neg"
More Information needed | [
"# Dataset Card for \"tweet_eval_affix_neg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tweet_eval_affix_neg\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tweet_eval_affix_neg\"\n\nMore Information needed"
]
|
14cd9824d0905dae2b065662160a17117b104a4f | # Dataset Card for "tencent_tts_encodec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kuanhuggingface/tencent_tts_encodec | [
"region:us"
]
| 2023-11-17T01:33:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "src_encodec_0", "sequence": "int64"}, {"name": "src_encodec_1", "sequence": "int64"}, {"name": "src_encodec_2", "sequence": "int64"}, {"name": "src_encodec_3", "sequence": "int64"}, {"name": "src_encodec_4", "sequence": "int64"}, {"name": "src_encodec_5", "sequence": "int64"}, {"name": "src_encodec_6", "sequence": "int64"}, {"name": "src_encodec_7", "sequence": "int64"}, {"name": "tgt_encodec_0", "sequence": "int64"}, {"name": "tgt_encodec_1", "sequence": "int64"}, {"name": "tgt_encodec_2", "sequence": "int64"}, {"name": "tgt_encodec_3", "sequence": "int64"}, {"name": "tgt_encodec_4", "sequence": "int64"}, {"name": "tgt_encodec_5", "sequence": "int64"}, {"name": "tgt_encodec_6", "sequence": "int64"}, {"name": "tgt_encodec_7", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 18583644220, "num_examples": 266780}, {"name": "validation", "num_bytes": 527818324, "num_examples": 7620}, {"name": "test", "num_bytes": 508374588, "num_examples": 7620}], "download_size": 470732178, "dataset_size": 19619837132}} | 2023-11-17T01:35:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tencent_tts_encodec"
More Information needed | [
"# Dataset Card for \"tencent_tts_encodec\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tencent_tts_encodec\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tencent_tts_encodec\"\n\nMore Information needed"
]
|
21ff6da7cb2bdc45a7f8bfca562762ea04e4322f | # Dataset Card for "tencent_tts_speech_tokenizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kuanhuggingface/tencent_tts_speech_tokenizer | [
"region:us"
]
| 2023-11-17T01:38:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "src_speech_tokenizer_0", "sequence": "int64"}, {"name": "src_speech_tokenizer_1", "sequence": "int64"}, {"name": "src_speech_tokenizer_2", "sequence": "int64"}, {"name": "src_speech_tokenizer_3", "sequence": "int64"}, {"name": "src_speech_tokenizer_4", "sequence": "int64"}, {"name": "src_speech_tokenizer_5", "sequence": "int64"}, {"name": "src_speech_tokenizer_6", "sequence": "int64"}, {"name": "src_speech_tokenizer_7", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_0", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_1", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_2", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_3", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_4", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_5", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_6", "sequence": "int64"}, {"name": "tgt_speech_tokenizer_7", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 12405025340, "num_examples": 266780}, {"name": "validation", "num_bytes": 352337364, "num_examples": 7620}, {"name": "test", "num_bytes": 339358908, "num_examples": 7620}], "download_size": 707880738, "dataset_size": 13096721612}} | 2023-11-17T01:40:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tencent_tts_speech_tokenizer"
More Information needed | [
"# Dataset Card for \"tencent_tts_speech_tokenizer\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tencent_tts_speech_tokenizer\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tencent_tts_speech_tokenizer\"\n\nMore Information needed"
]
|
aea08c0c98aa609965d8e69053755ac2ffd1794f | # Dataset Card for "amazon_affix_neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joey234/amazon_affix_neg | [
"region:us"
]
| 2023-11-17T02:32:53+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "words_with_affixes", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 44456447, "num_examples": 71215}], "download_size": 23517659, "dataset_size": 44456447}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-11-17T02:32:59+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "amazon_affix_neg"
More Information needed | [
"# Dataset Card for \"amazon_affix_neg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon_affix_neg\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"amazon_affix_neg\"\n\nMore Information needed"
]
|
ce317bca6c8895ce58daac1a2b44166a5fe7dd37 | huggingface-cli login | kakarads/remotes | [
"license:apache-2.0",
"region:us"
]
| 2023-11-17T02:33:14+00:00 | {"license": "apache-2.0"} | 2023-11-20T02:37:38+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
| huggingface-cli login | []
| [
"TAGS\n#license-apache-2.0 #region-us \n"
]
| [
14
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
]
|
ea5a9b6f4f7c9b1087a6c83d3f9c3d0607359fae | # Dataset Card for "snli-3way"
This dataset is the [snli](https://huggingface.co/datasets/snli) dataset where the labels are: `entailment`, `contradiction` and `neutral`.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AntoineBlanot/snli-3way | [
"region:us"
]
| 2023-11-17T02:50:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 69083095, "num_examples": 549367}, {"name": "test", "num_bytes": 1300733, "num_examples": 9842}], "download_size": 19994363, "dataset_size": 70383828}} | 2023-11-17T02:57:32+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "snli-3way"
This dataset is the snli dataset where the labels are: 'entailment', 'contradiction' and 'neutral'.
More Information needed | [
"# Dataset Card for \"snli-3way\"\nThis dataset is the snli dataset where the labels are: 'entailment', 'contradiction' and 'neutral'.\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"snli-3way\"\nThis dataset is the snli dataset where the labels are: 'entailment', 'contradiction' and 'neutral'.\n\nMore Information needed"
]
| [
6,
47
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"snli-3way\"\nThis dataset is the snli dataset where the labels are: 'entailment', 'contradiction' and 'neutral'.\n\nMore Information needed"
]
|
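Since this card's one substantive claim is the string-label scheme, a quick check of the label distribution is a natural sanity test; the `label_name` field comes from the card's metadata.

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("AntoineBlanot/snli-3way", split="test")
print(Counter(ds["label_name"]))  # expect: entailment / contradiction / neutral
```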
bcbef18cc856f4f690c5d254a61a4064b931343f | # Dataset Card for "snli-binary"
This dataset is the [snli-3way](https://huggingface.co/datasets/AntoineBlanot/snli-3way) dataset where the `contradiction` and `neutral` classes have been merged into a single `non-entailment` class.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AntoineBlanot/snli-binary | [
"region:us"
]
| 2023-11-17T02:50:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 70545630, "num_examples": 549367}, {"name": "test", "num_bytes": 1326656, "num_examples": 9842}], "download_size": 19925323, "dataset_size": 71872286}} | 2023-11-17T02:58:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "snli-binary"
This dataset is the snli-3way dataset where the 'contradiction' and 'neutral' classes have been merged into a single 'non-entailment' class.
More Information needed | [
"# Dataset Card for \"snli-binary\"\nThis dataset is the snli-3way dataset where the 'contradiction' and 'neutral' classes has been merged together as a 'non-entailment' class.\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"snli-binary\"\nThis dataset is the snli-3way dataset where the 'contradiction' and 'neutral' classes has been merged together as a 'non-entailment' class.\n\nMore Information needed"
]
| [
6,
56
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"snli-binary\"\nThis dataset is the snli-3way dataset where the 'contradiction' and 'neutral' classes has been merged together as a 'non-entailment' class.\n\nMore Information needed"
]
|
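A minimal sketch of the merge this card describes, reproduced from the 3-way version: the label names follow the card, and `map` here is just one way to apply the collapse.

```python
from datasets import load_dataset

three_way = load_dataset("AntoineBlanot/snli-3way", split="test")

def to_binary(example):
    # Collapse "contradiction" and "neutral" into "non-entailment".
    if example["label_name"] != "entailment":
        example["label_name"] = "non-entailment"
    return example

binary = three_way.map(to_binary)
print(set(binary["label_name"]))  # {"entailment", "non-entailment"}
```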
b1e4f750f1fe754ee67c67ac3a91f18f567a5382 | # Dataset Card for "CoTTrain-CoTCollection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bigheiniuJ/CoTTrain-CoTCollection | [
"region:us"
]
| 2023-11-17T04:07:30+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 861688211, "num_examples": 384272}], "download_size": 522795521, "dataset_size": 861688211}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T04:59:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CoTTrain-CoTCollection"
More Information needed | [
"# Dataset Card for \"CoTTrain-CoTCollection\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CoTTrain-CoTCollection\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CoTTrain-CoTCollection\"\n\nMore Information needed"
]
|
c0ceaba1807d6479e115da0bb3c8adae7eb7219a | # Dataset Card for "small_multiplication_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/small_multiplication_whole | [
"region:us"
]
| 2023-11-17T04:47:33+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1343.111111111111, "num_examples": 40}, {"name": "test", "num_bytes": 167.88888888888889, "num_examples": 5}], "download_size": 4215, "dataset_size": 1511.0}} | 2023-11-17T05:53:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "small_multiplication_whole"
More Information needed | [
"# Dataset Card for \"small_multiplication_whole\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"small_multiplication_whole\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"small_multiplication_whole\"\n\nMore Information needed"
]
|
d2b0946d4ae83e4a9bc04fb3a4869d81baee4686 | # Dataset Card for "small_division_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/small_division_whole | [
"region:us"
]
| 2023-11-17T04:47:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1231.111111111111, "num_examples": 32}, {"name": "test", "num_bytes": 153.88888888888889, "num_examples": 4}], "download_size": 4157, "dataset_size": 1385.0}} | 2023-11-17T05:53:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "small_division_whole"
More Information needed | [
"# Dataset Card for \"small_division_whole\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"small_division_whole\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"small_division_whole\"\n\nMore Information needed"
]
|
e5e39bbb362c9f06d749e6bd5b4ac6cd38457972 | # Dataset Card for "small_subtraction_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/small_subtraction_whole | [
"region:us"
]
| 2023-11-17T04:47:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1320.0, "num_examples": 40}, {"name": "test", "num_bytes": 165.0, "num_examples": 5}], "download_size": 4097, "dataset_size": 1485.0}} | 2023-11-17T05:53:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "small_subtraction_whole"
More Information needed | [
"# Dataset Card for \"small_subtraction_whole\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"small_subtraction_whole\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"small_subtraction_whole\"\n\nMore Information needed"
]
|
8cd22c93ccc3171ff02a95d1502e794a7d57c0fd | # Dataset Card for "small_addition_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/small_addition_whole | [
"region:us"
]
| 2023-11-17T04:47:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1337.7777777777778, "num_examples": 40}, {"name": "test", "num_bytes": 167.22222222222223, "num_examples": 5}], "download_size": 4158, "dataset_size": 1505.0}} | 2023-11-17T05:53:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "small_addition_whole"
More Information needed | [
"# Dataset Card for \"small_addition_whole\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"small_addition_whole\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"small_addition_whole\"\n\nMore Information needed"
]
|
d0eb2e5c309bfb77744fd79ac9ab22efbbf791db | # Dataset Card for "small_multiplication_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/small_multiplication_decimal | [
"region:us"
]
| 2023-11-17T04:47:41+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1934.2222222222222, "num_examples": 40}, {"name": "test", "num_bytes": 241.77777777777777, "num_examples": 5}], "download_size": 4575, "dataset_size": 2176.0}} | 2023-11-17T05:53:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "small_multiplication_decimal"
More Information needed | [
"# Dataset Card for \"small_multiplication_decimal\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"small_multiplication_decimal\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"small_multiplication_decimal\"\n\nMore Information needed"
]
|