sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
bcb4779d231895fe7a2b77478f2a3846923120d9
|
may-ohta/kftt
|
[
"license:cc-by-sa-3.0",
"region:us"
] |
2023-04-15T05:56:38+00:00
|
{"license": "cc-by-sa-3.0", "dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ja"]}}}], "splits": [{"name": "train", "num_bytes": 111632845, "num_examples": 440289}, {"name": "validation", "num_bytes": 250047, "num_examples": 1167}, {"name": "test", "num_bytes": 264052, "num_examples": 1161}, {"name": "tune", "num_bytes": 311095, "num_examples": 1236}], "download_size": 65992694, "dataset_size": 112458039}}
|
2023-04-15T06:03:28+00:00
|
|
70dc0221ac29200793f584d1722d5108125666d5
|
may-ohta/jparacrawl
|
[
"license:other",
"region:us"
] |
2023-04-15T06:07:27+00:00
|
{"license": "other"}
|
2024-01-30T18:30:36+00:00
|
|
e18bb13c2af3579f52452a8424b5e7b868d0ef9d
|
may-ohta/MUST-C
|
[
"license:other",
"region:us"
] |
2023-04-15T06:17:39+00:00
|
{"license": "other"}
|
2023-04-15T06:18:05+00:00
|
|
f4a395a8ca7e472769bc8f685d43e4884e84a446
|
A dataset created from the Gradio documentation page. The two main groups are components and guides, where components have all the information about what they do and their parameters.
This is a first attempt at generating a dataset for Alpaca training; feedback is welcome, and improvements will be made.
|
ChobPT/gradio_docs_alpaca
|
[
"region:us"
] |
2023-04-15T06:32:42+00:00
|
{}
|
2023-04-16T09:48:58+00:00
|
083b5fec913afd82072bc2683b7639fac8267b81
|
# Alpaca GPT4 English-to-Italian Translated Instructions (WIP)
This dataset contains **15209** instructions that have been translated from English to Italian using `gpt-3.5-turbo`.
Alpaca GPT4: the original **alpaca_gpt4_data.json** dataset contains 52K instruction-following examples generated by GPT-4 with prompts from Alpaca. The JSON file has the same format as the Alpaca data, except that the output is generated by GPT-4:
- instruction: str, describes the task the model should perform. Each of the 52K instructions is unique.
- input: str, optional context or input for the task.
- output: str, the answer to the instruction as generated by GPT-4.
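For orientation, a minimal sketch of loading the translated dataset and reading one record; the split name `train` is an assumption, and the column names are taken from the field list above:
```python
from datasets import load_dataset

# Load the Italian-translated Alpaca GPT-4 instructions (split name assumed to be "train").
dataset = load_dataset("efederici/alpaca-gpt4-it", split="train")

example = dataset[0]
print(example["instruction"])
print(example["input"])   # may be empty when the task needs no extra context
print(example["output"])
```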
## License
Please note that the original Alpaca GPT4 dataset and the translations generated by `gpt-3.5-turbo` are subject to their respective licenses, and it is important to comply with any usage restrictions specified by the original data sources. As this dataset contains partially translated data, proper attribution and compliance with the relevant licenses are recommended.
The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
## Citation
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
|
efederici/alpaca-gpt4-it
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:it",
"license:cc-by-nc-2.0",
"alpaca",
"gpt4",
"it",
"region:us"
] |
2023-04-15T07:34:39+00:00
|
{"language": ["it"], "license": "cc-by-nc-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "tags": ["alpaca", "gpt4", "it"]}
|
2023-11-20T13:41:27+00:00
|
1c5b670891f61c01e3dc074a2ce9ad82c858fc0e
|
# Dataset Card for "imdb1m-top-100-users"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gregorgabrovsek/imdb1m-top-100-users
|
[
"region:us"
] |
2023-04-15T07:53:48+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 19355289, "num_examples": 34600}, {"name": "test", "num_bytes": 5322604, "num_examples": 11500}], "download_size": 16388095, "dataset_size": 24677893}}
|
2023-04-15T07:53:58+00:00
|
b824e6aa59e56e68f692614964fbefb514a7ceaa
|
# Dataset Card for "imdb1m-top-5-users"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gregorgabrovsek/imdb1m-top-5-users
|
[
"region:us"
] |
2023-04-15T07:54:31+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5212923, "num_examples": 7350}, {"name": "test", "num_bytes": 1850367, "num_examples": 2445}], "download_size": 4683175, "dataset_size": 7063290}}
|
2023-04-15T07:54:39+00:00
|
53eb02dd32bf12e0e397e4e72e807339cba0dfd7
|
# Dataset Card for GalicianSRL
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Limitations](#limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [GalicianSRL Project Hub](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
- **Point of Contact:** [Micaella Bruton](mailto:[email protected])
### Dataset Summary
The GalicianSRL dataset is a Galician-language dataset of tokenized sentences and the semantic role for each token within a sentence. Semantic roles are limited to verbal roots, argument 0, argument 1, and argument 2. This dataset was created to support the task of semantic role labeling in the Galician language, as no publicly available datasets existed as of the date of publication to the contributor's knowledge.
### Languages
The text in the dataset is in Galician.
## Dataset Structure
### Data Instances
A typical data point comprises a tokenized sentence, tags for each token, and a sentence id number. An example from the GalicianSRL dataset looks as follows:
```
{'tokens': ['O', 'Pleno', 'poderá', ',', 'con', 'todo', ',', 'avocar', 'en', 'calquera', 'momento', 'o', 'debate', 'e', 'votación', 'de', 'calquera', 'proxecto', 'ou', 'proposición', 'de', 'lei', 'que', 'xa', 'fora', 'obxecto', 'de', 'esta', 'delegación', '.'],
'tags': [0, 1, 4, 0, 0, 0, 0, 17, 0, 0, 16, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'ids': 504}
```
Each tag is assigned an id number corresponding to the index of its label as listed in:
```python
>>> dataset['train'].features['tags'].feature.names
```
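For context, a minimal sketch of obtaining that `dataset` object with the `datasets` library and mapping tag ids back to their string labels:
```python
from datasets import load_dataset

# Load the GalicianSRL dataset; it exposes "train" and "test" splits.
dataset = load_dataset("mbruton/galician_srl")

# Recover the string label for every integer tag in one example.
label_names = dataset["train"].features["tags"].feature.names
example = dataset["train"][0]
print([label_names[tag] for tag in example["tags"]])
```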
### Data Fields
- `tokens`: a list of strings
- `tags`: a list of integers
- `ids`: a sentence id, as an integer
### Data Splits
The data is split into a training and test set. The final structure and split sizes are as follows:
```
DatasetDict({
train: Dataset({
features: ['tokens', 'tags', 'ids'],
num_rows: 1005
})
test: Dataset({
features: ['tokens', 'tags', 'ids'],
num_rows: 252
})
})
```
## Dataset Creation
### Curation Rationale
GalicianSRL was built to provide a dataset for semantic role labeling in Galician and expand NLP resources available for the Galician language.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from both the [CTG UD annotated corpus](https://github.com/UniversalDependencies/UD_Galician-CTG) and the [TreeGal UD annotated corpus](https://github.com/UniversalDependencies/UD_Galician-TreeGal), and combined to collect the requisite information for this task. For more information, please refer to the publication listed in the citation.
## Considerations for Using the Data
### Limitations
The purpose of this dataset is to help develop a working semantic role labeling system for Galician, as SRL systems have been shown to improve a variety of NLP tasks. It should be noted, however, that Galician is considered a low-resource language at this time, and as such the dataset has an extremely limited scope. This dataset would benefit from manual validation by a native speaker of Galician, the inclusion of additional sentences, and an extension of arguments past arg0, arg1, and arg2.
## Additional Information
### Dataset Curators
The dataset was created by Micaella Bruton, as part of her Master's thesis.
### Citation Information
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/galician_srl
|
[
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:gl",
"license:apache-2.0",
"region:us"
] |
2023-04-15T07:57:20+00:00
|
{"language": ["gl"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "pretty_name": "GalicianSRL", "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "r0:arg0", "2": "r0:arg1", "3": "r0:arg2", "4": "r0:root", "5": "r10:arg0", "6": "r10:arg1", "7": "r10:root", "8": "r11:arg0", "9": "r11:root", "10": "r12:arg1", "11": "r12:root", "12": "r13:arg1", "13": "r13:root", "14": "r1:arg0", "15": "r1:arg1", "16": "r1:arg2", "17": "r1:root", "18": "r2:arg0", "19": "r2:arg1", "20": "r2:arg2", "21": "r2:root", "22": "r3:arg0", "23": "r3:arg1", "24": "r3:arg2", "25": "r3:root", "26": "r4:arg0", "27": "r4:arg1", "28": "r4:arg2", "29": "r4:root", "30": "r5:arg0", "31": "r5:arg1", "32": "r5:arg2", "33": "r5:root", "34": "r6:arg0", "35": "r6:arg1", "36": "r6:arg2", "37": "r6:root", "38": "r7:arg0", "39": "r7:arg1", "40": "r7:arg2", "41": "r7:root", "42": "r8:arg0", "43": "r8:arg1", "44": "r8:arg2", "45": "r8:root", "46": "r9:arg0", "47": "r9:arg1", "48": "r9:arg2", "49": "r9:root"}}}}, {"name": "ids", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2241310, "num_examples": 3986}, {"name": "test", "num_bytes": 555760, "num_examples": 997}], "download_size": 675236, "dataset_size": 2797070}}
|
2024-01-03T14:08:08+00:00
|
47e7903e9c8450f6ae33a894f65983fbea18caaa
|
jiaoyang623/ddpm-butterflies-128
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-15T08:13:13+00:00
|
{"license": "apache-2.0"}
|
2023-04-15T08:13:13+00:00
|
|
25754d86e2a265c7fddeeb5881538b9558ad6e8e
|
andidu/paraphrase-ru-reviews
|
[
"size_categories:100K<n<1M",
"language:ru",
"region:us"
] |
2023-04-15T08:51:53+00:00
|
{"language": ["ru"], "size_categories": ["100K<n<1M"], "pretty_name": "andidu/paraphrase-ru-reviews"}
|
2023-04-15T08:57:58+00:00
|
|
82f5ff393a8752dfe953adb7dfbe7bd783f09eea
|
# Dataset Card for "chunk_259"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_259
|
[
"region:us"
] |
2023-04-15T08:59:17+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21677745456.875, "num_examples": 225697}], "download_size": 19218840913, "dataset_size": 21677745456.875}}
|
2023-04-15T09:10:02+00:00
|
13e836e42f39c9abddc6083cf19d2a4eb45c08e2
|
andidu/paraphrase-ru-it
|
[
"size_categories:100K<n<1M",
"language:ru",
"region:us"
] |
2023-04-15T08:59:29+00:00
|
{"language": ["ru"], "size_categories": ["100K<n<1M"], "pretty_name": "andidu/paraphrase-ru-it"}
|
2023-04-15T09:01:34+00:00
|
|
74bea84459bea53d3ffc4b77971f270293077986
|
harshiv/placement
|
[
"license:unknown",
"region:us"
] |
2023-04-15T09:18:03+00:00
|
{"license": "unknown"}
|
2023-04-15T09:18:45+00:00
|
|
08146cd7591ff9e589383dd85d567a3256583f39
|
mr-oogway/kccdata
|
[
"region:us"
] |
2023-04-15T09:38:57+00:00
|
{}
|
2023-04-15T09:40:46+00:00
|
|
6c07a79a8e36ae2abfb092f1a8f71c8d2fc5db01
|
# Dataset Card for llm-book/ner-wikipedia-dataset
This is the "Japanese named entity recognition dataset built from Wikipedia" (Version 2.0), created by Stockmark Inc. and used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models).
It uses the dataset published in the GitHub repository [stockmarkteam/ner-wikipedia-dataset](https://github.com/stockmarkteam/ner-wikipedia-dataset).
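For reference, a minimal sketch of loading this dataset with the `datasets` library (the split name `train` is an assumption):
```python
from datasets import load_dataset

# Load the Wikipedia-based Japanese NER dataset (split name assumed to be "train").
dataset = load_dataset("llm-book/ner-wikipedia-dataset", split="train")
print(dataset[0])
```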
### Citation
```bibtex
@inproceedings{omi-2021-wikipedia,
title = "Wikipediaを用いた日本語の固有表現抽出のデータセットの構築",
author = "近江 崇宏",
booktitle = "言語処理学会第27回年次大会",
year = "2021",
url = "https://anlp.jp/proceedings/annual_meeting/2021/pdf_dir/P2-7.pdf",
}
```
### Licence
The dataset follows the same CC-BY-SA 3.0 license as the Japanese edition of Wikipedia.
|
llm-book/ner-wikipedia-dataset
|
[
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:ja",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-04-15T09:43:21+00:00
|
{"language": ["ja"], "license": ["cc-by-sa-3.0"], "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"]}
|
2023-12-12T02:25:51+00:00
|
1dbdffc5fa9fd2118d18961306ff207c34410f72
|
Do0rMaMu/bookcorpus
|
[
"license:openrail",
"region:us"
] |
2023-04-15T09:48:53+00:00
|
{"license": "openrail"}
|
2023-04-15T09:48:53+00:00
|
|
1c451a2a41c234d4544bbf5884508b4e185c15dd
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
pheepa/jira-commentaries-mlm
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"jira",
"region:us"
] |
2023-04-15T10:03:54+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "jira-comments", "tags": ["jira"]}
|
2023-04-15T13:15:21+00:00
|
205ad29fe29b81dc41f398425bd84aada687e52c
|
QEEWDFHGFH/picc
|
[
"license:other",
"region:us"
] |
2023-04-15T10:27:57+00:00
|
{"license": "other"}
|
2023-04-15T19:25:19+00:00
|
|
feb11e553a8e0ffe3886ba0e9d81c5e54aae1358
|
vjain/Personality_em
|
[
"license:openrail",
"region:us"
] |
2023-04-15T10:40:19+00:00
|
{"license": "openrail"}
|
2023-04-15T10:40:51+00:00
|
|
7038e39b3fbf81ee4d1e0625d6785445ceae3d30
|
# Dataset Card for "chunk_252"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_252
|
[
"region:us"
] |
2023-04-15T10:44:01+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 16779297456.875, "num_examples": 174697}], "download_size": 14937788290, "dataset_size": 16779297456.875}}
|
2023-04-15T10:57:02+00:00
|
02761ad672884b284162fbe2eb2cb69cb1fa223f
|
# Dataset Card for "chunk_264"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_264
|
[
"region:us"
] |
2023-04-15T11:14:01+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 19192119264.75, "num_examples": 199818}], "download_size": 16929016847, "dataset_size": 19192119264.75}}
|
2023-04-15T11:31:09+00:00
|
19cdf6dacb0647058528e65904df8d7715d41453
|
# Dataset Card for "pie-perf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rootacess/pie-perf
|
[
"region:us"
] |
2023-04-15T11:44:49+00:00
|
{"dataset_info": {"features": [{"name": "user_id", "dtype": "string"}, {"name": "problem_id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "submission_id_v0", "dtype": "string"}, {"name": "submission_id_v1", "dtype": "string"}, {"name": "cpu_time_v0", "dtype": "int64"}, {"name": "cpu_time_v1", "dtype": "int64"}, {"name": "memory_v0", "dtype": "int64"}, {"name": "memory_v1", "dtype": "int64"}, {"name": "status_v0", "dtype": "string"}, {"name": "status_v1", "dtype": "string"}, {"name": "improvement_frac", "dtype": "float64"}, {"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "code_v0_loc", "dtype": "int64"}, {"name": "code_v1_loc", "dtype": "int64"}, {"name": "code_v0_num_chars", "dtype": "int64"}, {"name": "code_v1_num_chars", "dtype": "int64"}, {"name": "code_v0_no_empty_lines", "dtype": "string"}, {"name": "code_v1_no_empty_lines", "dtype": "string"}, {"name": "code_same", "dtype": "bool"}, {"name": "relative_loc_diff_percent", "dtype": "float64"}, {"name": "diff", "sequence": "string"}, {"name": "diff_only_import_comment", "dtype": "bool"}, {"name": "measured_runtime_v0", "dtype": "float64"}, {"name": "measured_runtime_v1", "dtype": "float64"}, {"name": "runtime_lift", "dtype": "float64"}, {"name": "key", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 110329743, "num_examples": 36857}, {"name": "val", "num_bytes": 5942994, "num_examples": 1940}, {"name": "test", "num_bytes": 2714513, "num_examples": 1000}, {"name": "codegen_1shot_test", "num_bytes": 3003513, "num_examples": 1000}], "download_size": 56295756, "dataset_size": 121990763}}
|
2023-04-23T04:24:34+00:00
|
e5e6d2bbba8276b677b8f3780255ddfafa5d6c7d
|
# Dataset Card for "chunk_250"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_250
|
[
"region:us"
] |
2023-04-15T11:47:56+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 16532262000.375, "num_examples": 172125}], "download_size": 15176658045, "dataset_size": 16532262000.375}}
|
2023-04-15T12:01:42+00:00
|
22cd4ebacad9254f704234c0a3ca312a5bc80a85
|
# Dataset Card for "chunk_265"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_265
|
[
"region:us"
] |
2023-04-15T12:08:16+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 17890860960.25, "num_examples": 186270}], "download_size": 15764522810, "dataset_size": 17890860960.25}}
|
2023-04-15T12:22:40+00:00
|
a33cc7173ca7e647f061aa82ac0b8019c4b318a7
|
# Dataset Card for "chunk_266"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_266
|
[
"region:us"
] |
2023-04-15T12:09:36+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 17309386368.0, "num_examples": 180216}], "download_size": 15326439467, "dataset_size": 17309386368.0}}
|
2023-04-15T12:23:26+00:00
|
99863fcf6269ce9c3814019c84e99c10cd584740
|
# Dataset Card for "chunk_267"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_267
|
[
"region:us"
] |
2023-04-15T12:12:37+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 18997814160.625, "num_examples": 197795}], "download_size": 16943423935, "dataset_size": 18997814160.625}}
|
2023-04-15T12:27:59+00:00
|
30ac33fc01c78aed1277ddc5ae14c599d07fc80e
|
# Dataset Card for "chunk_270"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_270
|
[
"region:us"
] |
2023-04-15T12:13:04+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 15402737520.375, "num_examples": 160365}], "download_size": 12814543642, "dataset_size": 15402737520.375}}
|
2023-04-15T12:20:24+00:00
|
ced1ad97560534d18c36d86f4f0c0f787d4e07d1
|
# Dataset Card for "chunk_268"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_268
|
[
"region:us"
] |
2023-04-15T12:14:40+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20271698784.75, "num_examples": 211058}], "download_size": 18222530794, "dataset_size": 20271698784.75}}
|
2023-04-15T12:31:06+00:00
|
998a0470653831a4134adf2764b215ef41e6d86e
|
# Dataset Card for "chunk_269"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_269
|
[
"region:us"
] |
2023-04-15T12:19:25+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21010211856.625, "num_examples": 218747}], "download_size": 19104304478, "dataset_size": 21010211856.625}}
|
2023-04-15T12:37:02+00:00
|
d9e9a08f352843aa4bee542e408e4067893ead85
|
## 1. Introduction
Introducing a novel accent database, "IndicAccentDB", which satisfies the following requirements:
* **Gender balance:** The speech database should include a wide range of speakers, balanced between male and female, to capture the characteristics of each speaker's speech.
* **Phonetically balanced uniform content:** To simplify the classification task and help models distinguish speakers, we built IndicAccentDB with uniform content: a collection of speech recordings of the Harvard sentences. These sentences combine different phonemes with grammatically focused vocabulary and express accents well in sentence-level discourse. The Harvard sentences (sample shown below), recited by the speakers in the recordings, are available here: [Harvard Sentences](https://www.cs.columbia.edu/~hgs/audio/harvard.html).
*The juice of lemons makes fine punch.*
*The fish twisted and turned on the bent hook.*
* IndicAccentDB contains speech recordings in six non-native English accents: Gujarati, Hindi, Kannada, Malayalam, Tamil, and Telugu. The recordings were collected from volunteers who had strong non-native English accents and were fluent in at least one Indian language. Each speaker was asked to recite the Harvard sentences, which comprise 72 sets of ten phonetically balanced sentences that are neither too short nor too long.
## 2. Dataset Usage
To use the dataset in your Python program, refer to the following script:
```python3
from datasets import load_dataset
accent_db = load_dataset("DarshanaS/IndicAccentDb")
```
## 3. Publications
1. [S. Darshana, H. Theivaprakasham, G. Jyothish Lal, B. Premjith, V. Sowmya and K. Soman, "MARS: A Hybrid Deep CNN-based Multi-Accent Recognition System for English Language," 2022 First International Conference on Artificial Intelligence Trends and Pattern Recognition (ICAITPR), Hyderabad, India, 2022, pp. 1-6, doi: 10.1109/ICAITPR51569.2022.9844177.](https://ieeexplore.ieee.org/document/9844177)
|
DarshanaS/IndicAccentDb
|
[
"license:c-uda",
"region:us"
] |
2023-04-15T12:32:13+00:00
|
{"license": "c-uda"}
|
2023-04-30T08:53:41+00:00
|
999cd36bf190c41b1a4195ddbe8b46fc941e48e1
|
# Dataset Card for "lfw-face-transformer-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sammyboi1801/lfw-face-transformer-dataset
|
[
"region:us"
] |
2023-04-15T13:13:51+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Abdullah_Gul", "1": "Adrien_Brody", "2": "Alejandro_Toledo", "3": "Alvaro_Uribe", "4": "Amelie_Mauresmo", "5": "Andre_Agassi", "6": "Andy_Roddick", "7": "Angelina_Jolie", "8": "Ann_Veneman", "9": "Anna_Kournikova", "10": "Ari_Fleischer", "11": "Ariel_Sharon", "12": "Arnold_Schwarzenegger", "13": "Atal_Bihari_Vajpayee", "14": "Bill_Clinton", "15": "Bill_Gates", "16": "Bill_Simon", "17": "Britney_Spears", "18": "Carlos_Menem", "19": "Carlos_Moya", "20": "Catherine_Zeta-Jones", "21": "Charles_Moose", "22": "Colin_Powell", "23": "Condoleezza_Rice", "24": "David_Beckham", "25": "David_Nalbandian", "26": "Dick_Cheney", "27": "Dominique_de_Villepin", "28": "Donald_Rumsfeld", "29": "Edmund_Stoiber", "30": "Eduardo_Duhalde", "31": "Fidel_Castro", "32": "George_HW_Bush", "33": "George_Robertson", "34": "George_W_Bush", "35": "Gerhard_Schroeder", "36": "Gloria_Macapagal_Arroyo", "37": "Gonzalo_Sanchez_de_Lozada", "38": "Gordon_Brown", "39": "Gray_Davis", "40": "Guillermo_Coria", "41": "Halle_Berry", "42": "Hamid_Karzai", "43": "Hans_Blix", "44": "Harrison_Ford", "45": "Hillary_Clinton", "46": "Howard_Dean", "47": "Hu_Jintao", "48": "Hugo_Chavez", "49": "Igor_Ivanov", "50": "Jack_Straw", "51": "Jackie_Chan", "52": "Jacques_Chirac", "53": "James_Blake", "54": "James_Kelly", "55": "Jean_Charest", "56": "Jean_Chretien", "57": "Jeb_Bush", "58": "Jennifer_Aniston", "59": "Jennifer_Capriati", "60": "Jennifer_Garner", "61": "Jennifer_Lopez", "62": "Jeremy_Greenstock", "63": "Jiang_Zemin", "64": "Jiri_Novak", "65": "Joe_Lieberman", "66": "John_Allen_Muhammad", "67": "John_Ashcroft", "68": "John_Bolton", "69": "John_Howard", "70": "John_Kerry", "71": "John_Negroponte", "72": "John_Paul_II", "73": "John_Snow", "74": "Joschka_Fischer", "75": "Jose_Maria_Aznar", "76": "Juan_Carlos_Ferrero", "77": "Julianne_Moore", "78": "Julie_Gerberding", "79": "Junichiro_Koizumi", "80": "Keanu_Reeves", "81": "Kim_Clijsters", "82": "Kim_Ryong-sung", "83": "Kofi_Annan", "84": "Lance_Armstrong", "85": "Laura_Bush", "86": "Lindsay_Davenport", "87": "Lleyton_Hewitt", "88": "Lucio_Gutierrez", "89": "Luiz_Inacio_Lula_da_Silva", "90": "Mahathir_Mohamad", "91": "Mahmoud_Abbas", "92": "Mark_Philippoussis", "93": "Megawati_Sukarnoputri", "94": "Meryl_Streep", "95": "Michael_Bloomberg", "96": "Michael_Jackson", "97": "Michael_Schumacher", "98": "Mike_Weir", "99": "Mohammed_Al-Douri", "100": "Nancy_Pelosi", "101": "Naomi_Watts", "102": "Nestor_Kirchner", "103": "Nicanor_Duarte_Frutos", "104": "Nicole_Kidman", "105": "Norah_Jones", "106": "Paul_Bremer", "107": "Paul_Burrell", "108": "Pervez_Musharraf", "109": "Pete_Sampras", "110": "Pierce_Brosnan", "111": "Queen_Elizabeth_II", "112": "Recep_Tayyip_Erdogan", "113": "Renee_Zellweger", "114": "Ricardo_Lagos", "115": "Richard_Gephardt", "116": "Richard_Myers", "117": "Roger_Federer", "118": "Roh_Moo-hyun", "119": "Rubens_Barrichello", "120": "Rudolph_Giuliani", "121": "Saddam_Hussein", "122": "Salma_Hayek", "123": "Serena_Williams", "124": "Sergey_Lavrov", "125": "Sergio_Vieira_De_Mello", "126": "Silvio_Berlusconi", "127": "Spencer_Abraham", "128": "Taha_Yassin_Ramadan", "129": "Tang_Jiaxuan", "130": "Tiger_Woods", "131": "Tim_Henman", "132": "Tom_Daschle", "133": "Tom_Ridge", "134": "Tommy_Franks", "135": "Tony_Blair", "136": "Trent_Lott", "137": "Venus_Williams", "138": "Vicente_Fox", "139": "Vladimir_Putin", "140": "Wen_Jiabao", "141": "Winona_Ryder", "142": 
"Yoriko_Kawaguchi"}}}}], "splits": [{"name": "train", "num_bytes": 33550885.462, "num_examples": 3846}, {"name": "test", "num_bytes": 2362162.0, "num_examples": 271}], "download_size": 35786453, "dataset_size": 35913047.462}}
|
2023-04-15T13:13:56+00:00
|
409e3c6a0c2c02609c98abf213a27f244d9bfd60
|
luck4ck/pre_hospitial_care
|
[
"license:openrail",
"region:us"
] |
2023-04-15T13:14:32+00:00
|
{"license": "openrail"}
|
2023-04-15T13:53:00+00:00
|
|
6edd9e626a7e569a9a762bbf633d56dc42216e14
|
yuwenlwl/longke
|
[
"license:mit",
"region:us"
] |
2023-04-15T13:18:39+00:00
|
{"license": "mit"}
|
2023-04-15T13:18:39+00:00
|
|
22b4c866b977584670119b0d09eda22f395b6179
|
# Dataset Card for "VQAv2_sample_validation_google_flan_t5_xxl_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_google_flan_t5_xxl_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_500
|
[
"region:us"
] |
2023-04-15T13:22:33+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 1246308, "num_examples": 500}], "download_size": 289167, "dataset_size": 1246308}}
|
2023-04-15T13:22:35+00:00
|
c4e883f2fb7fb06e47ae20fb239beaa155eec790
|
liyucheng/zhihu_26k
|
[
"license:cc-by-2.0",
"region:us"
] |
2023-04-15T13:26:58+00:00
|
{"license": "cc-by-2.0"}
|
2023-04-15T19:41:37+00:00
|
|
e528f155dc4bcbe4c7790674cc7573529c78a65b
|
# Dataset Card for "VQAv2_sample_validation_google_flan_t5_xl_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_google_flan_t5_xl_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_1000
|
[
"region:us"
] |
2023-04-15T13:50:20+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 8096797, "num_examples": 1000}], "download_size": 1594680, "dataset_size": 8096797}}
|
2023-04-15T13:50:22+00:00
|
e18b50a9bf8418de900efefd37f7e82f5374e606
|
***Masumi Kotsu from Yu-Gi-Oh! ARC-V***
- *Trained with Anime (final-full pruned) model.*
- *4 versions; 6 epochs, 8 epochs, 9 epochs, 10 epochs (Feel free to combine these for different and interesting results.)*
- *Expect good results with 0.5 - 0.7 weights (through txt2img) and 0.85 - 0.95 weights (through img2img); you can also try ALL, MIDD, OUTD, OUTALL.*
|
Cheetor1996/masumi_kotsu_LoRA
|
[
"language:en",
"license:cc-by-2.0",
"art",
"region:us"
] |
2023-04-15T13:52:18+00:00
|
{"language": ["en"], "license": "cc-by-2.0", "pretty_name": "Masumi Kotsu (Yu-Gi-Oh! ARC-V)", "tags": ["art"]}
|
2023-04-21T23:16:00+00:00
|
d3a0415604827f925f34cf5b0b86b8f6b6d201a7
|
# Dataset Card for "miniwob_plusplus_hierarchical_planning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwob_plusplus_hierarchical_planning
|
[
"region:us"
] |
2023-04-15T14:07:21+00:00
|
{"dataset_info": {"features": [{"name": "hierarchical_plans", "dtype": "string"}, {"name": "standolone_instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1763813, "num_examples": 10960}], "download_size": 552684, "dataset_size": 1763813}}
|
2023-04-15T14:07:23+00:00
|
3f760dff22117f9f92f203d6d8d8a5b90461f50c
|
# Dataset Card for "miniwob_plusplus_hierarchical_training_actions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwob_plusplus_hierarchical_training_actions
|
[
"region:us"
] |
2023-04-15T14:09:39+00:00
|
{"dataset_info": {"features": [{"name": "history_episodes", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "actions", "dtype": "string"}, {"name": "refs", "dtype": "int64"}, {"name": "keydown_text", "dtype": "string"}, {"name": "subtask_completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78295377, "num_examples": 42097}], "download_size": 11012527, "dataset_size": 78295377}}
|
2023-04-15T14:09:42+00:00
|
39a5c32f5d99eaab31c6f75926954754b02c64c5
|
Bielo/jorge_quora
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-04-15T14:12:11+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-04-15T14:12:11+00:00
|
|
907b3f0e57aa65844664c47b5b700ef5b1ac42b3
|
wybxc/open-yiri-eng
|
[
"license:odc-by",
"region:us"
] |
2023-04-15T14:39:05+00:00
|
{"license": "odc-by"}
|
2023-04-15T14:39:25+00:00
|
|
f6c443cd8e7499d0a811f7ef3d6ed19f9e809916
|
# ShareGPT-Chinese-English-90k Bilingual Human-Machine QA Dataset
A high-quality Chinese-English parallel bilingual human-machine QA dataset covering user questions in real, complex scenarios. It is intended for training high-quality dialogue models (more robust in instruction distribution than datasets generated by repeatedly calling API interfaces to simulate machine-generated Q&A, such as Moss).
Features:
- 1. Provides a fully semantically equivalent Chinese-English parallel corpus, facilitating bilingual dialogue model training.
- 2. All questions are genuine inquiries from users, not fabricated by artificial imagination or API polling (as with Moss), so they align more closely with the real distribution of user scenarios and the way users actually phrase their questions.
- 3. The ShareGPT data is collected through voluntary sharing by netizens, which acts as a natural filter (via human perception) that screens out most dialogues offering a poor experience.
It is recommended to use the Firefly framework for quick, out-of-the-box loading of this data format: https://github.com/yangjianxin1/Firefly
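Alternatively, a minimal sketch of loading the data directly with the `datasets` library; the default config reads `sharegpt_jsonl/*.jsonl` per the repository configuration, and the record fields are not documented here, so this only prints one raw example:
```python
from datasets import load_dataset

# Load the bilingual ShareGPT QA data from the default jsonl config.
dataset = load_dataset("shareAI/ShareGPT-Chinese-English-90k", split="train")

# Inspect one raw record to see the conversation structure.
print(dataset[0])
```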
Note: This dataset was collected at a time before ChatGPT showed signs of significant cognitive decline. (It is speculated that this may be partly because the official replaced the 150B gpt3.5 with a distilled version of about 10B to reduce expenses, and partly because the introduction of more refusal responses led to a degradation in the model's ability to connect knowledge and logic.)
An excellent dialogue LLM cannot be trained without a high-quality multi-turn dialogue dataset. If you would like to become a volunteer,
you are welcome to join the dataset QQ group: 130920969, to exchange, collect, and contribute to building high-quality datasets.
# ShareGPT-Chinese-English-90k 中英文双语人机问答数据集
中英文平行双语优质人机问答数据集,覆盖真实复杂场景下的用户提问。用于训练高质量的对话模型 (比那些通过反复调用api接口生成机器模拟问答的数据在指令分布上更鲁棒)
特点:
- 1.同时提供意义表达完全相同的中英文平行对照语料,可进行双语对话模型训练。
- 2.所有问题均非人为臆想加上api轮询拟造的假数据(如Moss),更加符合真实用户场景的指令分布和提问表达。
- 3.sharegpt数据集是由网友自发分享而收集到的,相当于有一层非常天然的过滤(通过人类感觉),筛除了大部分体验不好的对话。
推荐使用firefly框架,可以快速开箱即用使用该数据格式的加载: https://github.com/yangjianxin1/Firefly
补充:该数据收集于chatGPT还未表现出明显智力退化的时间点。(猜测一方面可能是官方为了减小开支把150B的gpt3.5替换成10b左右的蒸馏版本了,另一方面可能是由于引入了更多的拒绝答复导致模型连接知识逻辑的程度退化)
优秀对话llm的训练离不开高质量的多轮对话数据集,如果你也想成为志愿者
欢迎加入数据集QQ群:130920969,共同进行优质数据集的交流、收集和建设工作
|
shareAI/ShareGPT-Chinese-English-90k
|
[
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:apache-2.0",
"code",
"region:us"
] |
2023-04-15T15:23:35+00:00
|
{"language": ["en", "zh"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-generation"], "configs": [{"config_name": "default", "data_files": "sharegpt_jsonl/*.jsonl"}], "tags": ["code"]}
|
2024-01-29T12:00:38+00:00
|
ca1708f89ed41779f191c9051269f4750d4e6e90
|
KaraKaraWitch/SparklingDaydream
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-15T15:42:36+00:00
|
{"license": "apache-2.0"}
|
2023-04-15T15:43:02+00:00
|
|
f118c8005b43bb239009c2fb41a9db9a696d7d50
|
This is an asset dataset for the BigCode deduplication blog post.
|
chenghao/dedup_blog_assets
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-04-15T15:44:06+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-04-15T16:09:09+00:00
|
20c42e66003a492e48e155378a3edfcb4a572daa
|
# Dataset Card for "chunk_271"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_271
|
[
"region:us"
] |
2023-04-15T15:45:32+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 17385648480.75, "num_examples": 181010}], "download_size": 15043859682, "dataset_size": 17385648480.75}}
|
2023-04-15T15:58:58+00:00
|
df1ff84f8069e47e05896cae227f944fd8468d24
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Jandodev/file_test_1
|
[
"region:us"
] |
2023-04-15T15:53:02+00:00
|
{}
|
2023-04-15T16:17:40+00:00
|
97c801e016265144750cb6cae8e4a31156ff43a1
|
liyucheng/zhihu_rlhf_3k
|
[
"license:cc-by-2.0",
"region:us"
] |
2023-04-15T16:03:54+00:00
|
{"license": "cc-by-2.0"}
|
2023-04-15T16:06:05+00:00
|
|
47cff24e5651102a725e0a5be605cd18b916a13a
|
# h2oGPT Data Card
## Summary
H2O.ai's `h2ogpt-oig-instruct-cleaned-v3` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `302276`
- Number of columns: `2`
- Column names: `['input', 'source']`
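For a quick check, a minimal sketch of loading the dataset and confirming the shape described above (the split name `train` is an assumption):
```python
from datasets import load_dataset

# Load the cleaned OIG instruct dataset (split name assumed to be "train").
dataset = load_dataset("h2oai/h2ogpt-oig-instruct-cleaned-v3", split="train")

print(dataset.column_names)  # expected: ['input', 'source']
print(dataset.num_rows)      # expected: 302276
```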
## Source
- [Original LAION OIG Dataset](https://github.com/LAION-AI/Open-Instruction-Generalist)
- [LAION OIG data detoxed and filtered down by scripts in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/bfc3778c8db938761ce2093351bf2bf82159291e/create_data.py)
|
h2oai/h2ogpt-oig-instruct-cleaned-v3
|
[
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] |
2023-04-15T16:11:24+00:00
|
{"language": ["en"], "license": "apache-2.0", "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico", "tags": ["gpt", "llm", "large language model", "open-source"]}
|
2023-04-19T03:43:05+00:00
|
02f3030a8070041e6a8124ddd23b951210a7a03e
|
arnavmahapatra/fruit-detection-dataset
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-15T16:25:29+00:00
|
{"license": "cc-by-4.0"}
|
2023-04-15T16:29:40+00:00
|
|
911d4fa077ddd59e6d7fa750fc558e5251e0a8e6
|
AyoubChLin/northwind_purchase_requisitions
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-15T16:29:22+00:00
|
{"license": "apache-2.0"}
|
2023-04-15T16:30:27+00:00
|
|
4cad56d8f1406f2ba4ae18522399ea2228005225
|
NOABOL35631y/T
|
[
"task_categories:text-classification",
"size_categories:n>1T",
"language:fr",
"language:en",
"license:openrail",
"music",
"art",
"region:us"
] |
2023-04-15T16:37:43+00:00
|
{"language": ["fr", "en"], "license": "openrail", "size_categories": ["n>1T"], "task_categories": ["text-classification"], "tags": ["music", "art"]}
|
2023-04-15T16:44:11+00:00
|
|
93ce16d5fa4a41ca47f380654647bcdb64d60e72
|
# Dataset Card for "spam-detection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Deysi/spam-detection-dataset
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-04-15T16:39:24+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "spam", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3161821, "num_examples": 8175}, {"name": "test", "num_bytes": 1094757, "num_examples": 2725}], "download_size": 2578551, "dataset_size": 4256578}}
|
2023-04-15T16:42:24+00:00
|
2f71cecba09554e9bd8ccb1c8792af8dede7fed1
|
winglian/llm-gpt
|
[
"language:en",
"license:mit",
"region:us"
] |
2023-04-15T16:39:40+00:00
|
{"language": ["en"], "license": "mit", "task-categories": ["text-generation"]}
|
2023-04-15T16:54:23+00:00
|
|
a0bf67edd77b2334cd74461a354ec594c05bcf6c
|
# ChatGPT3.5 Noisy Translation Banjarese
Notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/translation/chatgpt3.5-nllb-bjn
|
mesolitica/chatgpt-noisy-translation-banjarese
|
[
"task_categories:translation",
"language:ms",
"region:us"
] |
2023-04-15T16:50:26+00:00
|
{"language": ["ms"], "task_categories": ["translation"]}
|
2023-12-17T04:11:02+00:00
|
79a22879df90711c17ee3d6b739b5d70bda92c95
|
# Dataset Card for "grayscale_image_aesthetic_3M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ioclab/grayscale_image_aesthetic_3M
|
[
"region:us"
] |
2023-04-15T17:01:40+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 223038217282.0, "num_examples": 3000000}], "download_size": 222413091423, "dataset_size": 223038217282.0}}
|
2023-04-16T07:12:17+00:00
|
295ce8d50a67d0e43dec067fcc69c4e0966136d1
|
* Based on [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
* Only Russian message trees, only main branches.
* Script: [get_oasst_ru.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/get_oasst_ru.py)
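For reference, a minimal sketch of loading the main-branch dialogues and printing the roles in the first conversation; the field names (`messages`, `role`, `content`) follow the dataset metadata on this page:
```python
from datasets import load_dataset

# Single "train" split with the Russian main-branch conversations.
dataset = load_dataset("IlyaGusev/oasst1_ru_main_branch", split="train")

example = dataset[0]
# Each dialogue is stored as parallel "role"/"content" lists.
for role, content in zip(example["messages"]["role"], example["messages"]["content"]):
    print(role, content[:80])
```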
|
IlyaGusev/oasst1_ru_main_branch
|
[
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ru",
"license:apache-2.0",
"region:us"
] |
2023-04-15T17:16:15+00:00
|
{"language": ["ru"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["conversational", "text-generation"], "dataset_info": {"features": [{"name": "messages", "sequence": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2040115, "num_examples": 614}], "download_size": 2105736, "dataset_size": 2040115}}
|
2023-09-15T19:58:01+00:00
|
bd52d622d5b9e29212103a68e76908cd91cf4599
|
# Dataset Card for "miniharem_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arubenruben/miniharem_validation
|
[
"region:us"
] |
2023-04-15T18:29:24+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PESSOA", "2": "I-PESSOA", "3": "B-ORGANIZACAO", "4": "I-ORGANIZACAO", "5": "B-LOCAL", "6": "I-LOCAL", "7": "B-TEMPO", "8": "I-TEMPO", "9": "B-VALOR", "10": "I-VALOR"}}}}], "splits": [{"name": "validation", "num_bytes": 1062698, "num_examples": 128}], "download_size": 224794, "dataset_size": 1062698}}
|
2023-04-15T18:29:27+00:00
|
88f79d16bd1bfa2705c1482be61c40662de00689
|
# AnimeHeadsv3 Object Detection Dataset
The AnimeHeadsv3 Object Detection Dataset is a collection of anime and art images, including manga pages, that have been annotated with object bounding boxes for use in object detection tasks.
## Contents
There are two versions of the dataset available:
- Dataset with augmentation: contains 8157 images.
- Dataset without augmentation: contains 2777 images.
Each version is split into training, validation, and testing sets. The images were collected from various sources and include a variety of anime and art styles, including manga. The annotations were created using the COCO format, with each annotation file containing the bounding box coordinates and label for each object in the corresponding image. The dataset has only one class named "head".
## Preprocessing
The dataset with augmentation has the following preprocessing parameters:
Resize: Fit within 640x640
The dataset without augmentation does not have any preprocessing applied.
## Augmentation Parameters
The following augmentation parameters were applied to the dataset with augmentation:
Outputs per training example: 3
Flip: Horizontal
Saturation: Between -40% and +40%
Blur: Up to 4px
Noise: Up to 4% of pixels
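As a usage hint, a minimal sketch of loading one configuration and reading the head bounding boxes of a single example; the config name and field layout follow the dataset metadata on this page:
```python
from datasets import load_dataset

# Load the configuration without augmentation (config name from the dataset metadata).
dataset = load_dataset("nyuuzyou/AnimeHeadsv3", "Without augmentation", split="train")

example = dataset[0]
# "objects" holds the COCO-style annotations: one bbox and one category per head.
print(example["width"], example["height"])
print(example["objects"]["bbox"])      # list of [x, y, width, height] boxes
print(example["objects"]["category"])  # every box belongs to the single class "head"
```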
|
nyuuzyou/AnimeHeadsv3
|
[
"task_categories:object-detection",
"license:wtfpl",
"region:us"
] |
2023-04-15T18:34:19+00:00
|
{"license": "wtfpl", "task_categories": ["object-detection"], "dataset_info": [{"config_name": "With augmentation", "features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2817954, "num_examples": 8037}, {"name": "validation", "num_bytes": 37647, "num_examples": 100}, {"name": "test", "num_bytes": 8425, "num_examples": 20}], "download_size": 590150250, "dataset_size": 2864026}, {"config_name": "Without augmentation", "features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 932413, "num_examples": 2659}, {"name": "validation", "num_bytes": 37647, "num_examples": 100}, {"name": "test", "num_bytes": 7393, "num_examples": 18}], "download_size": 512953012, "dataset_size": 977453}]}
|
2023-07-02T22:24:38+00:00
|
d18e88207ecd3130eba8e0c4efecb6932d292dda
|
**CTMatch Classification Dataset**
This is a combined set of 2 labelled datasets of:
`topic (patient descriptions), doc (clinical trials documents - selected fields), and label ({0, 1, 2})` triples, in jsonl format.
(Somewhat of a duplication of some of the `ir_dataset` also available on HF.)
These have been processed using ctproc, and in this state can be used by various tokenizers for fine-tuning (see ctmatch for examples).
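As an illustration, a minimal sketch of reading the triples, assuming each jsonl line carries the `topic`, `doc`, and `label` keys named above (the file name used here is hypothetical):
```python
import json

# Hypothetical file name; substitute an actual jsonl file from this repository.
path = "ctmatch_classification.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Each record is a (topic, doc, label) triple with label in {0, 1, 2}.
        print(record["label"], record["topic"][:80])
```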
These 2 datasets contain no patient-identifying information and are openly available in raw form:
#### TREC: http://www.trec-cds.org/2021.html
#### CSIRO: https://data.csiro.au/collection/csiro:17152
---
**see repo for more information**:
https://github.com/semajyllek/ctmatch
|
semaj83/ctmatch_classification
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"license:mit",
"medical",
"region:us"
] |
2023-04-15T18:51:57+00:00
|
{"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "tags": ["medical"]}
|
2023-05-10T10:05:13+00:00
|
1e44b44ddb4c27ab3130ed4b67aee65dba87158d
|
This data accompanies the WebUI project (https://dl.acm.org/doi/abs/10.1145/3544548.3581158)
For more information, check out the project website: https://uimodeling.github.io/
To download this dataset, you need to install the huggingface-hub package
```
pip install huggingface-hub
```
Then use `snapshot_download`:
```
from huggingface_hub import snapshot_download
snapshot_download(repo_id="biglab/webui-all", repo_type="dataset")
```
IMPORTANT
* Before downloading and using, please review the copyright info here: https://github.com/js0nwu/webui/blob/main/COPYRIGHT.txt
* Not all data samples have the same number of files (e.g., the same number of device screenshots) because the crawler used a timeout during collection
* The dataset released on HuggingFace was filtered using a list of explicit words and therefore contains fewer samples than the experiments originally used in the paper. The raw dataset is currently available (https://drive.google.com/drive/folders/1hcO75W2FjsZoibsj2TIbKz67hy9JkOBz?usp=share_link) but may be removed in the future.
|
biglab/webui-all
|
[
"license:other",
"region:us"
] |
2023-04-15T19:08:49+00:00
|
{"license": "other"}
|
2023-05-05T01:24:25+00:00
|
02e91be1b89b0c7e5120d173b7749db60eeefe1b
|
polae/tm-lectures
|
[
"license:mit",
"region:us"
] |
2023-04-15T19:26:44+00:00
|
{"license": "mit"}
|
2023-04-15T19:28:43+00:00
|
|
3bb545f798340d2bcb073229f4068baeb4d8a004
|
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joagonzalez/github-issues
|
[
"region:us"
] |
2023-04-15T20:20:43+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": 
"string"}, {"name": "creator", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "open_issues", "dtype": "int64"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "due_on", "dtype": "null"}, {"name": "closed_at", "dtype": "null"}]}, {"name": "comments", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}], "splits": [{"name": "train", "num_bytes": 3384186, "num_examples": 1000}], "download_size": 819771, "dataset_size": 3384186}}
|
2023-04-15T20:20:48+00:00
|
9a14415289dea0e603687b14d4581a81ecdfe9c9
|
Modified dataset from:
Caufield, J. Harry (2019): MACCROBAT. figshare. Dataset. https://doi.org/10.6084/m9.figshare.9764942.v2
Example training notebook: https://colab.research.google.com/drive/1OzCY782KJSF0FBDS0d1CoMhfp3-RtJMV?usp=sharing
Labels:
```
0: B-Activity
1: B-Administration
2: B-Age
3: B-Area
4: B-Biological_attribute
5: B-Biological_structure
6: B-Clinical_event
7: B-Color
8: B-Coreference
9: B-Date
10: B-Detailed_description
11: B-Diagnostic_procedure
12: B-Disease_disorder
13: B-Distance
14: B-Dosage
15: B-Duration
16: B-Family_history
17: B-Frequency
18: B-Height
19: B-History
20: B-Lab_value
21: B-Mass
22: B-Medication
23: B-Nonbiological_location
24: B-Occupation
25: B-Other_entity
26: B-Other_event
27: B-Outcome
28: B-Personal_background
29: B-Qualitative_concept
30: B-Quantitative_concept
31: B-Severity
32: B-Sex
33: B-Shape
34: B-Sign_symptom
35: B-Subject
36: B-Texture
37: B-Therapeutic_procedure
38: B-Time
39: B-Volume
40: B-Weight
41: I-Activity
42: I-Administration
43: I-Age
44: I-Area
45: I-Biological_attribute
46: I-Biological_structure
47: I-Clinical_event
48: I-Color
49: I-Coreference
50: I-Date
51: I-Detailed_description
52: I-Diagnostic_procedure
53: I-Disease_disorder
54: I-Distance
55: I-Dosage
56: I-Duration
57: I-Family_history
58: I-Frequency
59: I-Height
60: I-History
61: I-Lab_value
62: I-Mass
63: I-Medication
64: I-Nonbiological_location
65: I-Occupation
66: I-Other_entity
67: I-Other_event
68: I-Outcome
69: I-Personal_background
70: I-Qualitative_concept
71: I-Quantitative_concept
72: I-Severity
73: I-Shape
74: I-Sign_symptom
75: I-Subject
76: I-Texture
77: I-Therapeutic_procedure
78: I-Time
79: I-Volume
80: I-Weight
81: O
```
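A minimal sketch for loading this version of the data and checking how the labels above are encoded (the split and column names are not assumed here and should be inspected):
```python
from datasets import load_dataset

ds = load_dataset("ktgiahieu/maccrobat2018_2020")
print(ds)                      # available splits and column names
split = list(ds.keys())[0]
print(ds[split].features)      # how tokens and the NER tags listed above are represented
print(ds[split][0])            # one example record
```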
|
ktgiahieu/maccrobat2018_2020
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-15T20:27:11+00:00
|
{"license": "cc-by-4.0"}
|
2023-05-21T09:39:53+00:00
|
1ba1b04545ba2e1b446e7540a182f2d0d863279c
|
# Dataset Card for "imdb_misspelled_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sethapun/imdb_misspelled_5
|
[
"region:us"
] |
2023-04-15T20:35:34+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 33631096, "num_examples": 25000}, {"name": "validation", "num_bytes": 32850598, "num_examples": 25000}], "download_size": 56488953, "dataset_size": 66481694}}
|
2023-04-15T20:35:46+00:00
|
84127f4ab50eb869aba37e1c4f4e324f8063ac76
|
# Dataset Card for "naively_captioned_CUB2002011_one_per_class_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
anjunhu/naively_captioned_CUB2002011_test_1shot
|
[
"region:us"
] |
2023-04-15T20:47:07+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "text_cupl", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5396113.0, "num_examples": 200}], "download_size": 5382841, "dataset_size": 5396113.0}}
|
2023-04-28T08:54:59+00:00
|
2ffa977b0d3d284487225bbd008e4d324dbf1788
|
# Dataset Card for "sft_language_hq_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
andersonbcdefg/sft_language_hq_v1
|
[
"region:us"
] |
2023-04-15T21:42:34+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 199022467.8071509, "num_examples": 206732}], "download_size": 88660076, "dataset_size": 199022467.8071509}}
|
2023-04-15T21:42:47+00:00
|
650da2e4c3d00b3d305820773144794543e862e1
|
**Sakura Yayoi - CUNO - Hisen Kaede (LoHa Ver.)**
- 6 Versions; *5 epochs, 6 epochs, 7 epochs, 8 epochs, 9 epochs, 10 epochs*
- Recommended LoRA Weight Blocks: **ALL, MIDD, OUTD, OUTALL**
- Recommended weights: 0.7 - 1.0
|
Cheetor1996/Sakura_Yayoi_CUNO_LoHa
|
[
"language:en",
"license:cc-by-2.0",
"art",
"region:us"
] |
2023-04-15T22:06:01+00:00
|
{"language": ["en"], "license": "cc-by-2.0", "pretty_name": "Sakura_Yayoi_CUNO (LoHA Ver.)", "tags": ["art"]}
|
2023-04-21T23:16:39+00:00
|
3fd6df5c5d0023eca732ff7543b775c5e8b6b37b
|
lime8817/reg_images
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-04-15T22:50:58+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-04-15T23:34:06+00:00
|
|
e848f3549901bf353a106b6d8cf600fb56b2fdb6
|
ashwinR/CodeXGlueShort
|
[
"license:mit",
"region:us"
] |
2023-04-15T22:59:48+00:00
|
{"license": "mit"}
|
2023-04-15T22:59:48+00:00
|
|
c4369717b8ccdbd8bd1875d00b41bb07c30d4edb
|
# Dataset for Circuit-GNN
### Download
```
1. Download GLM_1.3b.zip
2. Unzip it.
```
|
Looong/GLM_1.3b
|
[
"region:us"
] |
2023-04-16T00:00:02+00:00
|
{}
|
2023-04-16T11:48:32+00:00
|
47548972e88339ce263dd4b3efe234cfebfbbd64
|
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The biology dataset is composed of 20K problem-solution pairs obtained using GPT-4. The problem-solution pairs are generated from 25 biology topics, 25 subtopics for each topic, and 32 problems for each "topic, subtopic" pair.
We provide the data in `biology.zip`.
## Data Fields
**The data fields for files in `biology.zip` are as follows:**
* `role_1`: assistant role
* `topic`: biology topic
* `sub_topic`: biology subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
**Download in python**
```
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/biology", repo_type="dataset", filename="biology.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
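After downloading, a minimal sketch for reading the records (this assumes the archive unpacks into JSON files containing the fields listed above; the exact file layout inside `biology.zip` is an assumption):
```python
import glob
import json
import zipfile

# Extract the archive downloaded by hf_hub_download above.
with zipfile.ZipFile("datasets/biology.zip") as zf:
    zf.extractall("datasets/biology")

# Assumes one JSON file per problem-solution pair; adjust the glob if the layout differs.
files = glob.glob("datasets/biology/**/*.json", recursive=True)
with open(files[0], encoding="utf-8") as f:
    sample = json.load(f)

print(sample["topic"], "->", sample["sub_topic"])
print(sample["message_1"])   # the problem
print(sample["message_2"])   # the solution
```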
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended for research purposes only.
---
license: cc-by-nc-4.0
---
|
camel-ai/biology
|
[
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] |
2023-04-16T00:30:03+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0", "task_categories": ["text-generation"], "pretty_name": "CAMEL Biology", "tags": ["instruction-finetuning"], "arxiv": 2303.1776, "extra_gated_prompt": "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT.", "extra_gated_fields": {"Name": "text", "Email": "text"}, "I will adhere to the terms and conditions of this dataset": "checkbox"}
|
2023-05-23T20:11:56+00:00
|
16b27e68de06f5ae6958111834efb7a90a8b8d81
|
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The chemistry dataset is composed of 20K problem-solution pairs obtained using GPT-4. The problem-solution pairs are generated from 25 chemistry topics, 25 subtopics for each topic, and 32 problems for each "topic, subtopic" pair.
We provide the data in `chemistry.zip`.
## Data Fields
**The data fields for files in `chemistry.zip` are as follows:**
* `role_1`: assistant role
* `topic`: chemistry topic
* `sub_topic`: chemistry subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
**Download in python**
```
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/chemistry", repo_type="dataset", filename="chemistry.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended for research purposes only.
---
license: cc-by-nc-4.0
---
|
camel-ai/chemistry
|
[
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] |
2023-04-16T00:30:56+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0", "task_categories": ["text-generation"], "pretty_name": "CAMEL Chemistry", "tags": ["instruction-finetuning"], "arxiv": 2303.1776, "extra_gated_prompt": "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT.", "extra_gated_fields": {"Name": "text", "Email": "text"}, "I will adhere to the terms and conditions of this dataset": "checkbox"}
|
2023-05-23T20:12:52+00:00
|
b69a66cba535e646cc3ace12b4eb672be78b44af
|
# PARARULE-Plus
This is a branch which includes the dataset from PARARULE-Plus Depth=2, Depth=3, Depth=4 and Depth=5. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
```
|
qbao775/PARARULE-Plus
|
[
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"Reasoning",
"Multi-Step-Deductive-Reasoning",
"Logical-Reasoning",
"region:us"
] |
2023-04-16T00:53:56+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "question-answering"], "tags": ["Reasoning", "Multi-Step-Deductive-Reasoning", "Logical-Reasoning"]}
|
2023-06-05T02:56:52+00:00
|
9555cf3c09525bab2671631ee1b467475e1096d8
|
# h2oGPT Data Card
## Summary
H2O.ai's `openassistant_oasst1` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `46283`
- Number of columns: `3`
- Column names: `['input', 'prompt_type', 'source']`
## Source
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/45e6183171fb16691ad7d3ab006fad973f971e98/create_data.py#L1253)
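A minimal sketch for loading the flattened data and checking the columns listed above (the split name `train` is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("h2oai/openassistant_oasst1", split="train")  # split name assumed
print(ds.column_names)        # expected: ['input', 'prompt_type', 'source']
print(ds.num_rows)            # expected: 46283
print(ds[0]["input"][:200])   # preview one instruction text
```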
|
h2oai/openassistant_oasst1
|
[
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] |
2023-04-16T00:58:01+00:00
|
{"language": ["en"], "license": "apache-2.0", "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico", "tags": ["gpt", "llm", "large language model", "open-source"]}
|
2023-04-19T03:43:13+00:00
|
5123e80eb6b8faca32491768e42a64f708c12d77
|
Adds ocean-related data to the alpaca-cleaned dataset.
|
ayuan0324/ocean
|
[
"region:us"
] |
2023-04-16T01:15:27+00:00
|
{}
|
2023-04-16T03:29:23+00:00
|
f3d12c5e4ea1553c2390ace2f5cfd063de7388c2
|
# h2oGPT Data Card
## Summary
H2O.ai's `h2ogpt-oig-oasst1-instruct-cleaned-v1` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `349837`
- Number of columns: `3`
- Column names: `['input', 'source', 'prompt_type']`
## Source
- [Original LAION OIG Dataset](https://github.com/LAION-AI/Open-Instruction-Generalist)
- [LAION OIG data detoxed and filtered down by scripts in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/main/FINETUNE.md#high-quality-oig-based-instruct-data)
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/5fc91911bc2bfaaf3b6c2de577c4b0ae45a07a4a/create_data.py#L1253)
|
h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v1
|
[
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] |
2023-04-16T01:18:28+00:00
|
{"language": ["en"], "license": "apache-2.0", "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico", "tags": ["gpt", "llm", "large language model", "open-source"]}
|
2023-04-19T03:43:33+00:00
|
a47c0946e1419ce6dfe26ceec160978172164877
|
A collection of LoRA models gathered from various online sources.
|
m-mao/lora_collected
|
[
"region:us"
] |
2023-04-16T01:33:58+00:00
|
{}
|
2023-04-16T03:05:43+00:00
|
fd21f0392ca5692721d403325ce99a0a9559a8d1
|
Raw data that has not been cleaned or translated, stored here for convenient file transfer.
|
shareAI/shareGPT_origin
|
[
"license:openrail",
"region:us"
] |
2023-04-16T01:58:43+00:00
|
{"license": "openrail"}
|
2023-04-16T02:46:54+00:00
|
f9729bc81a6ffc65e9aa3a0a3d9ff2a1c8b2ef12
|
BoganJustice/unblocker-dataset
|
[
"license:gpl-3.0",
"region:us"
] |
2023-04-16T02:51:16+00:00
|
{"license": "gpl-3.0"}
|
2023-04-16T02:51:16+00:00
|
|
341e74408ea34719c7d86a0dc92e3cb3b6bb94bb
|
# PARARULE-Plus-Depth-2
This is a branch which includes the dataset from PARARULE-Plus Depth=2. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus-Depth-2")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
```
|
qbao775/PARARULE-Plus-Depth-2
|
[
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"Reasoning",
"Multi-Step-Deductive-Reasoning",
"Logical-Reasoning",
"region:us"
] |
2023-04-16T04:22:51+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "question-answering"], "tags": ["Reasoning", "Multi-Step-Deductive-Reasoning", "Logical-Reasoning"]}
|
2023-06-05T02:57:27+00:00
|
69d19d47d9f9182598885e0459ebe5dad63d13b6
|
# PARARULE-Plus-Depth-3
This is a branch which includes the dataset from PARARULE-Plus Depth=3. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus-Depth-3")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
```
|
qbao775/PARARULE-Plus-Depth-3
|
[
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"Reasoning",
"Multi-Step-Deductive-Reasoning",
"Logical-Reasoning",
"region:us"
] |
2023-04-16T04:25:47+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "question-answering"], "tags": ["Reasoning", "Multi-Step-Deductive-Reasoning", "Logical-Reasoning"]}
|
2023-06-05T02:57:53+00:00
|
9bc18283cc5012155ee6b7f5f9829fd723df308c
|
# Dataset Card for Pick-a-Pic (v1)
## Dataset Description
- **Homepage: The web app can be found at [pickapic.io](https://pickapic.io/)**
- **Repository: The repository of [PickScore](https://github.com/yuvalkirstain/PickScore)**
- **Paper: [Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation](https://arxiv.org/abs/2305.01569).**
- **Leaderboard: TODO**
- **Point of Contact: TODO**
### Dataset Summary
The Pick-a-Pic dataset was collected with the [Pick-a-Pic web app](https://pickapic.io/) and contains over half-a-million examples of human preferences over model-generated images.
This dataset with URLs instead of the actual images (which makes it much smaller in size) can be found [here](https://huggingface.co/datasets/yuvalkirstain/pickapic_v1_no_images).
See the corresponding paper [Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation](https://arxiv.org/abs/2305.01569) for more details.
If you want to download this dataset with URLs instead of images to save space, please see [this version of the dataset](https://huggingface.co/datasets/yuvalkirstain/pickapic_v1_no_images).
### Supported Tasks and Leaderboards
Task: Select preferred image in test-set.
| **Models** | **Test-Set Accuracy (%)** |
| --- | --- |
| [PickScore](https://arxiv.org/abs/2305.01569) | 70.2% |
| Human Expert Baseline | 68.0% |
| [HPS](https://arxiv.org/abs/2303.14420) | 66.7% |
| [ImageReward](https://arxiv.org/abs/2304.05977) | 61.1% |
| [CLIP-H](https://arxiv.org/abs/2210.03927) | 60.8% |
| [Aesthetics](https://arxiv.org/abs/2210.08402) | 56.8% |
### Data Splits
The dataset has five splits: train, validation, validation_unique (with one example per prompt), test, and test_unique.
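Because the image-containing version is roughly 200 GB, a minimal sketch for streaming a small split instead of downloading everything (field names follow the dataset schema; this is an illustration, not the authors' recommended loader):
```python
import io
from datasets import load_dataset
from PIL import Image

# Stream the 500-example validation_unique split to avoid the full download.
ds = load_dataset("yuvalkirstain/pickapic_v1", split="validation_unique", streaming=True)

example = next(iter(ds))
print(example["caption"], example["label_0"], example["label_1"])
img0 = Image.open(io.BytesIO(example["jpg_0"]))   # jpg_0 / jpg_1 hold raw JPEG bytes
img0.save("image_0.jpg")
```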
### Citation Information
If you find this work useful, please cite:
```bibtex
@inproceedings{Kirstain2023PickaPicAO,
title={Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation},
author={Yuval Kirstain and Adam Polyak and Uriel Singer and Shahbuland Matiana and Joe Penna and Omer Levy},
year={2023}
}
```
### LICENSE
MIT License
Copyright (c) 2021
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
yuvalkirstain/pickapic_v1
|
[
"arxiv:2305.01569",
"arxiv:2303.14420",
"arxiv:2304.05977",
"arxiv:2210.03927",
"arxiv:2210.08402",
"region:us"
] |
2023-04-16T04:26:09+00:00
|
{"dataset_info": {"features": [{"name": "are_different", "dtype": "bool"}, {"name": "best_image_uid", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "has_label", "dtype": "bool"}, {"name": "image_0_uid", "dtype": "string"}, {"name": "image_0_url", "dtype": "string"}, {"name": "image_1_uid", "dtype": "string"}, {"name": "image_1_url", "dtype": "string"}, {"name": "jpg_0", "dtype": "binary"}, {"name": "jpg_1", "dtype": "binary"}, {"name": "label_0", "dtype": "float64"}, {"name": "label_1", "dtype": "float64"}, {"name": "model_0", "dtype": "string"}, {"name": "model_1", "dtype": "string"}, {"name": "ranking_id", "dtype": "int64"}, {"name": "user_id", "dtype": "int64"}, {"name": "num_example_per_prompt", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 193273338802, "num_examples": 583747}, {"name": "validation", "num_bytes": 5638295249, "num_examples": 17439}, {"name": "test", "num_bytes": 4621428929, "num_examples": 14073}, {"name": "validation_unique", "num_bytes": 178723392, "num_examples": 500}, {"name": "test_unique", "num_bytes": 178099641, "num_examples": 500}], "download_size": 202289408791, "dataset_size": 203889886013}}
|
2023-05-05T14:00:30+00:00
|
0e3169fedacdfc3dcda8a7a4573c491c2a404126
|
# PARARULE-Plus-Depth-4
This is a branch which includes the dataset from PARARULE-Plus Depth=4. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus-Depth-4")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
```
|
qbao775/PARARULE-Plus-Depth-4
|
[
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"Reasoning",
"Multi-Step-Deductive-Reasoning",
"Logical-Reasoning",
"region:us"
] |
2023-04-16T04:28:17+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "question-answering"], "tags": ["Reasoning", "Multi-Step-Deductive-Reasoning", "Logical-Reasoning"]}
|
2023-06-05T02:58:20+00:00
|
dd9a6234a580a4056f2d3ea6dbe64c4fa91f233d
|
# PARARULE-Plus-Depth-5
This is a branch which includes the dataset from PARARULE-Plus Depth=5. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus-Depth-5")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
```
|
qbao775/PARARULE-Plus-Depth-5
|
[
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"Reasoning",
"Multi-Step-Deductive-Reasoning",
"Logical-Reasoning",
"region:us"
] |
2023-04-16T04:32:24+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "question-answering"], "tags": ["Reasoning", "Multi-Step-Deductive-Reasoning", "Logical-Reasoning"]}
|
2023-06-05T02:58:48+00:00
|
7f25f46593769f19a7924afc4666aeb73c5b14e0
|
# Dataset Card for "amateur_drawings-controlnet-dataset"
WIP... Come back later....
|
keshan/amateur_drawings-controlnet-dataset
|
[
"region:us"
] |
2023-04-16T04:54:06+00:00
|
{"dataset_info": {"features": [{"name": "original_image", "dtype": "image"}, {"name": "segment_image", "dtype": "image"}, {"name": "keypoint_image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49810154042.961, "num_examples": 177723}], "download_size": 50168061092, "dataset_size": 49810154042.961}}
|
2023-04-18T15:59:55+00:00
|
383e46f37ab694355fa27c98ae02c336334b4c78
|
Moui/yoyo
|
[
"license:other",
"region:us"
] |
2023-04-16T05:49:41+00:00
|
{"license": "other"}
|
2023-04-16T05:49:41+00:00
|
|
de0ab62de645676e55ea1f71d555d1215cc30ada
|
WilliamLeeking/JARVISDataset
|
[
"license:bigscience-openrail-m",
"region:us"
] |
2023-04-16T05:58:39+00:00
|
{"license": "bigscience-openrail-m"}
|
2023-04-16T05:58:39+00:00
|
|
acb5186b3b7a8c1e289a8ea08034843eb493425a
|
1113132das/5
|
[
"license:openrail",
"region:us"
] |
2023-04-16T06:01:00+00:00
|
{"license": "openrail"}
|
2023-04-16T06:01:00+00:00
|
|
a0e37290a3dca49f637da7ee7cfae04e28a0a967
|
ooferdoodles/danbooru2021-captioned
|
[
"license:cc",
"region:us"
] |
2023-04-16T06:47:13+00:00
|
{"license": "cc"}
|
2023-06-21T08:09:48+00:00
|
|
73b63c042e40583d37ca028b27f8dbc105788c59
|
# Dataset Card for "chunk_253"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_253
|
[
"region:us"
] |
2023-04-16T07:38:18+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 17687335248.125, "num_examples": 184151}], "download_size": 15509941942, "dataset_size": 17687335248.125}}
|
2023-04-16T07:48:38+00:00
|
3cb20ea9f1044c315f55e026d1e6ac33e8f55a30
|
# Dataset Card for "blaupunkt_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Akshita15/blaupunkt_data
|
[
"region:us"
] |
2023-04-16T09:21:29+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Product Queries", "1": "Product shipping", "2": "bank emi", "3": "cancel order", "4": "complain", "5": "courier products", "6": "discount code", "7": "exchange offer", "8": "invoice", "9": "payment", "10": "promo coupon", "11": "redeem voucher", "12": "replace", "13": "return", "14": "service center", "15": "tickets", "16": "warranty"}}}}], "splits": [{"name": "train", "num_bytes": 80209, "num_examples": 877}], "download_size": 0, "dataset_size": 80209}}
|
2023-04-16T17:41:11+00:00
|
6e05b227586b1a9d99f51a23b5053b044a216725
|
## INSTALL REQUIREMENTS
```
!wget -q https://github.com/ShivamShrirao/diffusers/raw/main/examples/dreambooth/train_dreambooth.py
!wget -q https://github.com/ShivamShrirao/diffusers/raw/main/scripts/convert_diffusers_to_original_stable_diffusion.py
%pip install -qq git+https://github.com/ShivamShrirao/diffusers
%pip install -q -U --pre triton
%pip install -q accelerate transformers ftfy bitsandbytes==0.35.0 gradio natsort safetensors
```
## WEIGHTS
```
save_to_gdrive = False
if save_to_gdrive:
    from google.colab import drive
    drive.mount('/content/drive')

MODEL_NAME = "runwayml/stable-diffusion-v1-5"
OUTPUT_DIR = "stable_diffusion_weights/zwx"
if save_to_gdrive:
    OUTPUT_DIR = "/content/drive/MyDrive/" + OUTPUT_DIR
else:
    OUTPUT_DIR = "/content/" + OUTPUT_DIR

print(f"[*] Weights will be saved at {OUTPUT_DIR}")

!mkdir -p $OUTPUT_DIR
```
## CONCEPT LIST
```
concepts_list = [
    {
        "instance_prompt": "photo of sacristy",
        "class_prompt": "photo of a room",
        "instance_data_dir": "/content/data/sacristy",
        "class_data_dir": "/content/data/room"
    },
    {
        "instance_prompt": "photo of screens furniture",
        "class_prompt": "photo of a furniture",
        "instance_data_dir": "/content/data/screens",
        "class_data_dir": "/content/data/furniture"
    }
]

import json
import os

for c in concepts_list:
    os.makedirs(c["instance_data_dir"], exist_ok=True)

with open("concepts_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)
```
## UPLOADS
```
import os
from google.colab import files
import shutil

for c in concepts_list:
    print(f"Uploading instance images for `{c['instance_prompt']}`")
    uploaded = files.upload()
    for filename in uploaded.keys():
        dst_path = os.path.join(c['instance_data_dir'], filename)
        shutil.move(filename, dst_path)
```
## TRAINING
```
!accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse" \
  --output_dir=$OUTPUT_DIR \
  --revision="fp16" \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --seed=1337 \
  --resolution=512 \
  --train_batch_size=1 \
  --train_text_encoder \
  --mixed_precision="fp16" \
  --use_8bit_adam \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=50 \
  --sample_batch_size=4 \
  --max_train_steps=800 \
  --save_interval=10000 \
  --concepts_list="concepts_list.json"
```
|
M1dataset/sacristy
|
[
"region:us"
] |
2023-04-16T09:24:57+00:00
|
{"pretty_name": "sacristy"}
|
2023-04-16T09:36:25+00:00
|
31f9811671aa67c53ad81c8549667e535169bb14
|
# Dataset Card for "seinfeld-scripts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Adam173/seinfeld-scripts
|
[
"region:us"
] |
2023-04-16T09:25:13+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "script", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3909219, "num_examples": 176}], "download_size": 2212310, "dataset_size": 3909219}}
|
2023-04-16T09:25:15+00:00
|
0f9846e8dbfb74999bc48ca32471c62f2508b487
|
harish03/catbreed
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-16T09:43:31+00:00
|
{"license": "apache-2.0"}
|
2023-04-16T12:59:07+00:00
|
|
7d0fe01a1f8594579b68eaa56cc0552a5c1bc115
|
## Dataset
### Train
| Dataset | Link | Rows | Task-specific prefix |
| ------ | ------ | ------ | ------ |
| **Paraphrase** | [Paraphrase](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro) | 131951 | *paraphrase:* **string** |
| **Grammar** | [Grammar](https://huggingface.co/datasets/BlackKakapo/grammar-ro) | 1686054 | *grammar:* **string** |
| **Synonyms** | - | 14085 | *synonyms:* **word** |
| **Translate** | - | 999725 | *translate Romanian to English:* **string** |
| **Summarize** | [Summarize](https://huggingface.co/datasets/readerbench/ro-text-summarization) | 71999 | *summarize:* **string** |
| **Sentiment analysis** | [Sentiment analysis](https://huggingface.co/datasets/ro_sent) | 36498 | *sentiment analysis:* **string** |
| **STS** | [STS](https://huggingface.co/datasets/ro_sts) | 7499 | *sts:* **string** |
| **Offense analysis** | [Offense analysis](https://huggingface.co/datasets/readerbench/ro-fb-offense) | 3199 | *offense analysis:* **string** |
| **Gsm8k-ro** | [Gsm8k-ro](https://huggingface.co/datasets/BlackKakapo/gsm8k-ro) | 7474 | **string** |
| **Qasc-ro** | [Qasc-ro](https://huggingface.co/datasets/BlackKakapo/qasc-ro) | 8134 | **string** |
| **Recipes-ro** | [Recipes-ro](https://huggingface.co/datasets/BlackKakapo/recipes-ro) | 818 | 1. *Spune-mi reteta pentru* **string** 2. *Cum as putea face* **string** 3. *Spune-mi te rog cum as putea face* **string** |
| **Qaworld-ro** | [Qaworld-ro](https://huggingface.co/datasets/BlackKakapo/qaworld-ro) | 722659 | **string** |
| **News-ro** | - | 102369 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Newsagro-ro** | - | 568 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Instruction-dataset-ro** | [Instruction-dataset-ro](https://huggingface.co/datasets/BlackKakapo/instruction-dataset-ro) | 326 | **string**|
| **TOTAL** | [Multitask-ro](https://huggingface.co/datasets/BlackKakapo/multitask-ro) | **~3.792.698** | |
### Eval
| Dataset | Link | Rows | Task-specific prefix |
| ------ | ------ | ------ | ------ |
| **Paraphrase** | [Paraphrase](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro) | 3540 | *paraphrase:* **string** |
| **Grammar** | [Grammar](https://huggingface.co/datasets/BlackKakapo/grammar-ro) | 200 | *grammar:* **string** |
| **Synonyms** | - | 318 | *synonyms:* **word** |
| **Translate** | [Translate](https://huggingface.co/datasets/opus100/viewer/en-ro/train) | 3271 | *translate Romanian to English:* **string** |
| **Summarize** | [Summarize](https://huggingface.co/datasets/readerbench/ro-text-summarization) | 449 | *summarize:* **string** |
| **Sentiment analysis** | [Sentiment analysis](https://huggingface.co/datasets/ro_sent) | 789 | *sentiment analysis:* **string** |
| **STS** | [STS](https://huggingface.co/datasets/ro_sts) | 1119 | *sts:* **string** |
| **Offense analysis** | [Offense analysis](https://huggingface.co/datasets/readerbench/ro-fb-offense) | 1251 | *offense analysis:* **string** |
| **Gsm8k-ro** | [Gsm8k-ro](https://huggingface.co/datasets/BlackKakapo/gsm8k-ro) | 1319 | **string** |
| **Qasc-ro** | [Qasc-ro](https://huggingface.co/datasets/BlackKakapo/qasc-ro) | 926 | **string** |
| **Recipes-ro** | [Recipes-ro](https://huggingface.co/datasets/BlackKakapo/recipes-ro) | 63 | 1. *Spune-mi reteta pentru* **string** 2. *Cum as putea face* **string** 3. *Spune-mi te rog cum as putea face* **string** |
| **Qaworld-ro** | [Qaworld-ro](https://huggingface.co/datasets/BlackKakapo/qaworld-ro) | 3350 | **string** |
| **News-ro** | - | 140 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Newsagro-ro** | - | 112 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **TOTAL** | [Multitask-ro](https://huggingface.co/datasets/BlackKakapo/multitask-ro) | **16847** | |
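A minimal sketch of how the task-specific prefixes in the tables above are prepended to the input text when querying a text-to-text model trained on this mixture (the checkpoint name below is hypothetical):
```python
from transformers import pipeline

# Hypothetical checkpoint fine-tuned on the multitask-ro mixture.
generator = pipeline("text2text-generation", model="your-org/mt5-multitask-ro")

examples = [
    "paraphrase: Astăzi este o zi frumoasă.",
    "grammar: Astazi este o zii frumoasa.",
    "translate Romanian to English: Astăzi este o zi frumoasă.",
    "sentiment analysis: Produsul este excelent!",
]
for text in examples:
    print(generator(text)[0]["generated_text"])
```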
|
BlackKakapo/multitask-ro
|
[
"task_categories:text2text-generation",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_categories:translation",
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:1M<n<5M",
"language:ro",
"license:apache-2.0",
"region:us"
] |
2023-04-16T09:49:43+00:00
|
{"language": "ro", "license": "apache-2.0", "multilinguality": "monolingual", "size_categories": "1M<n<5M", "task_categories": ["text2text-generation", "question-answering", "sentence-similarity", "text-classification", "translation", "summarization"]}
|
2023-09-21T13:35:01+00:00
|
80e63301858514c77674e8a5715d91d69cdbe931
|
# Dataset Card for "kamisatoayaka"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Asmedeus/kamisatoayaka
|
[
"region:us"
] |
2023-04-16T10:01:47+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1968132.0, "num_examples": 12}], "download_size": 1972127, "dataset_size": 1968132.0}}
|
2023-04-16T10:24:04+00:00
|
9f25758ec94f82762fb9c09a5c60e908cfb83632
|
# This is the Chinese Open Instruction Generalist project
We propose the Chinese Open Instruction Generalist (**COIG**) project to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. We welcome all researchers in the community to contribute to the corpus set and collaborate with us. We only release the first chip of COIG to help the Chinese LLMs' development in the exploration stage and appeal to more researchers joining us in building COIG. We introduce a manually verified translated general instruction corpus, a manually annotated exam instruction corpus, a human value alignment instruction corpus, a multi-round counterfactual correction chat corpus, and a leetcode instruction corpus. We provide these new instruction corpora to assist the community with instruction tuning on Chinese LLMs. These instruction corpora are also template workflows for how new Chinese instruction corpora can be built and expanded effectively.
It is best to directly download the individual data files you wish to use instead of using HF `load_dataset`. All datasets can be downloaded from: https://huggingface.co/datasets/BAAI/COIG/tree/main
This dataset card is modified from [OIG](https://huggingface.co/datasets/laion/OIG).
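For example, a minimal sketch of fetching a single file and reading it (the file name `exam_instructions.jsonl` comes from the update notes below; the record schema is not assumed):
```python
import json
from huggingface_hub import hf_hub_download

# Fetch one file instead of cloning the whole dataset repository.
path = hf_hub_download(repo_id="BAAI/COIG", repo_type="dataset",
                       filename="exam_instructions.jsonl")

with open(path, encoding="utf-8") as f:
    first = json.loads(f.readline())
print(first.keys())
```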
### Translated Instructions (66,858)
There are 66,858 instructions in total, which are composed of 1,616 task descriptions in [Super-NaturalInstructions](https://arxiv.org/abs/2204.07705) along with a single instance for each of them, 175 seed tasks in [Self-Instruct](https://arxiv.org/abs/2212.10560), and 66,007 instructions from [Unnatural Instructions](https://arxiv.org/abs/2212.09689). To reduce the cost and further improve the quality of the instruction corpus, we separate the translation procedure into three phases: automatic translation, manual verification, and manual correction. These strict quality verification procedures assure the reliability of the translated corpus.
### Exam Instructions (63,532)
The Chinese National College Entrance Examination, Middle School Entrance Examinations, and Civil Servant Examination are the main Chinese commonsense tests. These exams contain various question formats and detailed analyses that can be used as a Chain-of-Thought (**CoT**) corpus. We extract six informative elements from the original exam questions: instruction, question context, question, answer, answer analysis, and coarse-grained subject. There are six main coarse-grained subjects: Chinese, English, Politics, Biology, History, and Geology. There are very few Math, Physics, and Chemistry questions in the corpus because these questions often contain complex symbols that are hard to annotate. For many choice questions, we recommend that researchers further post-process this corpus using prompts, or convert the questions into blank-filling form, to further increase the instructions' diversity.
### Human Value Alignment Instructions (34,471)
To respect and reflect the major difference caused by different cultural backgrounds, different from other tasks in COIG that leverage one unified collection of instruction-following samples, we categorize the value alignment data into two separate series:
- A set of samples that present shared human values in the Chinese-speaking world. In total, we choose 50 instructions as the augmentation seeds, and produce 3k resulting instructions following samples for general-purpose value alignment in the Chinese-speaking world.
- Some additional sets of samples that present regional-culture or country-specific human values.
### Counterfactual Correction Multi-round Chat (13,653)
The Counterfactual Correction Multi-round Chat dataset (CCMC) is constructed based on the [CN-DBpedia knowledge graph dataset](https://link.springer.com/chapter/10.1007/978-3-319-60045-1_44) with the aim of alleviating and resolving the pain points of hallucination and factual inconsistency in current LLMs. The CCMC dataset includes 5 rounds of role-playing chat between a student and a teacher, and the corresponding knowledge they refer to. The dataset contains ~13,000 dialogues with an average of 5 rounds per dialogue, resulting in ~65,000 rounds of chat.
### Leetcode Instructions (11,737)
Given that the code-related tasks potentially contribute to the ability emergence of LLMs, we argue that code-related tasks aligned with the Chinese natural language should be considered in our datasets. Therefore, we build the Leetcode instructions from a **CC-BY-SA-4.0** license [collection](https://github.com/doocs/leetcode) of 2,589 programming questions. The questions contain problem descriptions, multiple programming languages, and explanations (834 questions do not have explanations).
## Support this project
Your contributions and feedback support the open source ecosystem, improve the bot and provide datasets for future AI research. To participate you can:
Submit Github issues, track issues and help create datasets that need improvement. https://github.com/BAAI-Zlab/COIG
## Update: May 27, 2023
- v0.3: Update counterfactural_correction_multi_round_chat.tar.gz and make sure all round responses can be decoded as json.
- v0.2: Update exam_instructions.jsonl, translated_instructions.jsonl and human_value_alignment_instructions_part2.json.
- v0.1: Release the five datasets of COIG.
## Disclaimer
These datasets contain synthetic data and in some cases data that includes humans trying to get the language model to say toxic/offensive/trolling things. If you are concerned about the presence of this type of material in the dataset please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible and we are actively evaluating ways to reduce or eliminate undesirable content from the instruction tuning datasets.
## License
The COIG dataset that is authored by BAAI is released under an Apache 2.0 license. However, the data also includes content licensed under other permissive licenses such as unnatural instructions data which is licensed under MIT License, or web-crawled data which is used under fair use principles.
## BibTeX & Citation
```
@misc{zhang2023chinese,
title={Chinese Open Instruction Generalist: A Preliminary Release},
author={Ge Zhang and Yemin Shi and Ruibo Liu and Ruibin Yuan and Yizhi Li and Siwei Dong and Yu Shu and Zhaoqun Li and Zekun Wang and Chenghua Lin and Wenhao Huang and Jie Fu},
year={2023},
eprint={2304.07987},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
BAAI/COIG
|
[
"language:zh",
"license:apache-2.0",
"arxiv:2204.07705",
"arxiv:2212.10560",
"arxiv:2212.09689",
"arxiv:2304.07987",
"region:us"
] |
2023-04-16T10:09:32+00:00
|
{"language": ["zh"], "license": "apache-2.0", "arxiv": 2304.07987}
|
2023-07-12T14:38:35+00:00
|
50d347a681092e160c861fa45419b9992fdeffbf
|
vjain/AP_statistics
|
[
"license:mit",
"region:us"
] |
2023-04-16T10:35:13+00:00
|
{"license": "mit"}
|
2023-04-16T10:35:40+00:00
|
|
88033e5b2569349cdb391e52d0655324b473df01
|
# Dataset Card for "0.5M-zh.json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
IANZHU/0.5M-zh.json
|
[
"region:us"
] |
2023-04-16T10:49:49+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 265574399, "num_examples": 519255}], "download_size": 183442000, "dataset_size": 265574399}}
|
2023-04-16T10:50:17+00:00
|
506cc4cf5f3f0ac940ba77692931d4a429413d55
|
# Dataset Card for "1M-zh.json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
IANZHU/1M-zh.json
|
[
"region:us"
] |
2023-04-16T10:51:41+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 421728994, "num_examples": 917424}], "download_size": 289608434, "dataset_size": 421728994}}
|
2023-04-16T10:52:23+00:00
|