| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 0 | 13.4M |
| id | string | 2 | 117 |
| tags | list | n/a | n/a |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 31.7M |
| last_modified | string | 25 | 25 |
7321d307885e875c389424be9b3f1a169f3c6458
# Dataset Card for "java_unifiedbug" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nguyenminh871/java_unifiedbug
[ "region:us" ]
2023-04-12T05:52:33+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "func", "dtype": "string"}, {"name": "target", "dtype": {"class_label": {"names": {"0": true, "1": false}}}}, {"name": "project", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6837107.213843388, "num_examples": 2233}, {"name": "test", "num_bytes": 2278015.1218537753, "num_examples": 744}, {"name": "validation", "num_bytes": 2281076.970135837, "num_examples": 745}], "download_size": 5854348, "dataset_size": 11396199.305833}}
2023-04-13T05:27:07+00:00
28a6ea19450be611eee6eb0c1f4136de12d4f70b
# Dataset Card for Alpaca Cleaned Dutch ## Dataset Description - **Homepage:** N/A - **Repository:** N/A - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** Bram Vanroy ### Dataset Summary This dataset contains 51,712 generated conversations in Dutch between an AI assistant and a (fake) "Human". They are translations of the [Alpaca Cleaned Dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned). ☕ [**Want to help me out?**](https://www.buymeacoffee.com/bramvanroy) Translating the data with the OpenAI API, and prompt testing, cost me 💸$57.99💸. If you like this dataset, please consider [buying me a coffee](https://www.buymeacoffee.com/bramvanroy) to offset a portion of this cost; I appreciate it a lot! ☕ If you use this dataset or refer to it, please use the following citation: Vanroy, B. (2023). *Language Resources for Dutch Large Language Modelling*. [https://arxiv.org/abs/2312.12852](https://arxiv.org/abs/2312.12852) ```bibtex @article{vanroy2023language, title={Language Resources for {Dutch} Large Language Modelling}, author={Vanroy, Bram}, journal={arXiv preprint arXiv:2312.12852}, year={2023} } ``` ### Languages - Dutch ## Dataset Structure ### Data Instances ```python { 'id': 7, 'instruction': 'Leg uit waarom de volgende breuk gelijk is aan 1/4', 'input': '4/16', 'output': 'De breuk 4/16 is gelijk aan 1/4 omdat zowel de teller als de ' 'noemer deelbaar zijn door 4. Door zowel de teller als de noemer ' 'door 4 te delen, krijgen we de breuk 1/4.' } ``` ### Data Fields - **id**: the ID of the item. The following ID is not included because it could not be translated: `[23019]` - **instruction**: the given instruction - **input**: optional input to accompany the instruction. Can be empty. - **output**: the "answer" to the instruction ## Dataset Creation The instructions, inputs and outputs were translated with OpenAI's API for `gpt-3.5-turbo`, using `max_tokens=1024, temperature=0` as parameters. The prompt template to translate is (where `src_lang` is English and `tgt_lang` is Dutch): ```python TRANSLATION_PROMPT = """You are asked to translate a task's instruction, optional input to the task, and the output of the task, from {src_lang} into {tgt_lang}. Here are the requirements that you should adhere to: 1. maintain the format: the task consists of a task instruction (marked `instruction: `), optional input to the task (marked `input: `) and output for the task marked with `output: `; 2. do not translate the identifiers `instruction: `, `input: `, and `output: ` but instead copy them to your output; 3. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias; 4. translate the instruction and input text using informal, but standard, language; 5. make sure to avoid biases (such as gender bias, grammatical bias, social bias); 6. if the instruction is to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in the input in {tgt_lang}, and then also generate a corrected output version in the output in {tgt_lang}; 7. if the instruction is to translate text from one language to another, then you do not translate the text that needs to be translated in the instruction or the input, nor the translation in the output (just copy them as-is); 8. do not translate code fragments but copy them to your output. If there are English examples, variable names or definitions in code fragments, keep them in English. Now translate the following task with the requirements set out above.
Do not provide an explanation and do not add anything else.\n\n""" ``` This prompt is concatenated with the instruction, optionally the input, and the output. In code, that last part looks like this: ```python text = f'instruction: "{instruction}"\n\n' if inputstr: text += f'input: "{inputstr}"\n\n' text += f'output: "{outputstr}"' ``` The system message was: ``` You are a helpful assistant that translates English to Dutch to the requirements that are given to you. ``` Note that 1 item (around 0.002%) was not successfully translated. The translation was missing the input, instruction, or output keywords where those were expected. The ID for the missing item is `[23019]`. ### Source Data #### Initial Data Collection and Normalization Initial data creation by [Tatsu lab](https://huggingface.co/datasets/tatsu-lab/alpaca) and cleaned by [Yahma](https://huggingface.co/datasets/yahma/alpaca-cleaned). #### Who are the source language producers? The original dataset was generated with OpenAI's `text-davinci-003`. ## Considerations for Using the Data Note that the translations in this new dataset have not been verified by humans. ### Discussion of Biases As with any machine-generated text, users should be aware of potential biases that are included in this dataset. Although the prompt specifically includes `make sure to avoid biases (such as gender bias, grammatical bias, social bias)`, the impact of such a command is of course not known. It is likely that biases remain in the dataset, so use it with caution. ### Other Known Limitations The translation quality has not been verified. Use at your own risk! ### Licensing Information As per OpenAI's terms of use, this dataset cannot be used to build [a commercial system that competes with OpenAI's services](https://openai.com/policies/terms-of-use). Similar to the original Alpaca dataset, this dataset is released under CC BY-NC 4.0. This text was generated (either in part or in full) with GPT-3 (`gpt-3.5-turbo`), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication. If you use this dataset, you must also follow the [Sharing](https://openai.com/policies/sharing-publication-policy) and [Usage](https://openai.com/policies/usage-policies) policies. As clearly stated in their [Terms of Use](https://openai.com/policies/terms-of-use), specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. [As far as I am aware](https://law.stackexchange.com/questions/93308/licensing-material-generated-with-chatgpt), that is a specific restriction that should serve as an addendum to the current license. ### Contributions Thanks to [Tatsu lab](https://huggingface.co/datasets/tatsu-lab/alpaca) for the initial machine-generated dataset and yahma for [cleaning it](https://huggingface.co/datasets/yahma/alpaca-cleaned).
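For reference, here is a minimal sketch of how one such translation request could be issued with the pre-1.0 `openai` Python client, reusing the prompt template, system message, and parameters described above. The helper name, the use of `.format()`, and the response indexing are illustrative assumptions, not the author's exact script.

```python
import openai  # assumes openai<1.0 and that openai.api_key has already been set

# TRANSLATION_PROMPT is the template defined in the code block above.
def translate_item(instruction: str, inputstr: str, outputstr: str) -> str:
    # Build the task text exactly as described in the card.
    text = f'instruction: "{instruction}"\n\n'
    if inputstr:
        text += f'input: "{inputstr}"\n\n'
    text += f'output: "{outputstr}"'

    prompt = TRANSLATION_PROMPT.format(src_lang="English", tgt_lang="Dutch") + text

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        max_tokens=1024,
        temperature=0,
        messages=[
            {"role": "system", "content": "You are a helpful assistant that translates English to Dutch to the requirements that are given to you."},
            {"role": "user", "content": prompt},
        ],
    )
    return response["choices"][0]["message"]["content"]
```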
BramVanroy/alpaca-cleaned-dutch
[ "task_categories:question-answering", "task_categories:text-generation", "size_categories:10K<n<100K", "language:nl", "license:cc-by-nc-4.0", "alpaca", "instruct", "instruction", "arxiv:2312.12852", "doi:10.57967/hf/0530", "region:us" ]
2023-04-12T06:02:22+00:00
{"language": ["nl"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-generation"], "pretty_name": "Alpaca Cleaned Dutch", "tags": ["alpaca", "instruct", "instruction"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_sft", "num_bytes": 26762446, "num_examples": 46163}, {"name": "test_sft", "num_bytes": 2942031, "num_examples": 5132}], "download_size": 18382591, "dataset_size": 29704477}, "configs": [{"config_name": "default", "data_files": [{"split": "train_sft", "path": "data/train_sft-*"}, {"split": "test_sft", "path": "data/test_sft-*"}]}]}
2024-01-22T10:45:23+00:00
6ffaaa6dbb180868f45b65af474d029cf73ad796
xiaojuan0920/cskg_2
[ "license:openrail", "region:us" ]
2023-04-12T06:20:10+00:00
{"license": "openrail"}
2023-04-12T07:53:07+00:00
2842aece2a0bf1f9ca9eedd8aca725b45ae09244
# Dataset Card for "test_mini_kbuhist2_v6" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Riksarkivet/cleaned_Diachronic_swe
[ "language:sv", "region:us" ]
2023-04-12T06:42:24+00:00
{"language": ["sv"], "dataset_info": {"features": [{"name": "flatten_chunked_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 23478559.45919323, "num_examples": 12237}, {"name": "train", "num_bytes": 1150443657.5408068, "num_examples": 599610}], "download_size": 808495849, "dataset_size": 1173922217}}
2023-11-05T09:08:56+00:00
0710f090a25db7e32390b2300900c1578c23e5d2
# Dataset Card for "cifar10c_snow" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Phoebechen123/cifar10c_snow
[ "region:us" ]
2023-04-12T06:45:52+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 47962052.0, "num_examples": 50000}], "download_size": 19565402, "dataset_size": 47962052.0}}
2023-04-12T10:18:00+00:00
b6f98885b2a0f8f6476e1de89319f65463783cb7
Anime prompt dataset (动漫风格数据集): + danbooru-160000.parquet Natural scenes prompt dataset (真实风格数据集): + stable-diffusion-prompts-160000.parquet + stable-diffusion-prompts2-320000.parquet Artistic style dataset (艺术风格数据集): + Lexica.art.parquet
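As a quick way to inspect any of the prompt collections listed above, the parquet files can be read with pandas. The file name below is taken from the list above; the column names are not documented on this card, so check them before relying on a specific schema.

```python
import pandas as pd

# Load one of the prompt collections listed above and inspect its structure.
df = pd.read_parquet("danbooru-160000.parquet")
print(df.shape)    # number of rows (prompts) and columns
print(df.columns)  # column names are not documented here, so inspect them first
print(df.head())
```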
7eu7d7/HCP-Diffusion-datas
[ "license:apache-2.0", "region:us" ]
2023-04-12T06:57:22+00:00
{"license": "apache-2.0"}
2023-05-12T04:09:23+00:00
0930ec11ded28fa0eaa910fde2f6fc3538acbeac
# WavCaps WavCaps is a ChatGPT-assisted weakly-labelled audio captioning dataset for audio-language multimodal research, where the audio clips are sourced from three websites ([FreeSound](https://freesound.org/), [BBC Sound Effects](https://sound-effects.bbcrewind.co.uk/), and [SoundBible](https://soundbible.com/)) and a sound event detection dataset ([AudioSet Strongly-labelled Subset](https://research.google.com/audioset/download_strong.html)). - **Paper:** https://arxiv.org/abs/2303.17395 - **Github:** https://github.com/XinhaoMei/WavCaps ## Statistics | Data Source | # audio | avg. audio duration (s) | avg. text length | |--------------------|----------|-------------------------|------------------| | FreeSound | 262300 | 85.98 | 6.77 | | BBC Sound Effects | 31201 | 115.04 | 9.67 | | SoundBible | 1232 | 13.12 | 5.87 | | AudioSet SL subset | 108317 | 10.00 | 9.79 | | WavCaps | 403050 | 67.59 | 7.80 | ## Download We provide a JSON file for each data source. For audio clips sourced from websites, we provide the processed caption, the raw description, and other metadata. For audio clips from AudioSet, we use the version from PANNs, where each file name is prefixed with a 'Y'. For the start time, please refer to the original metadata of the AudioSet SL subset. Waveforms in FLAC format can be downloaded from the [Zip_files](https://huggingface.co/datasets/cvssp/WavCaps/tree/main/Zip_files) directory. Pretrained models can be downloaded [here](https://drive.google.com/drive/folders/1pFr8IRY3E1FAtc2zjYmeuSVY3M5a-Kdj?usp=share_link). <font color='red'>If you get "error: invalid zip file with overlapped components (possible zip bomb)" when unzipping, please try the following commands: </font> `zip -F AudioSet_SL.zip --out AS.zip` `unzip AS.zip` ## License Only academic uses are allowed for the WavCaps dataset. By downloading audio clips through the links provided in the JSON files, you agree that you will use the audios for research purposes only. For credits for audio clips from FreeSound, please refer to its own page. For detailed license information, please refer to: [FreeSound](https://freesound.org/help/faq/#licenses), [BBC Sound Effects](https://sound-effects.bbcrewind.co.uk/licensing), [SoundBible](https://soundbible.com/about.php) The models we provided are created under a UK data copyright exemption for non-commercial research. ## Code for related tasks We provide code and pre-trained models for audio-language retrieval, automated audio captioning, and zero-shot audio classification. * [Retrieval](https://github.com/XinhaoMei/WavCaps/tree/master/retrieval) * [Captioning](https://github.com/XinhaoMei/WavCaps/tree/master/captioning) * [Zero-shot Audio Classification](https://github.com/XinhaoMei/WavCaps/blob/master/retrieval/zero_shot_classification.py) * [Text-to-Sound Generation](https://github.com/haoheliu/AudioLDM) ## Citation Please cite the following if you make use of the dataset. ```bibtex @article{mei2023wavcaps, title={WavCaps: A ChatGPT-Assisted Weakly-Labelled Audio Captioning Dataset for Audio-Language Multimodal Research}, author={Mei, Xinhao and Meng, Chutong and Liu, Haohe and Kong, Qiuqiang and Ko, Tom and Zhao, Chengqi and Plumbley, Mark D and Zou, Yuexian and Wang, Wenwu}, journal={arXiv preprint arXiv:2303.17395}, year={2023} } ```
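As a rough sketch of working with the per-source metadata, the JSON files can be inspected with the standard library. The file name below is a placeholder and the exact field names are not specified on this card, so check the keys before relying on them.

```python
import json

# Placeholder path: substitute the JSON file of the data source you downloaded.
with open("FreeSound_metadata.json", "r", encoding="utf-8") as f:
    metadata = json.load(f)

# The top-level structure (list or dict) may vary by source, so inspect it first.
entries = metadata if isinstance(metadata, list) else list(metadata.values())[0]
print(len(entries), "entries")
if entries:
    print(entries[0].keys())  # available fields (processed caption, raw description, ...)
```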
cvssp/WavCaps
[ "size_categories:100B<n<1T", "language:en", "license:cc-by-4.0", "arxiv:2303.17395", "region:us" ]
2023-04-12T07:09:04+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100B<n<1T"]}
2023-07-06T12:28:10+00:00
4e552ab5d7a57cb2688b3e99e72141e18526578b
LingoTransformer/test
[ "license:openrail", "region:us" ]
2023-04-12T07:12:46+00:00
{"license": "openrail"}
2023-04-12T07:12:46+00:00
06cecdbf0c8dfd83203049dee4e5c102a08af994
# Car The [Car dataset](https://archive-beta.ics.uci.edu/dataset/19/car+evaluation) from the [UCI repository](https://archive-beta.ics.uci.edu). Classify the acceptability level of a car for resale. # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|-------------------------| | car | Multiclass classification | What is the acceptability level of the car?| | car_binary | Binary classification | Is the car acceptable?| # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/car", "car_binary")["train"] ```
mstz/car
[ "task_categories:tabular-classification", "size_categories:n<1K", "language:en", "license:cc", "car", "tabular_classification", "binary_classification", "UCI", "region:us" ]
2023-04-12T07:20:52+00:00
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Car evaluation", "tags": ["car", "tabular_classification", "binary_classification", "UCI"], "configs": ["car"]}
2023-04-16T15:55:11+00:00
6fb56629ef6915bbe01fcb040d126a695305ad55
# Contraceptive The [Contraceptive dataset](https://archive-beta.ics.uci.edu/dataset/30/contraceptive+method+choice) from the [UCI repository](https://archive-beta.ics.uci.edu). Does the couple use contraceptives? # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|-------------------------| | contraceptive | Binary classification | Does the couple use contraceptives?| # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/contraceptive", "contraceptive")["train"] ```
mstz/contraceptive
[ "task_categories:tabular-classification", "size_categories:1K<n<10K", "language:en", "license:cc", "contraceptive", "tabular_classification", "binary_classification", "UCI", "region:us" ]
2023-04-12T07:32:09+00:00
{"language": ["en"], "license": "cc", "size_categories": ["1K<n<10K"], "task_categories": ["tabular-classification"], "pretty_name": "Contraceptive evaluation", "tags": ["contraceptive", "tabular_classification", "binary_classification", "UCI"], "configs": ["contraceptive"]}
2023-04-16T16:03:10+00:00
bb65d4abc73481ea2b08f2cd9804a9effc8cdfde
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
JoeJYu/sexismDetection
[ "region:us" ]
2023-04-12T07:37:46+00:00
{}
2023-04-13T08:10:20+00:00
8fe18fce8e9ad648e11e9bb921c6f9b73908f314
# Dataset Card for "chunk_188" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_188
[ "region:us" ]
2023-04-12T08:05:15+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 14444274528.75, "num_examples": 150386}], "download_size": 12006682852, "dataset_size": 14444274528.75}}
2023-04-12T08:12:43+00:00
4ddcc420d8922e566be17ac8e4448c41f772fd04
thewall/jolma_split
[ "license:openrail", "region:us" ]
2023-04-12T08:20:55+00:00
{"license": "openrail"}
2023-04-17T07:20:17+00:00
298416473964bf522bfad793e5bd52a5c3cdfd3c
# Data summary An instruction dataset for code bug fixing. # Reference [1]. [TSSB-3M-ext](https://huggingface.co/datasets/zirui3/TSSB-3M-ext)
zirui3/TSSB-3M-instructions
[ "language:code", "license:cc-by-4.0", "code", "instruct", "region:us" ]
2023-04-12T08:29:25+00:00
{"language": ["code"], "license": "cc-by-4.0", "datasets": ["zirui3/TSSB-3M-ext"], "tags": ["code", "instruct"], "programming_language": ["Python"]}
2023-05-26T13:20:20+00:00
bb4730d4834ed9cab0c4db25d0d9f4f0ebc68b3c
# Dataset Card for "chunk_202" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_202
[ "region:us" ]
2023-04-12T08:32:12+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 22293605232.375, "num_examples": 232109}], "download_size": 20517956740, "dataset_size": 22293605232.375}}
2023-04-12T08:44:22+00:00
57b0a5bc482cfa9698d603a1529ab03099e5a5a1
Circularmachines/Batch_indexing_machine_pred_csv
[ "license:cc-by-4.0", "region:us" ]
2023-04-12T08:45:38+00:00
{"license": "cc-by-4.0"}
2023-08-09T07:53:50+00:00
859961fe0754898de95078eb0c0c3281f041578c
# Dataset Card for "chunk_209" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_209
[ "region:us" ]
2023-04-12T08:52:59+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21532616928.75, "num_examples": 224186}], "download_size": 20204672507, "dataset_size": 21532616928.75}}
2023-04-12T09:07:18+00:00
9400ead5a850875cde7bdda840c99f2920cb4de1
yiqing07/data
[ "license:apache-2.0", "region:us" ]
2023-04-12T08:53:37+00:00
{"license": "apache-2.0"}
2023-04-12T08:53:37+00:00
e59ad1e3b0b5f6a8e89455e4e9d47a103abd7779
# Glass The [Glass dataset](https://archive-beta.ics.uci.edu/dataset/42/glass+identification) from the [UCI repository](https://archive-beta.ics.uci.edu). Classify the type of glass. # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|--------------------------| | glass | Multiclass classification | Classify glass type. | | windows | Binary classification | Is this windows glass? | | vehicles | Binary classification | Is this vehicles glass? | | containers | Binary classification | Is this containers glass?| | tableware | Binary classification | Is this tableware glass? | | headlamps | Binary classification | Is this headlamps glass? | # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/glass", "glass")["train"] ```
mstz/glass
[ "task_categories:tabular-classification", "size_categories:n<1k", "language:en", "license:cc", "glass", "tabular_classification", "binary_classification", "UCI", "region:us" ]
2023-04-12T08:53:57+00:00
{"language": ["en"], "license": "cc", "size_categories": ["n<1k"], "task_categories": ["tabular-classification"], "pretty_name": "Glass evaluation", "tags": ["glass", "tabular_classification", "binary_classification", "UCI"], "configs": ["glass", "windows", "vehicles", "containers", "tableware", "headlamps"]}
2023-04-16T16:29:45+00:00
f45e2e8bd8f75d0b3bf0d7086486e8d78d79cc03
Argument Mining in Scientific Reviews (AMSR) We release a new dataset of peer-reviews from different computer science conferences with annotated arguments, called AMSR (**A**rgument **M**ining in **S**cientific **R**eviews). 1. Raw Data conferences_raw/ contains directories for each conference we scraped (e.g., [iclr20](./data/iclr20)). The respective directory of each conference comprises multiple `*.json` files, where every file contains the information belonging to a single paper, such as the title, the abstract, the submission date and the reviews. The reviews are stored in a list called `"review_content"`. 2. Cleaned Data conferences_cleaned/ contains reviews and papers where we removed all unwanted character sequences from the reviews. For details of the preprocessing steps, please refer to our paper "Argument Mining Driven Analysis of Peer-Reviews". 3. Annotated Data conferences_annotated/ contains sentence_level and token_level data of 77 reviews, each annotated by 3 annotators. We have three labels: PRO - Arguments supporting the acceptance of the paper. CON - Arguments opposing the acceptance of the paper. NON - Non-argumentative sentences/tokens which have no influence on the acceptance of the paper. And we have the following three tasks: Argumentation Detection: A binary classification of whether a text span is an argument. The classes are denoted by ARG and NON, where ARG is the union of the PRO and CON classes. Stance Detection: A binary classification of whether an argumentative text span supports or opposes the acceptance of the paper. The model is trained and evaluated only on argumentative PRO and CON text spans. Joint Detection: A multi-class classification between the classes PRO, CON and NON, i.e. the combination of argumentation and stance detection. 4. Generalization across Conferences conferences_annotated_generalization/ contains token_level data of 77 reviews split differently than in 3. We studied the model’s generalization to peer-reviews for papers from other (sub)domains. To this end, we reduce the test set to only contain reviews from the GI’20 conference. The focus of the GI’20 conference is Computer Graphics and Human-Computer Interaction, while the other conferences are focused on Representation Learning, AI and Medical Imaging. We consider GI’20 a subdomain since all conferences are from the domain of computer science. NO-GI: The original training dataset with all sentences from reviews of GI’20 removed. ALL: A resampling of the original training dataset of the same size as NO-GI, with sentences from all conferences. 5. Jupyter Notebook ReviewStat is a Jupyter notebook that shows interesting statistics of the raw dataset.
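A minimal sketch for loading one of the raw per-paper files follows. The file path and the individual key names other than `"review_content"` (which is quoted above) are assumptions for illustration, so inspect the keys of a real file first.

```python
import json
from pathlib import Path

# Hypothetical path: any *.json file inside conferences_raw/iclr20/ follows the same layout.
paper_file = Path("conferences_raw/iclr20/example_paper.json")
with paper_file.open(encoding="utf-8") as f:
    paper = json.load(f)

print(paper.keys())  # e.g. title, abstract, submission date, reviews (check actual names)
for review in paper.get("review_content", []):
    print(type(review))  # each entry holds the content of one peer review
```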
mfromm/AMSR
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:openrail", "argument-mining", "argument-identification", "region:us" ]
2023-04-12T09:21:14+00:00
{"language": ["en"], "license": "openrail", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "AMSR", "tags": ["argument-mining", "argument-identification"]}
2023-04-12T14:58:08+00:00
802eef54d08dd7c1ef461f77a549cd5df146c414
# Dataset Card for "chunk_199" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_199
[ "region:us" ]
2023-04-12T09:21:15+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20747904768.0, "num_examples": 216016}], "download_size": 18800690248, "dataset_size": 20747904768.0}}
2023-04-12T09:39:10+00:00
a56dead5d87bb78d734e54d663fb331c7f487fa4
# Hayes The [Hayes-Roth dataset](https://archive-beta.ics.uci.edu/dataset/44/hayes+roth) from the [UCI repository](https://archive-beta.ics.uci.edu). # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|--------------------------------| | hayes | Multiclass classification | Classify hayes type. | | hayes_1 | Binary classification | Is this instance of class 1? | | hayes_2 | Binary classification | Is this instance of class 2? | | hayes_3 | Binary classification | Is this instance of class 3? | # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/hayes", "hayes")["train"] ```
mstz/hayes_roth
[ "task_categories:tabular-classification", "size_categories:n<1K", "language:en", "license:cc", "hayes", "tabular_classification", "binary_classification", "multiclass_classification", "UCI", "region:us" ]
2023-04-12T09:24:15+00:00
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Hayes evaluation", "tags": ["hayes", "tabular_classification", "binary_classification", "multiclass_classification", "UCI"], "configs": ["hayes", "hayes_1", "hayes_2", "hayes_3"]}
2023-04-16T16:30:45+00:00
e99306d5df05d2ea66a188ea936037f6588a20e9
# Iris The [Iris dataset](https://archive-beta.ics.uci.edu/dataset/53/iris) from the [UCI repository](https://archive-beta.ics.uci.edu). # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|-------------------------------| | iris | Multiclass classification | Classify iris type. | | setosa | Binary classification | Is this an iris-setosa? | | versicolor | Binary classification | Is this an iris-versicolor? | | virginica | Binary classification | Is this an iris-virginica? | # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/iris", "iris")["train"] ```
mstz/iris
[ "task_categories:tabular-classification", "size_categories:n<1k", "language:en", "license:cc", "iris", "tabular_classification", "binary_classification", "multiclass_classification", "UCI", "region:us" ]
2023-04-12T09:52:47+00:00
{"language": ["en"], "license": "cc", "size_categories": ["n<1k"], "task_categories": ["tabular-classification"], "pretty_name": "Iris", "tags": ["iris", "tabular_classification", "binary_classification", "multiclass_classification", "UCI"], "configs": ["iris", "setosa", "versicolor", "virginica"]}
2023-04-28T12:35:36+00:00
9aaeff993a2eee2d8fa4f85f6381d17937e47305
# MIDS24 Class Photos
belladu0201/facedata
[ "region:us" ]
2023-04-12T10:09:06+00:00
{}
2023-04-13T05:14:32+00:00
53cae007dc1e5d599077b6eeacc4c17199f406ad
# Lrs The [Lrs dataset](https://archive-beta.ics.uci.edu/dataset/93/low+resolution+spectrometer) from the [UCI repository](https://archive-beta.ics.uci.edu). # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|------------------------------| | lrs | Multiclass classification | Classify lrs type. | | lrs_0 | Binary classification | Is this instance of class 0? | | lrs_1 | Binary classification | Is this instance of class 1? | | lrs_2 | Binary classification | Is this instance of class 2? | | lrs_3 | Binary classification | Is this instance of class 3? | | lrs_4 | Binary classification | Is this instance of class 4? | | lrs_5 | Binary classification | Is this instance of class 5? | | lrs_6 | Binary classification | Is this instance of class 6? | | lrs_7 | Binary classification | Is this instance of class 7? | | lrs_8 | Binary classification | Is this instance of class 8? | # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/lrs", "lrs")["train"] ```
mstz/lrs
[ "task_categories:tabular-classification", "size_categories:n<1k", "language:en", "license:cc", "lrs", "tabular_classification", "binary_classification", "multiclass_classification", "UCI", "region:us" ]
2023-04-12T10:26:25+00:00
{"language": ["en"], "license": "cc", "size_categories": ["n<1k"], "task_categories": ["tabular-classification"], "pretty_name": "Lrs", "tags": ["lrs", "tabular_classification", "binary_classification", "multiclass_classification", "UCI"], "configs": ["lrs", "lrs_0", "lrs_1", "lrs_2", "lrs_3", "lrs_4", "lrs_5", "lrs_6", "lrs_7", "lrs_8"]}
2023-04-21T22:10:35+00:00
7335288588f14e5a687d97fc979194c2abe6f4e7
## FarsTail: a Persian natural language inference dataset ![alt-text](./farstail.png) Natural Language Inference (NLI), also called [Textual Entailment](https://en.wikipedia.org/wiki/Textual_entailment), is an important task in NLP with the goal of determining the inference relationship between a premise `p` and a hypothesis `h`. It is a three-class problem where each pair `(p, h)` is assigned to one of these classes: "ENTAILMENT" if the hypothesis can be inferred from the premise, "CONTRADICTION" if the hypothesis contradicts the premise, and "NEUTRAL" if none of the above holds. <br>There are large datasets such as [SNLI](https://www.aclweb.org/anthology/D15-1075/), [MNLI](https://www.aclweb.org/anthology/N18-1101/), and [SciTail](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/17368/16067) for NLI in English, but there are few datasets for low-resource languages like [Persian](https://en.wikipedia.org/wiki/Persian_language). <br>The Persian (Farsi) language is a pluricentric language spoken by around 110 million people in countries like Iran, Afghanistan, and Tajikistan. Here, we present the first relatively large-scale Persian dataset for the NLI task, called FarsTail. A total of 10,367 samples are generated from a collection of 3,539 multiple-choice questions. The train, validation, and test portions include 7,266, 1,537, and 1,564 instances, respectively. Please refer to [the manuscript](https://arxiv.org/abs/2009.08820) for more details. ## Reading data To read the raw data in the Persian alphabet, use the following code: ```python import pandas as pd train_data = pd.read_csv('data/Train-word.csv', sep='\t') val_data = pd.read_csv('data/Val-word.csv', sep='\t') test_data = pd.read_csv('data/Test-word.csv', sep='\t') ``` The `train_data` and `val_data` have three columns, `premise`, `hypothesis`, and `label`. The `test_data` has two more columns denoted as *hard(hypothesis)* and *hard(overlap)* which indicate whether or not each sample belongs to the hard subset based on the *hypothesis-only* and *overlap-based* biased models, respectively. Non-Persian researchers can use the following code to read the indexed data: ```python import numpy as np with np.load('data/Indexed-FarsTail.npz', allow_pickle=True) as f: train_ind, val_ind, test_ind, dictionary = f['train_ind'], f['val_ind'], f['test_ind'], f['dictionary'].item() ``` The `train_ind` and `val_ind` are numpy arrays with the shape of `(n, 3)` where `n` is the number of samples in each set. Each entry in these arrays includes the tokenized, indexed version of the premise and hypothesis along with the respective label for one instance. The entries of the `test_ind` variable have two more elements corresponding to the *hard(hypothesis)* and *hard(overlap)* columns, respectively. The `dictionary` variable maps the indexes to tokens. ## Results Here are test accuracies obtained by training some models on the FarsTail training set. Please refer to the manuscript for more results.
| Model | Test Accuracy | Hypothesis-only (Easy) | Hypothesis-only (Hard) | Overlap-based (Easy) | Overlap-based (Hard) | | --- | --- | --- | --- | --- | --- | |**DecompAtt (word2vec)** | **0.6662** | **0.7341** | **0.5823** | **0.7633** | **0.5404**| |**HBMP (word2vec)** | **0.6604** | **0.7618** | **0.5350** | **0.7565** | **0.5360** | |**ESIM (fastText)** | **0.7116** | **0.7931** | **0.6109** | **0.8120** | **0.5815** | |**mBERT** | **0.8338** | **0.8763** | **0.7811** | **0.8981** | **0.7504** | ## Reference If you use this dataset, please cite the following paper: Hossein Amirkhani, Mohammad AzariJafari, Soroush Faridan-Jahromi, Zeinab Kouhkan, Zohreh Pourjafari, Azadeh Amirak (2023). [FarsTail: a Persian natural language inference dataset](https://doi.org/10.1007/s00500-023-08959-3). *Soft Computing*. ```bibtex @article{amirkhani2023farstail, title={FarsTail: a Persian natural language inference dataset}, author={Amirkhani, Hossein and AzariJafari, Mohammad and Faridan-Jahromi, Soroush and Kouhkan, Zeinab and Pourjafari, Zohreh and Amirak, Azadeh}, journal={Soft Computing}, year={2023}, publisher={Springer}, doi={10.1007/s00500-023-08959-3} } ```
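Returning to the indexed data described above, here is a small sketch of mapping the indices back to tokens with the provided `dictionary`. The row layout (indexed premise, indexed hypothesis, label) is assumed from the description above and should be double-checked against the actual arrays.

```python
import numpy as np

with np.load('data/Indexed-FarsTail.npz', allow_pickle=True) as f:
    train_ind = f['train_ind']
    dictionary = f['dictionary'].item()

# Assumed row layout: (indexed premise, indexed hypothesis, label).
premise_ids, hypothesis_ids, label = train_ind[0]
premise_tokens = [dictionary[i] for i in premise_ids]      # dictionary maps index -> token
hypothesis_tokens = [dictionary[i] for i in hypothesis_ids]
print(' '.join(premise_tokens))
print(' '.join(hypothesis_tokens))
print(label)
```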
azarijafari/FarsTail
[ "arxiv:2009.08820", "region:us" ]
2023-04-12T10:32:36+00:00
{}
2023-07-26T15:21:04+00:00
d30f65c43ce485ec3752f7b357253e95f0bdb810
# Dataset Card for "chunk_210" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_210
[ "region:us" ]
2023-04-12T10:58:26+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21185211312.875, "num_examples": 220569}], "download_size": 20221288182, "dataset_size": 21185211312.875}}
2023-04-12T11:12:25+00:00
938c2b8b842c8aa970d9eec675a79358b32bd3db
## E Dataset This is the card for the E dataset.
jquave/e
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "region:us" ]
2023-04-12T11:07:59+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "EDataset", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "config_name": "plain_text"}}
2023-04-12T14:45:45+00:00
a0df995eb8150d04888716bfe1699b11097708cc
# Dataset Card for "odex-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
loubnabnl/odex-data
[ "region:us" ]
2023-04-12T11:10:47+00:00
{"dataset_info": {"features": [{"name": "predictions", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 852038, "num_examples": 439}], "download_size": 195724, "dataset_size": 852038}}
2023-04-12T11:10:50+00:00
3cb24a96d8c5ee5dfaeb6efbae3a25a568866c32
# Dataset Card for "chunk_198" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_198
[ "region:us" ]
2023-04-12T11:21:31+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20214166032.625, "num_examples": 210459}], "download_size": 18270712837, "dataset_size": 20214166032.625}}
2023-04-12T11:57:48+00:00
2bbc5c798c2a4bdac348bb18786974e074ca7f14
yuniv/sample2
[ "region:us" ]
2023-04-12T11:26:09+00:00
{}
2023-04-12T11:52:46+00:00
0f2b7770cadd776d52585b12efcacecf9abe56dc
# Dataset Card for "chatgpt-news-articles" ## Dataset Description - **Homepage:** - **Repository:** [ChatGPT CNN / DailyMail Dataset repository]() - **Original Dataset Paper:** [Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [Get To The Point: Summarization with Pointer-Generator Networks](https://www.aclweb.org/anthology/K16-1028.pdf) - **Point of Contact:** [Sarthak Anand](mailto: [email protected]) ### Dataset Summary The ChatGPT CNN / DailyMail Dataset is a small sample of the original CNN / Daily Mail English-language dataset containing 25k unique news articles. For each article written by journalists at CNN and the Daily Mail, there is a corresponding article written by ChatGPT using the highlights provided by human annotators. The current version can be used to compare human and ChatGPT news writing. ### Languages The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data. ## Dataset Structure ### Data Instances For each instance, there is a string for the article, a string for the highlights, a string for the id, and a string for an article written by ChatGPT. ``` {'article': "Michael Phelps has been crowned Male Athlete of the Year for a fifth time at the 2014 USA Swimming Golden Goggle Awards despite being suspended from competition for six months after a drunken driving arrest in September. Phelps was not at the New York ceremony where Keenan Robinson, an official from his training base, accepted the award on his behalf and confirmed Phelps had returned to the pool. The 18-time Olympic gold medallist stepped away from training in early October. Michael Phelps has been crowned Male Athlete of the Year at the 2014 USA Swimming Golden Goggle Awards . Phelps is the most decorated Olympian in sports history, winning 18 Olympic golds during his career . Olympic gold medallist and world record-holder Katie Ledecky capped her memorable 2014 season by claiming three awards, including USA Swimming's Female Athlete of the Year.", 'highlights': 'Michael Phelps was not present at the New York ceremony . Phelps was handed a six-month suspension by USA Swimming following his arrest for allegedly drink driving last month . Phelps confirmed in October that he would be taking a break from\xa0swimming\xa0to focus on his personal issues . Phelps is the most successful Olympic athlete in history, with 22 medals in total including 18 golds .', 'id': '95ef5b45d999dc9a78c5efa2de87e84f21912086', 'chatgpt': 'Michael Phelps, the most successful Olympic athlete in history, was noticeably absent from a ceremony held in New York City yesterday. The reason for the absence is due to a recent six-month suspension handed to Phelps by USA Swimming following his arrest for allegedly drink driving last month. In October, Phelps confirmed that he would be taking a break from swimming in order to focus on his personal issues. The suspension now means that Phelps will not be able to compete in the upcoming World Championships in Kazan, Russia in August. This will be a disappointing blow to his fans across the world as Phelps holds the record for the most Olympic gold medals, with a total of 18.
However, Phelps can take this time to focus on his health and address his personal concerns.'} ``` The average word counts for the articles, the ChatGPT articles, and the highlights are provided below: | Feature | Mean Word Count | | ---------- | ---------------- | | Article | 358 | | ChatGPT | 352 | | Highlights | 42 | ### Data Fields - `id`: a string containing the hexadecimal-formatted SHA1 hash of the url where the story was retrieved from - `article`: a string containing the news article written by journalists - `highlights`: a string containing the highlight of the article as written by the article author - `chatgpt`: a string containing the news article written by ChatGPT ### Data Splits The CNN/DailyMail dataset has 2 splits: _train_ and _test_. Below are the statistics of the dataset. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 20,000 | | Test | 5,000 | ## Dataset Creation ## ChatGPT Prompt The number of words requested for an article (N) was the same as in the original article: ``` openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "system", "content": "You are a AI assistant that generates news articles from a summary."}, {"role": "user", "content": f'Write a news article using the following summary: {HIGHLIGHTS} \n Write about {N} words only'} ],) ``` ### Source Data ### Original Dataset Curators The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IBM Watson and Çağlar Gu̇lçehre of Université de Montréal modified Hermann et al.'s collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions. The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program AFRL contract no. FA8750-13-2-0040. #### Who are the source language producers? The text was written by journalists at CNN and the Daily Mail, and by ChatGPT. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information The original dataset is not anonymized, therefore individuals' names can be found in this dataset as well. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to assess the quality and writing style of ChatGPT for writing news articles using highlights provided by humans, and to further study any biases present. ### Discussion of Biases There have been studies measuring gender bias in the original dataset which could be of interest; see [Bordia and Bowman (2019)](https://www.aclweb.org/anthology/N19-3002.pdf) ### Licensing Information The ChatGPT CNN / Daily Mail dataset uses the same license as the original dataset, which is [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
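A short, hedged loading example for this dataset; the repository id and field names follow this card, while the split handling is only a sketch.

```python
from datasets import load_dataset

dataset = load_dataset("isarth/chatgpt-news-articles")
sample = dataset["train"][0]

# Fields documented above: the human-written article, its highlights,
# the SHA1-based id, and the ChatGPT-written counterpart.
print(sample["highlights"])
print(sample["article"][:300])
print(sample["chatgpt"][:300])
```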
isarth/chatgpt-news-articles
[ "region:us" ]
2023-04-12T11:27:52+00:00
{"dataset_info": {"features": [{"name": "article", "dtype": "string"}, {"name": "highlights", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "chatgpt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 91883734, "num_examples": 20000}, {"name": "test", "num_bytes": 22989445, "num_examples": 5000}], "download_size": 69781166, "dataset_size": 114873179}}
2023-04-13T13:08:02+00:00
111192c1dbdda5ebd3b2f50adf5c0af47a89aaba
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
siddharthtumre/Revised-JNLPBA
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:unknown", "region:us" ]
2023-04-12T11:29:56+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "IASL-BNER Revised JNLPBA", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-DNA", "2": "I-DNA", "3": "B-RNA", "4": "I-RNA", "5": "B-cell_line", "6": "I-cell_line", "7": "B-cell_type", "8": "I-cell_type", "9": "B-protein", "10": "I-protein"}}}}], "config_name": "revised-jnlpba"}}
2023-04-12T11:43:52+00:00
a12110fd1519fd495a5ce6b285e20ec09ba8e57a
shawarmas/Censored-Words
[ "region:us" ]
2023-04-12T11:37:38+00:00
{}
2023-05-24T16:36:36+00:00
5d556abec88968bca2e56c73552e67872889e20b
# Dataset Card for "voice_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
0x-YuAN/voice_dataset
[ "region:us" ]
2023-04-12T11:44:55+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "ID", "dtype": "string"}, {"name": "Sex", "dtype": "int64"}, {"name": "Age", "dtype": "int64"}, {"name": "Disease category", "dtype": "int64"}, {"name": "Narrow pitch range", "dtype": "int64"}, {"name": "Decreased volume", "dtype": "int64"}, {"name": "Fatigue", "dtype": "int64"}, {"name": "Dryness", "dtype": "int64"}, {"name": "Lumping", "dtype": "int64"}, {"name": "heartburn", "dtype": "int64"}, {"name": "Choking", "dtype": "int64"}, {"name": "Eye dryness", "dtype": "int64"}, {"name": "PND", "dtype": "int64"}, {"name": "Smoking", "dtype": "int64"}, {"name": "PPD", "dtype": "float64"}, {"name": "Drinking", "dtype": "int64"}, {"name": "frequency", "dtype": "int64"}, {"name": "Diurnal pattern", "dtype": "int64"}, {"name": "Onset of dysphonia ", "dtype": "int64"}, {"name": "Noise at work", "dtype": "int64"}, {"name": "Occupational vocal demand", "dtype": "int64"}, {"name": "Diabetes", "dtype": "int64"}, {"name": "Hypertension", "dtype": "int64"}, {"name": "CAD", "dtype": "int64"}, {"name": "Head and Neck Cancer", "dtype": "int64"}, {"name": "Head injury", "dtype": "int64"}, {"name": "CVA", "dtype": "int64"}, {"name": "Voice handicap index - 10", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 340418666.0, "num_examples": 1000}], "download_size": 323237441, "dataset_size": 340418666.0}}
2023-04-21T15:20:35+00:00
f8165750c3bfd4506b947a9c46229552c53aa61f
# Dataset Card for "chunk_206" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_206
[ "region:us" ]
2023-04-12T11:58:35+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20712943296.5, "num_examples": 215652}], "download_size": 19099832302, "dataset_size": 20712943296.5}}
2023-04-12T12:36:21+00:00
56366ac6cee2bf240957a5daf0a97029ed11612e
# Dataset Card for "multiple-preds-new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
loubnabnl/multiple-preds-new
[ "region:us" ]
2023-04-12T12:14:52+00:00
{"dataset_info": {"features": [{"name": "predictions", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 21710012, "num_examples": 161}], "download_size": 3267344, "dataset_size": 21710012}}
2023-04-12T12:14:55+00:00
a937c364cc180eb65f62e00caecce6ff1c24beb8
madabou/dkt-images
[ "license:mit", "region:us" ]
2023-04-12T12:30:45+00:00
{"license": "mit", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35688.0, "num_examples": 6}], "download_size": 32507, "dataset_size": 35688.0}}
2023-04-17T06:29:51+00:00
003a863687bbef9212e50047210481348edfa71b
# Dataset Card for "chunk_211" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_211
[ "region:us" ]
2023-04-12T12:59:26+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24316280064.0, "num_examples": 253168}], "download_size": 23035611129, "dataset_size": 24316280064.0}}
2023-04-12T13:22:47+00:00
431f4cb2ef06faad9044a583319df7b2a1d692fa
carlosdasi/dasi
[ "license:openrail", "region:us" ]
2023-04-12T13:35:40+00:00
{"license": "openrail"}
2023-04-12T13:35:40+00:00
916223ded2c058e53e252f4f46a6ab5646c3a48b
P01son/instructions
[ "license:cc-by-4.0", "region:us" ]
2023-04-12T13:45:40+00:00
{"license": "cc-by-4.0"}
2023-07-07T07:56:18+00:00
25578909ff91772a2df8b8e56a8e198e14d41563
# Dataset Card for "chunk_212" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_212
[ "region:us" ]
2023-04-12T13:52:18+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21982121568.75, "num_examples": 228866}], "download_size": 20298984341, "dataset_size": 21982121568.75}}
2023-04-12T14:11:44+00:00
f86f2d06e5548f0f6c3c217e774d9e3c422eb478
# Dataset Card for "chunk_214" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_214
[ "region:us" ]
2023-04-12T13:56:42+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21026443968.5, "num_examples": 218916}], "download_size": 19471412737, "dataset_size": 21026443968.5}}
2023-04-12T14:08:57+00:00
a71bc206ea6ec1f52b897ce88876a28f2dc9dcde
# Dataset Card for "chunk_205" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_205
[ "region:us" ]
2023-04-12T13:59:24+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21514752000.0, "num_examples": 224000}], "download_size": 20178552223, "dataset_size": 21514752000.0}}
2023-04-12T14:18:48+00:00
d1af295e8eccb47a97aaa01094643505e0378b53
# oa_dolly_15k Dolly 15k dataset converted to OpenAssistant QA format.
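A brief usage sketch, assuming the feature names listed in the dataset info (INSTRUCTION, RESPONSE, SOURCE, METADATA):

```python
from datasets import load_dataset

dataset = load_dataset("OllieStanley/oa_dolly_15k")["train"]
example = dataset[0]

# Each record pairs an instruction with its response, plus the source and
# metadata (category and optional context) in the OpenAssistant QA format.
print(example["INSTRUCTION"])
print(example["RESPONSE"])
print(example["METADATA"])
```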
OllieStanley/oa_dolly_15k
[ "region:us" ]
2023-04-12T14:14:10+00:00
{"dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}, {"name": "METADATA", "struct": [{"name": "CATEGORY", "dtype": "string"}, {"name": "CONTEXT", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 12686692, "num_examples": 15015}], "download_size": 7872978, "dataset_size": 12686692}}
2023-05-02T13:27:18+00:00
c4f99bc41e69a18714d544ee02db53607cb1bbee
# Dataset Card for Snippet-MLSUM-500 ### Dataset Summary This dataset is a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets. ### Supported Tasks This dataset was created to support the task of generating news snippets such as title, teaser, keywords, serp and tweet for news articles in German language. ### Languages de - German ## Dataset Structure text: a string feature. title: a string feature. teaser: a string feature. keywords: a string feature. serp_title: a string feature. serp_description: a string feature. tweet: a string feature. url: a string feature. date: a string feature. topic: a string feature. ## Dataset Creation The news articles in this dataset are a random sample of ~500 news articles from MLSUM balanced by topic. Features text, title, teaser (originally summary in MLSUM), url, date and topic are copied from MLSUM. Features keywords, serp_title, serp_description and tweet are machine generated with GPT-3.5. Generated features comply with length limits in place for SERPs and Tweets at the time of publishing. ## Considerations for Using the Data ### Known Limitations Part of the snippet data is machine generated. Be aware that these features (specifically: keywords, serp_title, serp_description and tweet) may exhibit signs of model hallucination. ## Additional Information See [Snippet-MLSUM-500-V2](https://huggingface.co/datasets/snipaid/snippet-mlsum-500-v2) if you are interested in a dataset with combined serp and additional summary data. ### Licensing Information This dataset is licensed under MIT license.
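A small loading sketch based on the fields listed above; the split name `train` is an assumption, so check the available splits if loading fails.

```python
from datasets import load_dataset

# Assumption: the data ships as a single "train" split.
dataset = load_dataset("snipaid/snippet-mlsum-500", split="train")
article = dataset[0]

# text, title, teaser, url, date and topic come from MLSUM; keywords,
# serp_title, serp_description and tweet are machine generated with GPT-3.5.
print(article["title"])
print(article["teaser"])
print(article["serp_title"])
print(article["tweet"])
```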
snipaid/snippet-mlsum-500
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:n<1K", "language:de", "license:mit", "news", "headline", "teaser", "keywords", "tweet", "serp title-tag", "serp meta-description", "news snippets", "region:us" ]
2023-04-12T14:15:59+00:00
{"language": "de", "license": "mit", "size_categories": ["n<1K"], "task_categories": ["summarization", "text2text-generation"], "tags": ["news", "headline", "teaser", "keywords", "tweet", "serp title-tag", "serp meta-description", "news snippets"]}
2023-04-19T17:24:33+00:00
9227761e4465fa61b771e39bcbd900fa3558a16e
# Dataset Card for "amazon-shoe-reviews" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
P3ps/amazon-shoe-reviews
[ "region:us" ]
2023-04-12T14:53:21+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}, {"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}], "download_size": 11141108, "dataset_size": 18719628.0}}
2023-04-12T14:53:35+00:00
aae02063412b6ee32ca69bfb5788220bb59fbc6b
# Dataset Card for "processed_gpt_dataset_max" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sanagnos/processed_gpt_dataset_max
[ "region:us" ]
2023-04-12T15:19:31+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 27715208700.0, "num_examples": 2253269}], "download_size": 7902918885, "dataset_size": 27715208700.0}}
2023-04-12T15:44:04+00:00
222c42f7bef7c78770f538e8364a40b65aad2d2e
# Dataset Card for Dolly_15K

# Summary

`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).

Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation

Languages: English
Version: 1.0

**Owner: Databricks, Inc.**

# Dataset Overview

`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.

Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.

For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.

# Intended Uses

While immediately valuable for instruction fine-tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation using the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.

Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated with the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.

# Dataset

## Purpose of Collection

As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.

## Sources

- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization) contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.

## Annotator Guidelines

To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.

The annotation guidelines for each of the categories are as follows:

- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc.) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.

## Personal or Sensitive Data

This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.

## Language

American English

# Known Limitations

- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees

# License/Attribution

**Copyright (2023) Databricks, Inc.**

This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.

Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:

Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors.
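As a convenience for the citation-number cleanup recommended above, the snippet below is a minimal sketch of stripping bracketed markers such as `[42]` from a reference text; the helper name is illustrative and not part of the dataset.

```python
import re

# Strip bracketed Wikipedia citation markers such as "[42]" from a reference text.
CITATION_PATTERN = re.compile(r"\[\d+\]")

def strip_citations(text: str) -> str:
    """Remove citation markers and collapse any doubled spaces left behind."""
    cleaned = CITATION_PATTERN.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(strip_citations("Paris is the capital of France.[12] It lies on the Seine.[3]"))
# -> Paris is the capital of France. It lies on the Seine.
```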
HuggingFaceH4/databricks_dolly_15k
[ "license:cc-by-3.0", "arxiv:2203.02155", "region:us" ]
2023-04-12T15:51:27+00:00
{"license": "cc-by-3.0", "dataset_info": {"features": [{"name": "category", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12326332, "num_examples": 15015}], "download_size": 0, "dataset_size": 12326332}}
2023-04-12T16:11:41+00:00
c1bd2f32cbb7e55339dc3b972d560839fa3d2fe4
# Dataset Card for "dreambooth-hackathon-images-jindo" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
wonsangkim/dreambooth-hackathon-images-jindo
[ "region:us" ]
2023-04-12T15:53:21+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3620773.0, "num_examples": 20}], "download_size": 3618987, "dataset_size": 3620773.0}}
2023-04-12T15:53:23+00:00
a96d9ae127eeba6dd980ab9efd73af12b691a28f
### Preprocessing used

Removing Stopwords, Removing Punctuation

### Data Fields

The data fields are the same among all splits.

#### pubmed

- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.

### Data Splits

| name   |  train | validation | test |
|--------|-------:|-----------:|-----:|
| pubmed | 119924 |       6633 | 6658 |
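A minimal loading sketch, assuming the `datasets` library and the repository id `JYumeko/processed_scientific_papers`; since the exported field names may differ slightly from the list above, the snippet simply inspects the keys of one record.

```python
# Minimal loading sketch: assumes the `datasets` library and the repository id below.
from datasets import load_dataset

ds = load_dataset("JYumeko/processed_scientific_papers")

for split in ("train", "validation", "test"):
    print(split, len(ds[split]))    # expected sizes: 119924 / 6633 / 6658

print(list(ds["train"][0].keys()))  # inspect the available fields of one record
```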
JYumeko/processed_scientific_papers
[ "doi:10.57967/hf/0534", "region:us" ]
2023-04-12T16:03:08+00:00
{"dataset_info": {"features": [{"name": "abstract", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1713154010, "num_examples": 119924}, {"name": "validation", "num_bytes": 96932057, "num_examples": 6633}, {"name": "test", "num_bytes": 96752765, "num_examples": 6658}], "download_size": 879691152, "dataset_size": 1906838832}}
2023-04-14T10:17:31+00:00
c3ca5445e0b6cb43d3f99d689e5d6b4b8cb3d39a
# Dataset Card for "chunk_208" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_208
[ "region:us" ]
2023-04-12T16:09:48+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 22526137440.75, "num_examples": 234530}], "download_size": 20948738535, "dataset_size": 22526137440.75}}
2023-04-12T16:42:16+00:00
25be4013c3fb2cd6fa1581f87d95fe41d343ee01
PIDray - 100 Tensors with their annotations
AlShurbaji/PIDray_Tensors
[ "license:apache-2.0", "region:us" ]
2023-04-12T16:15:35+00:00
{"license": "apache-2.0"}
2023-04-27T05:17:50+00:00
ea1f55bcbefeab9073594cdfcc6659f8d316d089
# Dataset Card for "chunk_197" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_197
[ "region:us" ]
2023-04-12T16:19:17+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 16920007776.75, "num_examples": 176162}], "download_size": 15321877063, "dataset_size": 16920007776.75}}
2023-04-12T16:43:39+00:00
fa11784728a82a6379b1f328b14c014c3f0c106a
# Dataset Card for "pedro-embeddings" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nateraw/pedro-embeddings
[ "region:us" ]
2023-04-12T16:31:37+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 15978849.0, "num_examples": 161}], "download_size": 16163063, "dataset_size": 15978849.0}}
2023-04-12T16:31:45+00:00
22d140b210de4c2f7f40e83aa30bca244f86ffa9
# Dataset Card for "miniviquae_dataset" Reduced version of ViQuAE, see https://github.com/PaulLerner/ViQuAE
Paulgrim/miniviquae_dataset
[ "license:cc-by-4.0", "region:us" ]
2023-04-12T16:44:55+00:00
{"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "clip-RN50", "sequence": "float64"}, {"name": "face", "sequence": {"sequence": {"sequence": {"sequence": "float64"}}}}, {"name": "face_box", "sequence": {"sequence": "float64"}}, {"name": "face_embedding", "sequence": "float64"}, {"name": "face_landmarks", "sequence": {"sequence": {"sequence": "float64"}}}, {"name": "face_prob", "sequence": "float64"}, {"name": "id", "dtype": "string"}, {"name": "imagenet-RN50", "sequence": "float64"}, {"name": "input", "dtype": "string"}, {"name": "kilt_id", "dtype": "string"}, {"name": "meta", "struct": [{"name": "left_context", "dtype": "string"}, {"name": "mention", "dtype": "string"}, {"name": "obj_surface", "struct": [{"name": "text", "sequence": "null"}]}, {"name": "partial_evidence", "struct": [{"name": "end_paragraph_id", "sequence": "null"}, {"name": "meta", "sequence": "null"}, {"name": "section", "sequence": "null"}, {"name": "start_paragraph_id", "sequence": "null"}, {"name": "title", "sequence": "null"}, {"name": "wikipedia_id", "sequence": "null"}]}, {"name": "right_context", "dtype": "string"}, {"name": "sub_surface", "struct": [{"name": "text", "sequence": "null"}]}, {"name": "subj_aliases", "struct": [{"name": "text", "sequence": "null"}]}, {"name": "template_questions", "struct": [{"name": "text", "sequence": "null"}]}]}, {"name": "original_question", "dtype": "string"}, {"name": "output", "struct": [{"name": "answer", "sequence": "string"}, {"name": "meta", "sequence": "null"}, {"name": "original_answer", "dtype": "string"}, {"name": "provenance", "list": [{"name": "bleu_score", "sequence": "float64"}, {"name": "end_character", "sequence": "int64"}, {"name": "end_paragraph_id", "sequence": "int64"}, {"name": "meta", "sequence": "null"}, {"name": "section", "sequence": "string"}, {"name": "start_character", "sequence": "int64"}, {"name": "start_paragraph_id", "sequence": "int64"}, {"name": "title", "sequence": "string"}, {"name": "wikipedia_id", "sequence": "string"}]}]}, {"name": "url", "dtype": "string"}, {"name": "wikidata_id", "dtype": "string"}, {"name": "search_indices", "sequence": "int64"}, {"name": "search_provenance_indices", "sequence": "int64"}, {"name": "search_irrelevant_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 313513980, "num_examples": 1165}, {"name": "validation", "num_bytes": 299175833, "num_examples": 1216}, {"name": "test", "num_bytes": 304124128, "num_examples": 1237}], "download_size": 139611306, "dataset_size": 916813941}}
2023-04-12T17:16:14+00:00
fb30bfabfbc14e0ae83072f3fbc927b24c6a5735
# databricks-dolly-15k

**This dataset was not originally created by AI Squared.** This dataset was curated and created by [Databricks](https://databricks.com). The below text comes from the original release of the dataset's README file in GitHub (available at https://github.com/databrickslabs/dolly/tree/master/data):

# Summary

`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).

Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation

Languages: English
Version: 1.0

**Owner: Databricks, Inc.**

# Dataset Overview

`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.

Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.

For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.

# Intended Uses

While immediately valuable for instruction fine-tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation using the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.

Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated with the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.

# Dataset

## Purpose of Collection

As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.

## Sources

- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization) contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.

## Annotator Guidelines

To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.

The annotation guidelines for each of the categories are as follows:

- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc.) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.

## Personal or Sensitive Data

This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.

## Language

American English

# Known Limitations

- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees

# License/Attribution

**Copyright (2023) Databricks, Inc.**

This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.

Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:

Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors.
aisquared/databricks-dolly-15k
[ "language:en", "license:cc-by-sa-3.0", "databricks", "dolly", "arxiv:2203.02155", "region:us" ]
2023-04-12T16:45:01+00:00
{"language": ["en"], "license": "cc-by-sa-3.0", "pretty_name": "Dataset ", "tags": ["databricks", "dolly"]}
2023-04-12T17:14:46+00:00
238f76d01b1b8924f607e32512d5fc413c6ac2b2
# Dataset Card for "miniviquae_passages" Reduced version of ViQuAE, see https://github.com/PaulLerner/ViQuAE
Paulgrim/miniviquae_passages
[ "license:cc-by-3.0", "region:us" ]
2023-04-12T16:47:05+00:00
{"license": "cc-by-3.0", "dataset_info": {"features": [{"name": "passage", "dtype": "string"}, {"name": "index", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 76346268, "num_examples": 166712}], "download_size": 44356531, "dataset_size": 76346268}}
2023-04-12T17:16:36+00:00
35ad8254e6366c43355de3ecef2f5ba405997c09
# Dataset Card for "chunk_213" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_213
[ "region:us" ]
2023-04-12T17:05:19+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23810491296.25, "num_examples": 247902}], "download_size": 22645976382, "dataset_size": 23810491296.25}}
2023-04-12T17:40:04+00:00
e29a49d40f7ed6bf2afeebb2e557eb6b351f5207
# Dataset Card for "miniviquae_wikipedia" Reduced version of ViQuAE, see https://github.com/PaulLerner/ViQuAE
Paulgrim/miniviquae_wikipedia
[ "license:cc-by-3.0", "region:us" ]
2023-04-12T17:05:38+00:00
{"license": "cc-by-3.0", "dataset_info": {"features": [{"name": "anchors", "sequence": [{"name": "end", "dtype": "int32"}, {"name": "href", "dtype": "string"}, {"name": "paragraph_id", "dtype": "int32"}, {"name": "start", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "wikipedia_id", "dtype": "string"}, {"name": "wikipedia_title", "dtype": "string"}]}, {"name": "categories", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "kilt_id", "dtype": "string"}, {"name": "text", "sequence": [{"name": "paragraph", "dtype": "string"}]}, {"name": "url", "dtype": "string"}, {"name": "wikidata_info", "struct": [{"name": "aliases", "sequence": [{"name": "alias", "dtype": "string"}]}, {"name": "description", "dtype": "string"}, {"name": "enwikiquote_title", "dtype": "string"}, {"name": "wikidata_id", "dtype": "string"}, {"name": "wikidata_label", "dtype": "string"}, {"name": "wikipedia_title", "dtype": "string"}]}, {"name": "wikipedia_id", "dtype": "string"}, {"name": "wikipedia_title", "dtype": "string"}, {"name": "passage_index", "sequence": "int64"}, {"name": "clip-RN50", "sequence": "float32"}, {"name": "imagenet-RN50", "sequence": "float32"}, {"name": "face_box", "sequence": {"sequence": "float64"}}, {"name": "face_landmarks", "sequence": {"sequence": {"sequence": "float64"}}}, {"name": "face_prob", "sequence": "float64"}, {"name": "face_embedding", "sequence": "float64"}], "splits": [{"name": "non_humans", "num_bytes": 98327419, "num_examples": 2962}, {"name": "humans_with_faces", "num_bytes": 91308756, "num_examples": 1874}, {"name": "humans_without_faces", "num_bytes": 3388671, "num_examples": 104}], "download_size": 129733163, "dataset_size": 193024846}}
2023-04-12T17:17:15+00:00
db14fd237b8f503f62e1276384e4dc97686dc3b6
smell161/1
[ "license:openrail", "region:us" ]
2023-04-12T17:06:33+00:00
{"license": "openrail"}
2023-04-12T17:06:33+00:00
a258ffc3f82ccb961c1f7eb73d788606b15c4332
jfischoff/super-channel-control-net-images
[ "license:openrail", "region:us" ]
2023-04-12T17:15:11+00:00
{"license": "openrail"}
2023-04-12T17:15:11+00:00
f8dda6917eea8e8e05d7037cbc18c240aa9af2a9
The corresponding GitHub repo can be found here: https://github.com/leap-stc/ClimSim

Read more: https://arxiv.org/abs/2306.08754
LEAP/ClimSim_high-res
[ "license:cc-by-4.0", "arxiv:2306.08754", "doi:10.57967/hf/0739", "region:us" ]
2023-04-12T17:27:42+00:00
{"license": "cc-by-4.0"}
2023-09-29T19:30:24+00:00
bc6c4fc1562b4730988788316f38c3667b0d9d32
# Dataset Card for "translated_german_alpaca_validation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LEL-A/translated_german_alpaca_validation
[ "language:de", "region:us" ]
2023-04-12T17:39:19+00:00
{"language": ["de"], "dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "_instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "original_id", "dtype": "int64"}, {"name": "translation_model", "dtype": "string"}]}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 152890, "num_examples": 8}], "download_size": 0, "dataset_size": 152890}}
2023-10-02T15:50:04+00:00
60a5a7f11e88a225cf19fd5a19ac229f1b3b03a3
langeheris/bak-dataset
[ "license:unknown", "region:us" ]
2023-04-12T17:48:20+00:00
{"license": "unknown"}
2023-04-12T17:51:53+00:00
04215cca9f23e1e9d8055d563d6a7f6d11e238df
# AutoTrain Dataset for project: entericdisease50articlefinetune ## Dataset Description This dataset has been automatically processed by AutoTrain for project entericdisease50articlefinetune. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "context": "\ufeff\n\nNew Role for the ibeA Gene in H2O2 Stress Resistance of\nEscherichia coli\nMaud Fl\u00e9chard,a,b M\u00e9lanie A. M. Cortes,a,b Maryline R\u00e9p\u00e9rant,a,b and Pierre Germona,b\nINRA, UMR 1282 Infectiologie et Sant\u00e9 Publique, Nouzilly, France,a and Universit\u00e9 Fran\u00e7ois Rabelais, UMR 1282, Tours, Franceb\n\nibeA is a virulence factor found in", "question": "What genes are mentioned in this article", "answers.text": [ "ibeA" ], "answers.answer_start": [ 20 ], "feat_answer_id": [ 798798 ], "feat_document_id": [ 1442778 ], "feat_question_id": [ 915217 ], "feat_answer_end": [ 24.0 ], "feat_answer_category": [ null ], "feat_file_name": [ "53_JB.00089-12.txt" ] }, { "context": "\ufeffInternational Journal of Food Microbiology 166 (2013) 65\u201371\n\n\n\n\n\n\n\n\n\n\nLoss of cAMP/CRP regulation confers extreme high hydrostatic pressure resistance in Escherichia coli O157:H7\nDietrich Vanlint, Brecht J.Y. Pype, Nele Rutten, Kristof G.A. Vanoirbeek, Chris W. Michiels, Abram Aertsen \u204e\nLaboratory of Food Microbiology and Leuven Food Science and Nutrition Research Centre (LFoRCe), De", "question": "What genes are mentioned in this article", "answers.text": [ "CRP" ], "answers.answer_start": [ 84 ], "feat_answer_id": [ 798875 ], "feat_document_id": [ 1442807 ], "feat_question_id": [ 915217 ], "feat_answer_end": [ 87.0 ], "feat_answer_category": [ null ], "feat_file_name": [ "82_1-s2.0-S0168160513003115-main.txt" ] } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "context": "Value(dtype='string', id=None)", "question": "Value(dtype='string', id=None)", "answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)", "feat_answer_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)", "feat_document_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)", "feat_question_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)", "feat_answer_end": "Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)", "feat_answer_category": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "feat_file_name": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 80 | | valid | 20 |
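Since answers are stored SQuAD-style as character offsets into the context, the answer span can be recovered by slicing. The snippet below is a minimal sketch using an illustrative record (not an actual row from the dataset).

```python
# Recover an answer span from a SQuAD-style record (character offset + answer text).
# The record below is illustrative, not an actual row from this dataset.
record = {
    "context": "ibeA is a virulence factor found in some Escherichia coli strains.",
    "question": "What genes are mentioned in this article",
    "answers.text": ["ibeA"],
    "answers.answer_start": [0],
}

start = record["answers.answer_start"][0]
answer = record["answers.text"][0]
span = record["context"][start:start + len(answer)]

assert span == answer  # the offset points at the literal answer text in the context
print(span)            # -> ibeA
```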
twielema/50EntericDiseaseArticleQADataset
[ "region:us" ]
2023-04-12T17:50:30+00:00
{}
2023-04-12T17:51:22+00:00
d9e0d75a52912f356fb7138ca53290f7bf54e60d
# Dataset Card for "masked_language_model_v0_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekaneeky/masked_language_model_v0_1
[ "region:us" ]
2023-04-12T17:58:37+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2584006520, "num_examples": 1541770}], "download_size": 207750292, "dataset_size": 2584006520}}
2023-04-24T15:14:23+00:00
683f8f6a74f7d57665e6c932200b88bf6ec6d59a
SummerSigh/TrollHunter
[ "license:apache-2.0", "region:us" ]
2023-04-12T18:06:37+00:00
{"license": "apache-2.0"}
2023-04-12T18:07:08+00:00
72be58c487a0590b3d955231d03c55c922fc418f
hounsouthohin/bears-fastai-2021
[ "license:apache-2.0", "region:us" ]
2023-04-12T18:48:06+00:00
{"license": "apache-2.0"}
2023-04-12T18:48:06+00:00
9bd19049205847a506ac4141e0f40d8e4e8e3e2a
# Dataset Card for "chunk_216" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_216
[ "region:us" ]
2023-04-12T19:26:27+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23157556992.0, "num_examples": 241104}], "download_size": 22056093647, "dataset_size": 23157556992.0}}
2023-04-12T19:47:08+00:00
57eea52805026e189f2a52b00dd51f20ea4636fe
# Dataset Card for "chunk_218" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_218
[ "region:us" ]
2023-04-12T19:30:15+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20988505008.875, "num_examples": 218521}], "download_size": 19838852943, "dataset_size": 20988505008.875}}
2023-04-12T19:48:53+00:00
22f6331e8d8cf2ac69ce0debc3bef7a4c82732c6
Dimmas/Landscape_Segmentation
[ "license:bigscience-openrail-m", "region:us" ]
2023-04-12T19:35:21+00:00
{"license": "bigscience-openrail-m"}
2023-04-13T08:37:02+00:00
ef540cc712d59089adbf65a295847af1f3b8eb07
# Dataset Card for "chunk_219" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_219
[ "region:us" ]
2023-04-12T19:36:32+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 22381585200.875, "num_examples": 233025}], "download_size": 21384215846, "dataset_size": 22381585200.875}}
2023-04-12T19:55:52+00:00
6ab837e95e156f7ed3f9ffee2d05fb2f5c9b88a0
# Dataset Card for "chunk_215" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_215
[ "region:us" ]
2023-04-12T19:43:44+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20548028880.125, "num_examples": 213935}], "download_size": 18678046887, "dataset_size": 20548028880.125}}
2023-04-12T20:01:10+00:00
1a13d0332fee7df6331ce5ff6dc48d8e506bda70
# Dataset Card for "chunk_222" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_222
[ "region:us" ]
2023-04-12T19:50:53+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24192090000.625, "num_examples": 251875}], "download_size": 22551122041, "dataset_size": 24192090000.625}}
2023-04-12T20:06:33+00:00
6f93afb3ce4a3b50deaa7a3cb89ce7f5e2380eff
RealTimeData/bbc_news_march_2023
[ "license:cc-by-2.0", "region:us" ]
2023-04-12T19:58:46+00:00
{"license": "cc-by-2.0"}
2023-04-12T19:59:10+00:00
446ed981a59732da29e6b46ea9e7d8b243a461be
# Dataset Card for "chunk_217" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_217
[ "region:us" ]
2023-04-12T20:26:28+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21216042720.75, "num_examples": 220890}], "download_size": 20002988350, "dataset_size": 21216042720.75}}
2023-04-12T20:44:48+00:00
ba9251ef6586903f034500bc1c5f603d29b2c594
# Dataset Card for "pedro-embeddings-new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sasha/pedro-embeddings-new
[ "region:us" ]
2023-04-12T20:27:54+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 4762798.0, "num_examples": 150}], "download_size": 4945449, "dataset_size": 4762798.0}}
2023-04-12T20:27:56+00:00
9999999683ef6ebeb5ab46599709ea739c32c2bb
# Dataset Card for "chunk_220" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_220
[ "region:us" ]
2023-04-12T20:40:13+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21843332208.375, "num_examples": 227421}], "download_size": 19732408435, "dataset_size": 21843332208.375}}
2023-04-12T20:58:40+00:00
dd2795125fbe7a57912147371b7d10fca55110d7
# Dataset Card for "testing_self_instruct_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuggingFaceH4/testing_self_instruct_small
[ "region:us" ]
2023-04-12T20:53:12+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20379, "num_examples": 100}, {"name": "test", "num_bytes": 26586, "num_examples": 100}], "download_size": 35875, "dataset_size": 46965}}
2023-04-12T20:53:16+00:00
71a80d4ba17d70406a891b03e89f609ef0024b38
# Dataset Card for "testing_alpaca_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuggingFaceH4/testing_alpaca_small
[ "region:us" ]
2023-04-12T20:55:01+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33856, "num_examples": 100}, {"name": "test", "num_bytes": 32475, "num_examples": 100}], "download_size": 52543, "dataset_size": 66331}}
2023-04-12T20:55:05+00:00
bab33c4c2123cef6a53416ac40fd6ff2cddbd07a
# Dataset Card for "testing_codealpaca_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuggingFaceH4/testing_codealpaca_small
[ "region:us" ]
2023-04-12T20:57:20+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31503, "num_examples": 100}, {"name": "test", "num_bytes": 29802, "num_examples": 100}], "download_size": 44006, "dataset_size": 61305}}
2023-04-12T20:57:24+00:00
10bc87502a94b87ebf689d38bb7b01ada35a21ab
# Dataset Card for "chunk_221" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_221
[ "region:us" ]
2023-04-12T21:07:48+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21840066576.625, "num_examples": 227387}], "download_size": 21127298946, "dataset_size": 21840066576.625}}
2023-04-12T21:29:37+00:00
bab654cfb028954ffb5088c80a40cf99baa31e36
Jehu27/Jehu
[ "license:openrail", "region:us" ]
2023-04-12T21:19:29+00:00
{"license": "openrail"}
2023-04-12T21:19:29+00:00
3926e01dcee9465a56a34253ef29f932513df69a
openmuscle/forearm_muscle_om12
[ "license:gpl-3.0", "region:us" ]
2023-04-12T21:24:56+00:00
{"license": "gpl-3.0"}
2023-04-12T21:26:34+00:00
df860ebe366434a75f82fb52d2311621c88c8920
vjain/AP_physics_embeddings
[ "license:mit", "region:us" ]
2023-04-12T21:37:33+00:00
{"license": "mit"}
2023-04-22T02:18:05+00:00
ad57f3da21fd1756bc9f4714ae65e1edb8570db2
## E smol Dataset

This is the card for the e smol dataset.
jquave/e_smol
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "region:us" ]
2023-04-12T21:50:23+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "EDataset", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "config_name": "plain_text"}}
2023-04-12T21:51:45+00:00
54a8734f02635d513e7b5e1787dcc51c6dcbf76b
This is a dataset of long conversations with Toolformer-style API calls, generated by GPT-3.5. There are currently 61,900 conversations, each with 10–15 turns and 2–3 API calls.
CloudTron/ConvToolFormer
[ "license:cc-by-4.0", "region:us" ]
2023-04-12T21:51:33+00:00
{"license": "cc-by-4.0"}
2023-04-12T21:53:56+00:00
88702ee250ecac03945e01fc25d400be4ca428e9
Borba101010/Borba
[ "license:bigscience-openrail-m", "region:us" ]
2023-04-12T21:53:15+00:00
{"license": "bigscience-openrail-m"}
2023-04-12T21:53:15+00:00
9fcb2ba8132d6a65111c6526b21e7cfab1a29c79
## E micro Dataset

This is the card for the e micro dataset.
jquave/e_micro
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "region:us" ]
2023-04-12T22:15:14+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "EDataset", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "config_name": "plain_text"}}
2023-04-12T22:15:52+00:00
67c88bad4d7d049fd47ee22b9f071e812316314f
# Dataset Card for "chunk_207" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_207
[ "region:us" ]
2023-04-12T22:37:30+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 21204516960.75, "num_examples": 220770}], "download_size": 19871082276, "dataset_size": 21204516960.75}}
2023-04-12T23:08:42+00:00
ddc17b9e44a49e5c2f702e5b9d1ea7e9aced80aa
# Dataset Card for "coco_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gokuls/coco_dataset
[ "region:us" ]
2023-04-12T23:33:43+00:00
{"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "caption_id", "dtype": "int64"}, {"name": "caption", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "image_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 187874990, "num_examples": 591753}, {"name": "validation", "num_bytes": 7839895, "num_examples": 25014}, {"name": "test", "num_bytes": 10777550, "num_examples": 40670}], "download_size": 35917412, "dataset_size": 206492435}}
2023-04-12T23:33:53+00:00
b6411d02290cddb03e314b99ea6d81b479cb2817
Two datasets: pure GPT-4 data and GPT-3.5 + GPT-4 data. The pure GPT-4 set contains around 71k instructions. The other, called UltraSet, has an L (large) variation with over 1.5 million prompts; it has been gathered from many sources, including math data, Alpaca data, Vicuna, ShareGPT, and a lot more. There is a raw version of this, which is deduplicated. There is also an S (small) variation that should have over 400,000 prompts.
Dampish/QuickTrain_v2
[ "license:cc-by-nc-4.0", "region:us" ]
2023-04-12T23:58:51+00:00
{"license": "cc-by-nc-4.0", "viewer": true}
2023-04-17T12:25:36+00:00
2f825401921b3337317d6006a7c68ba1fe9dc42a
baicuya/images
[ "license:openrail", "region:us" ]
2023-04-13T00:12:49+00:00
{"license": "openrail"}
2023-04-18T05:56:19+00:00
217b8e03f86a1d0af8630adb3dece83d763223ba
# AutoTrain Dataset for project: medocr-berta ## Dataset Description This dataset has been automatically processed by AutoTrain for project medocr-berta. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "context": "[597, 612]", "question": "What are the contents?", "answers.text": [ " INCL.ALL TAXES" ], "answers.answer_start": [ 597 ], "feat_Unnamed: 0": [ 831 ], "feat_image_path": [ "/content/train/images/20221121_165501_71f9_jpg.rf.446486cc21bc5e8b824c6554a12e5dfd.jpg" ], "feat_context": [ "chim cuated tablet conte\nArmycin Dydrate\nFa Aithromycin\nExclarence\nCateur titanium Dioxide\nDusage directed by the c\nStori: Stere temperature w\nreach of sisteren\n25C Protest from Right & moisture\nCup the medicine ou\nAMSTER LABS UNIT\nFals\nBext Delautan\n14 01 2013)\nMarketed by\n500m\nak\nHAUZ\nPharma Pvt. Ltd\n0013019 Cered Compor\n301, Industnal Ae Phase-I\nPenchkula, Haryana-154 109\n* Azithromycin\nTablets IP 500 mg\nAzeewah-500\nMig. Lic. No:6/13/14\nCompation\nLash film copted tablet contains (\nAainamycin Dydrats\nEto Arm\nThankan Dhode I\nB.NO.MATO74 M.R.P.Rs.119.50\nMFG.JAN.2022 PER 5 TABS.\nEXP.DEC.2023 INCL.ALL TAXES" ] }, { "context": "[100, 111]", "question": "What is Expiry Date?", "answers.text": [ "\nEXP.APR.25" ], "answers.answer_start": [ 100 ], "feat_Unnamed: 0": [ 25 ], "feat_image_path": [ "/content/train/images/20221121_161649_7e1f_jpg.rf.f9286b7e77dc417400d771a1274f3ab8.jpg" ], "feat_context": [ "www.al\nMain NAS\nwwwwww\nKES 105 HER BO\n2012 10 10 10 10 10 10 10 10\nAvomine\nnometrazine\nB.NO.AVA22016\nEXP.APR.25\nM.R.P.Rs.50.89 PER 10 TABS. INCL.OFALLTAXES\nMFD.MAY 22\nTONDA\nCON UNUDU U\nAvomine\nTheoclate Tablets f\nPromehatine\nB.No.AVA22016 MFD.MAY 22\nEXP.APR.25\nM.R.P.Rs.50.89 PER 10 TABS INCL.OFALL TAXES" ] } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "context": "Value(dtype='string', id=None)", "question": "Value(dtype='string', id=None)", "answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)", "feat_Unnamed: 0": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)", "feat_image_path": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "feat_context": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 773 | | valid | 194 |
muneeb-py/autotrain-data-medocr-berta
[ "region:us" ]
2023-04-13T00:13:12+00:00
{}
2023-04-13T01:21:17+00:00
30a61dbc9ca8e54947be92231b82d2324da07886
houck2040/artisci
[ "license:mit", "region:us" ]
2023-04-13T00:31:31+00:00
{"license": "mit"}
2023-04-13T00:32:39+00:00
b6598361e65101361ff944aa24215d12e69c5981
# Dataset Card for "chunk_227" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_227
[ "region:us" ]
2023-04-13T00:35:29+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 16740686160.125, "num_examples": 174295}], "download_size": 13637226571, "dataset_size": 16740686160.125}}
2023-04-13T00:50:45+00:00
520d34dccd51e99bd0f22f5db818db975b09b345
# Dataset Card for "chunk_224" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
one-sec-cv12/chunk_224
[ "region:us" ]
2023-04-13T01:08:34+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 17088091776.0, "num_examples": 177912}], "download_size": 15759502674, "dataset_size": 17088091776.0}}
2023-04-13T01:23:09+00:00
c8f6aea9ad9797ac5674ddb741a61a59a6fd9cf9
# Dataset Card for "arithmetic_2as_1to5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sethapun/arithmetic_2as_1to5
[ "region:us" ]
2023-04-13T01:18:38+00:00
{"dataset_info": {"features": [{"name": "expression", "dtype": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}], "splits": [{"name": "train", "num_bytes": 54000, "num_examples": 2000}, {"name": "validation", "num_bytes": 10800, "num_examples": 400}], "download_size": 9321, "dataset_size": 64800}}
2023-04-13T01:18:40+00:00
006ccf5663438ba29c4f4cd628bcaddc42380a94
# Dataset Card for "riffdata-001" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gafotech/riffdata-001
[ "region:us" ]
2023-04-13T01:21:48+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2514345883.392, "num_examples": 20576}], "download_size": 2088455506, "dataset_size": 2514345883.392}}
2023-04-13T14:32:50+00:00