sha: string (length 40)
text: string (length 0–13.4M)
id: string (length 2–117)
tags: list
created_at: string (length 25)
metadata: string (length 2–31.7M)
last_modified: string (length 25)
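Each record below lists these fields in order: sha, text, id, tags, created_at, metadata, last_modified. For illustration only (this sketch is not part of the dump; the example row is pieced together from values that appear in the records below), the string-typed `metadata` field can be parsed with Python's standard `json` module:

```python
import json

# Hypothetical row shaped like the records in this dump; the dict itself is
# illustrative, not a loader API.
row = {
    "id": "autoevaluate/autoeval-eval-lener_br-lener_br-39d19a-1775961623",
    "tags": ["autotrain", "evaluation", "region:us"],
    "metadata": '{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"]}',
}

meta = json.loads(row["metadata"])  # metadata is stored as a JSON string
print(meta["type"])                 # -> predictions
print(meta["datasets"])             # -> ['lener_br']
```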
d370089b399492cc158548e9589fc3af76f4712a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-39d19a-1775961623
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T10:36:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener-br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T10:37:33+00:00
2bf5a7e1402a6f32c2073a75c61d75f4c9cca2e7
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: bigscience/bloom-1b1 * Dataset: inverse-scaling/NeQA * Config: inverse-scaling--NeQA * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@CG80499](https://huggingface.co/CG80499) for evaluating this model.
autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-e4c053-1775661622
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T10:44:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}}
2022-10-16T10:47:19+00:00
1261b0ab43c1f488e329bb4b8e0fae03ece768c4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161639
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:07:26+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener-br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:08:13+00:00
ac7d6e4063103d3c15fbef1983c89e4760be6f4f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/bertimbau-base-lener_br * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161640
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:07:30+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener_br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:08:15+00:00
7fac43a456593157221805407acd8171014c9259
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/bertimbau-large-lener_br * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161641
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:07:36+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-large-lener_br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:08:41+00:00
8a6f9b98bdf89c8fef01ee76b1fab91d5ce74981
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-base-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161642
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:07:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:08:35+00:00
799c1d1c06895d834d846f0c09bbff283499a0ca
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-large-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161643
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:07:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:09:02+00:00
f1c3b175632a7bffcefefea70fa7a92d4e36d1ed
pythonist/PubMedQA
[ "region:us" ]
2022-10-16T11:11:07+00:00
{"train-eval-index": [{"config": "pythonist--PubMedQA", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"eval_split": "train"}, "col_mapping": {"id": "answers.answer_start"}}]}
2022-11-10T10:15:08+00:00
1f70cbec44ea6b75058fdad68ff55b8de9d4a522
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/bertimbau-large-lener_br * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861660
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:48:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-large-lener_br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:52:21+00:00
230cdaa657e88026dab1f182c34af5653d8a55ef
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/bertimbau-base-lener_br * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861659
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:48:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener_br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:51:17+00:00
7f85845a7030e1397ee63b931b90de06a6ee7847
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-base-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861661
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:48:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:51:38+00:00
4ac218a6129db895ec2ed0e960154742245b0d61
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-large-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861662
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T11:48:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T11:52:40+00:00
2ad1b139cd7f1240d4046d69387149f0d2f52938
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: pierreguillou/ner-bert-base-cased-pt-lenerbr * Dataset: lener_br * Config: lener_br * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-280a5d-1776961678
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T12:18:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-base-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T12:19:26+00:00
cffeb21da0785a570afcf98be56916319f867852
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: pierreguillou/ner-bert-large-cased-pt-lenerbr * Dataset: lener_br * Config: lener_br * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-280a5d-1776961679
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T12:18:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-large-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T12:19:52+00:00
530cf9d0b4b5d10007e8722680b6175b5d11d4bb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: pierreguillou/ner-bert-base-cased-pt-lenerbr * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-2a71c5-1777061680
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T12:18:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-base-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T12:19:34+00:00
0679fb25b4bb759691c22388d45706c8f85ba4b2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: pierreguillou/ner-bert-large-cased-pt-lenerbr * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-2a71c5-1777061681
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T12:18:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-large-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T12:20:02+00:00
449dcb775af777bad2fe5cb43070e97c76f65e05
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: pierreguillou/ner-bert-base-cased-pt-lenerbr * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-851daf-1777161682
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T12:19:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-base-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T12:21:40+00:00
13f46098ec2521c788887fd931319674601c0f47
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: pierreguillou/ner-bert-large-cased-pt-lenerbr * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-851daf-1777161683
[ "autotrain", "evaluation", "region:us" ]
2022-10-16T12:19:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-large-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-16T12:22:52+00:00
78e20d4b2a97e79041a4cb8d8d270a31c4de8f2c
whatafok/lul
[ "license:other", "region:us" ]
2022-10-16T12:46:03+00:00
{"license": "other"}
2022-10-16T12:46:03+00:00
f142a503e35ed32d56bf8a1d202195df1b5b9b2b
alfredodeza/temporary-dataset
[ "license:apache-2.0", "region:us" ]
2022-10-16T13:03:45+00:00
{"license": "apache-2.0"}
2022-10-16T13:05:33+00:00
84e4f21a1e84ae47897a32f8177d4d096c2630f1
# Dataset Card for "punctuation-nilc-t5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tiagoblima/punctuation-nilc-t5
[ "region:us" ]
2022-10-16T16:02:13+00:00
{"dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "text_input", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1209863.2760485518, "num_examples": 2604}, {"name": "train", "num_bytes": 4340741.560763889, "num_examples": 9371}, {"name": "validation", "num_bytes": 491897.36016821867, "num_examples": 1041}], "download_size": 3084741, "dataset_size": 6042502.196980659}}
2022-11-13T18:07:55+00:00
bccf1066fa8172c13cd146e59793f7a09ff06023
awacke1/MusicGenreLearnerAI
[ "license:apache-2.0", "region:us" ]
2022-10-16T16:06:36+00:00
{"license": "apache-2.0"}
2022-10-16T16:06:36+00:00
0f9daa96611fe978caa9d14ca1b9d07f99380ccc
## ESC benchmark diagnostic dataset ## Dataset Summary As part of the ESC benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. All eight datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library: ```python from datasets import load_dataset esc_diagnostic = load_dataset("esc-benchmark/esc-diagnostic-dataset") ``` Datasets are provided as splits, so the clean diagnostic subset of AMI can be obtained with: ```python ami_diagnostic_clean = esc_diagnostic["ami.clean"] ``` Splits are: `"ami.clean"`, `"ami.other"`, `"earnings22.clean"`, `"earnings22.other"`, `"tedlium.clean"`, `"tedlium.other"`, `"voxpopuli.clean"`, `"voxpopuli.other"`, `"spgispeech.clean"`, `"spgispeech.other"`, `"gigaspeech.clean"`, `"gigaspeech.other"`, `"librispeech.clean"`, `"librispeech.other"`, `"common_voice.clean"`, `"common_voice.other"`. The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts. ## Dataset Information A data point can be accessed by indexing a split of the dataset object loaded through `load_dataset`: ```python print(esc_diagnostic["ami.clean"][0]) ``` A typical data point comprises the path to the audio file and its transcription. Also included is information on the dataset from which the sample derives and a unique identifier name: ```python { 'audio': {'path': None, 'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ..., -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]), 'sampling_rate': 16000}, 'ortho_transcript': 'So, I guess we have to reflect on our experiences with remote controls to decide what, um, we would like to see in a convenient practical', 'norm_transcript': 'so i guess we have to reflect on our experiences with remote controls to decide what um we would like to see in a convenient practical', 'id': 'AMI_ES2011a_H00_FEE041_0062835_0064005' } ``` ### Data Fields - `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - `ortho_transcript`: the orthographic transcription of the audio file. - `norm_transcript`: the normalized transcription of the audio file. - `id`: unique id of the data sample. ### Data Preparation #### Audio The audio for all ESC datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required for use in training/evaluation scripts. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, i.e. 
`dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`. #### Transcriptions The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required for use in training/evaluation scripts. Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esc-benchmark/esc for scoring. ### Access All eight of the datasets in ESC are accessible and licensing is freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages: * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0 * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech ### Diagnostic Dataset ESC contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: [esc-bench/esc-diagnostic-dataset](https://huggingface.co/datasets/esc-bench/esc-diagnostic-datasets). ## LibriSpeech The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the [LibriVox](https://librivox.org) project. It is licensed under CC-BY-4.0. Example Usage: ```python librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech") ``` Train/validation splits: - `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`) - `validation.clean` - `validation.other` Test splits: - `test.clean` - `test.other` Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument: ```python librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", subconfig="clean.100") ``` - `clean.100`: 100 hours of training data from the 'clean' subset - `clean.360`: 360 hours of training data from the 'clean' subset - `other.500`: 500 hours of training data from the 'other' subset ## Common Voice Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset contains approximately 1,400 hours of audio data from speakers of various nationalities, accents and different recording conditions. It is licensed under CC0-1.0. 
Example usage: ```python common_voice = load_dataset("esc-benchmark/esc-datasets", "common_voice", use_auth_token=True) ``` Training/validation splits: - `train` - `validation` Test splits: - `test` ## VoxPopuli VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0. Example usage: ```python voxpopuli = load_dataset("esc-benchmark/esc-datasets", "voxpopuli") ``` Training/validation splits: - `train` - `validation` Test splits: - `test` ## TED-LIUM TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0. Example usage: ```python tedlium = load_dataset("esc-benchmark/esc-datasets", "tedlium") ``` Training/validation splits: - `train` - `validation` Test splits: - `test` ## GigaSpeech GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0. Example usage: ```python gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", use_auth_token=True) ``` Training/validation splits: - `train` (`l` subset of training data (2,500 h)) - `validation` Test splits: - `test` Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument: ```python gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", subconfig="xs", use_auth_token=True) ``` - `xs`: extra-small subset of training data (10 h) - `s`: small subset of training data (250 h) - `m`: medium subset of training data (1,000 h) - `xl`: extra-large subset of training data (10,000 h) ## SPGISpeech SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement. Loading the dataset requires authorization. Example usage: ```python spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", use_auth_token=True) ``` Training/validation splits: - `train` (`l` subset of training data (~5,000 h)) - `validation` Test splits: - `test` Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument: ```python spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", subconfig="s", use_auth_token=True) ``` - `s`: small subset of training data (~200 h) - `m`: medium subset of training data (~1,000 h) ## Earnings-22 Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0. Example usage: ```python earnings22 = load_dataset("esc-benchmark/esc-datasets", "earnings22") ``` Training/validation splits: - `train` - `validation` Test splits: - `test` ## AMI The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0. 
Example usage: ```python ami = load_dataset("esc-benchmark/esc-datasets", "ami") ``` Training/validation splits: - `train` - `validation` Test splits: - `test`
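As an additional illustration of the indexing order recommended in the Data Preparation section (a sketch assembled from the fields and split names documented above, not an official snippet), querying the sample index before the `audio` column keeps decoding to a single file:

```python
from datasets import load_dataset

esc_diagnostic = load_dataset("esc-benchmark/esc-diagnostic-dataset")
ami_clean = esc_diagnostic["ami.clean"]

sample = ami_clean[0]            # index first, so only this audio file is decoded
audio = sample["audio"]          # dict with "array" and "sampling_rate"
print(audio["sampling_rate"])    # 16000
print(sample["ortho_transcript"])
print(sample["norm_transcript"])
```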
esc-bench/esc-diagnostic-backup
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "source_datasets:original", "source_datasets:extended|librispeech_asr", "source_datasets:extended|common_voice", "language:en", "license:cc-by-4.0", "license:apache-2.0", "license:cc0-1.0", "license:cc-by-nc-3.0", "license:other", "asr", "benchmark", "speech", "esc", "region:us" ]
2022-10-16T16:31:24+00:00
{"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0", "apache-2.0", "cc0-1.0", "cc-by-nc-3.0", "other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1M<n<10M"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "ESC Diagnostic Dataset", "tags": ["asr", "benchmark", "speech", "esc"], "extra_gated_prompt": "Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. \nTo do so, fill in the access forms on the specific datasets' pages:\n * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0\n * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech\n * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech", "extra_gated_fields": {"I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox", "I hereby confirm that I have accepted the terms of usages on GigaSpeech page": "checkbox", "I hereby confirm that I have accepted the terms of usages on SPGISpeech page": "checkbox"}}
2022-10-17T14:05:05+00:00
d7309301bd51eac707cc1e80d7bf4209c2f71365
# Dataset Card for "punctuation-nilc" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tiagoblima/punctuation-nilc-bert
[ "language:pt", "region:us" ]
2022-10-16T17:02:29+00:00
{"language": "pt", "dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 1177684.2701598366, "num_examples": 2604}, {"name": "train", "num_bytes": 4224993.504240118, "num_examples": 9371}, {"name": "validation", "num_bytes": 479472.5920696906, "num_examples": 1041}], "download_size": 1802076, "dataset_size": 5882150.366469645}}
2023-07-19T16:03:29+00:00
52759bdb3898dd3c5b6154cf8c5e19a96cb14f78
alvez29/myface
[ "region:us" ]
2022-10-16T17:48:57+00:00
{}
2022-10-16T17:50:25+00:00
8250a56c880f2b98abc5c77adf3e19732a09c6e3
Surn/Fett
[ "license:gpl-3.0", "region:us" ]
2022-10-16T18:48:14+00:00
{"license": "gpl-3.0"}
2022-10-16T18:48:14+00:00
13f42fa99ebc10fedecf1e7ee52546d8fd9e6667
w0lfandbehem0th/test-images
[ "license:apache-2.0", "region:us" ]
2022-10-17T02:35:15+00:00
{"license": "apache-2.0"}
2022-10-17T02:46:04+00:00
899b75095a573984a124727dcce8de7e30ad67dc
# laion2b_multi_korean_subset_with_image ## Dataset Description - **Download Size** 342 GB This dataset collects the images from [Bingsu/laion2B-multi-korean-subset](https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset) that were successfully downloaded with img2dataset. It contains 9,800,137 images. The images were resized so that the shorter side is 256 pixels and were downloaded as webp files at quality 100. ## Usage ### 1. datasets ```python >>> from datasets import load_dataset >>> dataset = load_dataset("Bingsu/laion2b_multi_korean_subset_with_image", streaming=True, split="train") >>> dataset.features {'image': Image(decode=True, id=None), 'text': Value(dtype='string', id=None), 'width': Value(dtype='int32', id=None), 'height': Value(dtype='int32', id=None)} >>> next(iter(dataset)) {'image': <PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=256x256>, 'text': '์†Œ๋‹‰๊ธฐ์–ด ์—์–ดํฐ5 ํœด๋Œ€์šฉ ์Šคํ…Œ๋ ˆ์˜ค ๋ธ”๋ฃจํˆฌ์Šค ํ—ค๋“œํฐ', 'width': 256, 'height': 256} ``` ### 2. webdataset This dataset is also organised so that it can be used with [webdataset](https://github.com/webdataset/webdataset). Processing the data as a stream instead of downloading it is much faster than method 1. !! The method below raises an error on Windows. ```python >>> import webdataset as wds >>> url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar" >>> dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "json") ``` ```python >>> next(iter(dataset)) ... ``` At the time of writing (22-10-18), automatic decoding of webp images is not supported ([PR #215](https://github.com/webdataset/webdataset/pull/215)), so the images have to be decoded manually. ```python import io import webdataset as wds from PIL import Image def preprocess(data): webp, jsn = data img = Image.open(io.BytesIO(webp)) out = { "image": img, "text": jsn["caption"], "width": jsn["width"], "height": jsn["height"] } return out url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar" dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "json").map(preprocess) ``` ```python >>> next(iter(dataset)) {'image': <PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=427x256>, 'text': '[๋”ฐ๋ธ”๋ฆฌ์—]์œ ์•„๋™ ๋ฏธ์ˆ ๊ฐ€์šด, ๋ฏธ์ˆ  ์ „์‹ ๋ณต', 'width': 427, 'height': 256} ``` ## Note ![tar_image](https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/tar_example.png) Each tar file is structured as shown above. Images that failed to download were skipped, so the file names are not fully consecutive. Each json file looks like the following: ```json { "caption": "\ub514\uc790\uc778 \uc53d\ud0b9\uacfc \ub514\uc9c0\ud138 \ud2b8\ub79c\uc2a4\ud3ec\uba54\uc774\uc158", "url": "https://image.samsungsds.com/kr/insights/dt1.jpg?queryString=20210915031642", "key": "014770069", "status": "success", "error_message": null, "width": 649, "height": 256, "original_width": 760, "original_height": 300, "exif": "{}" } ``` The accompanying txt file contains the json file's "caption".
Bingsu/laion2b_multi_korean_subset_with_image
[ "task_categories:feature-extraction", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|laion/laion2B-multi", "language:ko", "license:cc-by-4.0", "region:us" ]
2022-10-17T03:32:45+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ko"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|laion/laion2B-multi"], "task_categories": ["feature-extraction"], "task_ids": [], "pretty_name": "laion2b multi korean subset with image", "tags": []}
2022-11-03T05:10:40+00:00
86820e0d48153d64153a2f70ace60c1090697f07
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: csarron/bert-base-uncased-squad-v1 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
autoevaluate/autoeval-eval-squad-plain_text-f76498-1781661804
[ "autotrain", "evaluation", "region:us" ]
2022-10-17T04:17:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "csarron/bert-base-uncased-squad-v1", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-10-17T04:20:41+00:00
1a2e776f38e29c4e70e3ce299b76b1933b463e60
# Dataset Card for "Watts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
eliwill/Watts
[ "region:us" ]
2022-10-17T04:50:45+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5040818, "num_examples": 17390}, {"name": "validation", "num_bytes": 99856, "num_examples": 399}], "download_size": 2976066, "dataset_size": 5140674}}
2022-10-17T04:50:50+00:00
6cce5a33286e7ed4d15bfd98ef6b9b476093cc97
inmortalkaktus/pokemon-pixel-art
[ "region:us" ]
2022-10-17T05:33:18+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 228723, "num_examples": 49}], "download_size": 178327, "dataset_size": 228723}}
2022-10-18T05:46:02+00:00
18235dd7fd84b7f6868f9cc1a046d799e18da82f
Jamesroyal/blogging
[ "license:afl-3.0", "region:us" ]
2022-10-17T05:54:05+00:00
{"license": "afl-3.0"}
2022-10-17T05:54:05+00:00
e2303edabe49ef87791c19764cb0dbdb39177b1d
test
acurious/testdreambooth
[ "region:us" ]
2022-10-17T06:59:16+00:00
{}
2022-10-17T07:05:53+00:00
09e8932bb30d97e57848c117429e8f944acd3dfd
# Dataset Card for "lsf_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vvincentt/lsf_dataset
[ "region:us" ]
2022-10-17T09:04:19+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 1391968, "num_examples": 1400}, {"name": "validation", "num_bytes": 497849, "num_examples": 500}], "download_size": 629433, "dataset_size": 1889817}}
2022-10-17T09:41:39+00:00
81eca6cc0ab38a851d62f8fe8632acbd6c12c531
# Dataset Card for SloIE ### Dataset Summary SloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29399 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the [Slovene Lexical Database](http://hdl.handle.net/11356/1030). Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus. For a more detailed description of the dataset, please see the paper Škvorc et al. (2022) - see below. ### Supported Tasks and Leaderboards Idiom detection. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ```json { 'sentence': 'Fantje regljajo v enem kotu, deklice pa svoje obrazke barvajo s pisanimi barvami.', 'expression': 'barvati kaj s črnimi barvami', 'word_order': [11, 10, 12, 13, 14], 'sentence_words': ['Fantje', 'regljajo', 'v', 'enem', 'kotu,', 'deklice', 'pa', 'svoje', 'obrazke', 'barvajo', 's', 'pisanimi', 'barvami.'], 'is_idiom': ['*', '*', '*', '*', '*', '*', '*', '*', 'NE', 'NE', 'NE', 'NE', 'NE'] } ``` In this `sentence`, the words of the expression "barvati kaj s črnimi barvami" are used in a literal sense, as indicated by the "NE" annotations inside `is_idiom`. The "*" annotations indicate the words are not part of the expression. ### Data Fields - `sentence`: raw sentence in string form - **WARNING**: this is at times slightly different from the words inside `sentence_words` (e.g., "..." here could be "." in `sentence_words`); - `expression`: the annotated idiomatic expression; - `word_order`: numbers indicating the positions of tokens that belong to the expression; - `sentence_words`: words in the sentence; - `is_idiom`: a string denoting whether each word has an idiomatic (`"DA"`), literal (`"NE"`), or ambiguous (`"NEJASEN ZGLED"`) meaning. `"*"` means that the word is not part of the expression. ## Additional Information ### Dataset Curators Tadej Škvorc, Polona Gantar, Marko Robnik-Šikonja. ### Licensing Information CC BY-NC-SA 4.0. ### Citation Information ``` @article{skvorc2022mice, title = {MICE: Mining Idioms with Contextual Embeddings}, journal = {Knowledge-Based Systems}, volume = {235}, pages = {107606}, year = {2022}, doi = {https://doi.org/10.1016/j.knosys.2021.107606}, url = {https://www.sciencedirect.com/science/article/pii/S0950705121008686}, author = {{\v S}kvorc, Tadej and Gantar, Polona and Robnik-{\v S}ikonja, Marko}, } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
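As an illustration (a minimal sketch that assumes the standard `datasets` loader and a `train` split, neither of which this card states explicitly), the per-token annotations can be read back like this:

```python
from datasets import load_dataset

sloie = load_dataset("cjvt/sloie", split="train")  # split name is an assumption
ex = sloie[0]

# "*" marks words outside the expression; "DA" = idiomatic, "NE" = literal,
# "NEJASEN ZGLED" = ambiguous.
for word, label in zip(ex["sentence_words"], ex["is_idiom"]):
    if label != "*":
        sense = {"DA": "idiomatic", "NE": "literal"}.get(label, "ambiguous")
        print(f"{word}: {sense}")
```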
cjvt/sloie
[ "task_categories:text-classification", "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "language:sl", "license:cc-by-nc-sa-4.0", "idiom-detection", "multiword-expression-detection", "region:us" ]
2022-10-17T11:55:41+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "100K<n<1M"], "source_datasets": [], "task_categories": ["text-classification", "token-classification"], "task_ids": [], "pretty_name": "Dataset of Slovene idiomatic expressions SloIE", "tags": ["idiom-detection", "multiword-expression-detection"]}
2022-10-21T06:36:18+00:00
42237ba9cdc8ce88397b1874e73925abba4f338a
# Sample
sfujiwara/sample
[ "region:us" ]
2022-10-17T12:20:15+00:00
{}
2022-10-18T20:27:18+00:00
e3bf91f4447ac06d3138f7d1591e551911197362
Mrdanizm/Mestablediffusion
[ "license:other", "region:us" ]
2022-10-17T12:52:11+00:00
{"license": "other"}
2022-10-17T13:02:07+00:00
73f431e1a8282900c1475d0eea185ca3e2af0b0f
creaoy/me
[ "license:apache-2.0", "region:us" ]
2022-10-17T13:20:56+00:00
{"license": "apache-2.0"}
2022-10-17T13:25:02+00:00
850a76c5c794ae87d5e4a15665b3de5bd2e61d95
#@markdown Add here the URLs to the images of the concept you are adding. 3-5 should be fine urls = [ "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3228-01_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3228-01_512_02.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512_02.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3870-01-edit-02_crop_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4520_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4589-01_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4622-01-crop_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/ScanImage066_crop_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4589-01_512.png", "https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3348-01_512.png", ]
Nyckelpiga/images
[ "license:other", "region:us" ]
2022-10-17T13:49:35+00:00
{"license": "other"}
2022-10-17T16:19:59+00:00
d6c91cf96df74ce879bb4e8837f4a59a8e7341f0
# Dataset Card for "error_correction_model_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shahidul034/error_correction_model_dataset_raw
[ "region:us" ]
2022-10-17T13:52:04+00:00
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 706774777.602151, "num_examples": 4141927}], "download_size": 301173004, "dataset_size": 706774777.602151}}
2022-10-17T13:54:36+00:00
69b23927e6d5f4d09321dad6df33479be3ddee12
A dataset that contains NBA data as well as social media data, including Twitter and Wikipedia.
noahgift/social-power-nba
[ "license:cc-by-nc-nd-4.0", "region:us" ]
2022-10-17T15:03:02+00:00
{"license": "cc-by-nc-nd-4.0"}
2022-10-17T15:07:45+00:00
ac8e5493caf159f4f717379cc7f434ad3c52e2f6
Source: [https://github.com/google-research/mnist-c](https://github.com/google-research/mnist-c) # MNIST-C This repository contains the source code used to create the MNIST-C dataset, a corrupted MNIST benchmark for testing out-of-distribution robustness of computer vision models. Please see our full paper [https://arxiv.org/abs/1906.02337](https://arxiv.org/abs/1906.02337) for more details. ## Dataset The static dataset is available for download at [https://zenodo.org/record/3239543](https://zenodo.org/record/3239543).
barkermrl/mnist-c
[ "license:apache-2.0", "arxiv:1906.02337", "region:us" ]
2022-10-17T15:26:58+00:00
{"license": "apache-2.0"}
2022-10-17T16:28:57+00:00
ddf8d4553b13ae4aae9c5218e53851e38d721096
# AutoTrain Dataset for project: ethos-sentiments ## Dataset Description This dataset has been automatically processed by AutoTrain for project ethos-sentiments. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "well it's clear now why europeans can't differ niggers and shitskins from human", "target": 0 }, { "text": "These boys will then grow up with people making fun of them and they will then hate their parents for ruining their lives.", "target": 1 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=2, names=['hate_speech', 'no_hate_speech'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 798 | | valid | 200 |
pachi107/autotrain-data-ethos-sentiments
[ "task_categories:text-classification", "language:en", "region:us" ]
2022-10-17T15:28:10+00:00
{"language": ["en"], "task_categories": ["text-classification"]}
2022-10-17T15:28:44+00:00
8b13d64114106c6b43907f587001677a75070655
gr8brit/noahnerd
[ "license:other", "region:us" ]
2022-10-17T15:46:14+00:00
{"license": "other"}
2022-10-17T15:47:29+00:00
6b3560071a612197aae2199e792e0300d859933b
elisachen/example_dataset
[ "license:bsd", "region:us" ]
2022-10-17T16:54:10+00:00
{"license": "bsd"}
2022-10-17T16:55:11+00:00
f4a8763be5f754cf18d1c70b25fba896ba7fe7d1
Emmawang/newsdataset
[ "license:bsd", "region:us" ]
2022-10-17T16:54:17+00:00
{"license": "bsd"}
2022-10-19T16:37:48+00:00
743025dc0b2c16c83448073633cf74307faa013a
suzyanil/nba-data
[ "license:creativeml-openrail-m", "region:us" ]
2022-10-17T16:54:25+00:00
{"license": "creativeml-openrail-m"}
2022-10-17T16:54:58+00:00
960fa647fd98411d7a135ec128147abfbd0c0bdc
Patrickkkk/Patrick_dataset
[ "license:bsd", "region:us" ]
2022-10-17T16:54:55+00:00
{"license": "bsd"}
2022-10-17T21:26:17+00:00
e0187676daa260497dc399625e90ecfc30f70de0
emmanuelr/yelp-review
[ "license:other", "region:us" ]
2022-10-17T16:55:12+00:00
{"license": "other"}
2022-10-17T16:55:12+00:00
9cca94a438905a40883b71c3b5f91c9673b6eec9
genesisqu/fake-real-news
[ "license:bsd", "region:us" ]
2022-10-17T17:06:19+00:00
{"license": "bsd"}
2022-10-17T17:06:58+00:00
2058f015aaded08f35656df841a841f829fc10d8
SamHernandez/computer-style
[ "license:afl-3.0", "region:us" ]
2022-10-17T17:16:17+00:00
{"license": "afl-3.0"}
2022-10-17T17:17:49+00:00
a4ed436fa9208153c41e658f5666b9be05e00456
MedNorm2SnomedCT2UMLS Paper on MedNorm and harmonisation: https://aclanthology.org/W19-3204.pdf The medical concept normalisation task aims to map textual descriptions to standard terminologies such as SNOMED-CT or MedDRA. Existing publicly available datasets annotated using different terminologies cannot be simply merged and utilised, and therefore become less valuable when developing machine learning-based concept normalisation systems. To address that, we designed a data harmonisation pipeline and engineered a corpus of 27,979 textual descriptions simultaneously mapped to both MedDRA and SNOMED-CT, sourced from five publicly available datasets across biomedical and social media domains.
awacke1/MedNorm2SnomedCT2UMLS
[ "license:mit", "region:us" ]
2022-10-17T17:17:16+00:00
{"license": "mit"}
2023-01-05T14:05:26+00:00
275329715cd6047b261b5c25b4373f8a6d68b659
ivanfdzm/Arq-Style
[ "license:afl-3.0", "region:us" ]
2022-10-17T20:38:39+00:00
{"license": "afl-3.0"}
2022-10-17T20:40:50+00:00
e0124a3be3482b1196e6b716e19b4fb903214da4
moisestech/beyond-money
[ "license:unknown", "region:us" ]
2022-10-18T00:55:32+00:00
{"license": "unknown"}
2022-10-18T01:17:59+00:00
98d16bdab800245e1e81a1bd4d1506c67a702736
BioBlast3r/Train-01-Maxx
[ "license:unknown", "region:us" ]
2022-10-18T04:12:58+00:00
{"license": "unknown"}
2022-10-18T04:13:35+00:00
2651add0c4bc0b204470551eae0a1f531004c25b
Chiba1sonny/RMD
[ "license:openrail", "region:us" ]
2022-10-18T05:31:40+00:00
{"license": "openrail"}
2022-10-18T05:31:40+00:00
b8cf42f0d1cf99a04313dd8d7d77bb0fb1d42a19
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: bigscience/bloom-560m * Dataset: phpthinh/ex1 * Config: all * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model.
autoevaluate/autoeval-eval-phpthinh__ex1-all-65db7c-1796062129
[ "autotrain", "evaluation", "region:us" ]
2022-10-18T05:38:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/ex1"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "phpthinh/ex1", "dataset_config": "all", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-18T06:27:01+00:00
7969c200d7b0eec370bce6870897ef06d678248e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: bigscience/bloom-560m * Dataset: phpthinh/ex2 * Config: all * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model.
autoevaluate/autoeval-eval-phpthinh__ex2-all-93c06b-1796162130
[ "autotrain", "evaluation", "region:us" ]
2022-10-18T05:39:26+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/ex2"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "phpthinh/ex2", "dataset_config": "all", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-18T06:26:43+00:00
8d80667af835aa457c6ae62f3e4331dc3e299461
nayan06/conversion1.0
[ "region:us" ]
2022-10-18T05:44:35+00:00
{"train-eval-index": [{"config": "default", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"eval_split": "test"}, "col_mapping": {"text": "text", "label": "target"}}]}
2022-10-18T09:40:06+00:00
40e52ccedb5de97ee785848924f08544f3ca0969
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Emanuel/bertweet-emotion-base * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nayan06](https://huggingface.co/nayan06) for evaluating this model.
autoevaluate/autoeval-eval-emotion-default-1b690b-1797662163
[ "autotrain", "evaluation", "region:us" ]
2022-10-18T05:54:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Emanuel/bertweet-emotion-base", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-10-18T05:55:22+00:00
645a6fcac4e0a587b428e545ccd8d68d71d847b3
pip install diffusers transformers nvidia-ml-py3 ftfy torch pillow
Makokokoko/aaaaa
[ "region:us" ]
2022-10-18T06:37:03+00:00
{}
2022-10-18T06:38:02+00:00
eee375a1b72660d3bd2c5b468d18615279f6d992
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: bigscience/bloom-560m * Dataset: phpthinh/ex3 * Config: all * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model.
autoevaluate/autoeval-eval-phpthinh__ex3-all-630c04-1799362235
[ "autotrain", "evaluation", "region:us" ]
2022-10-18T07:18:52+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/ex3"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "phpthinh/ex3", "dataset_config": "all", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-18T08:07:43+00:00
d686b6b41eaf45992b5f30ef384e42071168f021
bdvysg/test
[ "license:openrail", "region:us" ]
2022-10-18T07:22:27+00:00
{"license": "openrail"}
2022-10-18T07:22:27+00:00
b45637b6ba33e8c7709e20385ac1c8dbc0cdec1f
# Dataset Card for Pokémon type captions Contains official artwork and type-specific caption for Pokémon #1-898 (Bulbasaur-Calyrex). Each Pokémon is represented once by the default form from [PokéAPI](https://pokeapi.co/) Each row contains `image` and `text` keys: - `image` is a 475x475 PIL jpg of the Pokémon's official artwork. - `text` is a label describing the Pokémon by its type(s) ## Attributions _Images and typing information pulled from [PokéAPI](https://pokeapi.co/)_ _Based on the [Lambda Labs Pokémon Blip Captions Dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)_
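As a quick illustration, the `image`/`text` layout described above can be inspected with the `datasets` library; this is a minimal sketch that assumes the repo id of this card and the single `train` split listed in its dataset info.

```python
from datasets import load_dataset

# Load the Pokémon type-caption pairs (the dataset info lists a single "train" split).
ds = load_dataset("GabeHD/pokemon-type-captions", split="train")

sample = ds[0]
print(sample["text"])        # a type-describing caption
print(sample["image"].size)  # PIL image, expected to be (475, 475)
```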
GabeHD/pokemon-type-captions
[ "region:us" ]
2022-10-18T07:38:18+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19372532.0, "num_examples": 898}], "download_size": 0, "dataset_size": 19372532.0}}
2022-10-23T03:40:59+00:00
a9ce511fb3bfa4898bd7dcebe6f23e08211243a8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: 0ys/mt5-small-finetuned-amazon-en-es * Dataset: conceptual_captions * Config: unlabeled * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@DonaldDaz](https://huggingface.co/DonaldDaz) for evaluating this model.
autoevaluate/autoeval-eval-conceptual_captions-unlabeled-ccbde0-1800162251
[ "autotrain", "evaluation", "region:us" ]
2022-10-18T08:04:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conceptual_captions"], "eval_info": {"task": "summarization", "model": "0ys/mt5-small-finetuned-amazon-en-es", "metrics": ["accuracy"], "dataset_name": "conceptual_captions", "dataset_config": "unlabeled", "dataset_split": "train", "col_mapping": {"text": "image_url", "target": "caption"}}}
2022-10-18T22:14:21+00:00
b0a9932fc04d28f16f0af3bc21233ce7e8a164bf
guillaume-chervet/test
[ "license:mit", "region:us" ]
2022-10-18T08:31:45+00:00
{"license": "mit"}
2022-10-18T08:31:46+00:00
3d0d0a2113f3a35a0163f68d96c6307d641f1a5a
# Dataset Card for TED descriptions [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gigant/ted_descriptions
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-10-18T09:24:43+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "TED descriptions", "tags": [], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "descr", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2617778, "num_examples": 5705}], "download_size": 1672988, "dataset_size": 2617778}}
2022-10-18T10:16:29+00:00
9d7a3960c7b4b1f6efb1e97bd4d469a217b46930
# Dataset Card for "German LER" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/elenanereiss/Legal-Entity-Recognition](https://github.com/elenanereiss/Legal-Entity-Recognition) - **Paper:** [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf) - **Point of Contact:** [[email protected]]([email protected]) ### Dataset Summary A dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. NER tags use the `BIO` tagging scheme. The dataset includes two different versions of annotations, one with a set of 19 fine-grained semantic classes (`ner_tags`) and another one with a set of 7 coarse-grained classes (`ner_coarse_tags`). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities, the others are person, location and organization (25.66 %). ![](https://raw.githubusercontent.com/elenanereiss/Legal-Entity-Recognition/master/docs/Distribution.png) For more details see [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf). 
### Supported Tasks and Leaderboards - **Tasks:** Named Entity Recognition - **Leaderboards:** ### Languages German ## Dataset Structure ### Data Instances ```python { 'id': '1', 'tokens': ['Eine', 'solchermaรŸen', 'verzรถgerte', 'oder', 'bewusst', 'eingesetzte', 'Verkettung', 'sachgrundloser', 'Befristungen', 'schlieรŸt', 'ยง', '14', 'Abs.', '2', 'Satz', '2', 'TzBfG', 'aus', '.'], 'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 38, 38], 'ner_coarse_tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 9, 9, 9, 9, 9, 9, 14, 14] } ``` ### Data Fields ```python { 'id': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ner_tags': Sequence(feature=ClassLabel(num_classes=39, names=['B-AN', 'B-EUN', 'B-GRT', 'B-GS', 'B-INN', 'B-LD', 'B-LDS', 'B-LIT', 'B-MRK', 'B-ORG', 'B-PER', 'B-RR', 'B-RS', 'B-ST', 'B-STR', 'B-UN', 'B-VO', 'B-VS', 'B-VT', 'I-AN', 'I-EUN', 'I-GRT', 'I-GS', 'I-INN', 'I-LD', 'I-LDS', 'I-LIT', 'I-MRK', 'I-ORG', 'I-PER', 'I-RR', 'I-RS', 'I-ST', 'I-STR', 'I-UN', 'I-VO', 'I-VS', 'I-VT', 'O'], id=None), length=-1, id=None), 'ner_coarse_tags': Sequence(feature=ClassLabel(num_classes=15, names=['B-LIT', 'B-LOC', 'B-NRM', 'B-ORG', 'B-PER', 'B-REG', 'B-RS', 'I-LIT', 'I-LOC', 'I-NRM', 'I-ORG', 'I-PER', 'I-REG', 'I-RS', 'O'], id=None), length=-1, id=None) } ``` ### Data Splits | | train | validation | test | |-------------------------|------:|-----------:|-----:| | Input Sentences | 53384 | 6666 | 6673 | ## Dataset Creation ### Curation Rationale Documents in the legal domain contain multiple references to named entities, especially domain-specific named entities, i. e., jurisdictions, legal institutions, etc. Legal documents are unique and differ greatly from newspaper texts. On the one hand, the occurrence of general-domain named entities is relatively rare. On the other hand, in concrete applications, crucial domain-specific entities need to be identified in a reliable way, such as designations of legal norms and references to other legal documents (laws, ordinances, regulations, decisions, etc.). Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines. ### Source Data Court decisions from 2017 and 2018 were selected for the dataset, published online by the [Federal Ministry of Justice and Consumer Protection](http://www.rechtsprechung-im-internet.de). The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG). #### Initial Data Collection and Normalization From the table of [contents](http://www.rechtsprechung-im-internet.de/rii-toc.xml), 107 documents from each court were selected (see Table 1). The data was collected from the XML documents, i. e., it was extracted from the XML elements `Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgrรผnde, Grรผnden, abweichende Meinung, and sonstiger Titel`. 
The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and those that belonged to previous legal proceedings was deleted. Paragraph numbers were removed. The extracted data was split into sentences, tokenised using [SoMaJo](https://github.com/tsproisl/SoMaJo) and manually annotated in [WebAnno](https://webanno.github.io/webanno/). #### Who are the source language producers? The Federal Ministry of Justice and the Federal Office of Justice provide selected decisions. Court decisions were produced by humans. ### Annotations #### Annotation process For more details see [annotation guidelines](https://github.com/elenanereiss/Legal-Entity-Recognition/blob/master/docs/Annotationsrichtlinien.pdf) (in German). <!-- #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)--> ### Personal and Sensitive Information A fundamental characteristic of the published decisions is that all personal information have been anonymised for privacy reasons. This affects the classes person, location and organization. <!-- ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)--> ### Licensing Information [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/) ### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.2003.13016, doi = {10.48550/ARXIV.2003.13016}, url = {https://arxiv.org/abs/2003.13016}, author = {Leitner, Elena and Rehm, Georg and Moreno-Schneider, Juliรกn}, keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {A Dataset of German Legal Documents for Named Entity Recognition}, publisher = {arXiv}, year = {2020}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ### Contributions
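To make the structure above concrete, here is a minimal loading sketch using the `datasets` library; the repo id and the `train` split come from this card, and everything else is a plain usage example (if the repo defines multiple configurations, a config name may also need to be passed).

```python
from datasets import load_dataset

# German LER with fine-grained (ner_tags) and coarse-grained (ner_coarse_tags) labels
ds = load_dataset("elenanereiss/german-ler", split="train")

# Map integer class ids back to their BIO label names
fine_names = ds.features["ner_tags"].feature.names
coarse_names = ds.features["ner_coarse_tags"].feature.names

example = ds[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{fine_names[tag]}")
```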
elenanereiss/german-ler
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:de", "license:cc-by-4.0", "ner, named entity recognition, legal ner, legal texts, label classification", "arxiv:2003.13016", "doi:10.57967/hf/0046", "region:us" ]
2022-10-18T10:10:32+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["de"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "dataset-of-legal-documents", "pretty_name": "German Named Entity Recognition in Legal Documents", "tags": ["ner, named entity recognition, legal ner, legal texts, label classification"], "train-eval-index": [{"config": "conll2003", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}}]}
2022-10-26T07:32:17+00:00
e11292434fb9e789f80041723bdf2e61cfdc7e0b
SRL-annotated corpora for extracting the experiencer and cause of emotions
Maxstan/srl_for_emotions_russian
[ "license:cc-by-nc-4.0", "region:us" ]
2022-10-18T10:46:43+00:00
{"license": "cc-by-nc-4.0"}
2022-10-18T11:39:32+00:00
72be14bbf250a379c8cef9e1f0ea6031b84610f7
Sombrax/Cat
[ "license:openrail", "region:us" ]
2022-10-18T11:07:19+00:00
{"license": "openrail"}
2022-10-18T11:31:59+00:00
1e7a79e5c2bcd57dc6324cb159732771229bc89a
The data contains comments from political and nonpolitical Russian-speaking YouTube channels. Date interval: one year, between April 30, 2020, and April 30, 2021.
Maxstan/russian_youtube_comments_political_and_nonpolitical
[ "license:cc-by-nc-4.0", "region:us" ]
2022-10-18T11:40:44+00:00
{"license": "cc-by-nc-4.0"}
2022-10-18T11:57:07+00:00
6e31833b336cf58776371b31804dc3285108347c
odepraz/rvl_cdip_1percentofdata
[ "license:unknown", "region:us" ]
2022-10-18T11:58:20+00:00
{"license": "unknown"}
2022-10-18T12:00:07+00:00
3cc76d8c9536209e9772338d9567b8ae2a767d79
--- annotations_creators: - crowdsourced language: - ja language_creators: - crowdsourced license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: squad pretty_name: squad-ja size_categories: - 100K<n<1M source_datasets: - original tags: [] task_categories: - question-answering task_ids: - open-domain-qa - extractive-qa train-eval-index: - col_mapping: answers: answer_start: answer_start text: text context: context question: question config: squad_v2 metrics: - name: SQuAD v2 type: squad_v2 splits: eval_split: validation train_split: train task: question-answering task_id: extractive_question_answering --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Google็ฟป่จณAPIใง็ฟป่จณใ—ใŸๆ—ฅๆœฌ่ชž็‰ˆSQuAD2.0 ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Japanese ## Dataset Structure ### Data Instances ``` { "start": 43, "end": 88, "question": "ใƒ“ใƒจใƒณใ‚ป ใฏ ใ„ใค ใ‹ใ‚‰ ไบบๆฐ— ใ‚’ ๅšใ— ๅง‹ใ‚ ใพใ—ใŸ ใ‹ ๏ผŸ", "context": "BeyoncรฉGiselleKnowles - Carter ๏ผˆ /b i ห หˆ j ษ’ nse ษช / bee - YON - say ๏ผ‰ ๏ผˆ 1981 ๅนด 9 ๆœˆ 4 ๆ—ฅ ็”Ÿใพใ‚Œ ๏ผ‰ ใฏ ใ€ ใ‚ขใƒกใƒชใ‚ซ ใฎ ใ‚ทใƒณใ‚ฌใƒผ ใ€ ใ‚ฝใƒณใ‚ฐ ใƒฉใ‚คใ‚ฟใƒผ ใ€ ใƒฌใ‚ณใƒผใƒ‰ ใƒ—ใƒญใƒ‡ใƒฅใƒผใ‚ตใƒผ ใ€ ๅฅณๅ„ช ใงใ™ ใ€‚ ใƒ†ใ‚ญใ‚ตใ‚น ๅทž ใƒ’ใƒฅใƒผใ‚นใƒˆใƒณ ใง ็”Ÿใพใ‚Œ่‚ฒใฃใŸ ๅฝผๅฅณ ใฏ ใ€ ๅญไพ› ใฎ ้ ƒ ใซ ใ•ใพใ–ใพใช ๆญŒ ใจ ่ธŠใ‚Š ใฎ ใ‚ณใƒณใƒ†ใ‚นใƒˆ ใซ ๅ‡บๆผ” ใ— ใ€ 1990 ๅนด ไปฃ ๅพŒๅŠ ใซ R ๏ผ† B ใ‚ฌใƒผใƒซใ‚ฐใƒซใƒผใƒ— Destiny & 39 ; sChild ใฎ ใƒชใƒผใƒ‰ ใ‚ทใƒณใ‚ฌใƒผ ใจ ใ—ใฆ ๅๅฃฐ ใ‚’ ๅšใ— ใพใ—ใŸ ใ€‚ ็ˆถ่ฆช ใฎ ใƒžใ‚ทใƒฅใƒผใƒŽใ‚ฆใƒซใ‚บ ใŒ ็ฎก็† ใ™ใ‚‹ ใ“ใฎ ใ‚ฐใƒซใƒผใƒ— ใฏ ใ€ ไธ–็•Œ ใง ๆœ€ใ‚‚ ๅฃฒใ‚Œใฆ ใ„ใ‚‹ ๅฐ‘ๅฅณ ใ‚ฐใƒซใƒผใƒ— ใฎ 1 ใค ใซ ใชใ‚Š ใพใ—ใŸ ใ€‚ ๅฝผ ใ‚‰ ใฎ ไผ‘ใฟ ใฏ ใƒ“ใƒจใƒณใ‚ป ใฎ ใƒ‡ใƒ“ใƒฅใƒผ ใ‚ขใƒซใƒใƒ  ใ€ DangerouslyinLove ๏ผˆ 2003 ๏ผ‰ ใฎ ใƒชใƒชใƒผใ‚น ใ‚’ ่ฆ‹ ใพใ—ใŸ ใ€‚ ๅฝผๅฅณ ใฏ ไธ–็•Œ ไธญ ใง ใ‚ฝใƒญ ใ‚ขใƒผใƒ†ใ‚ฃใ‚นใƒˆ ใจ ใ—ใฆ ็ขบ็ซ‹ ใ— ใ€ 5 ใค ใฎ ใ‚ฐใƒฉใƒŸใƒผ ่ณž ใ‚’ ็ฒๅพ— ใ— ใ€ ใƒ“ใƒซ ใƒœใƒผใƒ‰ ใƒ›ใƒƒใƒˆ 100 ใƒŠใƒณใƒใƒผใƒฏใƒณ ใ‚ทใƒณใ‚ฐใƒซ ใ€Œ CrazyinLove ใ€ ใจ ใ€Œ BabyBoy ใ€ ใ‚’ ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผ ใ— ใพใ—ใŸ ใ€‚", "id": "56be85543aeaaa14008c9063" } ``` ### Data Fields - start - end - question - context - id ### Data Splits - train 86820 - valid 5927 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection 
and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
Kentaline/hf-dataset-study
[ "license:other", "region:us" ]
2022-10-18T12:49:15+00:00
{"license": "other"}
2022-10-18T13:35:42+00:00
bd9407e7063dce3c2df673ed6c088cadd17fb268
# event classification dataset
fanxiaonan/event_classification
[ "license:mit", "region:us" ]
2022-10-18T13:05:44+00:00
{"license": "mit"}
2022-10-18T13:10:19+00:00
23ca06a50ba8b4860567c1112ba142820d223743
# SimKoR We provide korean sentence text similarity pair dataset using sentiment analysis corpus from [bab2min/corpus](https://github.com/bab2min/corpus). This data crawling korean review from naver shopping website. we reconstruct subset of dataset to make our dataset. ## Dataset description The original dataset description can be found at the link [[here]](https://github.com/bab2min/corpus/tree/master/sentiment). ![๊ทธ๋ฆผ6](https://user-images.githubusercontent.com/54879393/189065508-240b6449-6a26-463f-bd02-64785d76fa02.png) In korean Contrastive Learning, There are few suitable validation dataset (only KorNLI). To create contrastive learning validation dataset, we changed original sentiment analysis dataset to sentence text similar dataset. Our simkor dataset was created by grouping pair of sentence. Each score [0,1,2,4,5] means how far the meaning is between sentences. ## Data Distribution Our dataset class consist of text similarity score [0, 1,2,4,5]. each score consists of data of the same size. <table> <tr><th>Score</th><th>train</th><th>valid</th><th>test</th></tr> <tr><th>5</th><th>4,000</th><th>1,000</th><th>1,000</th></tr> <tr><th>4</th><th>4,000</th><th>1,000</th><th>1,000</th></tr> <tr><th>2</th><th>4,000</th><th>1,000</th><th>1,000</th></tr> <tr><th>1</th><th>4,000</th><th>1,000</th><th>1,000</th></tr> <tr><th>0</th><th>4,000</th><th>1,000</th><th>1,000</th></tr> <tr><th>All</th><th>20,000</th><th>5,000</th><th>5,000</th></tr> </table> ## Example ``` text1 text2 label ๊ณ ์†์ถฉ์ „์ด ์•ˆ๋จใ… ใ…  ์ง‘์—๋งค์—ฐ๋ƒ„์ƒˆ์—†์•จ๋ คํ–ˆ๋Š”๋ฐ ๊ทธ๋ƒฅ์ฐฝ๋ฌธ์—ฌ๋Š”๊ฒŒ๋” ๊ณต๊ธฐ๊ฐ€์ข‹๋„ค์š” 5 ์ ๋‹นํžˆ ๋งต๊ณ  ๊ดœ์ฐฎ๋„ค์š” ์–ด์ œ ์‹œํ‚จ๊ฒŒ ๋ฒŒ์จ ์™”์–ด์š” ใ…Žใ…Ž ๋ฐฐ์†ก๋น ๋ฅด๊ณ  ํ’ˆ์งˆ์–‘ํ˜ธํ•ฉ๋‹ˆ๋‹ค 4 ๋‹ค ๊ดœ์ฐฎ์€๋ฐ ๋ฐฐ์†ก์ด 10์ผ์ด๋‚˜ ๊ฑธ๋ฆฐ๊ฒŒ ๋งŽ์ด ์•„์‰ฝ๋„ค์š”. ์„ ๋ฐ˜ ์„ค์น˜ํ•˜๊ณ  ๋‚˜๋‹ˆ ์ฃผ๋ฐฉ ๋ฒ ๋ž€๋‹ค ์™„์ „ ๋‹ค์‹œ ํƒœ์–ด๋‚ฌ์–ด์š”~ 2 ๊ฐ€๊ฒฉ ์‹ธ์ง€๋งŒ ์ฟ ์…˜์ด ์•ฝํ•ด ๋ฌด๋ฆŽ ์•„ํŒŒ์š”~ ๋ฐ˜ํ’ˆํ•˜๋ ค๊ตฌ์š”~ ํŠผํŠผํ•˜๊ณ  ๋นจ๋ž˜๋„ ๋งŽ์ด ๊ฑธ ์ˆ˜ ์žˆ๊ณ  ์ž˜์“ฐ๊ณ  ์žˆ์–ด์š” 1 ๊ฐ์ธ์ด ์ฐŒ๊ทธ์ €์ ธ์žˆ๊ณ  ์—‰์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ฒ˜์Œ ํ•ด๋ณด๋Š” ๋ฐฉํƒˆ์ถœ์ด์—ˆ๋Š”๋ฐ ๋„ˆ๋ฌด ์žฌ๋ฏธ์žˆ์—ˆ์–ด์š”. 0 ``` ## Contributors The main contributors of the work are : - [Jaemin Kim](https://github.com/kimfunn)\* - [Yohan Na](https://github.com/nayohan)\* - [Kangmin Kim](https://github.com/Gangsss) - [Sangrak Lee](https://github.com/PangRAK) \*: Equal Contribution Hanyang University Data Intelligence Lab[(DILAB)](http://dilab.hanyang.ac.kr/) providing support โค๏ธ ## Github - **Repository :** [SimKoR](https://github.com/nayohan/SimKoR) ## License <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
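A minimal sketch of how the pairs might be consumed with the `datasets` library; the split and column names (`text1`, `text2`, `label`) follow the tables and example above, while the assumption that the repo loads directly via `load_dataset` is not confirmed by this card.

```python
from datasets import load_dataset

# SimKoR sentence-pair similarity data (text1, text2, label in {0, 1, 2, 4, 5})
ds = load_dataset("DILAB-HYU/SimKoR")

pair = ds["train"][0]
print(pair["text1"])
print(pair["text2"])
print("similarity score:", pair["label"])
```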
DILAB-HYU/SimKoR
[ "license:cc-by-4.0", "region:us" ]
2022-10-18T13:51:49+00:00
{"license": "cc-by-4.0"}
2022-10-18T16:27:05+00:00
e44a22761da9acdd42b4181f33aac27a95436824
# Dataset Card for "stackoverflow-ner" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mrm8488/stackoverflow-ner
[ "region:us" ]
2022-10-18T13:55:02+00:00
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 680079, "num_examples": 3108}, {"name": "train", "num_bytes": 2034117, "num_examples": 9263}, {"name": "validation", "num_bytes": 640935, "num_examples": 2936}], "download_size": 692070, "dataset_size": 3355131}}
2022-10-18T13:55:17+00:00
6ae613ba744ee56f645aee9c577d04f7e3d48b30
# M2D2: A Massively Multi-domain Language Modeling Dataset *From the paper "[M2D2: A Massively Multi-domain Language Modeling Dataset](https://arxiv.org/abs/2210.07370)", (Reid et al., EMNLP 2022)* Load the dataset as follows: ```python import datasets dataset = datasets.load_dataset("machelreid/m2d2", "cs.CL") # replace cs.CL with the domain of your choice print(dataset['train'][0]['text']) ``` ## Domains - Culture_and_the_arts - Culture_and_the_arts__Culture_and_Humanities - Culture_and_the_arts__Games_and_Toys - Culture_and_the_arts__Mass_media - Culture_and_the_arts__Performing_arts - Culture_and_the_arts__Sports_and_Recreation - Culture_and_the_arts__The_arts_and_Entertainment - Culture_and_the_arts__Visual_arts - General_referece - General_referece__Further_research_tools_and_topics - General_referece__Reference_works - Health_and_fitness - Health_and_fitness__Exercise - Health_and_fitness__Health_science - Health_and_fitness__Human_medicine - Health_and_fitness__Nutrition - Health_and_fitness__Public_health - Health_and_fitness__Self_care - History_and_events - History_and_events__By_continent - History_and_events__By_period - History_and_events__By_region - Human_activites - Human_activites__Human_activities - Human_activites__Impact_of_human_activity - Mathematics_and_logic - Mathematics_and_logic__Fields_of_mathematics - Mathematics_and_logic__Logic - Mathematics_and_logic__Mathematics - Natural_and_physical_sciences - Natural_and_physical_sciences__Biology - Natural_and_physical_sciences__Earth_sciences - Natural_and_physical_sciences__Nature - Natural_and_physical_sciences__Physical_sciences - Philosophy - Philosophy_and_thinking - Philosophy_and_thinking__Philosophy - Philosophy_and_thinking__Thinking - Religion_and_belief_systems - Religion_and_belief_systems__Allah - Religion_and_belief_systems__Belief_systems - Religion_and_belief_systems__Major_beliefs_of_the_world - Society_and_social_sciences - Society_and_social_sciences__Social_sciences - Society_and_social_sciences__Society - Technology_and_applied_sciences - Technology_and_applied_sciences__Agriculture - Technology_and_applied_sciences__Computing - Technology_and_applied_sciences__Engineering - Technology_and_applied_sciences__Transport - alg-geom - ao-sci - astro-ph - astro-ph.CO - astro-ph.EP - astro-ph.GA - astro-ph.HE - astro-ph.IM - astro-ph.SR - astro-ph_l1 - atom-ph - bayes-an - chao-dyn - chem-ph - cmp-lg - comp-gas - cond-mat - cond-mat.dis-nn - cond-mat.mes-hall - cond-mat.mtrl-sci - cond-mat.other - cond-mat.quant-gas - cond-mat.soft - cond-mat.stat-mech - cond-mat.str-el - cond-mat.supr-con - cond-mat_l1 - cs.AI - cs.AR - cs.CC - cs.CE - cs.CG - cs.CL - cs.CR - cs.CV - cs.CY - cs.DB - cs.DC - cs.DL - cs.DM - cs.DS - cs.ET - cs.FL - cs.GL - cs.GR - cs.GT - cs.HC - cs.IR - cs.IT - cs.LG - cs.LO - cs.MA - cs.MM - cs.MS - cs.NA - cs.NE - cs.NI - cs.OH - cs.OS - cs.PF - cs.PL - cs.RO - cs.SC - cs.SD - cs.SE - cs.SI - cs.SY - cs_l1 - dg-ga - econ.EM - econ.GN - econ.TH - econ_l1 - eess.AS - eess.IV - eess.SP - eess.SY - eess_l1 - eval_sets - funct-an - gr-qc - hep-ex - hep-lat - hep-ph - hep-th - math-ph - math.AC - math.AG - math.AP - math.AT - math.CA - math.CO - math.CT - math.CV - math.DG - math.DS - math.FA - math.GM - math.GN - math.GR - math.GT - math.HO - math.IT - math.KT - math.LO - math.MG - math.MP - math.NA - math.NT - math.OA - math.OC - math.PR - math.QA - math.RA - math.RT - math.SG - math.SP - math.ST - math_l1 - mtrl-th - nlin.AO - nlin.CD - nlin.CG - nlin.PS - nlin.SI - nlin_l1 - nucl-ex - 
nucl-th - patt-sol - physics.acc-ph - physics.ao-ph - physics.app-ph - physics.atm-clus - physics.atom-ph - physics.bio-ph - physics.chem-ph - physics.class-ph - physics.comp-ph - physics.data-an - physics.ed-ph - physics.flu-dyn - physics.gen-ph - physics.geo-ph - physics.hist-ph - physics.ins-det - physics.med-ph - physics.optics - physics.plasm-ph - physics.pop-ph - physics.soc-ph - physics.space-ph - physics_l1 - plasm-ph - q-alg - q-bio - q-bio.BM - q-bio.CB - q-bio.GN - q-bio.MN - q-bio.NC - q-bio.OT - q-bio.PE - q-bio.QM - q-bio.SC - q-bio.TO - q-bio_l1 - q-fin.CP - q-fin.EC - q-fin.GN - q-fin.MF - q-fin.PM - q-fin.PR - q-fin.RM - q-fin.ST - q-fin.TR - q-fin_l1 - quant-ph - solv-int - stat.AP - stat.CO - stat.ME - stat.ML - stat.OT - stat.TH - stat_l1 - supr-con supr-con ## Citation Please cite this work if you found this data useful. ```bib @article{reid2022m2d2, title = {M2D2: A Massively Multi-domain Language Modeling Dataset}, author = {Machel Reid and Victor Zhong and Suchin Gururangan and Luke Zettlemoyer}, year = {2022}, journal = {arXiv preprint arXiv: Arxiv-2210.07370} } ```
machelreid/m2d2
[ "license:cc-by-nc-4.0", "arxiv:2210.07370", "region:us" ]
2022-10-18T14:14:07+00:00
{"license": "cc-by-nc-4.0"}
2022-10-25T11:57:24+00:00
5bc518f5c3350f2c92f405e8223c982c8b9dc9f0
This dataset is designed to be used in testing multimodal text/image models. It's derived from the cm4-10k dataset. The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`. The `unique` ones ensure uniqueness across text entries. The `repeat` ones repeat the same 10 unique records; these are useful for debugging memory leaks, as the records are always the same and thus remove record variation from the equation. The default split is `100.unique`. The full process of creating this dataset, including which records were used to build it, is documented inside [cm4-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/blob/main/cm4-synthetic-testing.py)
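For example, one of the synthetic splits can be pulled like this; it is only a sketch, and it assumes the split names listed above map directly onto the `split=` argument of `load_dataset`.

```python
from datasets import load_dataset

# Small unique split for quick functional tests of multimodal pipelines
ds = load_dataset("stas/cm4-synthetic-testing", split="100.unique")
print(len(ds), "records")
print(ds[0].keys())
```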
stas/cm4-synthetic-testing
[ "license:bigscience-openrail-m", "region:us" ]
2022-10-18T15:08:16+00:00
{"license": "bigscience-openrail-m"}
2022-10-18T15:20:31+00:00
3b1395c1f2fa1e3432227828e5e917aefe3bade8
This dataset is designed to be used in testing. It's derived from the general-pmd/localized_narratives__ADE20k dataset. The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`. The `unique` ones ensure uniqueness across `text` entries. The `repeat` ones repeat the same 10 unique records; these are useful for debugging memory leaks, as the records are always the same and thus remove record variation from the equation. The default split is `100.unique`. The full process of creating this dataset, including which records were used to build it, is documented inside [general-pmd-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/general-pmd-synthetic-testing/blob/main/general-pmd-synthetic-testing.py)
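Along the same lines, a `repeat` split can be used to take record variation out of a memory-leak hunt; this sketch assumes the split names above are valid `split=` values and that each record carries a `text` field, as stated in the card.

```python
from datasets import load_dataset

# The repeat split cycles through the same 10 unique records, so any memory
# growth while iterating is not caused by data variation.
ds = load_dataset("stas/general-pmd-synthetic-testing", split="100.repeat")

unique_texts = {str(record["text"]) for record in ds}
print(len(ds), "records,", len(unique_texts), "unique texts")  # expected: 100 records, 10 unique texts
```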
stas/general-pmd-synthetic-testing
[ "license:bigscience-openrail-m", "region:us" ]
2022-10-18T15:08:31+00:00
{"license": "bigscience-openrail-m"}
2022-10-18T15:21:21+00:00
ef9fcb2ac1c9b7aa009131db6ce844f05bf9273f
chloeliu/try
[ "license:bsd", "region:us" ]
2022-10-18T15:21:22+00:00
{"license": "bsd"}
2022-10-18T15:22:41+00:00
44bae37e90157953b355c34f478fcb528b436617
# Dataset Card for "huggingface_hub-dependents" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-source-metrics/huggingface_hub-dependents
[ "region:us" ]
2022-10-18T16:42:53+00:00
{"dataset_info": {"features": [{"name": "name", "dtype": "null"}, {"name": "stars", "dtype": "null"}, {"name": "forks", "dtype": "null"}], "splits": [{"name": "package"}, {"name": "repository"}], "download_size": 1798, "dataset_size": 0}}
2024-02-16T18:19:35+00:00
98c965d524897e68b0601d52308e8be096832be1
# Dataset Card for "hub-docs-dependents" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-source-metrics/hub-docs-dependents
[ "region:us" ]
2022-10-18T16:43:02+00:00
{"dataset_info": {"features": [{"name": "name", "dtype": "null"}, {"name": "stars", "dtype": "null"}, {"name": "forks", "dtype": "null"}], "splits": [{"name": "package"}, {"name": "repository"}], "download_size": 1798, "dataset_size": 0}}
2024-02-16T18:08:15+00:00
7e1f136ac970901e3c0f3e7d3c1a767c15f20a31
SNOMED-CT-Code-Value-Semantic-Set.csv
awacke1/SNOMED-CT-Code-Value-Semantic-Set.csv
[ "license:mit", "region:us" ]
2022-10-18T17:19:41+00:00
{"license": "mit"}
2022-10-29T11:42:02+00:00
64caf7a2bb26ccdd78697ba041707e394f07ff1b
eCQM-Code-Value-Semantic-Set.csv
awacke1/eCQM-Code-Value-Semantic-Set.csv
[ "license:mit", "region:us" ]
2022-10-18T17:48:30+00:00
{"license": "mit"}
2022-10-29T11:40:54+00:00
40b8fae1f422ec5e32cc6fad58130638209791cb
awacke1/LOINC-Code-Value-Semantic-Set.csv
[ "license:mit", "region:us" ]
2022-10-18T17:56:06+00:00
{"license": "mit"}
2022-10-18T17:57:05+00:00
56c2617cb61295fc224c6f00b716d272db14a338
awacke1/LOINC-CodeSet-Value-Description-Semantic-Set.csv
[ "license:mit", "region:us" ]
2022-10-18T18:02:20+00:00
{"license": "mit"}
2022-10-18T18:02:20+00:00
2e48dde411fd4172fa5196bfe0f6aa1ad75f20d5
# Dataset Card for BBBicycles ## Dataset Summary Bent & Broken Bicycles (BBBicycles) dataset is a benchmark set for the novel task of **damaged object re-identification**, which aims to identify the same object in multiple images even in the presence of breaks, deformations, and missing parts. You can find an interactive preview [here](https://huggingface.co/spaces/GrainsPolito/BBBicyclesPreview). ## Dataset Structure The final dataset contains: - Total of 39,200 image - 2,800 unique IDs - 20 models - 140 IDs for each model <table border-collapse="collapse"> <tr> <td><b style="font-size:25px">Information for each ID:</b></td> <td><b style="font-size:25px">Information for each render:</b></td> </tr> <tr> <td> <ul> <li>Model</li> <li>Type</li> <li>Texture type</li> <li>Stickers</li> </ul> </td> <td> <ul> <li>Background</li> <li>Viewing Side</li> <li>Focal Length</li> <li>Presence of dirt</li> </ul> </td> </tr> </table> ### Citation Information ``` @inproceedings{bbb_2022, title={Bent & Broken Bicycles: Leveraging synthetic data for damaged object re-identification}, author={Luca Piano, Filippo Gabriele Pratticรฒ, Alessandro Sebastian Russo, Lorenzo Lanari, Lia Morra, Fabrizio Lamberti}, booktitle={2022 IEEE Winter Conference on Applications of Computer Vision (WACV)}, year={2022}, organization={IEEE} } ``` ### Credits The authors gratefully acknowledge the financial support of Reale Mutua Assicurazioni.
GrainsPolito/BBBicycles
[ "license:cc-by-nc-4.0", "region:us" ]
2022-10-18T18:05:32+00:00
{"license": "cc-by-nc-4.0"}
2022-10-20T10:14:59+00:00
f53f236e7cef0060169e534d6125f3c7d949a0f2
LOINC-CodeSet-Value-Description.csv
awacke1/LOINC-CodeSet-Value-Description.csv
[ "license:mit", "region:us" ]
2022-10-18T18:08:21+00:00
{"license": "mit"}
2022-10-29T11:43:25+00:00
b3c1ead7e05c84f8605ed4cae91199639940046a
# Dataset Card for "ott-qa-20k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) The data was obtained from [here](https://github.com/wenhuchen/OTT-QA)
ashraq/ott-qa-20k
[ "region:us" ]
2022-10-18T18:30:29+00:00
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "header", "sequence": "string"}, {"name": "data", "sequence": {"sequence": "string"}}, {"name": "section_title", "dtype": "string"}, {"name": "section_text", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "intro", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41038376, "num_examples": 20000}], "download_size": 23329221, "dataset_size": 41038376}}
2022-10-21T08:06:25+00:00
62fb164338c7c00e759cf0dc38e284e529ebfb5e
Yeezgoy/dataset-depeche-mode
[ "license:unknown", "region:us" ]
2022-10-18T18:33:28+00:00
{"license": "unknown"}
2022-10-18T18:33:28+00:00
9b7912f8ecdc299d4262565c1b08e9da0c9e5a45
spacemanidol/orcas
[ "license:mit", "region:us" ]
2022-10-18T18:39:05+00:00
{"license": "mit"}
2022-10-18T18:44:24+00:00
4027643008b46f5ef6e4ed9529a236d9d17a9777
# Arcane Diffusion Dataset Dataset containing the 75 images used to train the [Arcane Diffusion](https://huggingface.co/nitrosocke/Arcane-Diffusion) model. Settings for training: ```class prompt: illustration style instance prompt: illustration arcane style learning rate: 5e-6 lr scheduler: constant num class images: 1000 max train steps: 5000 ```
nitrosocke/arcane-diffusion-dataset
[ "license:creativeml-openrail-m", "region:us" ]
2022-10-18T19:47:20+00:00
{"license": "creativeml-openrail-m"}
2022-10-18T19:58:23+00:00
f389cdfbd8dd5bde6122f9d88c9312adccbb9eb0
zhang3gang/zhang3gang
[ "region:us" ]
2022-10-18T20:33:21+00:00
{}
2022-10-18T20:52:21+00:00
4ae617cd840824150cedbf4b991b3dd76fc13a89
eytanc/FestAbilityTranscripts
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2022-10-18T21:55:49+00:00
{"license": "cc-by-nc-sa-4.0"}
2022-10-19T05:25:24+00:00
65312b1ae759cb963e15e67d641b76c975f2da5b
This dataset contains two subsets: NEG-136-SIMP and NEG-136-NAT. NEG-136-SIMP items come from Fischler et al. (1983). NEG-136-NAT items come from Nieuwland & Kuperberg (2008). The `NEG-136-SIMP.tsv` and `NEG-136-NAT.tsv` files contain for each item the affirmative and negative version of the context (context_aff, context_neg), and completions that are true with the affirmative context (target_aff) and with the negative context (target_neg). * For NEG-136-SIMP, determiners (*a*/*an*) are left ambiguous, and need to be selected based on the completion noun (this is done already in `proc_datasets.py`). **References**: * Ira Fischler, Paul A Bloom, Donald G Childers, Salim E Roucos, and Nathan W Perry Jr. 1983. *Brain potentials related to stages of sentence verification.* * Mante S Nieuwland and Gina R Kuperberg. 2008. *When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation.*
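A small sketch of how the TSV files might be read and turned into full sentences; the column names come from the description above, while the file path and the naive vowel-based *a*/*an* rule are illustrative assumptions (the released `proc_datasets.py` is the authoritative version of the determiner selection).

```python
import pandas as pd

# Columns per the card: context_aff, context_neg, target_aff, target_neg
items = pd.read_csv("NEG-136-SIMP.tsv", sep="\t")

def with_determiner(noun: str) -> str:
    # Naive heuristic only: use "an" before a written vowel, "a" otherwise.
    return ("an " if noun[0].lower() in "aeiou" else "a ") + noun

row = items.iloc[0]
print(row["context_aff"], with_determiner(row["target_aff"]))
print(row["context_neg"], with_determiner(row["target_neg"]))
```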
joey234/neg-136
[ "region:us" ]
2022-10-18T22:29:36+00:00
{}
2022-10-18T22:30:32+00:00
0a578cf440b735e43aae48aee71e470b7274f095
# Dataset Card for "laion-2b-vietnamese-subset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
truongpdd/laion-2b-vietnamese-subset
[ "region:us" ]
2022-10-19T01:36:05+00:00
{"dataset_info": {"features": [{"name": "SAMPLE_ID", "dtype": "int64"}, {"name": "URL", "dtype": "string"}, {"name": "TEXT", "dtype": "string"}, {"name": "HEIGHT", "dtype": "int32"}, {"name": "WIDTH", "dtype": "int32"}, {"name": "LICENSE", "dtype": "string"}, {"name": "LANGUAGE", "dtype": "string"}, {"name": "NSFW", "dtype": "string"}, {"name": "similarity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 10669843542.009588, "num_examples": 48169285}], "download_size": 7285732213, "dataset_size": 10669843542.009588}}
2022-10-19T05:09:26+00:00
7ab1aa86a74d3d9632d9fcaa21c88a54f224f2af
Tomsonvisual/prueba
[ "license:afl-3.0", "region:us" ]
2022-10-19T02:14:52+00:00
{"license": "afl-3.0"}
2022-10-19T02:14:52+00:00
d1d88d27e2e912c28703113bc3ddcdc86211e8bc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@DongfuTingle](https://huggingface.co/DongfuTingle) for evaluating this model.
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-2bc9e0-1812262541
[ "autotrain", "evaluation", "region:us" ]
2022-10-19T04:49:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": ["bleu"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-10-19T06:36:46+00:00
70eadbf169fd5ab7249fc8fcebced984b1b0f1de
# Dataset Card for "CelebA-faces-cropped-128" Just a 128px version of the CelebA-faces dataset, which I've cropped to the face regions using dlib. Processing notebook: https://colab.research.google.com/drive/1-P5mKb5VEQrzCmpx5QWomlq0-WNXaSxn?usp=sharing [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tglcourse/CelebA-faces-cropped-128
[ "region:us" ]
2022-10-19T05:00:14+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "test", "num_bytes": 274664364.23, "num_examples": 10130}, {"name": "train", "num_bytes": 5216078696.499, "num_examples": 192469}], "download_size": 0, "dataset_size": 5490743060.729}}
2022-10-19T09:36:16+00:00
e5667df9f571ec439dad39a04299daaf0c0315d1
# DATASET: AlephNews
readerbench/AlephNews
[ "region:us" ]
2022-10-19T05:50:14+00:00
{}
2022-10-19T05:51:00+00:00