sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
4999eabea03b3d717350115864fe5735723d75fe | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-1e740e-1694759586 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:53:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:05:18+00:00 |
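Each row's `metadata` column is a JSON blob describing the evaluation job, including the `col_mapping` that tells the evaluator which dataset columns feed the task. As a minimal sketch (the JSON below is copied verbatim from the row above; loading the actual prediction files would additionally require `datasets.load_dataset` on the repo id, which is not shown here):

```python
import json

# eval_info metadata copied verbatim from the first row of the table above.
metadata = json.loads(
    '{"type": "predictions", "tags": ["autotrain", "evaluation"], '
    '"datasets": ["inverse-scaling/NeQA"], '
    '"eval_info": {"task": "text_zero_shot_classification", '
    '"model": "inverse-scaling/opt-6.7b_eval", "metrics": [], '
    '"dataset_name": "inverse-scaling/NeQA", '
    '"dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", '
    '"col_mapping": {"text": "prompt", "classes": "classes", '
    '"target": "answer_index"}}}'
)

# col_mapping maps the task's expected fields to this dataset's column names:
# the input text comes from "prompt", the candidate labels from "classes",
# and the gold label index from "answer_index".
col_mapping = metadata["eval_info"]["col_mapping"]
print(col_mapping["text"])  # column holding the input prompt
```

Every row in this dump uses the same `col_mapping`; only `model`, `dataset_name`, and the timestamps vary.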
914470378063a1728d3d56e4e073c9780d46eeed | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-1e740e-1694759588 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:53:33+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:36:52+00:00 |
03eb6a1fc07a027243874b8fef1082de40393f5e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-1e740e-1694759585 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:53:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T11:57:46+00:00 |
86f1a83ee4128a2fc4bf083542c7add2b57649e8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-1e740e-1694759589 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:53:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T13:34:29+00:00 |
73e04df0f426f7045dccd85eb562b18893430efe | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059590 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:53:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/quote-repetition"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "inverse-scaling/quote-repetition", "dataset_config": "inverse-scaling--quote-repetition", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T11:54:39+00:00 |
0806ad91a62c545f50b137c248b5520862f8c52f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-1e740e-1694759587 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:53:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:13:51+00:00 |
196bdb9986f0a0fea54f769ed49d25fce68c1cac | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059592 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:53:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/quote-repetition"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "inverse-scaling/quote-repetition", "dataset_config": "inverse-scaling--quote-repetition", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T11:57:06+00:00 |
1eabff70f9e475801a26b8647f1a892cc8af1402 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059594 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:54:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/quote-repetition"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "inverse-scaling/quote-repetition", "dataset_config": "inverse-scaling--quote-repetition", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:07:25+00:00 |
87bcd1f3ea92970013f321a4eaa4b989d4c4e69f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059591 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:54:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/quote-repetition"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "inverse-scaling/quote-repetition", "dataset_config": "inverse-scaling--quote-repetition", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T11:55:38+00:00 |
226769fa2d9bb013746d418f9cff3e8d2052b01b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059593 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:54:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/quote-repetition"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "inverse-scaling/quote-repetition", "dataset_config": "inverse-scaling--quote-repetition", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T11:59:45+00:00 |
48388b5a59cb46f873613df94fc86a512e077a84 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059595 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:54:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/quote-repetition"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "inverse-scaling/quote-repetition", "dataset_config": "inverse-scaling--quote-repetition", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:17:22+00:00 |
82581cdd50eb84bc67d4c4ab925ca0a766f7e944 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059596 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:54:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/quote-repetition"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "inverse-scaling/quote-repetition", "dataset_config": "inverse-scaling--quote-repetition", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:51:20+00:00 |
7a62af53f10a837d38dc08c37f8b0717068b8e07 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059597 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T11:59:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/quote-repetition"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "inverse-scaling/quote-repetition", "dataset_config": "inverse-scaling--quote-repetition", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T14:04:09+00:00 |
69c9978984342029f664e38b202880415b966f64 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: inverse-scaling/redefine-math
* Config: inverse-scaling--redefine-math
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__redefine-math-inverse-scaling__redefin-f7efd9-1695359598 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:00:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/redefine-math"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "inverse-scaling/redefine-math", "dataset_config": "inverse-scaling--redefine-math", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:01:24+00:00 |
f58d2bec0f51fba1aefa6c6b6c0fbc73cecd08ba | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: inverse-scaling/redefine-math
* Config: inverse-scaling--redefine-math
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__redefine-math-inverse-scaling__redefin-f7efd9-1695359599 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:00:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/redefine-math"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "inverse-scaling/redefine-math", "dataset_config": "inverse-scaling--redefine-math", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:03:00+00:00 |
6151fe1fc86df62b84a98e36639814c046c56de4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: inverse-scaling/redefine-math
* Config: inverse-scaling--redefine-math
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__redefine-math-inverse-scaling__redefin-f7efd9-1695359600 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:01:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/redefine-math"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "inverse-scaling/redefine-math", "dataset_config": "inverse-scaling--redefine-math", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:07:45+00:00 |
31ef4b0d31434c7e2ff3ea13109ab7176bd94bf4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: inverse-scaling/redefine-math
* Config: inverse-scaling--redefine-math
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__redefine-math-inverse-scaling__redefin-f7efd9-1695359601 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:02:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/redefine-math"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "inverse-scaling/redefine-math", "dataset_config": "inverse-scaling--redefine-math", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:09:52+00:00 |
54bb5ed36a085c27baced04fd5cc266022b56e63 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: inverse-scaling/redefine-math
* Config: inverse-scaling--redefine-math
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__redefine-math-inverse-scaling__redefin-f7efd9-1695359602 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:02:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/redefine-math"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "inverse-scaling/redefine-math", "dataset_config": "inverse-scaling--redefine-math", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:27:39+00:00 |
67d77e07eec8000ac20e7b3875d132ee98ce0305 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: inverse-scaling/redefine-math
* Config: inverse-scaling--redefine-math
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__redefine-math-inverse-scaling__redefin-f7efd9-1695359603 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:03:34+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/redefine-math"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "inverse-scaling/redefine-math", "dataset_config": "inverse-scaling--redefine-math", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:41:22+00:00 |
1068ccdaf75c16d3b74a731031c1f27cb95f25ea | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: inverse-scaling/redefine-math
* Config: inverse-scaling--redefine-math
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__redefine-math-inverse-scaling__redefin-f7efd9-1695359604 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:05:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/redefine-math"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "inverse-scaling/redefine-math", "dataset_config": "inverse-scaling--redefine-math", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T13:29:52+00:00 |
0e9cf3a49220dfd08fdb8e2a535f934f8c63cb0f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: inverse-scaling/redefine-math
* Config: inverse-scaling--redefine-math
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__redefine-math-inverse-scaling__redefin-f7efd9-1695359605 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:07:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/redefine-math"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "inverse-scaling/redefine-math", "dataset_config": "inverse-scaling--redefine-math", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T15:13:43+00:00 |
50a17bbe351d2986ed808d809001a823bb117403 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459608 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:23:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/hindsight-neglect-10shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "inverse-scaling/hindsight-neglect-10shot", "dataset_config": "inverse-scaling--hindsight-neglect-10shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:39:13+00:00 |
cd7c5257edd53f6dc43cef6f418de9487a4a34d7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459606 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:23:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/hindsight-neglect-10shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "inverse-scaling/hindsight-neglect-10shot", "dataset_config": "inverse-scaling--hindsight-neglect-10shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:27:32+00:00 |
d60576aace2a380fd604dda0fde82148117e51e0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459609 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:23:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/hindsight-neglect-10shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "inverse-scaling/hindsight-neglect-10shot", "dataset_config": "inverse-scaling--hindsight-neglect-10shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:46:42+00:00 |
bfccf4c6974ec6bda55c6ca28809d0a277b271d0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459607 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:24:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/hindsight-neglect-10shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "inverse-scaling/hindsight-neglect-10shot", "dataset_config": "inverse-scaling--hindsight-neglect-10shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T12:29:38+00:00 |
578e73ac947921de25830e802e9e334e458684e0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459610 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:24:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/hindsight-neglect-10shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "inverse-scaling/hindsight-neglect-10shot", "dataset_config": "inverse-scaling--hindsight-neglect-10shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T13:11:14+00:00 |
db5652baee079e0f2522705d3188d85a76c53e52 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459611 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:24:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/hindsight-neglect-10shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "inverse-scaling/hindsight-neglect-10shot", "dataset_config": "inverse-scaling--hindsight-neglect-10shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T13:48:28+00:00 |
e563c7fc762b04876922a546d16cdfda2a380bca | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459612 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:24:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/hindsight-neglect-10shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "inverse-scaling/hindsight-neglect-10shot", "dataset_config": "inverse-scaling--hindsight-neglect-10shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T16:12:47+00:00 |
9369ee2304123e8424dd2aab5f182d4f6de29e63 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459613 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T12:34:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/hindsight-neglect-10shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "inverse-scaling/hindsight-neglect-10shot", "dataset_config": "inverse-scaling--hindsight-neglect-10shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-08T21:07:01+00:00 |
c93c05d8c319745ce5529015b8b15634e7b75cb8 | ravener/data | [
"license:mit",
"region:us"
] | 2022-10-08T13:42:07+00:00 | {"license": "mit"} | 2022-10-08T13:42:07+00:00 |
|
c4990154dab8a5f813f7cbfffcede9dd4878fa64 | # Dataset Card for "biobert-ner-diseases-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rjac/biobert-ner-diseases-dataset | [
"region:us"
] | 2022-10-08T14:34:44+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-Disease", "2": "I-Disease"}, "id": [0, 1, 2]}}}, {"name": "sentence_id", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2614997, "num_examples": 5737}, {"name": "train", "num_bytes": 6947635, "num_examples": 15488}], "download_size": 1508920, "dataset_size": 9562632}} | 2022-11-04T11:12:13+00:00 |
4e3504ac55aa91bca08f169a1c56975f4ca3409f | Dgajjar/Nwnek_ref_images | [
"region:us"
] | 2022-10-08T16:34:48+00:00 | {} | 2022-10-08T16:36:04+00:00 |
|
fd4ee1d2ea5de9f8ab7fbded3a043b85b83ce08f | inditechie/cavalier | [
"license:unknown",
"region:us"
] | 2022-10-08T20:03:52+00:00 | {"license": "unknown"} | 2022-10-08T22:01:26+00:00 |
|
d1f4b56c03d5937c4b01c749c2ba7449ea35b474 | shivanshjayara2991/ner_resume_data | [
"license:other",
"region:us"
] | 2022-10-08T20:22:05+00:00 | {"license": "other"} | 2022-10-08T20:22:05+00:00 |
|
18870a8addd736c309f007855ea121d00c6d7f3e | Dustroit/RealDustin | [
"license:openrail",
"region:us"
] | 2022-10-08T21:42:28+00:00 | {"license": "openrail"} | 2022-10-08T21:45:32+00:00 |
|
28b7e1e88373646c5523ff20d243fe6c3a24b986 | EmpoweringArts/planar-head | [
"license:cc",
"region:us"
] | 2022-10-08T21:47:05+00:00 | {"license": "cc"} | 2022-10-08T21:49:45+00:00 |
|
5d76b13867da8e0ba4d7f606fdbf7f2cd789dc1e | # Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | salaz055/celeb-identities | [
"region:us"
] | 2022-10-08T22:03:52+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Adam_Levine", "1": "Anna_Kendrick", "2": "John_Mayer", "3": "Michael_B_Jordan", "4": "Rihanna", "5": "Taylor_Swift"}}}}], "splits": [{"name": "train", "num_bytes": 2647071.0, "num_examples": 18}], "download_size": 2649140, "dataset_size": 2647071.0}} | 2023-01-11T18:12:53+00:00 |
dead82ed57176c8e6d9459b08626a70269f9a8fb |
# Dataset Card for modified-orangeSum
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Exercises ModifiedOrangeSumm-Abstract
- **Repository:** krm/modified-orangeSum
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[This is a small experiment, produced by adding some personal data to OrangeSum Abstract]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed] | krm/modified-orangeSum | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"license:unknown",
"'krm'",
"region:us"
] | 2022-10-08T22:26:03+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "pretty_name": "modified-orangeSum", "tags": ["'krm'"]} | 2022-10-08T23:06:23+00:00 |
9e80672f5df6d0b1e1d07cee50c5b1f990789063 | tkuhn1988/tkuhnstyle | [
"license:afl-3.0",
"region:us"
] | 2022-10-09T01:24:18+00:00 | {"license": "afl-3.0"} | 2022-10-09T01:24:18+00:00 |
|
4ee59671691893687a2a0569618bdfedfbd77537 | This is a small dataset containing celebrity faces. This dataset was created for educational purposes and is far too small for any sort of model training. However, these images can be used for demo examples or other educational purposes. | brendenc/celeb-identities | [
"region:us"
] | 2022-10-09T01:31:19+00:00 | {} | 2022-10-09T01:33:12+00:00 |
d3a5563357d54263eac5e2a474551f31d587f250 | gradio/NYC-Airbnb-Open-Data | [
"license:afl-3.0",
"region:us"
] | 2022-10-09T04:31:18+00:00 | {"license": "afl-3.0"} | 2022-10-09T04:31:38+00:00 |
|
e11e9d7b5b84d5b50b12de433ba7823ef85ca40c |
XFUND dataset
See more details in the [XFUND repository](https://github.com/doc-analysis/XFUND).
### Citation Information
```latex
@inproceedings{xu-etal-2022-xfund,
title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
author = "Xu, Yiheng and
Lv, Tengchao and
Cui, Lei and
Wang, Guoxin and
Lu, Yijuan and
Florencio, Dinei and
Zhang, Cha and
Wei, Furu",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.253",
doi = "10.18653/v1/2022.findings-acl.253",
pages = "3214--3224",
abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
}
``` | rogerdehe/xfund | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"language:de",
"language:es",
"language:fr",
"language:it",
"language:ja",
"license:other",
"layoutlmv3",
"xfund",
"funsd",
"region:us"
] | 2022-10-09T07:22:00+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["de", "es", "fr", "it", "ja"], "license": ["other"], "multilinguality": ["multilingual"], "task_categories": ["text-classification"], "tags": ["layoutlmv3", "xfund", "funsd"]} | 2022-10-12T11:42:35+00:00 |
f6c39cfd22c38d417be19b76bcd7c83954188907 | albertvillanova/datasets-report | [
"license:cc-by-4.0",
"region:us"
] | 2022-10-09T09:46:37+00:00 | {"license": "cc-by-4.0"} | 2022-11-22T07:26:28+00:00 |
|
9310a01e876be1fe69ab698fc11910c2f608b2d2 | Yeagob/me | [
"region:us"
] | 2022-10-09T10:57:42+00:00 | {} | 2022-10-09T10:58:04+00:00 |
|
d80f3dbafc2ae811fbe1a5d51357f0898aaf4d8c | ett/sam | [
"region:us"
] | 2022-10-09T11:12:59+00:00 | {} | 2022-10-09T11:26:32+00:00 |
|
d6c3cd99c7f466dde28eb0a8054e525585e9725f |
This dataset is currently being uploaded.
"license:cc-by-nc-sa-4.0",
"doi:10.57967/hf/0034",
"region:us"
] | 2022-10-09T11:22:07+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-10-10T01:58:45+00:00 |
f243803da5b069db9b040820a57515533b18996d | Pavankalyan/chitti_data | [
"region:us"
] | 2022-10-09T13:38:50+00:00 | {} | 2022-11-29T04:28:39+00:00 |
|
f6dc7afc05475ad4a73b96924ca0aec26f76e676 | EstebanMax/lighthouse | [
"license:afl-3.0",
"region:us"
] | 2022-10-09T15:13:17+00:00 | {"license": "afl-3.0"} | 2022-10-09T15:14:19+00:00 |
|
0c32d435c1f8f10f37bac8dd01f0cc6a5a5acfd7 |
# Dataset Card for BrWac
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [BrWaC homepage](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Repository:** [BrWaC repository](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Paper:** [The brWaC Corpus: A New Open Resource for Brazilian Portuguese](https://www.aclweb.org/anthology/L18-1686/)
- **Point of Contact:** [Jorge A. Wagner Filho](mailto:[email protected])
### Dataset Summary
The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework
and made public for research purposes. The current corpus version, released in January 2017, comprises
3.53 million documents, 2.68 billion tokens, and 5.79 million types. Please note that this resource is available
solely for academic research purposes, and you agree not to use it for any commercial applications.
Download it manually at https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC
This is a tiny version of the full dataset, intended for educational purposes. Please refer to https://github.com/the-good-fellas/xlm-roberta-pt-br
### Supported Tasks and Leaderboards
Initially meant for fill-mask task.
### Languages
Brazilian Portuguese
## Dataset Creation
### Personal and Sensitive Information
All data were extracted from public sites.
### Licensing Information
MIT
### Citation Information
```
@inproceedings{wagner2018brwac,
title={The brwac corpus: A new open resource for brazilian portuguese},
author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation
(LREC 2018)},
year={2018}
}
```
### Contributions
Thanks to [@the-good-fellas](https://github.com/the-good-fellas) for adding this dataset in HF format. | thegoodfellas/brwac_tiny | [
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:pt",
"license:mit",
"ufrgs",
"nlp",
"brazil",
"region:us"
] | 2022-10-09T16:55:56+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["fill-mask"], "task_ids": ["masked-language-modeling"], "pretty_name": "brwac", "tags": ["ufrgs", "nlp", "brazil"]} | 2022-10-10T19:27:54+00:00 |
ebf83c7a90646795d8f15a1f48d6ed74afea9ae3 | # Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ThankGod/celeb-identities | [
"region:us"
] | 2022-10-09T17:37:35+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Andrew_Ng", "1": "Elon_Musk", "2": "Jay_Z", "3": "Kanye", "4": "Obama", "5": "Queen"}}}}], "splits": [{"name": "train", "num_bytes": 624532.0, "num_examples": 16}], "download_size": 626669, "dataset_size": 624532.0}} | 2023-04-25T11:00:42+00:00 |
12519e04c4930fef330c30e4bae59225b6e8073e | dmg2105/jaimito | [
"region:us"
] | 2022-10-09T18:17:53+00:00 | {} | 2022-10-09T18:21:09+00:00 |
|
109e92f1a0b4940f3eb57ca250d552376ecb6458 | ## Titanic dataset | osanseviero/titanic | [
"region:us"
] | 2022-10-09T18:23:55+00:00 | {} | 2022-10-10T06:36:31+00:00 |
2a5b99243fdb0b148955a3c6dffee19b88dad87d | Maxobelix/maxo1 | [
"license:artistic-2.0",
"region:us"
] | 2022-10-09T18:56:33+00:00 | {"license": "artistic-2.0"} | 2022-10-09T18:56:33+00:00 |
|
fd8eacf41caca879e9e06c02d93675c082bafbd5 | 1,2,3,4
2,3,4,5 | LeFluffyPunk/Data | [
"region:us"
] | 2022-10-09T19:11:41+00:00 | {} | 2022-10-09T19:11:52+00:00 |
0b6d8290c2f90626192427dfeff9af7e53800bd4 | inditechie/peppa | [
"license:other",
"region:us"
] | 2022-10-09T20:18:41+00:00 | {"license": "other"} | 2022-10-09T20:27:09+00:00 |
|
2c4f775963e4a7f94552ebe989d316d648f0e300 | # Dataset Card for "lat_en_loeb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | grosenthal/lat_en_loeb | [
"region:us"
] | 2022-10-09T21:31:22+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "la", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31372661.713349972, "num_examples": 81096}, {"name": "test", "num_bytes": 3921582.7141687465, "num_examples": 10137}, {"name": "valid", "num_bytes": 3921969.5724812816, "num_examples": 10138}], "download_size": 25067983, "dataset_size": 39216214.0}} | 2023-01-29T23:21:06+00:00 |
b59f33a77abdc4b7b5c11d67685e8c8d43ce2307 | Chinchis/imagenes | [
"license:gpl",
"region:us"
] | 2022-10-09T21:56:58+00:00 | {"license": "gpl"} | 2022-10-13T04:44:07+00:00 |
|
1fa9ca6910a87c20259ae78e09ffec3738d5194c | nerdie01/emotions-modified | [
"license:apache-2.0",
"region:us"
] | 2022-10-09T23:56:49+00:00 | {"license": "apache-2.0"} | 2022-10-09T23:56:49+00:00 |
|
3bf6a6ebfa290c875386a62ad15d0d9612dc6470 | liuwei33/images | [
"license:mit",
"region:us"
] | 2022-10-10T00:24:54+00:00 | {"license": "mit"} | 2022-11-22T15:43:16+00:00 |
|
266a789657f551170b540c38555a03be58b55650 | Bioskop/BeccaCP | [
"license:unknown",
"region:us"
] | 2022-10-10T00:52:00+00:00 | {"license": "unknown"} | 2022-10-10T00:52:28+00:00 |
|
7a0de57544433aedf02f1e597bf2ac01bc4b8d7b | Bioskop/BeccaER | [
"license:other",
"region:us"
] | 2022-10-10T01:24:30+00:00 | {"license": "other"} | 2022-10-10T01:24:30+00:00 |
|
3a206d464eacf0492d232e1a2d80ecfebdd6dc0c | # AutoTrain Dataset for project: beccacp
## Dataset Description
This dataset has been automatically processed by AutoTrain for project beccacp.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<1600x838 RGB PIL image>",
"target": 1
},
{
"image": "<1200x628 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=2, names=['Becca', 'Lucy'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 9 |
| valid | 4 |
| Bioskop/autotrain-data-beccacp | [
"task_categories:image-classification",
"region:us"
] | 2022-10-10T01:32:21+00:00 | {"task_categories": ["image-classification"]} | 2022-10-10T01:51:18+00:00 |
97dbedc331f1ea8069ed26e03c0121fe701808f9 | susu727/jahe1 | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-10T01:35:20+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-10T01:35:20+00:00 |
|
91ee647b51edc6a9c4256d2fe64f83593e49d168 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/distilroberta-base-finetuned-squad2-lwt
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@[email protected]](https://huggingface.co/[email protected]) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-07b8d6-1707959801 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T02:40:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/distilroberta-base-finetuned-squad2-lwt", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-10T02:43:04+00:00 |
36f5e4bd11b69ae7aafba8b86e7b55aea3dc4bab | lcw99/wikipedia-korean-20221001 | [
"language:ko",
"region:us"
] | 2022-10-10T02:49:37+00:00 | {"language": ["ko"]} | 2022-10-10T02:55:17+00:00 |
|
f74ad9d67f6d5765539968663fa797c0f7b81921 | # Dataset Card for "CMeEE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nlhappy/CMeEE | [
"region:us"
] | 2022-10-10T03:17:53+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "ents", "list": [{"name": "indices", "sequence": "int64"}, {"name": "label", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8592427, "num_examples": 14897}, {"name": "validation", "num_bytes": 2851335, "num_examples": 4968}], "download_size": 3572845, "dataset_size": 11443762}} | 2023-07-26T23:39:56+00:00 |
f228a309e333d7f992089ab44951e19d794d54e3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159804 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T03:33:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "toxic", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T03:44:20+00:00 |
23b183ed5068335a41e7128da800134aa7a042ed | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159806 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T03:33:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "toxic", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T04:09:11+00:00 |
41cd1f2cfb65b63b8a2c571fad704a7f64e385a8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159803 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T03:33:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "toxic", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T03:40:56+00:00 |
5067892309121cade0cb7ce4231a96ad2e5736b3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159802 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T03:33:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "toxic", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T03:39:48+00:00 |
650a54cb2da8c4ca1093c5b498e6c0999255169c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159805 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T03:33:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "toxic", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T03:47:39+00:00 |
b9cf3eeb5e208ffddf34723a1e1227c1fdd5a7a8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559813 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T04:11:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "constructive", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T04:19:15+00:00 |
19f463dd86eec9daad55fa037f232127535ec837 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559812 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T04:11:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "constructive", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T04:18:19+00:00 |
cfc6cc3d10c7e7875c31082d2c031b19165fa071 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559816 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T04:11:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "constructive", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T04:46:34+00:00 |
c5aca6e7b5825b9e2a2b864d33e90cd1436c7665 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559814 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T04:11:33+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "constructive", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T04:22:41+00:00 |
39766769c99aa887f9adf4da7b08f7b28539cc6d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559815 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T04:11:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "constructive", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T04:25:28+00:00 |
cd3fc7ebe3bf95f1f800f50448b0361f7f43a06a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-b86aaf-1709259817 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T06:53:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "gpt2", "metrics": ["f1"], "dataset_name": "phpthinh/exampletx", "dataset_config": "toxic", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T06:57:16+00:00 |
dab944b274fe6e047f0cc6b8dc5e0ca68f4dcd36 |
# Dataset Card for the EUR-Lex-Sum Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/achouhan93/eur-lex-sum
- **Paper:** [EUR-Lex-Sum: A Multi-and Cross-lingual Dataset for Long-form Summarization in the Legal Domain](https://arxiv.org/abs/2210.13448)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Dennis Aumiller](mailto:[email protected])
### Dataset Summary
The EUR-Lex-Sum dataset is a multilingual resource intended for text summarization in the legal domain.
It is based on human-written summaries of legal acts issued by the European Union.
It distinguishes itself by introducing a smaller set of high-quality human-written samples, each of which has much longer references (and summaries!) than comparable datasets.
Additionally, the underlying legal acts provide a challenging domain-specific application to legal texts, which are so far underrepresented in non-English languages.
For each legal act, a sample can be available in up to 24 languages (the officially recognized languages in the European Union); the validation and test sets consist entirely of samples available in *all* languages, and are aligned across all languages at the paragraph level.
### Supported Tasks and Leaderboards
- `summarization`: The dataset is primarily suitable for summarization tasks, where it can be used as a small-scale training resource. The primary evaluation metric used in the underlying experiments is [ROUGE](https://huggingface.co/metrics/rouge). The EUR-Lex-Sum data is particularly interesting, because traditional lead-based baselines (such as lead-3) do not work well, given the extremely long reference summaries. However, we can provide reasonably good summaries by applying a modified LexRank approach on the paragraph level.
- `cross-lingual-summarization`: Given that samples of the dataset exist across multiple languages, and both the validation and test set are fully aligned across languages, this dataset can further be used as a cross-lingual benchmark. In these scenarios, language pairs (e.g., EN to ES) can be compared against monolingual systems. Suitable baselines include automatic translations of gold summaries, or translations of simple LexRank-generated monolingual summaries.
- `long-form-summarization`: We further note the particular case for *long-form summarization*. In comparison to news-based summarization datasets, this resource provides around 10x longer *summary texts*. This is particularly challenging for transformer-based models, which struggle with limited context lengths.
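The paragraph-level LexRank baseline mentioned above can be approximated with a much simpler degree-centrality sketch. The snippet below illustrates the general idea only; the similarity measure (word-set Jaccard), the threshold, and `k` are illustrative assumptions, not the parameters used in the paper:

```python
def word_overlap(a, b):
    # Jaccard overlap between the word sets of two paragraphs.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def extractive_summary(paragraphs, k=3, threshold=0.1):
    # Score each paragraph by how many other paragraphs it is similar to
    # (degree centrality), then keep the k highest-scoring paragraphs
    # in their original document order.
    scores = []
    for i, p in enumerate(paragraphs):
        score = sum(
            1
            for j, q in enumerate(paragraphs)
            if i != j and word_overlap(p, q) >= threshold
        )
        scores.append((score, i))
    top = sorted(sorted(scores, reverse=True)[:k], key=lambda t: t[1])
    return [paragraphs[i] for _, i in top]
```

Real LexRank builds a similarity graph and runs power iteration over it; the degree-centrality shortcut above only mimics the effect that well-connected paragraphs are kept.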
### Languages
The dataset supports all [official languages of the European Union](https://european-union.europa.eu/principles-countries-history/languages_en). At the time of collection, those were 24 languages:
Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, and Swedish.
Both the reference texts and the summaries are translated from an English original text (this was confirmed by private correspondence with the Publications Office of the European Union). Translations and summaries are written by external (professional) parties, contracted by the EU.
Depending on availability of document summaries in particular languages, we have between 391 (Irish) and 1505 (French) samples available. Over 80% of samples are available in at least 20 languages.
## Dataset Structure
### Data Instances
Data instances contain fairly minimal information. Aside from a unique identifier, corresponding to the Celex ID generated by the EU, two further fields specify the original long-form legal act and its associated summary.
```
{
"celex_id": "3A32021R0847",
  "reference": "REGULATION (EU) 2021/847 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\n [...]",
"summary": "Supporting EU cooperation in the field of taxation: Fiscalis (2021-2027)\n\n [...]"
}
```
### Data Fields
- `celex_id`: The [Celex ID](https://eur-lex.europa.eu/content/tools/eur-lex-celex-infographic-A3.pdf) is a naming convention used for identifying EU-related documents. Among other things, the year of publication and sector codes are embedded in the Celex ID.
- `reference`: This is the full text of a Legal Act published by the EU.
- `summary`: This field contains the summary associated with the respective Legal Act.
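Celex IDs can be unpacked programmatically. The sketch below follows the publicly documented naming convention (sector digit, four-digit year, document-type letters, document number); it is a simplification that ignores corrigenda and consolidated-text variants. The leading `3A` in the sample instance above appears to be an encoding artifact of the `:` separator, so the sketch parses the canonical form:

```python
import re

# e.g. "32021R0847": sector 3 (legislation), year 2021,
# type R (regulation), document number 0847.
CELEX_RE = re.compile(
    r"^(?P<sector>\d)(?P<year>\d{4})(?P<type>[A-Z]+)(?P<number>\d+)$"
)

def parse_celex(celex_id):
    # Returns the components as a dict, or None for non-canonical IDs.
    m = CELEX_RE.match(celex_id)
    return m.groupdict() if m else None
```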
### Data Splits
We provide pre-split training, validation and test splits.
To obtain the validation and test splits, we randomly assigned all samples that are available across all 24 languages into two equally large portions. In total, 375 instances are available in 24 languages, which means we obtain a validation split of 187 samples and 188 test instances.
All remaining instances are assigned to the language-specific training portions, which differ in their exact size.
We particularly ensured that no duplicates exist across the three splits. For this purpose, we ensured that no exactly matching reference *or* summary exists for any sample. Further information on the length distributions (for the English subset) can be found in the paper.
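The split construction and the leakage check described above can be sketched as follows. This is a hypothetical reconstruction for illustration; the seed and shuffling details of the published splits are assumptions:

```python
import random

def make_splits(aligned_ids, seed=42):
    # Randomly halve the fully aligned samples into validation/test,
    # mirroring the 375 -> 187 + 188 split described above.
    ids = sorted(aligned_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

def has_cross_split_duplicates(splits):
    # True if any exact reference *or* summary string occurs in more
    # than one split, i.e. the leakage condition the dataset guards against.
    seen_refs, seen_sums = set(), set()
    for split in splits:
        refs = {s["reference"] for s in split}
        sums = {s["summary"] for s in split}
        if refs & seen_refs or sums & seen_sums:
            return True
        seen_refs |= refs
        seen_sums |= sums
    return False
```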
## Dataset Creation
### Curation Rationale
The dataset was curated to provide a resource for under-explored aspects of automatic text summarization research.
In particular, we want to encourage the exploration of abstractive summarization systems that are not limited by the usual 512 token context window, which usually works well for (short) news articles, but fails to generate long-form summaries, or does not even work with longer source texts in the first place.
Also, existing resources primarily focus on a single (and very specialized) domain, namely news article summarization. We wanted to provide a further resource for *legal* summarization, for which many languages do not even have any existing datasets.
We further noticed that no previous system had utilized the human-written samples from the [EUR-Lex platform](https://eur-lex.europa.eu/homepage.html), which provide an excellent source for training instances suitable for summarization research. We later found out about a resource created in parallel based on EUR-Lex documents, which provides a [monolingual (English) corpus](https://github.com/svea-klaus/Legal-Document-Summarization) constructed in similar fashion. However, we provide a more thorough filtering, and extend the process to the remaining 23 EU languages.
### Source Data
#### Initial Data Collection and Normalization
The data was crawled from the aforementioned EUR-Lex platform. In particular, we only use samples which have *HTML* versions of the texts available, which ensure the alignment across languages, given that translations have to retain the original paragraph structure, which is encoded in HTML elements.
We further filter out samples that do not have associated document summaries available.
One particular design choice has to be expanded upon: For some summaries, *several source documents* are considered as an input by the EU. However, since we construct a single-document summarization corpus, we decided to use the **longest reference document only**. This means we explicitly drop the other reference texts from the corpus.
One alternative would have been to concatenate all relevant source texts; however, this generally leads to degradation of positional biases in the text, which can be an important learned feature for summarization systems. Our paper details the effect of this decision in terms of n-gram novelty, which we find is affected by the processing choice.
#### Who are the source language producers?
The language producers are external professionals contracted by the European Union offices. As previously noted, all non-English texts are generated from the respective English document (all summaries are direct translations of the English summary, and all reference texts are translated from the English reference text).
No further information on the demographic of annotators is provided.
### Annotations
#### Annotation process
The European Union publishes its [annotation guidelines](https://etendering.ted.europa.eu/cft/cft-documents.html?cftId=6490) for summaries, which target a length of 600-800 words.
No information on the guidelines for translations is known.
#### Who are the annotators?
The language producers are external professionals contracted by the European Union offices. No further information on the annotators is available.
### Personal and Sensitive Information
The original text was not modified in any way by the authors of this dataset. Explicit mentions of personal names can occur in the dataset, however, we rely on the European Union that no further sensitive information is provided in these documents.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset can be used to build summarization systems for previously under-represented languages. For example, samples in Irish and Maltese (among others) enable the development and evaluation of systems for these languages.
A successful cross-lingual system would further enable the creation of automated legal summaries for legal acts, possibly enabling foreigners in European countries to automatically translate similar country-specific legal acts.
Given the limited amount of training data, this dataset is also suitable as a test bed for low-resource approaches, especially in comparison to strong unsupervised (extractive) summarization systems.
We also note that the summaries are explicitly provided as "not legally binding" by the EU. Because summaries necessarily leave out details, they may differ in substance from the (legally binding) original legal act.
Risks associated with this dataset also largely stem from the potential application of systems trained on it. Decisions in the legal domain require careful analysis of the full context, and should not be made based on system-generated summaries at this point in time. Known biases of summarization, specifically factual hallucinations, should act as further deterrents.
### Discussion of Biases
Given the availability bias, some of the languages in the dataset are more represented than others. We attempt to mitigate influence on the evaluation by providing validation and test sets of the same size across all languages.
Given that we require the availability of HTML documents, we see a particular temporal bias in our dataset, which features more documents from the years of 1990 onwards, simply due to the increase in EU-related activities, but also the native use of the internet as a data storage.
This could imply a particular focus on more recent topics (e.g., Brexit and renewable energies come to mind).
Finally, due to the source of these documents being the EU, we expect a natural bias towards EU-centric (and therefore Western-centric) content; other nations and continents will be under-represented in the data.
### Other Known Limitations
As previously outlined, we are aware of some summaries relating to multiple (different) legal acts. For these samples, only one (the longest) text will be available in our dataset.
## Additional Information
### Dataset Curators
The web crawler was originally implemented by Ashish Chouhan.
Post-filtering and sample correction was later performed by Dennis Aumiller.
Both were PhD students employed at the Database Systems Research group of Heidelberg University, under the guidance of Prof. Dr. Michael Gertz.
### Licensing Information
Data from the EUR-Lex platform is available under the CC-BY SA 4.0 license. We redistribute the dataset under the same license.
### Citation Information
For the pre-print version, please cite:
```
@article{aumiller-etal-2022-eur,
author = {Aumiller, Dennis and Chouhan, Ashish and Gertz, Michael},
title = {{EUR-Lex-Sum: A Multi- and Cross-lingual Dataset for Long-form Summarization in the Legal Domain}},
journal = {CoRR},
volume = {abs/2210.13448},
eprinttype = {arXiv},
eprint = {2210.13448},
url = {https://arxiv.org/abs/2210.13448}
}
``` | dennlinger/eur-lex-sum | [
"task_categories:translation",
"task_categories:summarization",
"annotations_creators:found",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:bg",
"language:hr",
"language:cs",
"language:da",
"language:nl",
"language:en",
"language:et",
"language:fi",
"language:fr",
"language:de",
"language:el",
"language:hu",
"language:ga",
"language:it",
"language:lv",
"language:lt",
"language:mt",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:es",
"language:sv",
"license:cc-by-4.0",
"legal",
"eur-lex",
"expert summary",
"parallel corpus",
"multilingual",
"arxiv:2210.13448",
"region:us"
] | 2022-10-10T07:07:37+00:00 | {"annotations_creators": ["found", "expert-generated"], "language_creators": ["found", "expert-generated"], "language": ["bg", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "de", "el", "hu", "ga", "it", "lv", "lt", "mt", "pl", "pt", "ro", "sk", "sl", "es", "sv"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation", "summarization"], "pretty_name": "eur-lex-sum", "tags": ["legal", "eur-lex", "expert summary", "parallel corpus", "multilingual"]} | 2022-11-11T14:25:06+00:00 |
b59e463c9599e735fe6da105cdc0c9509153062e | # Dataset Card for Skateboarding tricks
Dataset used to train [Text to skateboarding image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning).
For each row the dataset contains `image` and `text` keys.
`image` is a varying size PIL jpeg, and `text` is the accompanying text caption.
| vogloblinsky/skateboarding-tricks | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] | 2022-10-10T07:10:46+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Skateboarding tricks", "tags": []} | 2022-10-10T11:38:17+00:00 |
9a7f50e1fa08109c89fef504eb7095861057d455 | This dataset contains many (as many as I could find) False Friends for the English and German languages.
False Friends are words that are the same or similar in sound or spelling across the two languages but differ in meaning.
This dataset was created as part of the Stanford NLU course XCS224u final project.
**Example:**
A) False Friend Word: "bald"
B) Meaning of Word in English: "not having hair"
C) Actual, Translated Meaning of German Word: "soon"
D) Translation of English "bald" in German: "glatzköpfig"
**Columns:**
False Friend / False Friend Word: like A), a word with different meanings in the two languages.
Correct False Friend Synonym: a true German synonym for the False Friend A).
Wrong False Friend Synonym: like D), a translation of the English False Friend into German.
Sentence: a sentence in which the False Friend word A) is used.
Correct Sentence: the same sentence as before, but with the False Friend word A) replaced by the Correct False Friend Synonym.
Wrong Sentence: the same sentence as before, but with the False Friend word A) replaced by the Wrong False Friend Synonym, like D).
Correct English Translation: the actual meaning of the False Friend, as in C).
Wrong English Translation: the wrong meaning of the False Friend: an English word that sounds or is written the same as, or similar to, the False Friend.
Source: The Source (Website) where the False Friend was mentioned. | aari1995/false_friends_en_de | [
"region:us"
] | 2022-10-10T07:56:43+00:00 | {} | 2022-10-10T10:42:11+00:00 |
cc026d85280aa8a3695332f632b428f1c523e695 | annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- artistic-2.0
multilinguality:
- monolingual
pretty_name: Grief and Beauty by Milo Rau
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: [] | Gr3en/MIlo_Rau_Grief_and_Beauty | [
"region:us"
] | 2022-10-10T07:58:26+00:00 | {} | 2022-10-10T08:02:24+00:00 |
238d80ffa879a51e86ae88dd8d545c951d92acbd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-666f04-1710259829 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T08:50:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/exampletx"], "eval_info": {"task": "text_zero_shot_classification", "model": "gpt2", "metrics": [], "dataset_name": "phpthinh/exampletx", "dataset_config": "constructive", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-10T08:53:28+00:00 |
0ab0411dca6e222e62d210bc681dbcb476d6fe4c |
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
  <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
  <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
  <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
  <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
  <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
  <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
  <td>Re-preprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
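Each record is thus a flat prompt-completion pair, which maps naturally onto both encoder-decoder and decoder-only training. The sketch below is illustrative only; the separator and end-of-sequence token are assumptions, not the exact BLOOMZ/mT0 preprocessing:

```python
def to_seq2seq_example(sample):
    # Encoder-decoder models (e.g. mT0) see input and target separately.
    return {"source": sample["inputs"], "target": sample["targets"]}

def to_causal_example(sample, eos="</s>"):
    # Decoder-only models (e.g. BLOOMZ) train on the concatenation;
    # loss is typically computed on the target tokens only.
    return sample["inputs"] + " " + sample["targets"] + eos

sample = {
    "inputs": "Question: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
    "targets": "Yes",
}
```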
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. Adding a new language is very simple; you can take [this script adding Russian](https://huggingface.co/datasets/bs-la/xP3ru/blob/main/xp3_ru.py) as an example.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.34|
|bm|107056|0.11|265180|0.34|
|ak|108096|0.11|265071|0.34|
|eu|108112|0.11|269973|0.34|
|ca|110608|0.12|271191|0.34|
|fon|113072|0.12|265063|0.34|
|st|114080|0.12|265063|0.34|
|ki|115040|0.12|265180|0.34|
|tum|116032|0.12|265063|0.34|
|wo|122560|0.13|365063|0.46|
|ln|126304|0.13|365060|0.46|
|as|156256|0.16|265063|0.34|
|or|161472|0.17|265063|0.34|
|kn|165456|0.17|265063|0.34|
|ml|175040|0.18|265864|0.34|
|rn|192992|0.2|318189|0.4|
|nso|229712|0.24|915051|1.16|
|tn|235536|0.25|915054|1.16|
|lg|235936|0.25|915021|1.16|
|rw|249360|0.26|915043|1.16|
|ts|250256|0.26|915044|1.16|
|sn|252496|0.27|865056|1.1|
|xh|254672|0.27|915058|1.16|
|zu|263712|0.28|915061|1.16|
|ny|272128|0.29|915063|1.16|
|ig|325232|0.34|950097|1.2|
|yo|352784|0.37|918416|1.16|
|ne|393680|0.41|315754|0.4|
|pa|523248|0.55|339210|0.43|
|gu|560688|0.59|347499|0.44|
|sw|560896|0.59|1114455|1.41|
|mr|666240|0.7|417269|0.53|
|bn|832720|0.88|428843|0.54|
|ta|924496|0.97|410633|0.52|
|te|1332912|1.4|573364|0.73|
|ur|1918272|2.02|855756|1.08|
|vi|3101408|3.27|1667306|2.11|
|code|4330752|4.56|2707724|3.43|
|hi|4393696|4.63|1543441|1.96|
|zh|4589904|4.83|3560556|4.51|
|id|4606288|4.85|2627392|3.33|
|ar|4677264|4.93|2148955|2.72|
|fr|5546688|5.84|5055942|6.41|
|pt|6129584|6.46|3562772|4.52|
|es|7571808|7.98|5151349|6.53|
|en|37261104|39.25|31495184|39.93|
|total|94941936|100.0|78883588|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI datasets & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. | bigscience/xP3 | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"arxiv:2211.01786",
"region:us"
] | 2022-10-10T09:38:53+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced"], "language": ["ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "xP3", "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"]} | 2023-05-30T14:49:59+00:00 |
58ac54322470b66af0c4c947047cd737fe3bf242 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: KETI-AIR/korquad
* Config: v1.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HANSOLYOO](https://huggingface.co/HANSOLYOO) for evaluating this model. | autoevaluate/autoeval-eval-KETI-AIR__korquad-v1.0-acb0d1-1711659840 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T10:38:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["KETI-AIR/korquad"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/RoBERTa-base-finetuned-squad2-lwt", "metrics": ["angelina-wang/directional_bias_amplification"], "dataset_name": "KETI-AIR/korquad", "dataset_config": "v1.0", "dataset_split": "train", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-10T11:25:13+00:00 |
89b6ab985e756336632c5d97fb0429dc5ef12756 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mrp/bert-finetuned-squad
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-3783aa-1711959846 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T12:23:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "mrp/bert-finetuned-squad", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-10T12:24:10+00:00 |
ece7013ae771554dd462b0e744d20bf601b31fea |
# Dataset Card for OLM May 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 15% of the May 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. | olm/olm-CC-MAIN-2022-21-sampling-ratio-0.14775510204 | [
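The outlier removal recommended above could be sketched as follows. This is an assumption, not part of the card: it treats `last_modified_timestamp` as an ISO-8601 string and drops values outside a believable year range; the actual column type and sensible bounds may differ.

```python
from datetime import datetime

def plausible_timestamp(ts, min_year=1995, max_year=2023):
    """Return True if ts parses as ISO-8601 and falls in a believable year range."""
    try:
        parsed = datetime.fromisoformat(ts)
    except (TypeError, ValueError):
        # Missing or malformed header values are treated as outliers.
        return False
    return min_year <= parsed.year <= max_year

# Hypothetical usage with Hugging Face datasets:
# ds = ds.filter(lambda ex: plausible_timestamp(ex["last_modified_timestamp"]))
```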
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:en",
"pretraining",
"language modelling",
"common crawl",
"web",
"region:us"
] | 2022-10-10T13:33:47+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM May 2022 Common Crawl", "tags": ["pretraining", "language modelling", "common crawl", "web"]} | 2022-11-04T17:13:26+00:00 |
710db3c996b2ed741ba555cbe277a7c27566d0c0 |
# Dataset Card for OLM June/July 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the June/July 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. | olm/olm-CC-MAIN-2022-27-sampling-ratio-0.16142697881 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:en",
"pretraining",
"language modelling",
"common crawl",
"web",
"region:us"
] | 2022-10-10T13:46:41+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM June/July 2022 Common Crawl", "tags": ["pretraining", "language modelling", "common crawl", "web"]} | 2022-11-04T17:13:43+00:00 |
4a6938ce94446f324c6629e7de00ac591710044b |
## Dataset Description

A small subset (~0.1%) of [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, each programming language has 10,000 random samples from the original dataset. The dataset has 2.6GB of text (code).
## Languages
The dataset contains 30 programming languages:
```
"assembly", "batchfile", "c++", "c", "c-sharp", "cmake", "css", "dockerfile", "fortran", "go", "haskell", "html", "java",
"javascript", "julia", "lua", "makefile", "markdown", "perl", "php", "powershell", "python", "ruby", "rust",
"scala", "shell", "sql", "tex", "typescript", "visual-basic"
```
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("bigcode/the-stack-smol")
DatasetDict({
train: Dataset({
features: ['content', 'avg_line_length', 'max_line_length', 'alphanum_fraction', 'licenses', 'repository_name', 'path', 'size', 'lang'],
num_rows: 300000
})
})
```
### How to use it
You can either load the whole dataset as shown above, or load a specific language, such as Python, by specifying its data directory:
```python
load_dataset("bigcode/the-stack-smol", data_dir="data/python")
DatasetDict({
train: Dataset({
features: ['content', 'avg_line_length', 'max_line_length', 'alphanum_fraction', 'licenses', 'repository_name', 'path', 'size', 'lang'],
num_rows: 10000
})
})
```
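The per-language loading pattern above can be wrapped in a small helper. The `data/<language>` folder layout is an assumption generalized from the `data_dir="data/python"` example in the card; the network call is shown commented out, and `streaming=True` avoids downloading all files up front.

```python
# The 30 language names listed in the card.
LANGS = {
    "assembly", "batchfile", "c++", "c", "c-sharp", "cmake", "css",
    "dockerfile", "fortran", "go", "haskell", "html", "java", "javascript",
    "julia", "lua", "makefile", "markdown", "perl", "php", "powershell",
    "python", "ruby", "rust", "scala", "shell", "sql", "tex", "typescript",
    "visual-basic",
}

def subset_dir(lang: str) -> str:
    """Map a language name to its assumed data_dir (e.g. 'data/python')."""
    if lang not in LANGS:
        raise ValueError(f"unknown language: {lang!r}")
    return f"data/{lang}"

# Hypothetical usage (requires network access to the Hub):
# from datasets import load_dataset
# ds = load_dataset("bigcode/the-stack-smol", data_dir=subset_dir("python"),
#                   split="train", streaming=True)
```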
| bigcode/the-stack-smol | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"region:us"
] | 2022-10-10T14:56:44+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["code"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "extra_gated_prompt": "## Terms of Use for The Stack\n\nThe Stack dataset is a collection of 3.1 TB of source code in 30 programming languages. We ask that you read and acknowledge the following points before using the dataset:\n1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.\n2. The Stack is regularly updated to enact validated data removal requests. By clicking on \"Access repository\", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset\u2019s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.\n3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.\n\nBy clicking on \"Access repository\" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.\n ", "extra_gated_fields": {"Email": "text", "I have read the License and agree with its terms": "checkbox"}} | 2023-05-02T09:14:19+00:00 |
106cb46160afb4151c8a0818369135b97016428f | futura555/test_rendering | [
"license:cc-by-nc-2.0",
"region:us"
] | 2022-10-10T15:07:08+00:00 | {"license": "cc-by-nc-2.0"} | 2022-10-10T15:12:33+00:00 |
|
111153981b3e2fcf277938d82dce5fd7b80c6d5f | Arjun1234/Arjun | [
"license:apache-2.0",
"region:us"
] | 2022-10-10T15:11:26+00:00 | {"license": "apache-2.0"} | 2022-10-10T15:11:27+00:00 |
|
08b3038756476d5e56bfb40da882c17647e88253 | Appdemon/profile | [
"license:other",
"region:us"
] | 2022-10-10T16:45:21+00:00 | {"license": "other"} | 2022-10-10T16:46:54+00:00 |
|
062625dc342d3391112ce81e0a1f103f702a5732 |
# Dataset Card for OLM August 2022 Wikipedia
Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from an August 2022 Wikipedia snapshot. | olm/olm-wikipedia-20220701 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"pretraining",
"language modelling",
"wikipedia",
"web",
"region:us"
] | 2022-10-10T17:02:46+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM August 2022 Wikipedia", "tags": ["pretraining", "language modelling", "wikipedia", "web"]} | 2022-10-18T18:18:45+00:00 |
e4f891065dcf0b7d404f3c14d6cbb610ee33e038 |
# Dataset Card for OLM October 2022 Wikipedia
Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from an October 2022 Wikipedia snapshot. | olm/olm-wikipedia-20221001 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"pretraining",
"language modelling",
"wikipedia",
"web",
"region:us"
] | 2022-10-10T17:06:43+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM October 2022 Wikipedia", "tags": ["pretraining", "language modelling", "wikipedia", "web"]} | 2022-10-18T18:18:07+00:00 |
7ef6e591bdd8c2b532a808f9568b42107038aef1 | jajejijuasjuas/alfonso | [
"license:mit",
"region:us"
] | 2022-10-10T17:18:39+00:00 | {"license": "mit"} | 2022-10-10T17:18:39+00:00 |
|
fc5895c785d2eb73f4071a40385344c74714f9d2 |
## Titanic Survival
Data from https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/problem12.html | julien-c/titanic-survival | [
"task_categories:tabular-classification",
"license:cc",
"tabular-classification",
"region:us"
] | 2022-10-10T18:15:48+00:00 | {"license": "cc", "task_categories": ["tabular-classification"], "tags": ["tabular-classification"]} | 2022-10-10T18:20:30+00:00 |
27bcbcb611387e7476310e9e9efa471921ad0807 | muchojarabe/images-mxjr | [
"license:cc",
"region:us"
] | 2022-10-10T19:51:53+00:00 | {"license": "cc"} | 2022-10-10T20:17:07+00:00 |
|
925491e6eadf4687ec121c6e99138729540c0152 | simioterapia/otoniel | [
"region:us"
] | 2022-10-10T20:06:22+00:00 | {} | 2022-10-10T20:07:42+00:00 |
|
e8014e52ee40592a516f3e66ef04393aa9c59e38 | Mintykev/Test-Style | [
"license:cc",
"region:us"
] | 2022-10-10T21:45:00+00:00 | {"license": "cc"} | 2022-10-10T22:33:48+00:00 |
|
8c59624177cfa46af7177482c266633bd83aace7 | bob80333/animefacesv2 | [
"license:unknown",
"region:us"
] | 2022-10-10T23:45:45+00:00 | {"license": "unknown"} | 2022-10-12T23:46:25+00:00 |
|
205f0391fc1f10320ec3c10708eaa27e88db04c7 | RTT1/SentiMix | [
"license:openrail",
"region:us"
] | 2022-10-11T04:41:56+00:00 | {"license": "openrail"} | 2022-10-11T04:43:18+00:00 |