sha (string, 40 chars) | text (string, 0 to 13.4M chars) | id (string, 2 to 117 chars) | tags (list) | created_at (string, 25 chars) | metadata (string, 2 to 31.7M chars) | last_modified (string, 25 chars) |
---|---|---|---|---|---|---|
5b631720ed23ce3367f2326eee0e4663e4274929 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: facebook/wmt19-en-de
* Dataset: wmt19
* Config: de-en
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model. | autoevaluate/autoeval-eval-wmt19-de-en-04c9e1-2082967144 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:37:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["wmt19"], "eval_info": {"task": "translation", "model": "facebook/wmt19-en-de", "metrics": [], "dataset_name": "wmt19", "dataset_config": "de-en", "dataset_split": "validation", "col_mapping": {"source": "translation.en", "target": "translation.de"}}} | 2022-11-14T09:40:55+00:00 |
f32c7211c0ac30a750b3fc382a8a3bf880efd44c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-02148524-0081-4ca2-963d-7e44c726ec75-1311 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:40:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T09:40:38+00:00 |
748f9dc5044e188c60bbe9aadd91b61b9e032c30 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667122 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:42:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:46:08+00:00 |
40c04a6a5193bca2029e35a7a50e945e69a55aea | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-0d414f0c-bce8-44f6-9c83-f356bfaf679d-1412 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:42:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T09:43:19+00:00 |
07aecb1e8d8a44720b52a7c8a6cf1e905ad2acce | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667123 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:54:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:06:28+00:00 |
d828e884d8d6d9c8e33da4b2e66c852a38df67a2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-273d91c9-dc40-4345-bb99-8afa33082ce8-1513 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:54:07+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T09:54:45+00:00 |
f81fd31a5fae77bb6fee6de66ccc0db474c2049f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model. | autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067131 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:57:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:21:26+00:00 |
9d7c55692e372e87fe5a7d291e244bab84ff5a9e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-add2aed1-25d6-4cd6-9646-ff8855a9d1a4-1614 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:59:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T10:00:28+00:00 |
8dd7772dfee471c60cc36decc221d4b5b507091c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model. | autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067132 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:00:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T11:59:20+00:00 |
c55684216bee9eac1c9150f30d9926eb3825b0e6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model. | autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067133 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:15:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:55:00+00:00 |
8334e7723b14d0e56beac90446aa22960af5a0c9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-9a279865-5267-44c3-8be5-f8885af614f3-1715 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:19:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T10:19:38+00:00 |
100d881c253a7d035636b6de0297248093f088df | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model. | autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067134 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:26:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:59:32+00:00 |
75f16c33ac974de771fb2bed632b0b098a1bc5a0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model. | autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067135 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:29:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:49:54+00:00 |
5fe45167e722e5a3ebf13d083c49080e3edd65e8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067145 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:31:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:41:15+00:00 |
1b8248affdb664ba0aa8e9d21ddcc61443f85f62 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067146 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:31:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T11:51:01+00:00 |
7b2a00c90ab7898d94ecca9300987379b1636fa7 |
DIODE dataset: https://diode-dataset.org/
Code to prepare the archive: TBA | sayakpaul/diode-subset-train | [
"license:mit",
"depth-estimation",
"region:us"
] | 2022-11-14T10:36:21+00:00 | {"license": "mit", "tags": ["depth-estimation"]} | 2022-11-15T06:32:49+00:00 |
ebeea77810c9218ff8bde4129a4dec6173b82e13 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067147 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:56:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:01:39+00:00 |
4f2420692f3798d8a47133bed141fcc78fe491ee | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067148 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:56:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T11:44:55+00:00 |
297593418be2802a97602d976e5e0838c6271235 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067149 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:01:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:01:52+00:00 |
a831b9c804f632f7ca8edcbebd7c4196efb84365 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167150 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:01:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:20:24+00:00 |
3cb1fc8a34aaee20357c43a99310bf991caa9aeb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167151 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:04:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:31:57+00:00 |
5ac34ce5a4b774a7e2411dba7d1eee9e7dae6ea1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167152 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:05:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:16:37+00:00 |
dc7a264f0737e94de2d896f96eb5a0cdbdd475f9 |
# Dataset Card for BnL Newspapers 1841-1879
## Table of Contents
- [Dataset Card for BnL Newspapers 1841-1879](#dataset-card-for-bnl-newspapers-1841-1879)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Size of dataset](#size-of-dataset)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://data.bnl.lu](https://data.bnl.lu)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** opendata at bnl.etat.lu
### Dataset Summary
630,709 articles from historical newspapers (1841-1879), along with metadata and the full text:
- 21 newspaper titles
- 24,415 newspaper issues
- 99,957 scanned pages
- Transcribed using a variety of OCR engines and corrected using [https://github.com/natliblux/nautilusocr](https://github.com/natliblux/nautilusocr) (95% threshold)
- Public domain, CC0 (see copyright notice)
The newspapers used are:
- Der Arbeiter (1878)
- L'Arlequin (1848-1848)
- L'Avenir (1868-1871)
- Courrier du Grand-Duché de Luxembourg (1844-1868)
- Cäcilia (1863-1871)
- Diekircher Wochenblatt (1841-1848)
- Le Gratis luxembourgeois (1857-1858)
- L'Indépendance luxembourgeoise (1871-1879)
- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1879)
- La Gazette du Grand-Duché de Luxembourg (1878)
- Luxemburger Anzeiger (1856)
- Luxemburger Bauernzeitung (1857)
- Luxemburger Volks-Freund (1869-1876)
- Luxemburger Wort (1848-1879)
- Luxemburger Zeitung (1844-1845)
- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)
- L'Union (1860-1871)
- Das Vaterland (1869-1870)
- Der Volksfreund (1848-1849)
- Der Wächter an der Sauer (1849-1869)
- D'Wäschfra (1868-1879)
### Supported Tasks and Leaderboards
### Languages
Primarily German, French, and Luxembourgish; smaller amounts of Dutch, Latin, and English also occur.
## Dataset Structure
A zipped JSONL file (one JSON object per article).
### Data Instances
### Data Fields
- `identifier` : unique and persistent identifier using ARK for the Article.
- `date` : publishing date of the document, e.g. "1848-12-15".
- `metsType` : set to "newspaper".
- `newpaperTitle` : title of the newspaper. It is transcribed as in the masthead of the individual issue and can thus change.
- `paperID` : local identifier for the newspaper title. It remains the same, even for short-term title changes.
- `publisher` : publisher of the document e.g. "Verl. der St-Paulus-Druckerei".
- `title` : main title of the article, section, advertisement, etc.
- `text` : full text of the entire article, section, advertisement etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines.
- `creator` : author of the article, section, advertisement etc. Most articles do not have an associated author.
- `type` : type of the exported data e.g. ARTICLE, SECTION, ADVERTISEMENT, ...
## Dataset Creation
The dataset was created by the National Library of Luxembourg from the output of its newspaper digitisation program.
### Curation Rationale
The selection of newspapers represents the current state of digitisation of the Luxembourg legal deposit collection of newspapers in the public domain, i.e. all newspapers printed in Luxembourg up to and including 1879.
### Source Data
Printed historical newspapers.
#### Initial Data Collection and Normalization
The data was created through digitisation. The full digitisation specifications are available at [https://data.bnl.lu/data/historical-newspapers/](https://data.bnl.lu/data/historical-newspapers/)
### Annotations
#### Annotation process
During the digitisation process, newspaper pages were semi-automatically zoned into articles. This was done by external suppliers to the library according to the digitisation specifications.
#### Who are the annotators?
Staff at the external suppliers.
### Personal and Sensitive Information
The dataset contains only material that was published in newspapers. Since all articles date from 1879 or earlier, no living person is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
The biases in the text reflect those of the newspaper editors and journalists at the time of publication.
### Other Known Limitations
The OCR transcription is not perfect. It is estimated that the quality is 95% or better.
## Additional Information
### Size of dataset
Approximately 1.6 GB uncompressed (about 1 GB to download).
### Dataset Curators
This dataset is curated by the National Library of Luxembourg (opendata at bnl.etat.lu).
### Licensing Information
Creative Commons Zero (CC0 1.0) Public Domain Dedication.
### Citation Information
```
@misc{bnl_newspapers,
  title={Historical Newspapers},
  url={https://data.bnl.lu/data/historical-newspapers/},
  author={Bibliothèque nationale du Luxembourg}
}
```
### Contributions
Thanks to [@ymaurer](https://github.com/ymaurer) for adding this dataset. | biglam/bnl_newspapers1841-1879 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:fr",
"language:lb",
"language:nl",
"language:la",
"language:en",
"license:cc0-1.0",
"newspapers",
"1800-1900",
"lam",
"region:us"
] | 2022-11-14T11:37:16+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["de", "fr", "lb", "nl", "la", "en"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "BnL Newspapers 1841-1879", "tags": ["newspapers", "1800-1900", "lam"], "dataset_info": {"features": [{"name": "publisher", "dtype": "string"}, {"name": "paperID", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "newpaperTitle", "dtype": "string"}, {"name": "date", "dtype": "timestamp[ns]"}, {"name": "metsType", "dtype": "string"}, {"name": "identifier", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "creator", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1605420260, "num_examples": 630709}], "download_size": 1027493424, "dataset_size": 1605420260}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-30T13:23:29+00:00 |
2abeb0ec1afe29f11c420554dde89a03f2037936 | # Dataset Card for "ai4lam-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/ai4lam-demo | [
"region:us"
] | 2022-11-14T11:46:07+00:00 | {"dataset_info": {"features": [{"name": "record_id", "dtype": "string"}, {"name": "date", "dtype": "timestamp[ns]"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int64"}, {"name": "mean_wc_ocr", "dtype": "float64"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "null"}, {"name": "Language_4", "dtype": "null"}, {"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 5300866, "num_examples": 4148}], "download_size": 2857751, "dataset_size": 5300866}} | 2022-11-14T11:46:11+00:00 |
45bc9f6cda53745ebdd539d6ed810b66c42165d9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167153 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:52:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:47:02+00:00 |
9bba4fd751b4566df79da6793336964beb507e00 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167154 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:58:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:47:25+00:00 |
a0ad81e432b6319cdd49a8f28564f1464692f23f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367155 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:07:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:55:17+00:00 |
25466151aac0b74dd009692a1391eb52ef75fc79 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367156 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:09:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:59:52+00:00 |
6ebfa982030a4cb00f92ecefd2f655de5f376384 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367157 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:09:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:50:40+00:00 |
38275c46bc34226523d3e9ce88c94fa0890c5330 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367158 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:24:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:55:53+00:00 |
81f2880e9d800f47d4a1f5c428ee5509857e41be | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367159 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:38:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:08:09+00:00 |
5764cf7f85429f1e92d690faeb7e3e91dc320599 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467160 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:38:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:58:21+00:00 |
2c473b1bc8367bec6f322ba6e13886b1ff720e1d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467161 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:46:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:50:03+00:00 |
0e970267d885a714a51cae1fb47a23e9843c8725 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467163 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:54:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:42:54+00:00 |
e0501c693987c44b4bc07a2a623409dfd75d10f8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467162 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:54:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:00:19+00:00 |
b4fea7a9cae9818a6888ffb5d6bee7aea261c2e9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467164 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:57:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:53:52+00:00 |
6425559679455cda6d175477a09f29f79150da39 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567166 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:01:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T16:15:40+00:00 |
091d92c0662b9f94efe5c878bec7d0d9fc82044f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567165 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:02:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:30:09+00:00 |
5defd285a8b45b469e45a55cf9f44e2eb674145d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567167 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:06:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:19:13+00:00 |
85144f20752c11d2de5a8b9c177c8dd74a725f7f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567168 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:16:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:10:44+00:00 |
78e0c338882c9e66c4da23e79bfb234a4a47455b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-2bc32ae8-3118-4561-b552-cc3a89a73cd5-1816 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:35:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T13:36:07+00:00 |
0440fc0e9b596f8fd685fc1d8ae401a1edb88586 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767170 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:49:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:34:17+00:00 |
83fbfa7765df01f861324eefeb0dc9d1368fb173 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567169 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:49:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:39:30+00:00 |
b84205d203d9ffd71582d446d70b061da198fac4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767172 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:01:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:39:34+00:00 |
ec147a7d3e74acb9e6a3566a6ee23518b00a459b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767171 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:01:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:49:38+00:00 |
85725852540d745dc6930f1f530e3c09869101fa | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767173 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:06:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:34:00+00:00 |
586f5f8c735bf3b169a6ae7825ee7839bf79fb5d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767174 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:07:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:39:48+00:00 |
28060bdc8ecb737cc611bb83ee8b7106f026b9ec | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-b7ccdeae-8bc5-40c1-85ae-3aef82a8e55e-1917 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:12:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T14:13:17+00:00 |
e203d13863cccb78717734192f9ea0c77e34d9bc | pat-jj/nyt10_corpus | [
"license:mit",
"region:us"
] | 2022-11-14T14:14:45+00:00 | {"license": "mit"} | 2022-11-14T17:01:23+00:00 |
|
1ac7bd886bb4366607691e745f086352a6ed6786 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867175 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:18:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:08:03+00:00 |
55440ebb06a099e48327b072cf6ad10f03d92246 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867176 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:27:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T16:17:07+00:00 |
6542676a795bf74578ab344bc6b8c6eae5271515 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867177 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:27:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:08:14+00:00 |
f6c320ac42466cba4f5cc6ca2d683da10b2c5115 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867178 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:38:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:08:51+00:00 |
3ef35fbcd887afa52e2356bdbe3fc93343fed5fb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867179 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:42:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:10:30+00:00 |
3d5d1c7d1719bded4b7a1f96ce77b589a17e801d | # Telegram News (Farsi - Persian)
## Updated 24 OCT 2022
| File | Total news | Timespan |
|------|-----------:|----------|
| bbc.pickle | 139,275 | 2015-10-13 - 2022-10-24 |
| fars.pickle | 241,346 | 2015-09-26 - 2022-10-24 |
| farsivoa.pickle | 134,023 | 2015-10-07 - 2022-10-24 |
| iranint.pickle | 137,459 | 2017-05-16 - 2022-10-24 |
| irna.pickle | 178,395 | 2016-07-05 - 2022-10-24 |
| khabar.pickle | 384,922 | 2016-09-22 - 2022-10-24 |
| Tabnak.pickle | 102,122 | 2017-05-22 - 2022-10-24 |
### Helper functions
```py
import re

def getTxt(msg):
    """Extract and normalize the text of a Telegram message."""
    txt = ''
    if msg.text:
        txt += msg.text + ' '
    if msg.caption:
        txt += msg.caption + ' '
    if msg.web_page is not None:
        try:
            txt += msg.web_page.title + ' '
            txt += msg.web_page.description
        except (AttributeError, TypeError):
            pass
    # drop zero-width non-joiners, newlines, emoji and non-breaking spaces
    txt = txt.lower().replace(u'\u200c', '').replace('\n', '').replace('📸', '').replace('\xa0', '')
    txt = re.sub(r'http\S+', '', txt)    # strip URLs
    txt = re.sub(r'[a-z]', '', txt)      # strip Latin letters
    txt = re.sub(r'[^\w\s\d]', '', txt)  # strip punctuation
    return txt.strip()
```
```py
def getDocs(m):
    """Return a document dict for messages with meaningful text."""
    txt = getTxt(m)
    if len(txt) > 10:
        return {'text': txt, 'date': m.date}
    else:
        return ['']  # placeholder for messages that are too short
```
```py
def getDate(news):
    return news.date
```
### Read the Files
```py
import pickle

with open('bbc.pickle', 'rb') as handle:
    news = pickle.load(handle)
newsText = list(map(getTxt, news))
newsDate = list(map(getDate, news))
```
| qhnprof/Telegram_News | [
"license:afl-3.0",
"region:us"
] | 2022-11-14T14:44:45+00:00 | {"license": "afl-3.0"} | 2022-11-14T15:13:47+00:00 |
a08f776e773971ecb42f1efd8a47b9dc1bdd9c36 | # Dataset Card for "legaltokenized1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vegeta/legaltokenized1024 | [
"region:us"
] | 2022-11-14T16:28:38+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 27016370584, "num_examples": 5268403}, {"name": "validation", "num_bytes": 2947948744, "num_examples": 574873}], "download_size": 7022414209, "dataset_size": 29964319328}} | 2022-11-17T12:33:35+00:00 |
69007e4698d83021157b11fdadfd17924e40c1a7 | ppietro/catrinas | [
"license:afl-3.0",
"region:us"
] | 2022-11-14T16:37:20+00:00 | {"license": "afl-3.0"} | 2022-11-14T17:18:37+00:00 |
|
e862f017cec09267fa4645afa9d010fb1e99408e | # Dataset Card for "mapsnlsloaded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/mapsnlsloaded | [
"region:us"
] | 2022-11-14T17:06:16+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no building or railspace", "1": "railspace", "2": "building", "3": "railspace and non railspace building"}}}}, {"name": "map_sheet", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 323743326.376, "num_examples": 12404}, {"name": "train", "num_bytes": 957911247.448, "num_examples": 37212}, {"name": "validation", "num_bytes": 316304202.708, "num_examples": 12404}], "download_size": 1599110547, "dataset_size": 1597958776.5319998}} | 2022-11-14T17:09:41+00:00 |
be2e86928be852df4c47cec9708430c143999c33 | # Dataset Card for "legaltokenized256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vegeta/legaltokenized256 | [
"region:us"
] | 2022-11-14T17:30:42+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 27564311544, "num_examples": 21400863}, {"name": "validation", "num_bytes": 3008263104, "num_examples": 2335608}], "download_size": 7092165713, "dataset_size": 30572574648}} | 2022-11-17T11:22:51+00:00 |
a99a936bfa227ce73e1175cad73095a1d285ba1e | # Dataset Card for "wmt19-valid-only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only | [
"region:us"
] | 2022-11-14T18:52:45+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["zh", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 1107522, "num_examples": 3981}], "download_size": 719471, "dataset_size": 1107522}} | 2022-11-14T18:56:06+00:00 |
0d6ed75757797a41d42f318992a2d3ded0dad095 | Your mother
nobody is going to see this probably
I saw | datasciencemmw/current-data | [
"license:openrail",
"doi:10.57967/hf/0155",
"region:us"
] | 2022-11-14T18:57:23+00:00 | {"license": "openrail"} | 2022-12-01T19:08:36+00:00 |
cc6682dcd28b7eae76c184b331e590e5bc0202f3 | # Dataset Card for "wmt19-valid-only-de_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only-de_en | [
"region:us"
] | 2022-11-14T18:59:13+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 757649, "num_examples": 2998}], "download_size": 491141, "dataset_size": 757649}} | 2022-11-14T18:59:17+00:00 |
48654674506bc442da75cc6ddcf20d51a4f17f34 | # Dataset Card for "wmt19-valid-only-zh_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only-zh_en | [
"region:us"
] | 2022-11-14T18:59:22+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["zh", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 1107522, "num_examples": 3981}], "download_size": 719471, "dataset_size": 1107522}} | 2022-11-14T18:59:26+00:00 |
1ef6156f6beccdf1200eee90b7d4afb70da3a8b6 | # Dataset Card for "wmt19-valid-only-gu_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only-gu_en | [
"region:us"
] | 2022-11-14T18:59:33+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["gu", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 774621, "num_examples": 1998}], "download_size": 367288, "dataset_size": 774621}} | 2022-11-14T18:59:37+00:00 |
d572eaed743d99e7331c8bd550224d9792b51096 | # Dataset Card for "wmt19-valid-only-ru_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only-ru_en | [
"region:us"
] | 2022-11-14T19:00:56+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["ru", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 1085596, "num_examples": 3000}], "download_size": 605574, "dataset_size": 1085596}} | 2022-11-14T19:01:01+00:00 |
770740869211d4ea18ca852c37ed65df706d488f |


Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/).
Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
# Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
The goal of this dataset was to be used for MLM and TSDAE
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
If you use this work, please cite:
```bibtex
@inproceedings{MeloSemantic,
author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~ a}o},
title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a},
}
```
| stjiris/portuguese-legal-sentences-v0 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-11-14T21:28:26+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2023-01-08T14:23:33+00:00 |
8475f5ae11e7a4c351adec79c237856b8168d875 | Обработан из 54 гигабайт данных. Удалены имена, не используются ответы больше 100 символов. | Den4ikAI/mailruQA-big | [
"license:mit",
"region:us"
] | 2022-11-14T23:23:53+00:00 | {"license": "mit"} | 2022-11-18T04:08:50+00:00 |
0576ca123410ea4832ecd70c3bf5fa9ebeccba1e | Vextwix/Yes | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-14T23:27:55+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-14T23:27:55+00:00 |
|
7bbf37d128454d3416d14bcd89e5464e321cb08b | Den4ikAI/mailruQA-small | [
"license:mit",
"region:us"
] | 2022-11-15T00:02:20+00:00 | {"license": "mit"} | 2022-11-18T04:09:53+00:00 |
|
4083a8159b907eaa2c1bb87b1891e14fdf0ad5cf |
## Introduction
Chinese-C4 is a clean Chinese internet dataset based on Common Crawl. The dataset is 46.29GB and has undergone multiple cleaning strategies, including Chinese filtering, heuristic cleaning based on punctuation, line-based hashing for deduplication, and repetition removal.
The dataset is open source and free for commercial use. You are welcome to use the data and the provided cleaning strategies, and to contribute your own cleaning strategies.
You can find the cleaning script for the dataset on GitHub [c4-dataset-script](https://github.com/shjwudp/c4-dataset-script).
| shjwudp/chinese-c4 | [
"language:zh",
"license:cc-by-4.0",
"region:us"
] | 2022-11-15T01:27:26+00:00 | {"language": ["zh"], "license": "cc-by-4.0"} | 2023-06-20T10:40:06+00:00 |
fe756af71bb1e8ff0b5f0d39a18e71823b34b154 | andy-fang/jf_portraits | [
"license:mit",
"region:us"
] | 2022-11-15T02:23:41+00:00 | {"license": "mit"} | 2022-11-15T05:35:41+00:00 |
|
9c8044256421ee3c0aec44c60567bf9c8ac4d7db | JM138/Olivia | [
"region:us"
] | 2022-11-15T03:56:53+00:00 | {} | 2022-11-15T04:46:41+00:00 |
|
cb5aaebecdb4bcebb2d9adbbc3714698c9daa219 | andy-fang/andy_portraits | [
"license:mit",
"region:us"
] | 2022-11-15T05:57:11+00:00 | {"license": "mit"} | 2022-11-15T10:41:59+00:00 |
|
09e9b472e787deb7909bc3fafd651587d4708786 | chatuur/fashion-complete-the-look | [
"license:openrail",
"region:us"
] | 2022-11-15T06:29:58+00:00 | {"license": "openrail"} | 2022-11-15T06:29:58+00:00 |
|
5e6d5faaa6b7a2ac63884eeb124ce30279509b1d | zlgao/test | [
"region:us"
] | 2022-11-15T06:52:54+00:00 | {} | 2022-11-15T06:57:06+00:00 |
|
faa996ad4c4efb058881b75c84b5cc8106376d51 |
annotations_creators:
- machine-generated
language:
- en
language_creators: []
license:
- wtfpl
multilinguality:
- monolingual
pretty_name: OCR-IDL
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- pretraining
- documents
- idl
- ''
task_categories: []
task_ids: []
| rubentito/OCR-IDL | [
"license:wtfpl",
"region:us"
] | 2022-11-15T08:14:01+00:00 | {"license": "wtfpl"} | 2022-11-30T08:59:49+00:00 |
ea77161978d40cecf6371091b6bbbf7ed70b8930 |
# Dataset Card for SI-NLI
### Dataset Summary
SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral". We created the dataset using sentences that appear in the Slovenian reference corpus [ccKres](http://hdl.handle.net/11356/1034). Annotators were tasked with modifying the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets, containing 4,392, 547, and 998 examples, respectively.
Only the hypothesis and premise are given in the test set (i.e. no annotations) since SI-NLI is integrated into the Slovene evaluation framework [SloBENCH](https://slobench.cjvt.si/). If you use the dataset to train your models, please consider submitting the test set predictions to SloBENCH to get the evaluation score and see how it compares to others.
If you have access to the private test set (with labels), you can load it instead of the public one via `datasets.load_dataset("cjvt/si_nli", "private", data_dir="<...>")`.
### Supported Tasks and Leaderboards
Natural language inference.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'pair_id': 'P0',
'premise': 'Vendar se je anglikanska večina v grofijah na severu otoka (Ulster) na plebiscitu odločila, da ostane v okviru Velike Britanije.',
'hypothesis': 'A na glasovanju o priključitvi ozemlja k Severni Irski so se prebivalci ulsterskih grofij, pretežno anglikanske veroizpovedi, izrekli o obstanku pod okriljem VB.',
'annotation1': 'entailment',
'annotator1_id': 'annotator_C',
'annotation2': 'entailment',
'annotator2_id': 'annotator_A',
'annotation3': '',
'annotator3_id': '',
'annotation_final': 'entailment',
'label': 'entailment'
}
```
### Data Fields
- `pair_id`: string identifier of the pair (`""` in the test set),
- `premise`: premise sentence,
- `hypothesis`: hypothesis sentence,
- `annotation1`: the first annotation (`""` if not available),
- `annotator1_id`: anonymized identifier of the first annotator (`""` if not available),
- `annotation2`: the second annotation (`""` if not available),
- `annotator2_id`: anonymized identifier of the second annotator (`""` if not available),
- `annotation3`: the third annotation (`""` if not available),
- `annotator3_id`: anonymized identifier of the third annotator (`""` if not available),
- `annotation_final`: aggregated annotation where it could be unanimously determined (`""` if not available or a unanimous agreement could not be reached),
- `label`: aggregated annotation: either same as `annotation_final` (in case of agreement), same as `annotation1` (in case of disagreement), or `""` (in the test set). **Note that examples with disagreement are all put in the training set**. This aggregation is just the most simple possibility and the user may instead do something more advanced based on the individual annotations (e.g., learning with disagreement).
\* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have the individual annotations and the annotator IDs.
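The simple aggregation described above for the `label` field can be sketched as follows (an illustrative helper, not the script used to build the dataset):

```python
def aggregate_label(annotation1, annotation2, annotation3):
    """Aggregate individual annotations into a single label.

    Mirrors the simple scheme described above: a unanimous vote among
    the available annotations wins; on disagreement, fall back to the
    first annotation. Empty strings mark missing annotations.
    """
    votes = [a for a in (annotation1, annotation2, annotation3) if a]
    if not votes:
        return ""
    # unanimous agreement -> that label; disagreement -> annotation1
    return votes[0] if len(set(votes)) == 1 else annotation1
```

As noted, users may prefer something more advanced than this fallback, e.g. keeping all three annotations and learning with disagreement.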
## Additional Information
### Dataset Curators
Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{sinli,
title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1707},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. | cjvt/si_nli | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:sl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-15T08:41:29+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found", "expert-generated"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "natural-language-inference"], "pretty_name": "Slovene natural language inference dataset", "tags": [], "dataset_info": [{"config_name": "default", "features": [{"name": "pair_id", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "annotation1", "dtype": "string"}, {"name": "annotator1_id", "dtype": "string"}, {"name": "annotation2", "dtype": "string"}, {"name": "annotator2_id", "dtype": "string"}, {"name": "annotation3", "dtype": "string"}, {"name": "annotator3_id", "dtype": "string"}, {"name": "annotation_final", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1352635, "num_examples": 4392}, {"name": "validation", "num_bytes": 164561, "num_examples": 547}, {"name": "test", "num_bytes": 246518, "num_examples": 998}], "download_size": 410093, "dataset_size": 1763714}, {"config_name": "public", "features": [{"name": "pair_id", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "annotation1", "dtype": "string"}, {"name": "annotator1_id", "dtype": "string"}, {"name": "annotation2", "dtype": "string"}, {"name": "annotator2_id", "dtype": "string"}, {"name": "annotation3", "dtype": "string"}, {"name": "annotator3_id", "dtype": "string"}, {"name": "annotation_final", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1352591, "num_examples": 4392}, {"name": "validation", "num_bytes": 164517, "num_examples": 547}, {"name": "test", "num_bytes": 246474, "num_examples": 998}], "download_size": 
410093, "dataset_size": 1763582}, {"config_name": "private", "features": [{"name": "pair_id", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "annotation1", "dtype": "string"}, {"name": "annotator1_id", "dtype": "string"}, {"name": "annotation2", "dtype": "string"}, {"name": "annotator2_id", "dtype": "string"}, {"name": "annotation3", "dtype": "string"}, {"name": "annotator3_id", "dtype": "string"}, {"name": "annotation_final", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "validation"}, {"name": "test"}], "download_size": 0, "dataset_size": 0}]} | 2023-04-04T07:51:01+00:00 |
e9df778e49a78115fd77c91f9c64c5d0f925ac2d | # Dataset Card for "test_push_two_configs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push_two_configs | [
"region:us"
] | 2022-11-15T10:35:42+00:00 | {"dataset_info": [{"config_name": "v1", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46, "num_examples": 3}, {"name": "test", "num_bytes": 32, "num_examples": 2}], "download_size": 1674, "dataset_size": 78}, {"config_name": "v2", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60, "num_examples": 4}, {"name": "test", "num_bytes": 18, "num_examples": 1}], "download_size": 1671, "dataset_size": 78}]} | 2022-11-21T13:11:15+00:00 |
d6d7882743b7c7275ac1830a2a37ba17c7d7114a |
# Gold standards and outputs
## Dataset Description
- MapReader’s GitHub: https://github.com/Living-with-machines/MapReader
- MapReader paper: https://dl.acm.org/doi/10.1145/3557919.3565812
- Zenodo link for gold standards and outputs: https://doi.org/10.5281/zenodo.7147906
- Contacts: Katherine McDonough, The Alan Turing Institute, kmcdonough at turing.ac.uk; Kasra Hosseini, The Alan Turing Institute, k.hosseinizad at gmail.com
### Dataset Summary
Here we share gold standard annotations and outputs from early experiments using MapReader. MapReader creates datasets for humanities research using historical map scans and metadata as inputs.
Using maps provided by the National Library of Scotland, these annotations and outputs reflect labeling tasks relevant to historical research on the [Living with Machines](https://livingwithmachines.ac.uk/) project.
Data shared here is derived from maps printed in nineteenth-century Britain by the Ordnance Survey, Britain's state mapping agency. These maps cover England, Wales, and Scotland from 1888 to 1913.
## Directory structure
The gold standards and outputs are stored on [Zenodo](https://doi.org/10.5281/zenodo.7147906). It contains the following directories/files:
```
MapReader_Data_SIGSPATIAL_2022
├── README
├── annotations
│ ├── maps
│ │ ├── map_100942121.png
│ │ ├── ...
│ │ └── map_99383316.png
│ ├── slice_meters_100_100
│ │ ├── test
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ ├── train
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ └── val
│ │ ├── patch-...PNG
│ │ ├── ...
│ │ └── patch-...PNG
│ ├── test.csv
│ ├── train.csv
│ └── valid.csv
└── outputs
├── label_01_03
│ ├── pred_01_03_all.csv
│ ├── pred_01_03_keep_01_0250.csv
│ ├── pred_01_03_keep_05_0500.csv
│ └── pred_01_03_keep_10_1000.csv
├── label_02
│ ├── pred_02_all.csv
│ ├── pred_02_keep_01_0250.csv
│ ├── pred_02_keep_05_0500.csv
│ └── pred_02_keep_10_1000.csv
├── patches_all.csv
├── percentage
│ └── pred_02_keep_1_250_01_03_keep_1_250_percentage.csv
└── resources
├── StopsGB4paper.csv
└── six_inch4paper.json
```
## annotations
The `annotations` directory is as follows:
```
├── annotations
│ ├── maps
│ │ ├── map_100942121.png
│ │ ├── ...
│ │ └── map_99383316.png
│ ├── slice_meters_100_100
│ │ ├── test
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ ├── train
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ └── val
│ │ ├── patch-...PNG
│ │ ├── ...
│ │ └── patch-...PNG
│ ├── test.csv
│ ├── train.csv
│ └── valid.csv
```
### annotations/train.csv, valid.csv and test.csv
In the `MapReader_Data_SIGSPATIAL_2022/annotations` directory, there are three CSV files, namely `train.csv`, `valid.csv` and `test.csv`. These files have two columns:
```
image_id,label
slice_meters_100_100/train/patch-1390-3892-1529-4031-#map_101590193.png#.PNG,0
slice_meters_100_100/train/patch-1716-3960-1848-4092-#map_101439245.png#.PNG,0
...
```
in which:
- `image_id`: path to each labelled patch. For example in `slice_meters_100_100/train/patch-1390-3892-1529-4031-#map_101590193.png#.PNG`:
- `slice_meters_100_100/train`: directory where the patch is stored. (in this example, it is a patch used for training)
  - `patch-1390-3892-1529-4031-#map_101590193.png#.PNG` has two parts: `patch-1390-3892-1529-4031` is the patch ID, and the patch itself is extracted from the map sheet `map_101590193.png`.
- `label`: label assigned to each patch by an annotator.
- Labels: 0: no [building or railspace]; 1: railspace; 2: building; and 3: railspace and [non railspace] building.
### annotations/slice_meters_100_100
Patches used for training, validation, and test in PNG format.
```
├── annotations
│ ├── slice_meters_100_100
│ │ ├── test
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ ├── train
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ └── val
│ │ ├── patch-...PNG
│ │ ├── ...
│ │ └── patch-...PNG
```
### annotations/maps
Map sheets retrieved from the National Library of Scotland via webservers. These maps were later sliced into patches which can be found in `annotations/slice_meters_100_100`.
```
├── annotations
│ ├── maps
│ │ ├── map_100942121.png
│ │ ├── ...
│ │ └── map_99383316.png
```
## outputs
The `outputs` directory is as follows:
```
└── outputs
├── label_01_03
│ ├── pred_01_03_all.csv
│ ├── pred_01_03_keep_01_0250.csv
│ ├── pred_01_03_keep_05_0500.csv
│ └── pred_01_03_keep_10_1000.csv
├── label_02
│ ├── pred_02_all.csv
│ ├── pred_02_keep_01_0250.csv
│ ├── pred_02_keep_05_0500.csv
│ └── pred_02_keep_10_1000.csv
├── patches_all.csv
├── percentage
│ └── pred_02_keep_1_250_01_03_keep_1_250_percentage.csv
└── resources
├── StopsGB4paper.csv
└── six_inch4paper.json
```
### outputs/label_01_03
Starting with:
```
└── outputs
├── label_01_03
│ ├── pred_01_03_all.csv
│ ├── pred_01_03_keep_01_0250.csv
│ ├── pred_01_03_keep_05_0500.csv
│ └── pred_01_03_keep_10_1000.csv
```
The file `pred_01_03_all.csv` contains the following columns:
```
,center_lon,center_lat,pred,conf,mean_pixel_RGB,std_pixel_RGB,mean_pixel_A,image_id,parent_id,pub_date,url,x,y,z,opening_year_quicks,closing_year_quicks,dist2quicks
0,-0.4011055106547341,52.61260776720805,1,0.9898980855941772,0.8450341820716858,0.1668068021535873,1.0,patch-3014-0-3151-137-#map_100890251.png#.PNG,map_100890251.png,1902,https://maps.nls.uk/view/100890251,3880925.8529841416,-27169.29919979412,5044483.051365171,1867,1929,1121.9150481268305
1,-0.399645312864389,52.61260776720805,1,0.9999995231628418,0.823089599609375,0.1925655305385589,1.0,patch-3151-0-3288-137-#map_100890251.png#.PNG,map_100890251.png,1902,https://maps.nls.uk/view/100890251,3880926.544140446,-27070.392789791513,5044483.051365171,1867,1929,1113.0714735200893
...
```
- **center_lon**: longitude of the patch center
- **center_lat**: latitude of the patch center
- **pred**: predicted label for the patch
- **conf**: model confidence
- **mean_pixel_RGB**: mean pixel intensities, using all three channels
- **std_pixel_RGB**: standard deviations of pixel intensities, using all three channels
- **mean_pixel_A**: mean pixel intensities of alpha channel
- **image_id**: patch ID
- **parent_id**: ID of the map sheet that the patch belongs to
- **pub_date**: publication date of the map sheet that the patch belongs to
- **url**: URL of the map sheet that the patch belongs to
- **x, y, z**: to compute distances (using k-d tree)
- **opening_year_quicks**: Date when the railway station first opened
- **closing_year_quicks**: Date when the railway station last closed,
- **dist2quicks**: distance to the closest StopsGB in meters.
NB: See `outputs/resources` below for description of the StopsGB (railway station) data and links to related publications.
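The `x, y, z` values are consistent with WGS84 Earth-centred, Earth-fixed (ECEF) coordinates of the patch centres at zero height, which is what makes straight-line k-d tree distances meaningful. A sketch under that assumption (not the project's own code):

```python
import math

def latlon_to_ecef(lat_deg, lon_deg):
    """Convert geodetic latitude/longitude (height 0) to WGS84 ECEF metres."""
    a = 6378137.0             # WGS84 semi-major axis
    e2 = 6.69437999014e-3     # WGS84 first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)  # prime vertical radius
    x = n * math.cos(lat) * math.cos(lon)
    y = n * math.cos(lat) * math.sin(lon)
    z = n * (1.0 - e2) * math.sin(lat)
    return x, y, z
```

Applied to the first sample row's `center_lat`/`center_lon`, this reproduces its `x, y, z` columns to within a few metres.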
---
The other files in `outputs/label_01_03` have the same columns as `pred_01_03_all.csv` (described above). The difference is:
- `pred_01_03_all.csv`: all patches predicted as labels 1 (railspace) or 3 (railspace and [non railspace] building).
- `pred_01_03_keep_01_0250.csv`: similar to `pred_01_03_all.csv` except that we removed those patches that had no other neighboring patches with the same label within a radius of 250 meters. Note the `01` and `0250` in the name: `01` means one neighboring patch and `0250` means 250 meters.
- `pred_01_03_keep_05_0500.csv`: similar to `pred_01_03_all.csv` except that we removed those patches that had less than five neighboring patches with the same label within a radius of 500 meters.
- `pred_01_03_keep_10_1000.csv`: similar to `pred_01_03_all.csv` except that we removed those patches that had less than ten neighboring patches with the same label within a radius of 1000 meters.
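The neighborhood filtering behind these `keep_*` files can be sketched with a brute-force distance check. A k-d tree over the `x, y, z` columns would do this efficiently at scale; the helper below (our naming) trades speed for clarity:

```python
import math

def filter_isolated(points, min_neighbors, radius):
    """Keep points (x, y) that have at least `min_neighbors` OTHER
    points within `radius` (same units as the coordinates)."""
    kept = []
    for i, (xi, yi) in enumerate(points):
        n = sum(
            1
            for j, (xj, yj) in enumerate(points)
            if i != j and math.hypot(xi - xj, yi - yj) <= radius
        )
        if n >= min_neighbors:
            kept.append((xi, yi))
    return kept
```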
### outputs/label_02
Next, these files:
```
├── label_02
│ ├── pred_02_all.csv
│ ├── pred_02_keep_01_0250.csv
│ ├── pred_02_keep_05_0500.csv
│ └── pred_02_keep_10_1000.csv
```
Are the same as the files described above for `label_01_03` except for label 02 (i.e., building).
### outputs/patches_all.csv
And last:
```
└── outputs
├── patches_all.csv
```
The file `patches_all.csv` has the following columns:
⚠️ this file contains the results for 30,490,411 patches used in the MapReader paper.
```
center_lat,center_lon,pred
52.61260776720805,-0.4332298620423274,0
52.61260776720805,-0.4317696642519822,0
...
```
in which:
- **center_lon**: longitude of the patch center
- **center_lat**: latitude of the patch center
- **pred**: predicted label for the patch
### outputs/percentage
We have added one file in `outputs/percentage`:
```
└── outputs
├── percentage
│ └── pred_02_keep_1_250_01_03_keep_1_250_percentage.csv
```
This file has the following columns:
```
,center_lon,center_lat,pred,conf,mean_pixel_RGB,std_pixel_RGB,mean_pixel_A,image_id,parent_id,pub_date,url,x,y,z,dist2rail,dist2quicks,dist2quicks_km,dist2rail_km,dist2rail_minus_station,dist2quicks_km_quantized,dist2rail_km_quantized,dist2rail_minus_station_quantized,perc_neigh_rails,perc_neigh_builds,harmonic_mean_rail_build
0,-0.4040259062354244,52.61260776720805,2,0.9999010562896729,0.8095282316207886,0.1955385357141494,1.0,patch-2740-0-2877-137-#map_100890251.png#.PNG,map_100890251.png,1902,https://maps.nls.uk/view/100890251,3880924.4631095687,-27367.11196679585,5044483.051365171,197.8176497186437,1164.8640633870857,1.1648640633870857,0.1978176497186437,0.9670464136684418,1.0,0.0,0.5,7.198443579766536,4.669260700389105,5.664349046373668
1,-0.4054861040257695,52.61171342293056,2,0.9999876022338868,0.8741853833198547,0.1160899400711059,1.0,patch-2603-137-2740-274-#map_100890251.png#.PNG,map_100890251.png,1902,https://maps.nls.uk/view/100890251,3881002.836728637,-27466.57793328472,5044422.621073416,296.73252022623865,1290.9640259717814,1.2909640259717814,0.2967325202262386,0.9942315057455428,1.0,0.0,0.5,7.050092764378478,4.452690166975881,5.45813633371237
...
```
in which:
- **center_lon**: longitude of the patch center
- **center_lat**: latitude of the patch center
- **pred**: predicted label for the patch
- **conf**: model confidence
- **mean_pixel_RGB**: mean pixel intensities, using all three channels
- **std_pixel_RGB**: standard deviations of pixel intensities, using all three channels
- **mean_pixel_A**: mean pixel intensities of alpha channel
- **image_id**: patch ID
- **parent_id**: ID of the map sheet that the patch belongs to
- **pub_date**: publication date of the map sheet that the patch belongs to
- **url**: URL of the map sheet that the patch belongs to
- **x, y, z**: to compute distances (using k-d tree)
- **dist2rail**: distance to the closest railspace patch (i.e., the patch that is classified as 1: railspace or 3: railspace and [non railspace] building)
- **dist2quicks**: distance to the closest StopsGB station in meters.
- **dist2quicks_km**: distance to the closest StopsGB station in km.
- **dist2rail_km**: similar to **dist2rail** except in km.
- **dist2rail_minus_station**: `|dist2rail_km - dist2quicks_km|` (absolute difference)
- **dist2quicks_km_quantized**: discrete version of **dist2quicks_km**, we used these intervals: [0. , 0.5), [0.5, 1.), [1., 1.5), ... , [4.5, 5.) and [5., inf).
- **dist2rail_km_quantized**: discrete version of **dist2rail_km**, we used these intervals: [0. , 0.5), [0.5, 1.), [1., 1.5), ... , [4.5, 5.) and [5., inf).
- **dist2rail_minus_station_quantized**: discrete version of **dist2rail_minus_station**, we used these intervals: [0. , 0.5), [0.5, 1.), [1., 1.5), ... , [4.5, 5.) and [5., inf).
- **perc_neigh_rails**: percentage of neighboring patches predicted as rail (labels 01 and 03).
- **perc_neigh_builds**: percentage of neighboring patches predicted as building (label 02).
- **harmonic_mean_rail_build**: harmonic mean of **perc_neigh_rails** and **perc_neigh_builds**.
These additional `percentage` attributes shed light on the relationship between 'railspace' and stations, something we explore in further Living with Machines research.
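The quantized and harmonic-mean columns can be reproduced with two small helpers; a sketch, where the 0.5 km step and 5 km cap follow the intervals listed above:

```python
import math

def quantize_km(d_km, step=0.5, cap=5.0):
    """Map a distance to the intervals [0, 0.5), [0.5, 1.0), ..., [5.0, inf)."""
    return min(math.floor(d_km / step) * step, cap)

def harmonic_mean(a, b):
    """Harmonic mean of two neighborhood percentages."""
    return 2 * a * b / (a + b)
```

For the first sample row, `quantize_km` maps a `dist2quicks_km` of 1.1648... to 1.0, and `harmonic_mean` of its two percentage columns gives the `harmonic_mean_rail_build` value shown.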
### outputs/resources
Finally, we have the following files:
```
└── outputs
└── resources
├── StopsGB4paper.csv
└── six_inch4paper.json
```
- `StopsGB4paper.csv`: this is a trimmed down version of StopsGB, a dataset documenting passenger railway stations in Great Britain (see [this link](https://bl.iro.bl.uk/concern/datasets/0abea1b1-2a43-4422-ba84-39b354c8bb09?locale=en) for the complete dataset). We filtered the stations as follows:
- Keep only stations for which the "ghost_entry" and "cross_ref" columns are "False". (These two fields help remove records in the StopsGB dataset that are not actually stations, but relics of the original publication formatting.)
- Keep only stations whose "Opening" value is not "unknown".
- Keep only stations that were operational in the year the map sheet was surveyed (i.e., "opening_year_quicks" <= survey_date_of_map_sheet <= "closing_year_quicks").
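As a rough sketch, the filtering criteria above could be applied as follows. The field names follow the descriptions in this card, but the sample records and survey year are invented for illustration and are not taken from the actual StopsGB file.

```python
# Illustrative StopsGB-style records; values are hypothetical.
stations = [
    {"name": "A", "ghost_entry": "False", "cross_ref": "False",
     "opening": "1845", "opening_year_quicks": 1845, "closing_year_quicks": 1960},
    {"name": "B", "ghost_entry": "True", "cross_ref": "False",
     "opening": "1850", "opening_year_quicks": 1850, "closing_year_quicks": 1900},
    {"name": "C", "ghost_entry": "False", "cross_ref": "False",
     "opening": "unknown", "opening_year_quicks": None, "closing_year_quicks": None},
]

survey_year = 1895  # survey date of a hypothetical map sheet

def keep(rec):
    # Apply the three filtering criteria described above, in order.
    return (
        rec["ghost_entry"] == "False"
        and rec["cross_ref"] == "False"
        and rec["opening"] != "unknown"
        and rec["opening_year_quicks"] <= survey_year <= rec["closing_year_quicks"]
    )

filtered = [rec["name"] for rec in stations if keep(rec)]
```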
You can learn more about the StopsGB dataset and how it was created from this paper:
```
Mariona Coll Ardanuy, Kaspar Beelen, Jon Lawrence, Katherine McDonough, Federico Nanni, Joshua Rhodes, Giorgia Tolfo, and Daniel C.S. Wilson. "Station to Station: Linking and Enriching Historical British Railway Data." In Computational Humanities Research (CHR2021). 2021.
```
```bibtex
@inproceedings{lwm-station-to-station-2021,
title = "Station to Station: Linking and Enriching Historical British Railway Data",
author = "Coll Ardanuy, Mariona and
Beelen, Kaspar and
Lawrence, Jon and
McDonough, Katherine and
Nanni, Federico and
Rhodes, Joshua and
Tolfo, Giorgia and
Wilson, Daniel CS",
booktitle = "Computational Humanities Research",
year = "2021",
}
```
- `six_inch4paper.json`: similar to [metadata_OS_Six_Inch_GB_WFS_light.json](https://github.com/Living-with-machines/MapReader/blob/main/mapreader/persistent_data/metadata_OS_Six_Inch_GB_WFS_light.json) on MapReader's GitHub with some minor changes.
## Dataset Creation
### Curation Rationale
These annotations of map patches are part of a research project to develop humanistic methods for structuring visual information on digitized historical maps. Dividing thousands of nineteenth-century map sheets into 100m x 100m patches and labeling those patches with historically-meaningful concepts diverges from traditional methods for creating data from maps, both in terms of scale (the number of maps being examined), and of type (raster-style patches vs. pixel-level vector data). For more on the rationale for this approach, see the following paper:
```
Kasra Hosseini, Katherine McDonough, Daniel van Strien, Olivia Vane, Daniel C S Wilson, Maps of a Nation? The Digitized Ordnance Survey for New Historical Research, *Journal of Victorian Culture*, Volume 26, Issue 2, April 2021, Pages 284–299.
```
```bibtex
@article{hosseini_maps_2021,
title = {Maps of a Nation? The Digitized Ordnance Survey for New Historical Research},
volume = {26},
rights = {All rights reserved},
issn = {1355-5502},
url = {https://doi.org/10.1093/jvcult/vcab009},
doi = {10.1093/jvcult/vcab009},
shorttitle = {Maps of a Nation?},
pages = {284--299},
number = {2},
journaltitle = {Journal of Victorian Culture},
author = {Hosseini, Kasra and {McDonough}, Katherine and van Strien, Daniel and Vane, Olivia and Wilson, Daniel C S},
urldate = {2021-05-19},
date = {2021-04-01},
}
```
### Source Data
#### Initial Data Access
Data was accessed via the National Library of Scotland's Historical Maps API: https://maps.nls.uk/projects/subscription-api/
The data shared here is derived from the six-inch-to-one-mile sheets printed between 1888 and 1913: https://maps.nls.uk/projects/subscription-api/#gb6inch
### Annotations and Outputs
The annotations and output datasets collected here are related to experiments to identify the 'footprint' of rail infrastructure in the UK, a concept we call 'railspace'. We also created a dataset to identify buildings on the maps.
#### Annotation process
The custom annotation interface built into MapReader is designed specifically to assist researchers in labeling patches relevant to concepts of interest to their research questions.
Our **guidelines** for the data shared here were:
- for any non-null label (railspace, building, or railspace + building), if a patch contains any visual signal for that label (e.g. 'railspace'), it should be assigned the relevant label. For example, if it is possible for an annotator to see a railway track passing through the corner of a patch, that patch is labeled as 'railspace'.
- the context around the patch should not be used as an aid in extreme cases where it is nearly impossible to determine whether a patch contains a non-null label
- however, the patch context shown in the annotation interface can be used to quickly distinguish between different content types, particularly where the contiguity of a type across patches is useful in determining what label to assign
- for 'railspace': use this label for any type of rail infrastructure as determined by expert labelers. This includes, for example, single-track mining railroads; larger double-track passenger routes; sidings and embankments; etc. It excludes urban trams.
- for 'building': use this label for any size building
- for 'building + railspace': use this label for patches combining these two types of content
Because 'none' (i.e., null) patches made up the vast majority of patches in the total dataset from these map sheets, we ordered patches to annotate based on their pixel intensity. This allowed us to focus first on patches containing more visual content printed on the map sheet, and later to move more quickly through the patches that captured parts of the map with little to no printed features.
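A minimal sketch of that ordering strategy, under the assumption (illustrative, not stated in this card) that lower mean pixel intensity indicates more printed content on these light-background map scans; the patch records are invented.

```python
# Hypothetical patches carrying the mean_pixel_RGB attribute described earlier;
# on dark-ink, light-paper maps, a lower mean intensity suggests more content.
patches = [
    {"image_id": "p1", "mean_pixel_RGB": 0.95},  # mostly blank
    {"image_id": "p2", "mean_pixel_RGB": 0.40},  # dense printed content
    {"image_id": "p3", "mean_pixel_RGB": 0.70},
]

# Annotate content-rich (darker) patches first, near-blank patches last.
annotation_order = sorted(patches, key=lambda p: p["mean_pixel_RGB"])
ids = [p["image_id"] for p in annotation_order]
```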
#### Who are the annotators?
Data shared here was annotated by Kasra Hosseini and Katherine McDonough.
Members of the Living with Machines research team contributed early annotations during the development of MapReader: Ruth Ahnert, Kaspar Beelen, Mariona Coll-Ardanuy, Emma Griffin, Tim Hobson, Jon Lawrence, Giorgia Tolfo, Daniel van Strien, Olivia Vane, and Daniel C.S. Wilson.
## Credits and re-use terms
### MapReader outputs
The files shared here (other than `resources`) are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-nc-sa/4.0/) (CC-BY-NC-SA) licence.
If you are interested in working with OS maps used to create these results, please also note the re-use terms of the original map images and metadata detailed below.
### Digitized maps
MapReader can retrieve maps from NLS (National Library of Scotland) via webservers. For all the digitized maps (retrieved or locally stored), please note the re-use terms:
Use of the digitised maps for commercial purposes is currently restricted by contract. Use of these digitised maps for non-commercial purposes is permitted under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-nc-sa/4.0/) (CC-BY-NC-SA) licence. Please refer to https://maps.nls.uk/copyright.html#exceptions-os for details on copyright and re-use license.
### Map metadata
We have provided some metadata files on MapReader's GitHub page (https://github.com/Living-with-machines/MapReader/tree/main/mapreader/persistent_data). For all these files, please note the re-use terms:
Use of the digitised maps for commercial purposes is currently restricted by contract. Use of these digitised maps for non-commercial purposes is permitted under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-nc-sa/4.0/) (CC-BY-NC-SA) licence. Please refer to https://maps.nls.uk/copyright.html#exceptions-os for details on copyright and re-use license.
## Acknowledgements
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
Living with Machines, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library and the Universities of Cambridge, East Anglia, Exeter, and Queen Mary University of London. | Livingwithmachines/MapReader_Data_SIGSPATIAL_2022 | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-4.0",
"maps",
"historical",
"National Library of Scotland",
"heritage",
"humanities",
"lam",
"region:us"
] | 2022-11-15T11:16:13+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "MapReader Data SIGSPATIAL 2022", "tags": ["maps", "historical", "National Library of Scotland", "heritage", "humanities", "lam"]} | 2023-05-11T21:38:38+00:00 |
a5bdece60ed026e3cfaa376db5a92ffda482a083 | softcatala/catalan-youtube-speech | [
"license:mit",
"region:us"
] | 2022-11-15T12:01:44+00:00 | {"license": "mit"} | 2023-02-21T22:28:00+00:00 |
|
f91ecb5b914361b69950c84c24431a18cb0f454e | # Dataset Card for "malayalam-news-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | egorulz/malayalam-news-ds | [
"region:us"
] | 2022-11-15T12:09:49+00:00 | {"dataset_info": {"features": [{"name": "news", "dtype": "string"}, {"name": "news_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2014.76925, "num_examples": 9}, {"name": "validation", "num_bytes": 447.7265, "num_examples": 2}], "download_size": 16029, "dataset_size": 2462.49575}} | 2022-11-15T12:10:08+00:00 |
46a7f8dea1cced34a234fd3ccf7a515767d6c59e | yuansui/GitTables | [
"license:cc-by-nc-nd-3.0",
"region:us"
] | 2022-11-15T12:14:49+00:00 | {"license": "cc-by-nc-nd-3.0"} | 2022-11-22T04:46:06+00:00 |
|
6ccc1ad25586820fcf173802a6ab0adbcd5a15f2 | SuryaGrandhi/DLClassProjectData | [
"license:unknown",
"region:us"
] | 2022-11-15T12:27:47+00:00 | {"license": "unknown"} | 2022-11-15T12:34:30+00:00 |
|
06e62c148917d560fdbd9acbedd6a9b82fb1e3e3 | mrbesher/tr-paraphrase-tatoeba | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-15T13:15:03+00:00 | {"license": "cc-by-4.0"} | 2022-11-15T13:15:35+00:00 |
|
8ae4c67c8b64e8691d39f259e11ec2ed8af288d7 | mrbesher/tr-paraphrase-opensubtitles2018 | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-15T13:18:54+00:00 | {"license": "cc-by-4.0"} | 2022-11-15T13:33:12+00:00 |
|
85646b678c1b6f9e09c151f13f33e849d1975432 | # artistas_brasileiros
| fredguth/artistas_brasileiros | [
"region:us"
] | 2022-11-15T13:28:34+00:00 | {} | 2022-11-15T14:52:47+00:00 |
82b989fefb71215c2f1658dd94807413ddd31b16 | Shengtao/recipe | [
"license:mit",
"region:us"
] | 2022-11-15T13:44:47+00:00 | {"license": "mit"} | 2022-11-15T13:45:41+00:00 |
|
a8f6079011fa4c989ca0c8d02132e718d17f9731 | mrbesher/tr-paraphrase-ted2013 | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-15T14:04:30+00:00 | {"license": "cc-by-4.0"} | 2022-11-15T14:05:22+00:00 |
|
68b7f6608e203b50bbd0a0098a5f47e777b21f3f | # Dataset Card for "RickAndMorty-HorizontalMirror-blip-captions" | Norod78/RickAndMorty-HorizontalMirror-blip-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-15T14:31:28+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "pretty_name": "Rick and Morty, Horizontal Mirror, BLIP captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 161499799.0, "num_examples": 530}], "download_size": 161488169, "dataset_size": 161499799.0}, "tags": []} | 2022-11-15T14:38:40+00:00 |
a01f186ccc6708648d90ac0f8c3ef0eb63723030 | # Dataset Card for IPC classification of French patents
## Dataset Description
- **Homepage:**
- **Repository:** [IPC Classification of French Patents](https://github.com/ZoeYou/Patent-Classification-2022)
- **Paper:** [Patent Classification using Extreme Multi-label Learning: A Case Study of French Patents](https://hal.science/hal-03850405v1)
- **Point of Contact:** [You Zuo]([email protected])
### Dataset Summary
INPI-CLS is a French Patents corpus extracted from the internal database of the INPI (National Institute of Industrial Property of France). It was initially designed for the patent classification task and consists of approximately 296k patent texts (including title, abstract, claims, and description) published between 2002 and 2021. Each patent in the corpus is annotated with labels ranging from sections to the IPC subgroup levels.
### Languages
French
### Domain
Patents (intellectual property).
### Social Impact of Dataset
The purpose of this dataset is to help develop models that enable the classification of French patents in the [International Patent Classification (IPC)](https://www.wipo.int/classifications/ipc/en/) system standard.
Thanks to the high integrity of the data, the INPI-CLS corpus can be utilized for various analytical studies concerning French language patents. Moreover, it serves as a valuable resource as a scientific corpus that comprehensively documents the technological inventions of the country.
### Citation Information
```
@inproceedings{zuo:hal-03850405,
TITLE = {{Patent Classification using Extreme Multi-label Learning: A Case Study of French Patents}},
AUTHOR = {Zuo, You and Mouzoun, Houda and Ghamri Doudane, Samir and Gerdes, Kim and Sagot, Beno{\^i}t},
URL = {https://hal.archives-ouvertes.fr/hal-03850405},
BOOKTITLE = {{SIGIR 2022 - PatentSemTech workshop}},
ADDRESS = {Madrid, Spain},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {IPC prediction ; Clustering and Classification ; Extreme Multi-label Learning ; French ; Patent},
PDF = {https://hal.archives-ouvertes.fr/hal-03850405/file/PatentSemTech_2022___extended_abstract.pdf},
HAL_ID = {hal-03850405},
HAL_VERSION = {v1},
}
```
| ZoeYou/INPI-CLS | [
"multilinguality:monolingual",
"language:fr",
"license:cc-by-nc-sa-3.0",
"region:us"
] | 2022-11-15T14:43:49+00:00 | {"language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification, multi-label-classification"]} | 2023-06-09T11:27:09+00:00 |
a7cf0a730f4fb59896485c6f3ac611abf78b48e6 | mrbesher/tr-paraphrase-opensubtitles2018-raw | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-15T15:01:37+00:00 | {"license": "cc-by-4.0"} | 2022-11-15T15:05:10+00:00 |
|
767499f8a83254d490dde4f9b959021a034159d8 | mrbesher/tr-paraphrase-tatoeba-raw | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-15T15:05:35+00:00 | {"license": "cc-by-4.0"} | 2022-11-15T15:06:26+00:00 |
|
6a4ddc90763d04ffc292f0d3606ff8b6485455bc | kanemitsukun/facade_of_kyoto | [
"license:openrail",
"region:us"
] | 2022-11-15T15:35:45+00:00 | {"license": "openrail"} | 2022-11-15T15:38:05+00:00 |
|
df7c609529686d3e3c0d0b83f00a80345ae412bf | # Dataset Card for "ai4lam-demo2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/ai4lam-demo2 | [
"region:us"
] | 2022-11-15T16:45:10+00:00 | {"dataset_info": {"features": [{"name": "metadata_text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Low_Quality", "1": "High_Quality"}}}}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29309108, "num_examples": 100821}], "download_size": 16023375, "dataset_size": 29309108}} | 2022-11-15T16:45:24+00:00 |
70ba1db10cdac67c212cd433963132b879e295f2 | # Dataset Card for "natural_language"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Priyash/natural_language | [
"region:us"
] | 2022-11-15T17:00:08+00:00 | {"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "Length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4742.1, "num_examples": 9}, {"name": "validation", "num_bytes": 1154, "num_examples": 1}], "download_size": 0, "dataset_size": 5896.1}} | 2022-11-18T17:33:35+00:00 |
2f071cebd6a3b6b48a2e76c5b4b6c1bde49d95ee | # Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | transformer-001/github-issues | [
"region:us"
] | 2022-11-15T17:46:15+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, 
{"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "creator", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, 
{"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "open_issues", "dtype": "int64"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "due_on", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 18908112, "num_examples": 
5000}], "download_size": 5112946, "dataset_size": 18908112}} | 2022-11-15T17:46:29+00:00 |
8af4f48eb15d518ab21318ebe75ea21a2bd6423a | Zanter/JustSomeModels | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-15T17:52:21+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-26T15:39:30+00:00 |
|
bb1fff2db16bd92b2b658a9d37a720c720d8844b | # Dataset Card for "testtyt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | helliun/testtyt | [
"region:us"
] | 2022-11-15T18:00:38+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "categories", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "segments", "list": [{"name": "end", "dtype": "float64"}, {"name": "start", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2138, "num_examples": 1}], "download_size": 11227, "dataset_size": 2138}} | 2022-11-15T18:00:42+00:00 |
43d716dc64f9ede73658c2a57c66de81ca7afe95 | # Dataset Card for "test_whisper_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/test_whisper_test | [
"region:us"
] | 2022-11-15T20:13:37+00:00 | {"dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32551, "num_examples": 8}], "download_size": 39136, "dataset_size": 32551}} | 2022-11-15T21:57:02+00:00 |
9fa2b05e61f41ef0537b9c2ba7ec3f49e6e1fa8c | saraimarte/flowerVase | [
"license:other",
"region:us"
] | 2022-11-15T20:15:28+00:00 | {"license": "other"} | 2022-11-15T20:19:00+00:00 |