sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
b8242c22de52b5fc4c3a34015a1e709be98dced5
|
# m1_qualitative_analysis_ref_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **nested NER task** using the independent NER layers approach [M1].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M1
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
  * Level 1 : [nlpso/m1_ind_layers_ref_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_io_level_1)
* Level 2 : [nlpso/m1_ind_layers_ref_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_io_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
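The table above can be made concrete with a small sketch of how IO tags look across the two nesting levels. The entry, token spans, and helper below are invented for illustration and are not taken from the dataset itself:

```python
# Hypothetical 19th-century directory entry; tokens and spans are assumptions
# made for this example, not actual dataset content.
tokens = ["Dupont", "serrurier", "rue", "du", "Bac", "12"]

def to_io(spans, n):
    """Build IO tags: 'I-<TYPE>' inside an entity span, 'O' elsewhere."""
    tags = ["O"] * n
    for start, end, label in spans:
        for i in range(start, end):
            tags[i] = "I-" + label
    return tags

# Level 1: flat entities (PER, ACT, SPAT).
level1 = to_io([(0, 1, "PER"), (1, 2, "ACT"), (2, 6, "SPAT")], len(tokens))
# Level 2: entities nested inside the level-1 SPAT span (LOC, CARDINAL).
level2 = to_io([(2, 5, "LOC"), (5, 6, "CARDINAL")], len(tokens))
```

Since IO tagging has no begin marker, two adjacent entities of the same type cannot be told apart; the IOB2 variants of these datasets add a `B-` prefix for that purpose.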
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_qualitative_analysis_ref_cmbert_io")
```
|
nlpso/m1_qualitative_analysis_ref_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:39:33+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:39:49+00:00
|
3a4f7d43934b8d74b6f6878c2d5604c05b0a459a
|
# m1_qualitative_analysis_ref_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the independent NER layers approach [M1].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M1
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
  * Level 1 : [nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1)
* Level 2 : [nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_qualitative_analysis_ref_ptrn_cmbert_io")
```
|
nlpso/m1_qualitative_analysis_ref_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:39:50+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:40:06+00:00
|
ae448c48cdfac4c1949a1dbb543cfb363d2a73d8
|
# m1_qualitative_analysis_ref_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **nested NER task** using the independent NER layers approach [M1].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M1
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
  * Level 1 : [nlpso/m1_ind_layers_ref_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_iob2_level_1)
* Level 2 : [nlpso/m1_ind_layers_ref_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_iob2_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
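IOB2 differs from IO only in marking the first token of each entity with a `B-` prefix. A minimal converter, written here purely for illustration (it is not part of the dataset tooling), makes the difference visible:

```python
def io_to_iob2(tags):
    """Rewrite IO tags as IOB2: the first token of each entity gets 'B-'.
    Note: IO input cannot distinguish two adjacent entities of the same type,
    so they are merged; that ambiguity is exactly what IOB2 avoids."""
    out, prev_type = [], None
    for tag in tags:
        if tag == "O":
            out.append("O")
            prev_type = None
        else:
            etype = tag[2:]  # strip the 'I-' prefix
            out.append(("I-" if etype == prev_type else "B-") + etype)
            prev_type = etype
    return out

print(io_to_iob2(["I-PER", "O", "I-SPAT", "I-SPAT"]))
# ['B-PER', 'O', 'B-SPAT', 'I-SPAT']
```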
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_qualitative_analysis_ref_cmbert_iob2")
```
|
nlpso/m1_qualitative_analysis_ref_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:40:07+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:40:23+00:00
|
96b6f5cdbb0d50f6fce5ec62931991a3bb8229d9
|
# m1_qualitative_analysis_ref_ptrn_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the independent NER layers approach [M1].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M1
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
  * Level 1 : [nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1)
* Level 2 : [nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_qualitative_analysis_ref_ptrn_cmbert_iob2")
```
|
nlpso/m1_qualitative_analysis_ref_ptrn_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:40:24+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:40:39+00:00
|
634879beed80de49f8613b032fa6c1021efaa88f
|
# m1_qualitative_analysis_ocr_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **nested NER task** using the independent NER layers approach [M1].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M1
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
  * Level 1 : [nlpso/m1_ind_layers_ocr_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_io_level_1)
* Level 2 : [nlpso/m1_ind_layers_ocr_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_io_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_qualitative_analysis_ocr_cmbert_io")
```
|
nlpso/m1_qualitative_analysis_ocr_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:40:40+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:40:55+00:00
|
91e2ad118b0750e554ebc5487e1c3da9e1189baf
|
# m1_qualitative_analysis_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the independent NER layers approach [M1].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M1
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
  * Level 1 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1)
* Level 2 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_qualitative_analysis_ocr_ptrn_cmbert_io")
```
|
nlpso/m1_qualitative_analysis_ocr_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:40:56+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:41:12+00:00
|
dbeb75b9165681c3fa63386601037a745ce457a3
|
# m1_qualitative_analysis_ocr_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **nested NER task** using the independent NER layers approach [M1].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M1
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
  * Level 1 : [nlpso/m1_ind_layers_ocr_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_iob2_level_1)
* Level 2 : [nlpso/m1_ind_layers_ocr_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_iob2_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_qualitative_analysis_ocr_cmbert_iob2")
```
|
nlpso/m1_qualitative_analysis_ocr_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:41:13+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:41:29+00:00
|
cc7c461a4cb9e82523a7bd30cd39cde808888d58
|
# m1_qualitative_analysis_ocr_ptrn_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the independent NER layers approach [M1].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M1
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
  * Level 1 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1)
* Level 2 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_qualitative_analysis_ocr_ptrn_cmbert_iob2")
```
|
nlpso/m1_qualitative_analysis_ocr_ptrn_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:41:30+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:41:46+00:00
|
df353323836f3b83f298fb995a25bbeddd1f4f03
|
# m2m3_qualitative_analysis_ref_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches.
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ref_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ref_cmbert_io)
* M3 : [nlpso/m3_hierarchical_ner_ref_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ref_cmbert_io)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
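The M2 (joint labelling) model linked above flattens the two nesting levels into a single tag set, so one classifier predicts both levels at once. A minimal sketch of that flattening, where the `+` separator and the sample tags are assumptions made for illustration (the actual dataset may join labels differently):

```python
def flatten_labels(level1, level2):
    """Combine per-token tags from both nesting levels into one joint label
    per token (M2-style flattening). The '+' separator is illustrative only."""
    return [t1 + "+" + t2 for t1, t2 in zip(level1, level2)]

joint = flatten_labels(["I-SPAT", "I-SPAT"], ["I-LOC", "I-CARDINAL"])
print(joint)  # ['I-SPAT+I-LOC', 'I-SPAT+I-CARDINAL']
```

The M3 model name suggests a hierarchical setup instead, where the levels are predicted in sequence rather than jointly; see the linked model cards for details.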
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ref_cmbert_io")
```
|
nlpso/m2m3_qualitative_analysis_ref_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:47:49+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:48:34+00:00
|
5fc1505cefe0790c9e79db7d4add708cb84386d6
|
# m2m3_qualitative_analysis_ref_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches.
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ref_ptrn_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ref_ptrn_cmbert_io)
* M3 : [nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_io)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ref_ptrn_cmbert_io")
```
|
nlpso/m2m3_qualitative_analysis_ref_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:48:35+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:48:50+00:00
|
30ab760f00769993e3d37221d3122aafc04ad7cf
|
# m2m3_qualitative_analysis_ref_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches.
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ref_cmbert_iob2](https://huggingface.co/nlpso/m2_joint_label_ref_cmbert_iob2)
* M3 : [nlpso/m3_hierarchical_ner_ref_cmbert_iob2](https://huggingface.co/nlpso/m3_hierarchical_ner_ref_cmbert_iob2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ref_cmbert_iob2")
```
|
nlpso/m2m3_qualitative_analysis_ref_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:48:51+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:49:07+00:00
|
dc5b968dbe50835edbc57f868d7be570b6142fc6
|
# m2m3_qualitative_analysis_ref_ptrn_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches.
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ref_ptrn_cmbert_iob2](https://huggingface.co/nlpso/m2_joint_label_ref_ptrn_cmbert_iob2)
* M3 : [nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_iob2](https://huggingface.co/nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_iob2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ref_ptrn_cmbert_iob2")
```
|
nlpso/m2m3_qualitative_analysis_ref_ptrn_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:49:07+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:49:23+00:00
|
fd74e0fefac7e27918d1c637bae1cc7e36e6f8ce
|
# m2m3_qualitative_analysis_ocr_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches.
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ocr_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ocr_cmbert_io)
* M3 : [nlpso/m3_hierarchical_ner_ocr_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_cmbert_io)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ocr_cmbert_io")
```
|
nlpso/m2m3_qualitative_analysis_ocr_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:49:24+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:49:39+00:00
|
4bffdcddf1197886586cd74c18b511289862a56b
|
# m2m3_qualitative_analysis_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches.
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ocr_ptrn_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ocr_ptrn_cmbert_io)
* M3 : [nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ocr_ptrn_cmbert_io")
```
|
nlpso/m2m3_qualitative_analysis_ocr_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:49:40+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:49:55+00:00
|
a92b2a115698d84023b4bd0ca7182fb605fed755
|
# m2m3_qualitative_analysis_ocr_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches.
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ocr_cmbert_iob2](https://huggingface.co/nlpso/m2_joint_label_ocr_cmbert_iob2)
* M3 : [nlpso/m3_hierarchical_ner_ocr_cmbert_iob2](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_cmbert_iob2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ocr_cmbert_iob2")
```
|
nlpso/m2m3_qualitative_analysis_ocr_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:49:56+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:50:12+00:00
|
8f49525c05763df65d75d72212a2d0d4d3a03314
|
# m2m3_qualitative_analysis_ocr_ptrn_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches.
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ocr_ptrn_cmbert_iob2](https://huggingface.co/nlpso/m2_joint_label_ocr_ptrn_cmbert_iob2)
* M3 : [nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_iob2](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_iob2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ocr_ptrn_cmbert_iob2")
```
|
nlpso/m2m3_qualitative_analysis_ocr_ptrn_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:50:13+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:50:28+00:00
|
0897f3d8e29249496d2f8dff473d75fa54e419ca
|
suolyer/ocnli
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-22T08:54:19+00:00
|
{"license": "apache-2.0"}
|
2023-02-22T11:10:11+00:00
|
|
45df06fb0b31edc882d7c8d34389261f995e5208
|
# Opus100
- Source: https://huggingface.co/datasets/opus100
- Num examples:
- 1,000,000 (train)
- 2,000 (validation)
  - 2,000 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/opus100_envi")
```
- Format for Translation task
```python
import random  # needed for the random choice of instruction language below

def preprocess(
    sample,
    instruction_key="### Instruction:",
    input_key="Input:",
    response_key="<|endofprompt|>",
    end_key="<|endoftext|>",
    en2vi=True,
):
    if en2vi:
        # Pick the instruction language at random (English or Vietnamese).
        if random.random() < 0.5:
            instruction = "Translate the following sentences from English into Vietnamese."
        else:
            instruction = "Dịch các câu sau từ tiếng Anh sang tiếng Việt."
        input = sample['en'].strip()
        response = sample['vi'].strip()
    else:
        if random.random() < 0.5:
            instruction = "Translate the following sentences from Vietnamese into English."
        else:
            instruction = "Dịch các câu sau từ tiếng Việt sang tiếng Anh."
        input = sample['vi'].strip()
        response = sample['en'].strip()
    return {'text': """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
{instruction_key}
{instruction}
{input_key}
{input}
{response_key}
{response}
{end_key}""".format(
        instruction_key=instruction_key,
        instruction=instruction,
        input_key=input_key,
        input=input,
        response_key=response_key,
        response=response,
        end_key=end_key,
    )}

# Example output:
"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Dịch các câu sau từ tiếng Anh sang tiếng Việt.
Input:
Toast falls jelly-side down, children hit tables and people get hurt.
<|endofprompt|>
Bánh mì nướng rơi đông lại, trẻ con va vào bàn và con người bị thương.
<|endoftext|>
"""
```
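The prompt layout produced by `preprocess` can be sanity-checked without downloading the dataset. The self-contained sketch below rebuilds the same template for one invented en→vi sample (the sample text is an assumption, not dataset content):

```python
sample = {"en": "Hello.", "vi": "Xin chào."}

# Assemble the same prompt layout as preprocess(): intro line, instruction
# marker, instruction, input marker, input, end-of-prompt marker, response,
# end-of-text marker, one field per line.
text = "\n".join([
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.",
    "### Instruction:",
    "Translate the following sentences from English into Vietnamese.",
    "Input:",
    sample["en"].strip(),
    "<|endofprompt|>",
    sample["vi"].strip(),
    "<|endoftext|>",
])
print(text.splitlines()[1])  # ### Instruction:
```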
|
vietgpt/opus100_envi
|
[
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:vi",
"LM",
"region:us"
] |
2023-02-22T09:11:25+00:00
|
{"language": ["en", "vi"], "size_categories": ["1M<n<10M"], "task_categories": ["translation"], "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "vi", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 192744, "num_examples": 2000}, {"name": "train", "num_bytes": 82614470, "num_examples": 1000000}, {"name": "validation", "num_bytes": 194721, "num_examples": 2000}], "download_size": 59201490, "dataset_size": 83001935}, "tags": ["LM"]}
|
2023-07-03T16:56:58+00:00
|
08cf34f8842c8f387a404ba9e4ff821fb4a51454
|
# Dataset Card for "nlp.4.summarization"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/nlp.4.summarization
|
[
"region:us"
] |
2023-02-22T09:22:48+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 53358155, "num_examples": 20000}, {"name": "validation", "num_bytes": 273883, "num_examples": 100}, {"name": "test", "num_bytes": 249614, "num_examples": 100}], "download_size": 0, "dataset_size": 53881652}}
|
2023-02-22T11:39:04+00:00
|
f18290546dc92d362ae1ab2e3d6913c52c2a6e63
|
# Dataset Card for "nlp.5.classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/nlp.5.classification
|
[
"region:us"
] |
2023-02-22T09:40:10+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "unacceptable", "1": "acceptable"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 621111, "num_examples": 8551}, {"name": "validation", "num_bytes": 77558, "num_examples": 1043}, {"name": "test", "num_bytes": 78463, "num_examples": 1063}], "download_size": 0, "dataset_size": 777132}}
|
2023-02-22T11:25:27+00:00
|
265d3e9b31616e70504ff1aa9c7a3e9020e6c73e
|
# Dataset Card for "nlp.7.translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/nlp.7.translation
|
[
"region:us"
] |
2023-02-22T10:02:01+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 8389390, "num_examples": 20000}, {"name": "validation", "num_bytes": 84758, "num_examples": 200}, {"name": "test", "num_bytes": 84885, "num_examples": 200}], "download_size": 0, "dataset_size": 8559033}}
|
2023-02-22T12:04:31+00:00
|
dc314582f7d53a97a8c4a6be0dd97e3b85cccec4
|
# Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Liumx/sv_corpora_parliament_processed
|
[
"region:us"
] |
2023-02-22T10:21:50+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 292351437, "num_examples": 1892723}], "download_size": 0, "dataset_size": 292351437}}
|
2023-02-22T10:43:56+00:00
|
8a43ee056c50c3a2e9adba9d579cc3bf16665e36
|
test
|
Hiren/first_demo
|
[
"region:us"
] |
2023-02-22T10:45:18+00:00
|
{}
|
2023-02-22T10:56:33+00:00
|
fe0fe707a2513537a40b872e84c676878a99df54
|
# Dataset Card for "google_speech_commands_augmented_raw_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mazkooleg/0-9up_google_speech_commands_augmented_raw
|
[
"region:us"
] |
2023-02-22T10:49:44+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four", "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine", "10": "#unk#", "11": "#pub#"}}}}], "splits": [{"name": "validation", "num_bytes": 107972507.648, "num_examples": 3368}, {"name": "train", "num_bytes": 35133207938.0, "num_examples": 1095480}, {"name": "test", "num_bytes": 120956609.28, "num_examples": 3773}], "download_size": 31701155906, "dataset_size": 35362137054.928}}
|
2023-02-22T11:14:24+00:00
|
50861611e70ce131503a65ee7cfa3b0297fb447e
|
suolyer/lcsts
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-22T11:11:33+00:00
|
{"license": "apache-2.0"}
|
2023-02-22T11:12:55+00:00
|
|
f71eb11cfa4629acb34dccc9cc933e6826d0af9e
|
suolyer/c3
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-22T11:15:11+00:00
|
{"license": "apache-2.0"}
|
2023-02-22T11:19:39+00:00
|
|
28a541798cf0d637f959ea68c9133b3847a630dd
|
zarif98sjs/bangla-plagiarism-dataset
|
[
"language:bn",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-02-22T11:15:24+00:00
|
{"language": ["bn"], "license": "cc-by-sa-4.0"}
|
2023-02-27T04:24:01+00:00
|
|
5bb7ba2f15fe38700284f91c7a995690a15419b6
|
suolyer/webqa
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-22T11:17:52+00:00
|
{"license": "apache-2.0"}
|
2023-02-23T02:12:12+00:00
|
|
e51b874769837f52edd540c7c48ff329df16fc30
|
suolyer/cmqa
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-22T11:20:12+00:00
|
{"license": "apache-2.0"}
|
2023-02-23T02:12:12+00:00
|
|
9db183efc347c91cc55d4509dc6b4ebac735ad8f
|
suolyer/translate_zh2en
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-22T11:22:59+00:00
|
{"license": "apache-2.0"}
|
2023-02-22T11:24:26+00:00
|
|
f7f579f1942323d131dd4f7d28aba6e1aff9657c
|
iMperria/hakaton_nto
|
[
"license:openrail",
"region:us"
] |
2023-02-22T11:25:13+00:00
|
{"license": "openrail"}
|
2023-02-22T11:25:13+00:00
|
|
25930c47e82323a0cd3c202004ba194b0d175aef
|
suolyer/translate_en2zh
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-22T11:26:42+00:00
|
{"license": "apache-2.0"}
|
2023-02-22T11:28:18+00:00
|
|
867bc1e621094be8c0d6e1514e663cca88211876
|
# Dataset Card for "nlp.6.named_entity_recognition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/nlp.6.named_entity_recognition
|
[
"region:us"
] |
2023-02-22T11:32:25+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 3810117, "num_examples": 14041}, {"name": "validation", "num_bytes": 941811, "num_examples": 3250}, {"name": "test", "num_bytes": 884226, "num_examples": 3453}], "download_size": 773605, "dataset_size": 5636154}}
|
2023-02-22T11:35:38+00:00
|
87237e05c6954c7cefd3411b9642fba71ca6e6b0
|
# Dataset Card for "patched_test_p_20_f_SPOUT_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_20_f_SPOUT_m1_predictions
|
[
"region:us"
] |
2023-02-22T11:37:51+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 524213737, "num_examples": 1607399}], "download_size": 54370586, "dataset_size": 524213737}}
|
2023-02-22T11:38:05+00:00
|
2a1f8f846ddeea3b3ec7066f64e2f54384b63d31
|
Abirami/tamilwikipedia
|
[
"license:other",
"region:us"
] |
2023-02-22T11:47:49+00:00
|
{"license": "other"}
|
2023-02-22T11:47:49+00:00
|
|
e5cf99702c7aed0e984a46bdbf8b0565eba6bc2a
|
annotations_creators:
- found
language:
- Tamil
language_creators:
- found
license: []
multilinguality:
- multilingual
pretty_name: tamilwikipediadataset
size_categories:
- 100K<n<1M
source_datasets: []
tags: []
task_categories:
- summarization
task_ids: []
|
Abirami/tamilwikipediadataset
|
[
"region:us"
] |
2023-02-22T11:50:04+00:00
|
{}
|
2023-02-22T12:42:51+00:00
|
51d16e614efcacdaa79885d19cb5f7f8bd72ef2d
|
# Dataset Card for "cv.1.image_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/cv.1.image_classification
|
[
"region:us"
] |
2023-02-22T12:12:29+00:00
|
{"dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "angular_leaf_spot", "1": "bean_rust", "2": "healthy"}}}}, {"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 625388016, "num_examples": 1034}, {"name": "validation", "num_bytes": 80441592, "num_examples": 133}, {"name": "test", "num_bytes": 77417472, "num_examples": 128}], "download_size": 0, "dataset_size": 783247080}}
|
2023-02-23T03:04:38+00:00
|
49eb8610f79de077097c29a1eec0d35a46fac606
|
# Dataset Card for "cv.2.image_segmentation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/cv.2.image_segmentation
|
[
"region:us"
] |
2023-02-22T12:25:11+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 292478590.8, "num_examples": 900}, {"name": "test", "num_bytes": 32497621.2, "num_examples": 100}], "download_size": 324358820, "dataset_size": 324976212.0}}
|
2023-02-23T02:49:15+00:00
|
faf07b0f97e10db42e80214c6e972310525852e7
|
NbAiLab/norwegian-xsum
|
[
"language:no",
"region:us"
] |
2023-02-22T12:40:35+00:00
|
{"language": ["no"]}
|
2023-10-02T21:06:43+00:00
|
|
acd9ebd512a40ec864e8fd8702ce9d8a323134bd
|
# Dataset Card for "nlp.8.generation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/nlp.8.generation
|
[
"region:us"
] |
2023-02-22T13:04:56+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 299146484, "num_examples": 44863}, {"name": "test", "num_bytes": 713476, "num_examples": 107}], "download_size": 0, "dataset_size": 299859960}}
|
2023-02-23T02:02:10+00:00
|
1ed608bbe120cf3f049d496711dba464cc3a404f
|
kirim9001/Tryin
|
[
"license:other",
"region:us"
] |
2023-02-22T13:07:18+00:00
|
{"license": "other"}
|
2023-02-22T13:08:56+00:00
|
|
fa22f3601622d7b6a5581284e2d79fd8f9d13929
|
pavanBuduguppa/asr_inverse_text_normalization
|
[
"license:gpl-3.0",
"region:us"
] |
2023-02-22T13:09:34+00:00
|
{"license": "gpl-3.0"}
|
2023-02-22T13:15:29+00:00
|
|
3cffd418f684915a2ad5e3e9bb17b25f3866df0c
|
# Dataset Card for "bookcorpus_stage2_relation_label"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MartinKu/bookcorpus_stage2_relation_label
|
[
"region:us"
] |
2023-02-22T13:57:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "relation_label_list", "sequence": "int64"}, {"name": "start_point_list", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 21667730608, "num_examples": 74004248}], "download_size": 3694358344, "dataset_size": 21667730608}}
|
2023-02-27T00:01:45+00:00
|
e9f31477ff2d881d90a67fcf1a20ce3299017dea
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
businesstengi/mikeyAIData
|
[
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:afl-3.0",
"region:us"
] |
2023-02-22T14:05:36+00:00
|
{"language": ["en"], "license": "afl-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["conversational", "text-generation", "text-classification"], "pretty_name": "mikeyData"}
|
2023-02-22T16:51:58+00:00
|
feb13d4b9b67c12813fa60b4c8307e8840dc2326
|
# Dataset Card for "kr3_train_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
seokwoni/kr3_train_subset
|
[
"region:us"
] |
2023-02-22T14:44:53+00:00
|
{"dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Review", "dtype": "string"}, {"name": "en_review", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9011519, "num_examples": 15000}], "download_size": 5541231, "dataset_size": 9011519}}
|
2023-02-22T14:45:07+00:00
|
eefdda4d2e12c44d378985f482d49c382741f17d
|
# Dataset Card for "miniwob_snippets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwob_snippets
|
[
"region:us"
] |
2023-02-22T14:47:30+00:00
|
{"dataset_info": {"features": [{"name": "episodes", "dtype": "string"}, {"name": "refs", "dtype": "int64"}, {"name": "click", "dtype": "int64"}, {"name": "dblclick", "dtype": "int64"}, {"name": "keydown", "dtype": "int64"}, {"name": "keypress", "dtype": "int64"}, {"name": "keyup", "dtype": "int64"}, {"name": "mousedown", "dtype": "int64"}, {"name": "mouseup", "dtype": "int64"}, {"name": "scroll", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 674906155, "num_examples": 587268}, {"name": "test", "num_bytes": 84528980, "num_examples": 73420}, {"name": "validate", "num_bytes": 84695448, "num_examples": 73318}], "download_size": 140471741, "dataset_size": 844130583}}
|
2023-02-22T14:48:19+00:00
|
e66078af2f3f16a3fd9cd658712472264aca7dda
|
# Dataset Card for "patched_test_p_40_f_SPOUT_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_40_f_SPOUT_m1_predictions
|
[
"region:us"
] |
2023-02-22T15:14:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 484629878, "num_examples": 1470999}], "download_size": 49491513, "dataset_size": 484629878}}
|
2023-02-22T15:14:50+00:00
|
df213f7d96e13fda9f1fc920951b08e06860634f
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/jjiiaa/mj-prompts/
- **Repository:** https://huggingface.co/datasets/jjiiaa/mj-prompts/
### Dataset Summary
adding soon
## Dataset Structure
adding soon
### Data Splits
adding soon
### Licensing Information
adding soon
|
jjiiaa/mj-prompts
|
[
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:token-classification",
"language:en",
"license:unknown",
"region:us"
] |
2023-02-22T15:17:50+00:00
|
{"language": ["en"], "license": "unknown", "task_categories": ["text-classification", "text-generation", "token-classification"], "pretty_name": "midjoury prompts"}
|
2023-02-23T15:56:38+00:00
|
3a9c4576cbc5fe65a49f115e7e0bd87b647d31c1
|
# Dataset Card for "patched_test_p_10_f_ATCaseOTCase_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_10_f_ATCaseOTCase_m1_predictions
|
[
"region:us"
] |
2023-02-22T15:39:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 53142996, "num_examples": 143667}], "download_size": 5106512, "dataset_size": 53142996}}
|
2023-02-22T15:39:31+00:00
|
092d9f4a5b013b5714b30e4866f218f623ad07c2
|
# Dataset Card for "patched_test_p_20_f_ATCaseOTCase_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_20_f_ATCaseOTCase_m1_predictions
|
[
"region:us"
] |
2023-02-22T15:59:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 51507146, "num_examples": 139207}], "download_size": 4945248, "dataset_size": 51507146}}
|
2023-02-22T15:59:15+00:00
|
ab096821b26270a5167851e62cc818b206e1ff9b
|
# Dataset Card for "patched_test_p_40_f_ATCaseOTCase_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_40_f_ATCaseOTCase_m1_predictions
|
[
"region:us"
] |
2023-02-22T16:17:43+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 48235460, "num_examples": 130287}], "download_size": 4618804, "dataset_size": 48235460}}
|
2023-02-22T16:17:49+00:00
|
f88b2746f8a461649c8c24d324ada513e0b5c1ba
|
gusiorini/franco
|
[
"region:us"
] |
2023-02-22T17:22:26+00:00
|
{}
|
2023-02-22T17:26:37+00:00
|
|
c01ef914675cddd3774aaac93b56ad1dbc34c97f
|
parnoux/hate_speech_open_data_original_class_test_set
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] |
2023-02-22T17:24:24+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"]}
|
2023-02-22T17:27:49+00:00
|
|
6650abf12d61c4310d2b499a8ce3c78d6a1cef01
|
This is a dataset collected from all the texts available at Corpus Corporum, which probably includes all the literary works ever written in Latin. The dataset is split into two parts: text preprocessed with basic CLTK tools, ready for work, and raw text data. It must be noted, however, that the latter contains text in Greek, Hebrew, and other languages, along with references and contractions.
|
Fece228/latin-literature-dataset-170M
|
[
"size_categories:100M<n<1B",
"language:la",
"text",
"linguistics",
"NLP",
"Latin",
"literature",
"region:us"
] |
2023-02-22T17:36:43+00:00
|
{"language": ["la"], "size_categories": ["100M<n<1B"], "tags": ["text", "linguistics", "NLP", "Latin", "literature"]}
|
2023-02-23T09:05:13+00:00
|
6fae1b9da6a531dee81874424fee9dc37639ca34
|
# Dataset Card for "mr_trial3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bhatvineet/mr_trial3
|
[
"region:us"
] |
2023-02-22T17:44:23+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcriptions", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1009711800.042, "num_examples": 4179}, {"name": "test", "num_bytes": 359681461.83, "num_examples": 1393}], "download_size": 1379902601, "dataset_size": 1369393261.872}}
|
2023-02-22T17:47:16+00:00
|
efd19dcce878ce3893afb1f30a830cb0d72fa9d5
|
# Dataset Card for "fooset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roszcz/fooset
|
[
"region:us"
] |
2023-02-22T18:50:26+00:00
|
{"dataset_info": {"features": [{"name": "pitch", "sequence": "int64"}, {"name": "velocity", "sequence": "int64"}, {"name": "start", "sequence": "float64"}, {"name": "end", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 2814000, "num_examples": 875}], "download_size": 0, "dataset_size": 2814000}}
|
2023-02-22T19:12:59+00:00
|
1ddaa14beed79edda621fdd72ad22fd654d760b3
|
Jacobvs/PoliticalTweets
|
[
"license:mit",
"region:us"
] |
2023-02-22T19:18:37+00:00
|
{"license": "mit"}
|
2023-02-22T19:19:34+00:00
|
|
74a3957c2d3397ceeb7c85db5fc53da00f46fb02
|
# Fork of [SirNeural/flan_v2](https://huggingface.co/datasets/SirNeural/flan_v2)
just in case it gets deleted.
# Dataset Card for Flan V2
## Dataset Description
- **Homepage:** https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html
- **Repository:** https://github.com/google-research/FLAN/tree/main/flan/v2
- **Paper:** https://arxiv.org/abs/2301.13688
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a processed version of the Flan V2 dataset.
I'm not affiliated with the creators; I'm just releasing the files in an easier-to-access format after processing.
The authors of the Flan Collection recommend experimenting with different mixing ratios of tasks to get optimal results downstream.
The version I've processed here is missing a few datasets compared to the main branch of the flan v2 repo:
- the cs-en WMT translation task requires a manual download, and I wasn't able to get the credentials
- the q_re_cc dataset preprocessing for the dialog task wasn't working
These are minor hits to the total size of the collection (orders of MB compared to GB), but once those are fixed I will upload a complete version.
## Dataset Structure
### Data Instances
Flan 2021 (flan), P3 (t0), Super-Natural Instructions (niv2), Chain-of-thought (cot), and Dialog (dialog)
### Data Fields
Instruction data comes in a few formats:
- Few Shot (fs)
- Zero Shot (zs)
- Options Provided in context (i.e. multiple choice pick one) (opt)
- No Options Provided (noopt)
Each combination of the above tasks + formats is saved as a JSONL file with the following schema: `{"input": ..., "target": ..., "task": ...}`
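A record in that schema can be round-tripped with the standard `json` module; the example values below are made up for illustration:

```python
import json
import os
import tempfile

# One made-up record in the card's {"input", "target", "task"} schema.
records = [{"input": "Translate to French: hello", "target": "bonjour", "task": "flan_zs_noopt"}]

# Write one record per line (JSONL), then read the file back.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
    path = f.name

with open(path) as f:
    loaded = [json.loads(line) for line in f]
os.remove(path)

print(loaded[0]["target"])
```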
### Data Splits
Everything is saved as a single train split.
|
philschmid/flanv2
|
[
"license:apache-2.0",
"flan",
"flan 2022",
"flan v2",
"arxiv:2301.13688",
"region:us"
] |
2023-02-22T19:38:58+00:00
|
{"license": "apache-2.0", "pretty_name": "Flan v2", "tags": ["flan", "flan 2022", "flan v2"], "duplicated_from": "SirNeural/flan_v2"}
|
2023-02-22T19:39:49+00:00
|
e548b940c1130bc4a168cf005195aa0aca1d90b5
|
businesstengi/mikeytranscripts
|
[
"license:openrail",
"region:us"
] |
2023-02-22T19:40:58+00:00
|
{"license": "openrail"}
|
2023-02-22T20:26:43+00:00
|
|
ec865093c8c85fe6b4cb90f07a866d7b4701aebd
|
karlen532/wikisql_and_spider
|
[
"license:unknown",
"region:us"
] |
2023-02-22T20:34:14+00:00
|
{"license": "unknown"}
|
2023-03-21T08:05:32+00:00
|
|
f86ef92911c4a65f0d1fbaf0b8b329d4d9f2e1b0
|
# Dataset Card for "zambezivoice_lox_aug_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zambezivoice/zambezivoice_lox_aug_text
|
[
"region:us"
] |
2023-02-22T20:36:36+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 582481, "num_examples": 10397}], "download_size": 345760, "dataset_size": 582481}}
|
2023-02-22T20:36:41+00:00
|
e8965d28bde931c77eb93e2ade7f0dba3c305b43
|
# Dataset Card for "zambezivoice_loz_aug_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zambezivoice/zambezivoice_loz_aug_text
|
[
"region:us"
] |
2023-02-22T21:06:14+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 582481, "num_examples": 10397}], "download_size": 345760, "dataset_size": 582481}}
|
2023-02-22T21:06:19+00:00
|
e9e70b44cd92f2ad0e9f0312d24c393481815020
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_10_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_10_open_ended
|
[
"region:us"
] |
2023-02-22T21:09:43+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 1619, "num_examples": 10}], "download_size": 2691, "dataset_size": 1619}}
|
2023-02-22T21:09:46+00:00
|
bbb5318a8738b37eae604a503dbe9de3d42c81c3
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_20_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_20_open_ended
|
[
"region:us"
] |
2023-02-22T21:14:19+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 3065, "num_examples": 20}], "download_size": 4018, "dataset_size": 3065}}
|
2023-02-23T00:19:45+00:00
|
2ec2eaaad35068d323cf1b4ea911fc3d1aa9cf68
|
# Dataset Card for ESA Hubble Deep Space Images & Captions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Examples](#examples)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ESA Hubble](https://esahubble.org/)
- **Repository:** [Hubble Diffusion repository](https://github.com/Supermaxman/hubble-diffusion)
- **Point of Contact:** [Maxwell Weinzierl](mailto:[email protected])
### Dataset Summary
The ESA Hubble Deep Space Images & Captions dataset is composed primarily of Hubble deep space scans as high-resolution images,
along with textual descriptions written by ESA/Hubble. Metadata is also included, which enables more detailed filtering and understanding of massive space scans.
The purpose of this dataset is to enable text-to-image generation methods for generating high-quality deep space scans from prompts.
Check out [Hubble Diffusion v2](https://huggingface.co/Supermaxman/hubble-diffusion-2) for an example of a model trained on this dataset!
### Examples
#### A grazing encounter between two spiral galaxies
> In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.
>
>
> Credit: NASA/ESA and The Hubble Heritage Team (STScI)
#### The magnificent starburst galaxy Messier 82
> This mosaic image of the magnificent starburst galaxy, Messier 82 (M82) is the sharpest wide-angle view ever obtained of M82. It is a galaxy remarkable for its webs of shredded clouds and flame-like plumes of glowing hydrogen blasting out from its central regions where young stars are being born 10 times faster than they are inside in our Milky Way Galaxy.
>
>
> Credit: NASA, ESA and the Hubble Heritage Team (STScI/AURA). Acknowledgment: J. Gallagher (University of Wisconsin), M. Mountain (STScI) and P. Puxley (NSF).
#### Extreme star cluster bursts into life in new Hubble image
> The star-forming region NGC 3603 - seen here in the latest Hubble Space Telescope image - contains one of the most impressive massive young star clusters in the Milky Way. Bathed in gas and dust the cluster formed in a huge rush of star formation thought to have occurred around a million years ago. The hot blue stars at the core are responsible for carving out a huge cavity in the gas seen to the right of the star cluster in NGC 3603's centre.
>
>
> Credit: NASA, ESA and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration
#### Statistics
- There are a total of 2,706 deep space images
- The complete uncompressed size of the dataset is 120 GB, so definitely make use of [Streaming](https://huggingface.co/docs/datasets/stream)
- The average image is 44 MB, while the max image size is 432 MB
- The average image has a height of 2,881 pixels, and an average width of 3,267 pixels
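Given those sizes, a streaming loop that touches only a few examples is the practical access pattern. The sketch below uses a stand-in generator in place of `datasets.load_dataset(..., streaming=True)` (which would yield example dicts the same way), so the pattern is clear without downloading anything:

```python
from itertools import islice

def example_stream():
    # Stand-in for an IterableDataset such as
    # load_dataset(<this dataset>, split="train", streaming=True):
    # it yields one example dict at a time instead of loading all 120 GB.
    for i in range(2706):
        yield {"id": f"image_{i}", "text": f"caption {i}"}

# Inspect a handful of examples without exhausting the stream.
first = list(islice(example_stream(), 3))
print(len(first))  # 3
```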
### Supported Tasks and Leaderboards
- `text-to-image`: The dataset can be used to train a model for conditional image generation from text. A conditional text-to-image generation model is presented with a text prompt, and is asked to generate an image which aligns with that text prompt. Model performance is typically measured by human judgement, as it is difficult to automatically measure the quality of generated images and how closely they match the text prompt. An example of a text-to-image model is [Stable Diffusion v2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1). An example of a text-to-image model trained on this dataset is [Hubble Diffusion v2](https://huggingface.co/Supermaxman/hubble-diffusion-2).
### Languages
The text describing the images in the dataset is in English, as written by the writers from ESA/Hubble at [https://esahubble.org/](https://esahubble.org/). The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point comprises a high-quality deep space scan as an image, along with a textual description of that image produced by ESA/Hubble.
The textual description was derived by combining the `title` and the `description` of the image from the ESA/Hubble website.
Additionally, each data point also contains significant metadata about the image, such as the type of image, credits, the URL, the release date, and more.
An example looks as follows:
```json
{
"image": "<encoded image>",
"text":"A grazing encounter between two spiral galaxies: In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.",
"id":"opo9941a",
"title":"A grazing encounter between two spiral galaxies",
"description":"In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.",
"credits":"NASA/ESA and The Hubble Heritage Team (STScI)",
"url":"https://esahubble.org/images/opo9941a/",
"Id":"opo9941a",
"Type":"Local Universe : Galaxy : Type : Interacting",
"Release date":"4 November 1999, 07:00",
"Size":"2907 x 1486 px",
"Name":"IC 2163, NGC 2207",
"Distance":"110 million light years",
"Constellation":"Canis Major",
"Category":"Galaxies",
"Position (RA)":"6 16 25.10",
"Position (Dec)":"-21° 22' 34.62\"",
"Field of view":"4.82 x 2.47 arcminutes",
"Orientation":"North is 191.2\u00b0 right of vertical",
"width":2907,
"height":1486,
"file_size":12959406,
"crop_w":0,
"crop_h":0,
"cropped":false
}
```
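As noted above, the `text` field is just the `title` and `description` joined with `': '`; a minimal sketch using the example record:

```python
def build_text(title: str, description: str) -> str:
    # The card states that `text` = title + ': ' + description.
    return f"{title}: {description}"

title = "A grazing encounter between two spiral galaxies"
description = (
    "In the direction of the constellation Canis Major, two spiral galaxies "
    "pass by each other like majestic ships in the night."
)
text = build_text(title, description)
print(text[:60])
```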
### Data Fields
- `image`: encoded RGB `.png` image of the deep space scan
- `text`: text description of image, a combination of `title` + ': ' + `description`
- `id`: id of the image from ESA/Hubble
- `title`: textual title of image from ESA/Hubble URL
- `description`: textual description of image from ESA/Hubble URL
- `credits`: required credits for each image from ESA/Hubble URL
- `url`: ESA/Hubble URL
- `Id`: id of the image from ESA/Hubble (from website metadata)
- `Type`: type of deep space scan
- `Release date`: release date of deep space scan
- `Size`: size of original image
- `Name`: name of celestial entities present in image
- `Distance`: distance from celestial entities present in image
- `Constellation`: constellation of celestial entities present in image
- `Category`: category of celestial entities present in image
- `Position (RA)`: coordinates for deep space scan used by Hubble telescope
- `Position (Dec)`: coordinates for deep space scan used by Hubble telescope
- `Field of view`: coordinates for deep space scan used by Hubble telescope
- `Orientation`: coordinates for deep space scan used by Hubble telescope
- `width`: width of the image; matches the width in `Size` unless the image was cropped (subdivided)
- `height`: height of the image; matches the height in `Size` unless the image was cropped (subdivided)
- `file_size`: `width` x `height` x 3 bytes, used to estimate size of raw images
- `crop_w`: width starting point of image if cropped, otherwise 0
- `crop_h`: height starting point of image if cropped, otherwise 0
- `cropped`: whether this image needed to be cropped or not
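Several of these fields are derived from the others: `text` is the `title` + ': ' + `description` concatenation described above, and `file_size` is the raw RGB byte estimate. A minimal sketch, using the values from the example record:

```python
# Derive the `text` and `file_size` fields from other metadata,
# using values taken from the example record above.
title = "A grazing encounter between two spiral galaxies"
description = (
    "In the direction of the constellation Canis Major, two spiral galaxies "
    "pass by each other like majestic ships in the night. The near-collision "
    "has been caught in images taken by the NASA/ESA Hubble Space Telescope "
    "and its Wide Field Planetary Camera 2."
)
width, height = 2907, 1486

text = title + ": " + description  # caption used for text-to-image training
file_size = width * height * 3     # RGB: 3 bytes per pixel

print(file_size)  # 12959406, matching the example record
```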
### Data Splits
The data is only provided in a single training split, as the purpose of the dataset is additional fine-tuning for the task of `text-to-image` generation.
## Dataset Creation
### Curation Rationale
The ESA Hubble Deep Space Images & Captions dataset was built to provide ease of access to extremely high-quality Hubble deep space scans.
Images from the Hubble telescope have already inspired millions, and the hope is that this dataset can be used to create inspiring models and approaches to further push interest in space & cosmology.
### Source Data
#### Initial Data Collection
All images were collected from [https://esahubble.org/](https://esahubble.org/).
Full-size original images and metadata were crawled from the ESA Hubble website using [Scrapy](https://scrapy.org/).
Images were downloaded as `.tiff` files, while
additional metadata was later collected for each image using the following [code](https://github.com/Supermaxman/hubble-diffusion).
As the ESA Hubble website collects images from a wide variety of sources, images were filtered to exclude non-space-scan images as follows:
The ESA Hubble [Advanced Image Search](http://esahubble.org/images/archive/search) enables the following filtering parameters:
- Minimum size greater than or equal to 400x300
- Ranking greater than or equal to Fair or better
- Type containing 'Observation'
This significantly reduced the number of images that had nothing to do with Hubble deep space scans.
A total of around 3,000 space images were collected with this method.
#### Filtering
Further automatic and manual filtering was performed to remove the following:
- improperly classified images
- space renders
- diagrams with text
- images of celestial bodies within our solar system
- images with too low a resolution
This brought the total number of deep space images down to 2,593.
This process was not perfect, and there likely remain some images in the dataset that should be removed in the future.
#### Preprocessing
Some of the deep space scans were as large as 34,372x19,345, with a bit depth of 24 (nearly 2 GB).
Unfortunately, these images were too large to upload easily.
Therefore, images were automatically subdivided in half if they were above 12,000 pixels in either height or width.
Subdivided images were also tagged with additional metadata, such that users can reconstruct the original images if they would prefer.
Otherwise, metadata was copied across subdivided images.
Additionally, images were converted from RGB/RGBX `.tiff` to RGB `.png` files to avoid encoding issues.
This process resulted in 2,706 final deep space images.
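The exact subdivision code lives in the linked repository; the halving logic described above can be sketched roughly as follows (the recursive split order along the larger axis is an assumption, not the confirmed implementation):

```python
def subdivide(width, height, max_dim=12000):
    """Recursively halve a region along its larger axis until both
    dimensions are <= max_dim. Returns (crop_w, crop_h, width, height)
    tiles in the coordinates of the original image, mirroring the
    crop_w/crop_h metadata fields described above."""
    def split(x, y, w, h):
        if w <= max_dim and h <= max_dim:
            return [(x, y, w, h)]  # already small enough: one tile
        if w >= h:  # halve the wider axis first
            half = w // 2
            return split(x, y, half, h) + split(x + half, y, w - half, h)
        half = h // 2
        return split(x, y, w, half) + split(x, y + half, w, h - half)
    return split(0, 0, width, height)

# A small scan is left intact...
print(subdivide(2907, 1486))  # [(0, 0, 2907, 1486)]
# ...while the largest scan mentioned above is split into tiles.
tiles = subdivide(34372, 19345)
print(len(tiles))
```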
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help inspire people to be interested in astronomy.
A system that succeeds at text-to-image generation would be able to generate inspiring deep space scans, providing compelling art for those interested in space. This dataset provides a starting point for building such a system by providing text and image pairs for Hubble deep space scans.
### Discussion of Biases
It is unfortunate that we currently only have English captions for these deep space scans.
In the future, expanding these captions to more languages could help spread interest in astronomy far and wide.
Additionally, these captions may be too technical for the average person to effectively utilize for a text-to-image model.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The dataset was initially created by all the wonderful researchers, engineers, scientists, and more behind the Hubble Telescope, NASA, and the ESA.
Maxwell Weinzierl collected, filtered, and preprocessed this data for ease of use.
### Licensing Information
ESA/Hubble images, videos and web texts are released under the [Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/)
and may on a non-exclusive basis be reproduced without fee provided they are clearly and visibly credited.
See [https://esahubble.org/copyright/](https://esahubble.org/copyright/) for additional conditions for reproduction and copyright.
### Citation Information
If you use this dataset, please cite it as:
```bibtex
@misc{weinzierl2023hubble,
  author       = {Weinzierl, Maxwell A.},
  title        = {ESA Hubble Deep Space Images \& Captions},
  year         = {2023},
  howpublished = {\url{https://huggingface.co/datasets/Supermaxman/esa-hubble}}
}
```
### Contributions
Thanks to [@supermaxman](https://github.com/supermaxman) for adding this dataset.
|
Supermaxman/esa-hubble
|
[
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"space",
"region:us"
] |
2023-02-22T22:03:08+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "pretty_name": "ESA Hubble Deep Space Images & Captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "credits", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Type", "dtype": "string"}, {"name": "Release date", "dtype": "string"}, {"name": "Related releases", "dtype": "string"}, {"name": "Size", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Distance", "dtype": "string"}, {"name": "Constellation", "dtype": "string"}, {"name": "Category", "dtype": "string"}, {"name": "Position (RA)", "dtype": "string"}, {"name": "Position (Dec)", "dtype": "string"}, {"name": "Field of view", "dtype": "string"}, {"name": "Orientation", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "file_size", "dtype": "int64"}, {"name": "crop_w", "dtype": "int64"}, {"name": "crop_h", "dtype": "int64"}, {"name": "cropped", "dtype": "bool"}, {"name": "Related science announcements", "dtype": "string"}, {"name": "Related announcements", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 94474695584.124, "num_examples": 2706}], "download_size": 61236366105, "dataset_size": 94474695584.124}, "tags": ["space"]}
|
2023-02-26T13:20:26+00:00
|
40818e91d4a73bc0e96fc16da7d64b54f80d4515
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_ns_20_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_ns_20_open_ended
|
[
"region:us"
] |
2023-02-22T22:37:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 3172, "num_examples": 20}], "download_size": 3386, "dataset_size": 3172}}
|
2023-02-22T22:37:24+00:00
|
49b99a8de1585ab3ec4a815d53b769e5495bf463
|
# Dataset Card for "wikisource-yellow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Zombely/wikisource-yellow
|
[
"region:us"
] |
2023-02-22T22:39:55+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train_1", "num_bytes": 12984998648.244, "num_examples": 9998}, {"name": "train_2", "num_bytes": 16071270493.0, "num_examples": 10000}, {"name": "train_3", "num_bytes": 15496290078.0, "num_examples": 10000}, {"name": "train_4", "num_bytes": 8549111534.0, "num_examples": 10000}, {"name": "train_5", "num_bytes": 13382018606.0, "num_examples": 10000}, {"name": "train_6", "num_bytes": 16871883641.979, "num_examples": 9959}, {"name": "train_7", "num_bytes": 15199574685.0, "num_examples": 10000}, {"name": "train_8", "num_bytes": 13887271412.0, "num_examples": 10000}, {"name": "train_9", "num_bytes": 15434064354.0, "num_examples": 10000}, {"name": "train_10", "num_bytes": 7874718803.82, "num_examples": 6969}, {"name": "validation", "num_bytes": 12645144007.93, "num_examples": 7745}], "download_size": 13454099590, "dataset_size": 131524462621.994}}
|
2023-03-07T13:50:51+00:00
|
fb45b1f4523e45c512498ab8171968703bba4539
|
# Dataset Card for "stackoverflow-python-with-meta-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
suriyagunasekar/stackoverflow-python-with-meta-data
|
[
"region:us"
] |
2023-02-22T23:21:53+00:00
|
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_scores", "sequence": "int32"}, {"name": "non_answers", "sequence": "string"}, {"name": "non_answers_scores", "sequence": "int32"}, {"name": "tags", "sequence": "string"}, {"name": "name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9114535831, "num_examples": 1745972}], "download_size": 4753108665, "dataset_size": 9114535831}}
|
2023-02-22T23:36:50+00:00
|
222aa8c5c676cb970c37f6446d5199c4e75d361b
|
# Dataset Card for "dwnews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tillschwoerer/dwnews
|
[
"region:us"
] |
2023-02-22T23:26:33+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Business", "1": "Catastrophe", "2": "Conflicts", "3": "Crime", "4": "Culture", "5": "Health", "6": "Nature and Environment", "7": "Politics", "8": "Society", "9": "Sports"}}}}], "splits": [{"name": "train", "num_bytes": 6675633, "num_examples": 1598}, {"name": "validation", "num_bytes": 807911, "num_examples": 200}, {"name": "test", "num_bytes": 807911, "num_examples": 200}], "download_size": 5058088, "dataset_size": 8291455}}
|
2023-02-22T23:26:39+00:00
|
e898618e47401563f1a33e5922668f9c238792e7
|
dog/fuego-20230222-154818-7e82ca
|
[
"fuego",
"region:us"
] |
2023-02-22T23:48:20+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230222-154818-7e82ca", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-22T23:50:52+00:00
|
|
4b551fcc2cab566b6a9174f1e979341112d02584
|
# Dataset Card for "xsum_clean_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
venetis/xsum_clean_text
|
[
"region:us"
] |
2023-02-22T23:56:36+00:00
|
{"dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 479206363, "num_examples": 204045}, {"name": "validation", "num_bytes": 26292877, "num_examples": 11332}, {"name": "test", "num_bytes": 26756141, "num_examples": 11334}], "download_size": 338049038, "dataset_size": 532255381}}
|
2023-02-22T23:56:49+00:00
|
6bb9924339fc7168b5c4986c78f1fc1a235a7406
|
dog/fuego-20230223-010011-c7aaa3
|
[
"fuego",
"region:us"
] |
2023-02-23T00:00:12+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230223-010011-c7aaa3", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-23T00:04:13+00:00
|
|
876eec75b83e6283c17df6397659593c76f1145e
|
KrakExilios/koreandoll
|
[
"license:other",
"region:us"
] |
2023-02-23T00:24:41+00:00
|
{"license": "other"}
|
2023-05-12T16:19:42+00:00
|
|
d4d16d7f8c78740d0032f214a18a461131cfc597
|
# Dataset Card for "the-stack-smol-filtered-python-docstrings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
calum/the-stack-smol-python-docstrings
|
[
"region:us"
] |
2023-02-23T01:04:12+00:00
|
{"dataset_info": {"features": [{"name": "body", "dtype": "string"}, {"name": "body_hash", "dtype": "int64"}, {"name": "docstring", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "repository_name", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "body_without_docstring", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33019111.239729874, "num_examples": 24616}], "download_size": 0, "dataset_size": 33019111.239729874}}
|
2023-02-23T01:43:55+00:00
|
b62eb16078c37471dae98030e0749bc1174e9dce
|
# Dataset Card for "commonsenseqa_with_content_words"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liujqian/commonsenseqa_with_content_words
|
[
"region:us"
] |
2023-02-23T01:26:14+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "question_concept", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "label", "dtype": "string"}, {"name": "text", "dtype": "string"}]}, {"name": "answerKey", "dtype": "string"}, {"name": "question_content_words", "sequence": "string"}, {"name": "choice_0_content_words", "sequence": "string"}, {"name": "choice_1_content_words", "sequence": "string"}, {"name": "choice_2_content_words", "sequence": "string"}, {"name": "choice_3_content_words", "sequence": "string"}, {"name": "choice_4_content_words", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3595329, "num_examples": 9741}, {"name": "validation", "num_bytes": 446090, "num_examples": 1221}, {"name": "test", "num_bytes": 419929, "num_examples": 1140}], "download_size": 2361458, "dataset_size": 4461348}}
|
2023-02-24T01:21:03+00:00
|
3f0d3db9bd3f74672db5c96008ebf4ed2a8c5549
|
# Dataset Card for "ms_marco_large_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nschantz21/ms_marco_large_sample
|
[
"region:us"
] |
2023-02-23T02:19:59+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "qid", "dtype": "int64"}, {"name": "iteration", "dtype": "int64"}, {"name": "pid", "dtype": "int64"}, {"name": "relevancy", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "passage_class", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 64530835, "num_examples": 53276}], "download_size": 34349985, "dataset_size": 64530835}}
|
2023-02-23T02:20:06+00:00
|
24659faeb9d171f40e17d81306280475879b761a
|
# Dataset Card for "stackoverflow-with-meta-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
suriyagunasekar/stackoverflow-with-meta-data
|
[
"region:us"
] |
2023-02-23T02:29:47+00:00
|
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_scores", "sequence": "int32"}, {"name": "non_answers", "sequence": "string"}, {"name": "non_answers_scores", "sequence": "int32"}, {"name": "tags", "sequence": "string"}, {"name": "name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 104739824581, "num_examples": 19904590}], "download_size": 0, "dataset_size": 104739824581}}
|
2023-02-23T06:04:44+00:00
|
f6519abdebc57e49c73ea438b7b0f6ba41a1c114
|
# Dataset Card for "XOM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thercyl/XOM
|
[
"region:us"
] |
2023-02-23T02:36:40+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "float64"}, {"name": "Ticker", "dtype": "string"}, {"name": "Year", "dtype": "string"}, {"name": "Text", "dtype": "string"}, {"name": "Embedding", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4309445, "num_examples": 131}], "download_size": 2623140, "dataset_size": 4309445}}
|
2023-02-23T02:36:43+00:00
|
6adf6167b80edbae70456db7e3b42d57826cbbc7
|
# Dataset Card for "UNH"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thercyl/UNH
|
[
"region:us"
] |
2023-02-23T02:41:59+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "float64"}, {"name": "Ticker", "dtype": "string"}, {"name": "Year", "dtype": "string"}, {"name": "Text", "dtype": "string"}, {"name": "Embedding", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 72142710, "num_examples": 2069}], "download_size": 41968517, "dataset_size": 72142710}}
|
2023-02-23T02:42:10+00:00
|
6658355c8302ecd97c0c3c3fb7aaed259e2ce74b
|
# Dataset Card for "JNJ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thercyl/JNJ
|
[
"region:us"
] |
2023-02-23T02:44:46+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "float64"}, {"name": "Ticker", "dtype": "string"}, {"name": "Year", "dtype": "string"}, {"name": "Text", "dtype": "string"}, {"name": "Embedding", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41199241, "num_examples": 1185}], "download_size": 22949643, "dataset_size": 41199241}}
|
2023-02-23T02:44:54+00:00
|
d6990a4b60ae77e0fd9bb3d0aafa238ab8b7b573
|
# Dataset Card for "VQAv2_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_100_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_100_open_ended
|
[
"region:us"
] |
2023-02-23T02:56:20+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_32", "num_bytes": 14423, "num_examples": 100}], "download_size": 8281, "dataset_size": 14423}}
|
2023-02-23T02:56:23+00:00
|
5ab5a9d7f5ee0bd4a56411bc3fc1dca151a4f308
|
# Dataset Card for "cv.3.image_object_detection.detect_illustration"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/cv.3.image_object_detection.detect_illustration
|
[
"region:us"
] |
2023-02-23T02:57:14+00:00
|
{"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "list": [{"name": "category_id", "dtype": {"class_label": {"names": {"0": "early_printed_illustration"}}}}, {"name": "image_id", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "segmentation", "list": {"list": "float32"}}, {"name": "iscrowd", "dtype": "bool"}]}], "splits": [{"name": "train", "num_bytes": 894127063.61973, "num_examples": 6800}, {"name": "test", "num_bytes": 25952722.812344998, "num_examples": 200}], "download_size": 0, "dataset_size": 920079786.432075}}
|
2023-02-23T03:03:09+00:00
|
a6f692bdf623a10012c94d95d5bda4e92610d9f7
|
# Dataset Card for "cv.3.image_object_detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/cv.3.image_object_detection
|
[
"region:us"
] |
2023-02-23T02:57:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "digits", "sequence": [{"name": "bbox", "sequence": "int32", "length": 4}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9"}}}}]}], "splits": [{"name": "train", "num_bytes": 67463846.91850163, "num_examples": 6646}, {"name": "test", "num_bytes": 690276.6069133481, "num_examples": 68}], "download_size": 60342937, "dataset_size": 68154123.52541497}}
|
2023-02-23T02:58:10+00:00
|
2d7cc3065a785ff6b2a30abd6bac416b864528fc
|
# Dataset Card for "VQAv2_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_200_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_200_open_ended
|
[
"region:us"
] |
2023-02-23T03:02:10+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_32", "num_bytes": 29928, "num_examples": 200}], "download_size": 14434, "dataset_size": 29928}}
|
2023-02-23T03:02:12+00:00
|
23dc50d4e3789aabd7c188a8341486390942297e
|
myscale/unsplash-examples
|
[
"license:mit",
"region:us"
] |
2023-02-23T03:08:48+00:00
|
{"license": "mit"}
|
2023-03-02T11:50:02+00:00
|
|
83d6213934243ac22178af1550263d734d4a6178
|
# Dataset Card for "VQAv2_validation_facebook_opt_2.7b_mode_VQAv2_visclues_detection_ns_500_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_validation_facebook_opt_2.7b_mode_VQAv2_visclues_detection_ns_500_open_ended
|
[
"region:us"
] |
2023-02-23T03:17:04+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 72950, "num_examples": 500}], "download_size": 28917, "dataset_size": 72950}}
|
2023-02-23T03:17:06+00:00
|
3d97b383d844bca2affd943b21099599f7840490
|
# Dataset Card for "au.1.audio_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/au.1.audio_classification
|
[
"region:us"
] |
2023-02-23T03:18:16+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "yes", "1": "no", "2": "up", "3": "down", "4": "left", "5": "right", "6": "on", "7": "off", "8": "stop", "9": "go", "10": "_silence_", "11": "_unknown_"}}}}, {"name": "input_values", "sequence": "float32"}, {"name": "attention_mask", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 3018873312, "num_examples": 23582}, {"name": "test", "num_bytes": 12801600, "num_examples": 100}], "download_size": 1130643070, "dataset_size": 3031674912}}
|
2023-02-23T05:41:14+00:00
|
a7cef05ebf01331f5df30e4ec6fc848bcccf1e85
|
# Dataset Card for "META"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thercyl/META
|
[
"region:us"
] |
2023-02-23T03:21:10+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "float64"}, {"name": "Ticker", "dtype": "string"}, {"name": "Year", "dtype": "string"}, {"name": "Text", "dtype": "string"}, {"name": "Embedding", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 86462290, "num_examples": 2473}], "download_size": 48256052, "dataset_size": 86462290}}
|
2023-02-23T03:21:22+00:00
|
27c05fbe349950c8661ac8a375879553eb9e67b1
|
# Dataset Card for "VQAv2_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_500_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_500_open_ended
|
[
"region:us"
] |
2023-02-23T03:23:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_32", "num_bytes": 73169, "num_examples": 500}], "download_size": 29281, "dataset_size": 73169}}
|
2023-02-23T03:23:24+00:00
|
0d32717d8e43c059b3d1a20da65e9c664a33ee6a
|
# Dataset Card for "au.2.speech_recognition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/au.2.speech_recognition
|
[
"region:us"
] |
2023-02-23T03:50:26+00:00
|
{"dataset_info": {"features": [{"name": "transcription", "dtype": "string"}, {"name": "raw_speech", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 233406389.45149255, "num_examples": 482}, {"name": "test", "num_bytes": 26149263.548507463, "num_examples": 54}], "download_size": 260816894, "dataset_size": 259555653.0}}
|
2023-02-23T10:32:05+00:00
|
3289c8fe06c36508e04f7d457e0be52d4966466c
|
# Dataset Card for "VQAv2_validation_facebook_opt_13b_mode_VQAv2_visclues_detection_ns_400_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_validation_facebook_opt_13b_mode_VQAv2_visclues_detection_ns_400_open_ended
|
[
"region:us"
] |
2023-02-23T04:27:06+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_32", "num_bytes": 59835, "num_examples": 400}], "download_size": 0, "dataset_size": 59835}}
|
2023-02-23T04:40:09+00:00
|
fca9a9494565824c6c08a1a883c83b5b485c9cc5
|
---
dataset_info:
  features:
    - name: labels
      dtype:
        class_label:
          names:
            '0': Negative
            '1': Positive
    - name: Review
      dtype: string
  splits:
    - name: train
      num_examples: 15000  # 50% Positive, 50% Negative
    - name: test
      num_examples: 1000   # 87.7% Positive, 17.3% Negative; preserves the class ratio of the original data
task_categories:
  - text-classification
---
# Dataset Card for "review_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
seokwoni/review_subset
|
[
"region:us"
] |
2023-02-23T05:03:41+00:00
|
{}
|
2023-02-23T05:46:22+00:00
|
730a475851dc17466b4e6005de8f5a73da7ae188
|
A dataset of SD-generated images of anime neko characters vs. non_neko characters.
|
ongknsro/nekos-nonnekos
|
[
"task_categories:image-classification",
"size_categories:1K<n<10K",
"doi:10.57967/hf/1715",
"region:us"
] |
2023-02-23T05:19:18+00:00
|
{"size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "neko or non_neko"}
|
2023-05-19T08:30:59+00:00
|
26b64629894a67ea0c7410f6e8bccbd686f4fbff
|
# Dataset Card for "speechocean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
siegels/speechocean
|
[
"region:us"
] |
2023-02-23T05:49:09+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "file_id", "dtype": "int64"}, {"name": "accuracy", "dtype": "int64"}, {"name": "completeness", "dtype": "float64"}, {"name": "fluency", "dtype": "int64"}, {"name": "prosodic", "dtype": "int64"}, {"name": "words", "list": [{"name": "accuracy", "dtype": "int64"}, {"name": "phones", "sequence": "string"}, {"name": "phones-accuracy", "sequence": "float64"}, {"name": "stress", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "total", "dtype": "int64"}]}, {"name": "total", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 576478346.0, "num_examples": 5000}], "download_size": 611300052, "dataset_size": 576478346.0}}
|
2023-02-23T05:50:26+00:00
|
69a35e273947b0b9f9c2969a9a93af0ac382248c
|
# Dataset Card for "simplewiki2023-minilml6v2-avgembeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lsb/simplewiki2023-minilml6v2-avgembeddings
|
[
"region:us"
] |
2023-02-23T06:11:53+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "avg_embed", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 622429552, "num_examples": 225332}], "download_size": 328121680, "dataset_size": 622429552}}
|
2023-02-23T06:17:07+00:00
|
e192c171229e7cf3fb43896c551d40058475d6a7
|
# Dataset Card for UTS_Text
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
The UTS_Text dataset is a collection of 100,000 sentences sourced from various news articles.
In the `base` configuration of 10,000 sentences, 5,000 sentences have a length ranging from 50 to 150, while the other 5,000 sentences have a length ranging from 20 to 50. This distribution of sentence lengths provides a diverse range of text samples that can be used to train and test natural language processing models.
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
| name | train | validation | test |
|---------|--------:|-----------:|-------:|
| small | 1600 | 200 | 200 |
| base | 8000 | 1000 | 1000 |
| large | 95000 | 2500 | 2500 |
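The configuration sizes above can be sanity-checked directly; only the `large` configuration covers the full 100,000-sentence collection, while `base` is the 10,000-sentence subset:

```python
# Split sizes copied from the table above.
splits = {
    "small": {"train": 1600, "validation": 200, "test": 200},
    "base":  {"train": 8000, "validation": 1000, "test": 1000},
    "large": {"train": 95000, "validation": 2500, "test": 2500},
}
totals = {name: sum(sizes.values()) for name, sizes in splits.items()}
print(totals)  # {'small': 2000, 'base': 10000, 'large': 100000}
```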
## Dataset Creation
### Curation Rationale
### Source Data
### Annotations
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
### Contributions
|
undertheseanlp/UTS_Text
|
[
"task_categories:text-generation",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:vi",
"license:apache-2.0",
"region:us"
] |
2023-02-23T06:19:46+00:00
|
{"annotations_creators": ["no-annotation"], "language": ["vi"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "UTS_Text"}
|
2023-03-03T03:29:39+00:00
|
71bec5f33ec24000aed0fa8656a9adaa9a776f43
|
# Dataset Card for "snli-french"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
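
Since the card itself is still a stub, one practical detail worth noting is the `label` field: it is an `int64` per the schema in the metadata, and French SNLI derivatives typically keep the upstream SNLI convention. A minimal sketch of that assumed mapping — the convention is not confirmed by this card, so verify before relying on it:

```python
# Assumed SNLI label convention; not stated in this card, so verify against the data.
SNLI_LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

def label_name(label_id: int) -> str:
    """Map an integer label to its assumed SNLI class name ('unknown' if out of range)."""
    return SNLI_LABELS.get(label_id, "unknown")
```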
|
kseth919/snli-french
|
[
"size_categories:1M<n<10M",
"language:fr",
"nli",
"fnli",
"snli-french",
"region:us"
] |
2023-02-23T06:58:34+00:00
|
{"language": ["fr"], "size_categories": ["1M<n<10M"], "pretty_name": "SNLI-French", "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 60781325, "num_examples": 549367}, {"name": "dev", "num_bytes": 1097461, "num_examples": 9842}, {"name": "test", "num_bytes": 1092127, "num_examples": 9824}], "download_size": 0, "dataset_size": 62970913}, "tags": ["nli", "fnli", "snli-french"]}
|
2023-02-26T06:39:35+00:00
|
c8e8ba7bc87bd371fe25ae6ec8231f080ee53065
|
# Dataset Card for "review_subset_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
seokwoni/review_subset_test
|
[
"region:us"
] |
2023-02-23T07:00:27+00:00
|
{"dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Review", "dtype": "string"}, {"name": "en_review", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 627736, "num_examples": 1000}, {"name": "train", "num_bytes": 627736, "num_examples": 1000}], "download_size": 769994, "dataset_size": 1255472}}
|
2023-02-23T09:41:23+00:00
|
38e97fe0c9f3b01a0d0bb762e9398e32b2269df8
|
# Dataset Card for "instructpix2pix-1000-samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The dataset was created using the code from [this repository](https://github.com/sayakpaul/instruct-pix2pix-dataset).
|
fusing/instructpix2pix-1000-samples
|
[
"region:us"
] |
2023-02-23T07:05:45+00:00
|
{"dataset_info": {"features": [{"name": "input_image", "dtype": "image"}, {"name": "edit_prompt", "dtype": "string"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 416880759.0, "num_examples": 1000}], "download_size": 416899514, "dataset_size": 416880759.0}}
|
2023-02-23T07:08:49+00:00
|
87bbe4e14515fc8110f6ecd992ee07b52b161aed
|
# Dataset Card for "mari-russian-parallel-corpora"
```
@inproceedings{chemyshev2023mari,
  title={Mari-Russian parallel corpora},
  author={Andrei Chemyshev and Gennadii Sabantsev and Nadezhda Timofeeva and Vasilii Semenov},
  year={2023}
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
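
Given the two-column schema in the metadata (`mhr` and `rus` string fields), a small helper can screen rows before use. The field names come from the `dataset_info` above; the helper itself is an illustrative sketch, not part of the dataset's tooling:

```python
# Field names come from the dataset_info schema; the helper itself is illustrative.
REQUIRED_FIELDS = ("mhr", "rus")

def is_valid_pair(row: dict) -> bool:
    """True if the row has non-empty string values for both translation fields."""
    return all(isinstance(row.get(field), str) and row[field].strip()
               for field in REQUIRED_FIELDS)
```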
|
AigizK/mari-russian-parallel-corpora
|
[
"task_categories:translation",
"language:mhr",
"language:ru",
"license:cc-by-4.0",
"region:us"
] |
2023-02-23T07:06:37+00:00
|
{"language": ["mhr", "ru"], "license": "cc-by-4.0", "task_categories": ["translation"], "dataset_info": {"features": [{"name": "mhr", "dtype": "string"}, {"name": "rus", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 79751117, "num_examples": 386707}], "download_size": 39195604, "dataset_size": 79751117}}
|
2023-11-11T17:56:56+00:00
|