sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
70545a46f1abd20a2c2b92398d1dabd34e08afa2
|
WD14-REG-GIRLS
Regularization images generated with waifu-diffusion 1.4 e2 for my own use.
Image resolution: 768x768.
REG.zip: generated with the standard WD1.4 model.
REG2.zip: generated with a fine-tuned WD1.4 model.
|
trprkkk/WD14-REG-GIRLS
|
[
"region:us"
] |
2023-02-03T10:09:04+00:00
|
{}
|
2023-02-03T10:19:21+00:00
|
d9f804bbb93d2f679758ed4c6a03871ab72de112
|
# Dataset Card for "unlabeled_samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
active-learning/unlabeled_samples
|
[
"region:us"
] |
2023-02-03T10:33:41+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 16624881.912901606, "num_examples": 59755}], "download_size": 15263092, "dataset_size": 16624881.912901606}}
|
2023-09-04T20:47:15+00:00
|
31413324465f5f5389b21cc9d94901f8a439e1c9
|
# Dataset Card for "labeled_samples"
This is a labeled dataset of images to train an image classification system.
|
active-learning/labeled_samples
|
[
"region:us"
] |
2023-02-03T10:34:06+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9"}}}}], "splits": [{"name": "train", "num_bytes": 56962.0, "num_examples": 155}], "download_size": 42096, "dataset_size": 56962.0}}
|
2023-03-09T13:01:17+00:00
|
3bb4895c704d4b142056fd4505eead8ca6abe18a
|
# Dataset Card for "test_mnist"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
active-learning/test_mnist
|
[
"region:us"
] |
2023-02-03T10:46:10+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9"}}}}], "splits": [{"name": "test", "num_bytes": 1413525.0, "num_examples": 5000}], "download_size": 1165913, "dataset_size": 1413525.0}}
|
2023-02-03T10:51:50+00:00
|
5221aa25c9d7645f25d426f8ef3850e1ac966c69
|
Amitesh007/hin-eng-translatin-data
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-03T10:53:04+00:00
|
{"license": "apache-2.0"}
|
2023-02-03T10:59:43+00:00
|
|
18a560856826fdcfab5b328829f550ce0f9e5a7a
|
# Dataset Card for KGEditor
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [https://arxiv.org/abs/2301.10405](https://arxiv.org/abs/2301.10405)
- **Leaderboard:** [https://zjunlp.github.io/project/KGE_Editing/](https://zjunlp.github.io/project/KGE_Editing/)
- **Point of Contact:**
### Supported Tasks and Leaderboards
The KGE editing task aims to correct erroneous knowledge stored in a KGE model and to inject new knowledge into it. Accordingly, we design two sub-tasks (EDIT & ADD). In the EDIT sub-task, we edit wrong factual knowledge that is stored in the KG embeddings; in the ADD sub-task, we add brand-new knowledge to the model without re-training the whole model.
### Dataset Summary
We build four datasets for the EDIT and ADD sub-tasks based on two benchmark datasets, FB15k-237 and WN18RR. First, we train KG embedding models with language models. For the EDIT sub-task, we sample hard triples as candidates. For the ADD sub-task, we leverage the original training sets of FB15k-237 and WN18RR to build the pre-train dataset (original pre-train data) and use data from the standard inductive setting, since those triples have not been seen before.
## Dataset Structure
### Data Instances
An example of E-FB15k237:
(Note that we have converted the ID to text for easier understanding)
```
{
"ori": ["Jennifer Connelly", "type of union", "Marriage"],
"cor": ["Stephen Sondheim", "type of union", "Marriage"],
"process": ["[MASK]", "type of union", "Marriage"],
"label": "Jennifer Connelly"
}
```
An example of A-FB15k237:
```
{
"triples": ["Darryl F. Zanuck", "place of death", "Palm Springs"],
"label": "Palm Springs",
"head": 0
}
```
### Data Fields
The data fields are the same among all splits.
For EDIT sub-task:
- ori: the fact in the pre-train dataset.
- cor: corrupted triple.
- process: the triple after replacing the wrong entity with the [MASK] token.
- label: a classification label, the scope is the entire set of entities.
For ADD sub-task:
- triples: the knowledge that needs to be injected into the model.
- label: a classification label, the scope is the entire set of entities.
- head: the head or tail entity that does not appear in pre-train.
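For orientation, below is a minimal loading sketch using the 🤗 `datasets` library. The config name `E-FB15k237` and the split name are assumptions based on the tables in this card, not confirmed identifiers; check the repository files for the exact names.
```python
from datasets import load_dataset

# Config and split names are assumed from this card; verify against the repository.
edit_data = load_dataset("zjunlp/KGEditor", "E-FB15k237", split="train")

example = edit_data[0]
# EDIT sub-task fields: original fact, corrupted triple,
# masked triple to complete, and the gold entity label.
print(example["ori"])      # e.g. ["Jennifer Connelly", "type of union", "Marriage"]
print(example["cor"])      # corrupted triple
print(example["process"])  # ["[MASK]", "type of union", "Marriage"]
print(example["label"])    # entity to restore, e.g. "Jennifer Connelly"
```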
### Data Splits
<table>
<tr>
<th></th>
<th>Pre-trained</th>
<th>Train</th>
<th>Test</th>
<th>L-Test</th>
</tr>
<tr>
<th>E-FB15k237</th>
<td>310,117</td>
<td>3,087</td>
<td>3,087</td>
<td>7,051</td>
</tr>
<tr>
<th>A-FB15k237</th>
<td>215,082</td>
<td>2,000</td>
<td>-</td>
<td>16,872</td>
</tr>
<tr>
<th>E-WN18RR</th>
<td>93,003</td>
<td>1,491</td>
<td>1,401</td>
<td>5,003</td>
</tr>
<tr>
<th>A-WN18RR</th>
<td>69,721</td>
<td>2,000</td>
<td>-</td>
<td>10,000</td>
</tr>
</table>
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
For the EDIT sub-task, our data (E-FB15k237 and E-WN18RR) are based on [FB15k237](https://paperswithcode.com/dataset/fb15k-237) and [WN18RR](https://paperswithcode.com/dataset/wn18rr).
For the ADD sub-task, our data (A-FB15k237 and A-WN18RR) follow the same inductive settings as in this [paper](https://arxiv.org/abs/2010.03496).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{DBLP:journals/corr/abs-2301-10405,
author = {Siyuan Cheng and
Ningyu Zhang and
Bozhong Tian and
Zelin Dai and
Feiyu Xiong and
Wei Guo and
Huajun Chen},
title = {Editing Language Model-based Knowledge Graph Embeddings},
journal = {CoRR},
volume = {abs/2301.10405},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2301.10405},
doi = {10.48550/arXiv.2301.10405},
eprinttype = {arXiv},
eprint = {2301.10405},
timestamp = {Thu, 26 Jan 2023 17:49:16 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2301-10405.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
[More Information Needed]
|
zjunlp/KGEditor
|
[
"size_categories:1K<n<10K",
"language:en",
"arxiv:2301.10405",
"arxiv:2010.03496",
"region:us"
] |
2023-02-03T10:54:11+00:00
|
{"language": ["en"], "size_categories": ["1K<n<10K"]}
|
2023-02-13T03:38:01+00:00
|
e7e24d038128c0a66c236e6ae0757c37eb8c87ca
|
keezin/asda1
|
[
"license:unknown",
"region:us"
] |
2023-02-03T10:55:39+00:00
|
{"license": "unknown"}
|
2023-02-03T10:57:20+00:00
|
|
da1ebbb71aa0d170c439504e80f1c11ea5f78b12
|
# Dataset Card for NST Swedish Speech Synthesis (44 kHz)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NST Swedish Speech Synthesis (44 kHz)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-18/)
### Dataset Summary
The corpus consists of recordings from a single speaker, comprising 5277 segments.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Swedish.
## Dataset Structure
[Needs More Information]
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
(The text below is a partially corrected machine translation from [here](https://www.nb.no/sbfil/dok/nst_taledat_se.pdf).)
The data was developed by Nordisk språkteknologi holding AS (NST), which went bankrupt in 2003.
In 2006, a jointly owned group consisting of the University of Oslo, the University of Bergen, the Norwegian University of Science and Technology, the Language Council and IBM bought the assets of NST, to ensure that the linguistic resources NST had developed were taken care of.
The National Library was commissioned by the Ministry of Culture to build a Norwegian language bank in 2009, and started this work in 2010.
The resources from NST were transferred to the National Library in May 2011, and they are now made available in the Language Bank, initially without further processing.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[The Norwegian Language Bank](https://www.nb.no/sprakbanken/en/sprakbanken/)
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
[Needs More Information]
### Contributions
[Needs More Information]
|
jimregan/nst_swedish_tts
|
[
"task_categories:text-to-speech",
"language:sv",
"license:cc0-1.0",
"region:us"
] |
2023-02-03T11:02:10+00:00
|
{"language": ["sv"], "license": "cc0-1.0", "task_categories": ["text-to-speech"]}
|
2023-02-03T12:05:36+00:00
|
57c902446f3660852c7e5dc261e42d09b997004a
|
mikolaj-p/MOCKS-test
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-02-03T11:21:20+00:00
|
{"license": "cc-by-4.0"}
|
2023-10-27T13:20:12+00:00
|
|
8ec494c5c548b5e9f0cda96c14a4175ab4277286
|
# Dataset Card for Writing Prompt Augmentation Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** Writing Prompt Augmentation Dataset was built to expand samples of [FAIR Writing Prompt Dataset](https://www.kaggle.com/datasets/ratthachat/writing-prompts)
- **Leaderboard:**
- **Point of Contact:** [Fabricio Braz](mailto:[email protected])
### Dataset Summary
The Writing Prompt Augmentation Dataset was built to expand the samples of the [FAIR Writing Prompt Dataset](https://www.kaggle.com/datasets/ratthachat/writing-prompts) for feeding [Open Assistant](https://github.com/LAION-AI/Open-Assistant).
### Languages
English
## Dataset Structure
### Data Instances
```javascript
{"splitLineIndex":36888,
"text":"User: write me a story about: Most responses on here have a twist , and all of them are fictional . Show us a piece of your actual life ; let the reader experience you ., where the middle of the story is about: The boss sits on my wall and asks me where a urgent report is up. I stare at the screen, confused. -> Rosey: Sure, here\'s a story about: Most responses on here have a twist , and all of them are fictional . Show us a piece of your actual life ; let the reader experience you ., where the middle of the story is about: The boss sits on my wall and asks me where a urgent report is up. I stare at the screen, confused.:\\nMy bedroom was dark , but the chatter of distant birds outside of the heavily curtained window suggested a new day was dawning . I stayed in bed , one with the shadows , and stared at the indistinct blackness that filled the ceiling . A trickle of sickly grey light would soon spring from the edges of curtain - the heavy fabric could only do so much to dam the brightness of day . \\n \\n But , for a few more minutes , it was still dark . \\n \\n And I was staring at it - staring at the darkness . Examining it . \\n \\n What was I looking for ? Why , answers of course . Why does anyone stare into the darkness of a bedroom ceiling ? I was seeking answers . \\n \\n Why do it ? I asked myself . Why go to work ? You \'re good at your job when you can be bothered to do it , but how often does that happen ? How often do you really put the effort in ? \\n \\n Can you even remember enjoying it ? \\n \\n Can you remember when you were happy ? \\n \\n I had been too deep in my hunt for answers to notice that the homogenous darkness had given way to a bluish grey world of shapes and objects . My feet swung out of bed and I sat up in the early morning coldness . \\n \\n When *was* I happy last ? \\n \\n I stood up and started my day . \\n \\n * * * \\n \\n The kitchen was filling with light , the muted greys and blues of morning had arrived first , but each minute that passed promised the arrival of the full colours of day . \\n \\n The spoon clinked in the bowl as I scooped up some cereal . I wore only what I had to bed : boxer shirts and a t-shirt . The winter cold does n\'t bother you when you \'ve stopped caring . \\n \\n *When* was I happy ? \\n \\n The question was echoing in my head . A great puzzle . A mystery of the ages . \\n \\n I gulped the last of my morning coffee and went to the bathroom . \\n \\n * * * \\n \\n The plug hole held no answers , no matter how long I stared . \\n \\n How long had I been staring ? \\n \\n I turned the shower off and stepped out into the sterile tiled whiteness . A lifetime of habits drew me to the basin and , without thought , I started to brush my teeth . My mind was still locked , frozen , on the question . \\n \\n When was I happy ? \\n \\n As I wondered , day continued it \'s steady march outside . \\n \\n The bathroom was clean and white , morning light filtered in through a frosted window . The birds were loud now , but I could hardly hear them over the whir of the steam sucking fan above me . \\n \\n Day had officially arrived . \\n \\n Perhaps I am asking myself the wrong question , I thought . \\n \\n The man in the mirror bared his teeth and scrubbed some more , white foam dripped in blobs about the basin . \\n \\n *What* makes me happy ? \\n \\n * * * \\n \\n I had slipped into my work clothes : business shirt , dress pants , leather shoes . My prisoners garb . 
As I pulled the items on they weighed me down , each a colossal burden . At least I did n\'t wear a tie any more . \\n \\n I had given up on ties , and the rest of my uniform wore the scars of neglect : the shirt was unironed , the pants were thin at the knees and the stitching had come loose at the bottoms , the shoes were beaten , scratched , the soles and tops barely held their bond . \\n \\n This is the business attire of a man who has stopped caring . \\n \\n No one at work seemed to mind . \\n \\n I walked to the front door of my house , shuffling without enthusiasm , without joy for the new day that lay on the other side . \\n \\n I grabbed the handle . \\n \\n What makes me happy ? \\n \\n * * * \\n \\n Another request , another complaint , and my list of work grew longer . It only ever grew longer these days . I had important calls to make , issues to resolve , reports to write - but all I did , for the most part , was stare . \\n \\n Stare at my screen . At my hands . At nothing . \\n \\n The questions I had been asking in the darkness and through-out my house during my morning preparations were not new . I had been thinking on them for a while . I did not know for how long . \\n \\n Weeks ? No . Months . \\n \\n Still no answers . \\n \\n What I do know is : I am *not* happy . \\n \\n The boss leaned on my cubicle wall and asked me where an urgent report , a report that had been urgent for weeks , was up to . The bullshit I served sated his questions and as he walked away I sighed and stared at my screen . \\n \\n To my surprise the report was there . I had been working on it absent-mindedly . Try as I might I still did my job , at least to a degree . \\n \\n Manager for a division of one . Writer of reports and promiser of game changing applications . Mr IT . \\n \\n Well ... at one time I had been Mr IT . Once , when I had been passionate , had had a fire in my belly that churned the engine of my rising star . A career in IT . I had wanted this . \\n \\n Had n\'t I ? \\n \\n Then , why are n\'t I happy ? \\n \\n Because , you did n\'t want this . You never did . You stepped out of high school and fell into it . You \'re good with computers - at least , you were - but they never made you happy . You liked the challenge , sure , but you did it because you had to pay the bills and you had to leave your parents house at some point . \\n \\n Then it was a matter of you being lazy and gutless . Work is a hard habit to break , especially when people keep throwing money at you . You \'d just go in , day after day . Week after week . Month after ... \\n \\n School was almost a decade away and you have n\'t done half of what you wanted . Remember writing ? You were going to write , remember ? You \'ve done some shorts over the years , but you wanted more . You wanted to type those two words . After months and months , you \'d type those two words and you \'d have accomplished sonething . The End . And your book would be done - who cares if it got published . Who cares if no one but you ever saw it . \\n \\n You \'d have written something . You \'d have accomplished something . \\n \\n You \'d be ... \\n \\n And there it is . The answer . \\n \\n Ten years of wasted time - ten years of excuses and meeting other people \'s expectations . Ten years of syaing you \'ll get around to it . \\n \\n Ten years of regret . \\n \\n The report was done . So was I . \\n \\n How do I do this ? Do I walk in and hand in the report and a resignation . No . I ca n\'t do that . 
These people have been good to me . I need to finish up some of the jobs . Need to get them ready for my abscence . \\n \\n Or am I making excuses ? \\n \\n My screen and my work came into focus . I knew what I needed to do , could feel , almost by instinct , what job \'s were my biggest priorities . A spark lit in my gut and passion trickled through my veins . \\n \\n I was n\'t turning back into Mr IT - could in fact , never be that man again . \\n \\n But I knew what made me happy . Knew how to get there ... \\n \\n ... and could feel it there , just on my horizon ."}
```
### Data Fields
* splitLineIndex: the line index of the sample in the original data source.
* text: the actual prompt/story text.
### Data Splits
|split|samples|
|--|--|
|train|13232|
|valid|5168|
|test|5700|
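Below is a minimal loading sketch using the 🤗 `datasets` library. The config name `0.0.1` comes from the repository metadata; splitting the `text` field on the `-> Rosey:` marker is an assumption based on the example instance shown above.
```python
from datasets import load_dataset

# "0.0.1" is the config name listed in the repository metadata; adjust if needed.
ds = load_dataset("fabraz/writingPromptAug", "0.0.1", split="train")

sample = ds[0]
print(sample["splitLineIndex"])

# The text field concatenates the user prompt and the generated story.
# Splitting on the "-> Rosey:" marker follows the example above and may need adjusting.
prompt, _, story = sample["text"].partition("-> Rosey:")
print(prompt.strip()[:200])
print(story.strip()[:200])
```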
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
As mentioned, this dataset is an extension of the FAIR writing prompt dataset. The steps employed to create it are documented in the Jupyter notebook included in the repository files.
#### Who are the source language producers?
FAIR
### Personal and Sensitive Information
The data comes with NSFW samples. Be aware!
## Additional Information
### Licensing Information
Writing Prompt Augmentation Dataset is licensed under MIT.
### Citation Information
The underlying writing prompts and stories come from Hierarchical Neural Story Generation (Fan et al., 2018): https://arxiv.org/abs/1805.04833
### Contributions
Thanks to Huu Nguyen (gh:ontocord)!
|
fabraz/writingPromptAug
|
[
"task_categories:text-generation",
"language:en",
"license:mit",
"writingPrompt",
"arxiv:1805.04833",
"region:us"
] |
2023-02-03T11:53:02+00:00
|
{"language": ["en"], "license": "mit", "task_categories": ["text-generation"], "tags": ["writingPrompt"], "dataset_info": [{"config_name": "0.0.1", "features": [{"name": "splitLineIndex", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 13232}, {"name": "valid", "num_examples": 5168}, {"name": "test", "num_examples": 5700}]}]}
|
2023-10-14T18:48:35+00:00
|
5cd4155f0fbd4c3084cc56a80b87e67b4cafee02
|
# Dataset Card for "midjourney"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
danielshemesh/midjourney
|
[
"region:us"
] |
2023-02-03T12:10:44+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1134997116.24, "num_examples": 4866}], "download_size": 702442852, "dataset_size": 1134997116.24}}
|
2023-02-03T12:45:25+00:00
|
dee67fa18f6908b202c85039d7c53bd581d94a04
|
# Dataset Card for "mini_gesture_pred"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Jsevisal/mini_gesture_pred
|
[
"region:us"
] |
2023-02-03T12:26:39+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "gestures", "sequence": "string"}, {"name": "label", "sequence": {"class_label": {"names": {"0": "B-BUT", "1": "I-BUT", "2": "B-CALM_DOWN", "3": "I-CALM_DOWN", "4": "B-COME_ON", "5": "I-COME_ON", "6": "B-EMPHATIC", "7": "I-EMPHATIC", "8": "B-ENTHUSIASTIC", "9": "I-ENTHUSIASTIC", "10": "B-EXPLAIN", "11": "I-EXPLAIN", "12": "B-FRONT", "13": "I-FRONT", "14": "B-GREET", "15": "I-GREET", "16": "B-ITERATE", "17": "I-ITERATE", "18": "B-NEUTRAL", "19": "I-NEUTRAL", "20": "B-NO", "21": "I-NO", "22": "B-NO_GESTURE", "23": "I-NO_GESTURE", "24": "B-OTHER_PEER", "25": "I-OTHER_PEER", "26": "B-PLEASE", "27": "I-PLEASE", "28": "B-QUESTION", "29": "I-QUESTION", "30": "B-SELF", "31": "I-SELF", "32": "B-SORRY", "33": "I-SORRY", "34": "B-THANKS", "35": "I-THANKS", "36": "B-THINKING", "37": "I-THINKING", "38": "B-THIRD_PERSON", "39": "I-THIRD_PERSON", "40": "B-YES", "41": "I-YES"}}}}], "splits": [{"name": "train", "num_bytes": 39101, "num_examples": 126}, {"name": "test", "num_bytes": 39857.0796460177, "num_examples": 130}], "download_size": 20541, "dataset_size": 78958.0796460177}}
|
2023-02-03T12:26:52+00:00
|
3a9c9361b02e415e9c85a70edf216b9b8330c8fb
|
# Dataset Card for "sidewalk-imagery"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
byayla/sidewalk-imagery
|
[
"region:us"
] |
2023-02-03T12:30:14+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3138386.0, "num_examples": 10}], "download_size": 3139600, "dataset_size": 3138386.0}}
|
2023-02-03T12:30:36+00:00
|
853e86d9171e3d30fb787ad048a9473e3514ea88
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
tanaysk/news
|
[
"region:us"
] |
2023-02-03T12:32:25+00:00
|
{}
|
2023-02-03T12:35:53+00:00
|
c02d692fbfa29a1d63f1ca2c15f0f1704e6f7d37
|
### Albanian dataset corpus
It consists of 36M phrases/articles collected from the internet.
To cite:
```
@misc{https://doi.org/10.57967/hf/0324,
doi = {10.57967/HF/0324},
url = {https://huggingface.co/datasets/n-iv/sq},
author = {{Nullius in verba}},
title = {sq},
publisher = {Hugging Face},
year = {2023}
}
```
|
n-iv/sq
|
[
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:sq",
"license:openrail",
"doi:10.57967/hf/0324",
"region:us"
] |
2023-02-03T12:44:23+00:00
|
{"language": ["sq"], "license": "openrail", "size_categories": ["10M<n<100M"], "task_categories": ["text-generation"], "pretty_name": "SQ"}
|
2023-02-03T13:45:20+00:00
|
c8d8928f7ba9ab7ef89abf4b24ff071007127732
|
# Dataset Card for SemEval2018Task7
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://lipn.univ-paris13.fr/~gabor/semeval2018task7/](https://lipn.univ-paris13.fr/~gabor/semeval2018task7/)
- **Repository:** [https://github.com/gkata/SemEval2018Task7/tree/testing](https://github.com/gkata/SemEval2018Task7/tree/testing)
- **Paper:** [SemEval-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers](https://aclanthology.org/S18-1111/)
- **Leaderboard:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
- **Size of downloaded dataset files:** 1.93 MB
### Dataset Summary
SemEval2018Task7 is a dataset for semantic relation extraction and classification in scientific papers.
The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.
The three subtasks are:
- Subtask 1.1: Relation classification on clean data
  - In the training data, semantic relations are manually annotated between entities.
  - In the test data, only entity annotations and unlabeled relation instances are given.
  - Given a scientific publication, the task is to predict the semantic relation between the entities.
- Subtask 1.2: Relation classification on noisy data
  - Entity occurrences are automatically annotated in both the training and the test data.
  - The task is to predict the semantic relation between the entities.
- Subtask 2: Metrics for the extraction and classification scenario
  - Evaluation of relation extraction
  - Evaluation of relation classification
The relation types are USAGE, RESULT, MODEL-FEATURE, PART_WHOLE, TOPIC, and COMPARE.
The following example shows a text snippet with the information provided in the test data:
Korean, a \<entity id="H01-1041.10">verb final language\</entity> with \<entity id="H01-1041.11">overt case markers\</entity> (...)
- A relation instance is identified by the unique identifiers of the entities in the pair, e.g. (H01-1041.10, H01-1041.11).
- The information to be predicted is the relation class label: MODEL-FEATURE(H01-1041.10, H01-1041.11).
For details, see the paper https://aclanthology.org/S18-1111/.
### Supported Tasks and Leaderboards
- **Tasks:** Relation extraction and classification in scientific papers
- **Leaderboards:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### subtask_1.1
- **Size of downloaded dataset files:** 714 KB
An example of 'train' looks as follows:
```json
{
"id": "H01-1041",
"title": "'Interlingua-Based Broad-Coverage Korean-to-English Translation in CCLING'",
"abstract": 'At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory) . The CCLINC Korean-to-English translation system consists of two core modules , language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame . The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers , relatively free word order , and frequent omissions of arguments ). (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language . (iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars . Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document.
"entities": [{'id': 'H01-1041.1', 'char_start': 54, 'char_end': 97},
{'id': 'H01-1041.2', 'char_start': 99, 'char_end': 161},
{'id': 'H01-1041.3', 'char_start': 169, 'char_end': 211},
{'id': 'H01-1041.4', 'char_start': 229, 'char_end': 240},
{'id': 'H01-1041.5', 'char_start': 244, 'char_end': 288},
{'id': 'H01-1041.6', 'char_start': 304, 'char_end': 342},
{'id': 'H01-1041.7', 'char_start': 353, 'char_end': 366},
{'id': 'H01-1041.8', 'char_start': 431, 'char_end': 437},
{'id': 'H01-1041.9', 'char_start': 442, 'char_end': 447},
{'id': 'H01-1041.10', 'char_start': 452, 'char_end': 470},
{'id': 'H01-1041.11', 'char_start': 477, 'char_end': 494},
{'id': 'H01-1041.12', 'char_start': 509, 'char_end': 523},
{'id': 'H01-1041.13', 'char_start': 553, 'char_end': 561},
{'id': 'H01-1041.14', 'char_start': 584, 'char_end': 594},
{'id': 'H01-1041.15', 'char_start': 600, 'char_end': 624},
{'id': 'H01-1041.16', 'char_start': 639, 'char_end': 659},
{'id': 'H01-1041.17', 'char_start': 668, 'char_end': 682},
{'id': 'H01-1041.18', 'char_start': 692, 'char_end': 715},
{'id': 'H01-1041.19', 'char_start': 736, 'char_end': 742},
{'id': 'H01-1041.20', 'char_start': 748, 'char_end': 796},
{'id': 'H01-1041.21', 'char_start': 823, 'char_end': 847},
{'id': 'H01-1041.22', 'char_start': 918, 'char_end': 935},
{'id': 'H01-1041.23', 'char_start': 981, 'char_end': 997}],
"relations": [{'label': 3, 'arg1': 'H01-1041.3', 'arg2': 'H01-1041.4', 'reverse': True},
{'label': 0, 'arg1': 'H01-1041.8', 'arg2': 'H01-1041.9', 'reverse': False},
{'label': 2, 'arg1': 'H01-1041.10', 'arg2': 'H01-1041.11', 'reverse': True},
{'label': 0, 'arg1': 'H01-1041.14', 'arg2': 'H01-1041.15', 'reverse': True}]
}
```
#### Subtask_1.2
- **Size of downloaded dataset files:** 1.00 MB
An example of 'train' looks as follows:
```json
{'id': 'L08-1450',
'title': '\nA LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.\n',
'abstract': 'Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguousdata because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is adata model and an encoding scheme based on LAF/GrAF ( Ide and Romary, 2006 ; Ide and Suderman, 2007 ) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs ( Brants et al., 2002 ) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.\n',
'entities': [{'id': 'L08-1450.4', 'char_start': 0, 'char_end': 3},
{'id': 'L08-1450.5', 'char_start': 5, 'char_end': 10},
{'id': 'L08-1450.6', 'char_start': 25, 'char_end': 31},
{'id': 'L08-1450.7', 'char_start': 61, 'char_end': 64},
{'id': 'L08-1450.8', 'char_start': 66, 'char_end': 72},
{'id': 'L08-1450.9', 'char_start': 82, 'char_end': 85},
{'id': 'L08-1450.10', 'char_start': 92, 'char_end': 100},
{'id': 'L08-1450.11', 'char_start': 102, 'char_end': 110},
{'id': 'L08-1450.12', 'char_start': 128, 'char_end': 142},
{'id': 'L08-1450.13', 'char_start': 181, 'char_end': 194},
{'id': 'L08-1450.14', 'char_start': 208, 'char_end': 211},
{'id': 'L08-1450.15', 'char_start': 255, 'char_end': 264},
{'id': 'L08-1450.16', 'char_start': 282, 'char_end': 286},
{'id': 'L08-1450.17', 'char_start': 408, 'char_end': 420},
{'id': 'L08-1450.18', 'char_start': 425, 'char_end': 443},
{'id': 'L08-1450.19', 'char_start': 450, 'char_end': 453},
{'id': 'L08-1450.20', 'char_start': 455, 'char_end': 459},
{'id': 'L08-1450.21', 'char_start': 481, 'char_end': 484},
{'id': 'L08-1450.22', 'char_start': 486, 'char_end': 490},
{'id': 'L08-1450.23', 'char_start': 508, 'char_end': 513},
{'id': 'L08-1450.24', 'char_start': 515, 'char_end': 519},
{'id': 'L08-1450.25', 'char_start': 535, 'char_end': 537},
{'id': 'L08-1450.26', 'char_start': 559, 'char_end': 561},
{'id': 'L08-1450.27', 'char_start': 591, 'char_end': 598},
{'id': 'L08-1450.28', 'char_start': 611, 'char_end': 619},
{'id': 'L08-1450.29', 'char_start': 649, 'char_end': 663},
{'id': 'L08-1450.30', 'char_start': 687, 'char_end': 707},
{'id': 'L08-1450.31', 'char_start': 722, 'char_end': 726},
{'id': 'L08-1450.32', 'char_start': 801, 'char_end': 808},
{'id': 'L08-1450.33', 'char_start': 841, 'char_end': 845},
{'id': 'L08-1450.34', 'char_start': 847, 'char_end': 852},
{'id': 'L08-1450.35', 'char_start': 857, 'char_end': 864},
{'id': 'L08-1450.36', 'char_start': 866, 'char_end': 872},
{'id': 'L08-1450.37', 'char_start': 902, 'char_end': 910},
{'id': 'L08-1450.1', 'char_start': 12, 'char_end': 16},
{'id': 'L08-1450.2', 'char_start': 27, 'char_end': 32},
{'id': 'L08-1450.3', 'char_start': 72, 'char_end': 80}],
'relations': [{'label': 1,
'arg1': 'L08-1450.12',
'arg2': 'L08-1450.13',
'reverse': False},
{'label': 5, 'arg1': 'L08-1450.17', 'arg2': 'L08-1450.18', 'reverse': False},
{'label': 1, 'arg1': 'L08-1450.28', 'arg2': 'L08-1450.29', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.30', 'arg2': 'L08-1450.32', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.34', 'arg2': 'L08-1450.35', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.36', 'arg2': 'L08-1450.37', 'reverse': True}]}
```
### Data Fields
#### subtask_1_1
- `id`: the instance id of this abstract, a `string` feature.
- `title`: the title of this abstract, a `string` feature.
- `abstract`: the abstract of the scientific paper, a `string` feature.
- `entities`: the annotated key phrases of this abstract, a `list` of entity dictionaries with:
  - `id`: the entity id, a `string` feature.
  - `char_start`: the 0-based character index where the entity starts, an `int` feature.
  - `char_end`: the 0-based character index where the entity ends, an `int` feature.
- `relations`: the annotated relations between key phrases of this abstract, a `list` of relation dictionaries with:
  - `label`: the relation class, a classification label.
  - `arg1`: the entity id of the first key phrase, a `string` feature.
  - `arg2`: the entity id of the related key phrase, a `string` feature.
  - `reverse`: `True` if the relation holds in the reverse direction (from `arg2` to `arg1`), otherwise `False`, a `bool` feature.
```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
```
#### subtask_1_2
- `id`: the instance id of this abstract, a `string` feature.
- `title`: the title of this abstract, a `string` feature.
- `abstract`: the abstract of the scientific paper, a `string` feature.
- `entities`: the automatically annotated key phrases of this abstract, a `list` of entity dictionaries with:
  - `id`: the entity id, a `string` feature.
  - `char_start`: the 0-based character index where the entity starts, an `int` feature.
  - `char_end`: the 0-based character index where the entity ends, an `int` feature.
- `relations`: the annotated relations between key phrases of this abstract, a `list` of relation dictionaries with:
  - `label`: the relation class, a classification label.
  - `arg1`: the entity id of the first key phrase, a `string` feature.
  - `arg2`: the entity id of the related key phrase, a `string` feature.
  - `reverse`: `True` if the relation holds in the reverse direction (from `arg2` to `arg1`), otherwise `False`, a `bool` feature.
```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
```
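To make these fields concrete, here is a minimal loading sketch with the 🤗 `datasets` library. The config name `subtask_1_1` mirrors the headings above but is an assumption, and the per-example structure (lists of dicts) follows the printed examples; verify both against the repository.
```python
from datasets import load_dataset

# Config name assumed from the section headings above; verify against the repository.
ds = load_dataset("DFKI-SLT/SemEval2018_Task7", "subtask_1_1", split="train")

# Integer label -> relation name mapping, as given in the card above.
RELATIONS = ["", "USAGE", "RESULT", "MODEL-FEATURE", "PART_WHOLE", "TOPIC", "COMPARE"]

example = ds[0]

# The examples above show entities/relations as lists of dicts; if the loader
# returns dicts of lists instead, adapt the iteration accordingly.
for rel in example["relations"]:
    print(RELATIONS[rel["label"]], rel["arg1"], "->", rel["arg2"], "reverse:", rel["reverse"])
```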
### Data Splits
| | | Train| Test |
|-------------|-----------|------|------|
| subtask_1_1 | text | 2807 | 3326 |
| | relations | 1228 | 1248 |
| subtask_1_2 | text | 1196 | 1193 |
| | relations | 335 | 355 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{gabor-etal-2018-semeval,
title = "{S}em{E}val-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers",
author = {G{\'a}bor, Kata and
Buscaldi, Davide and
Schumann, Anne-Kathrin and
QasemiZadeh, Behrang and
Zargayouna, Ha{\"\i}fa and
Charnois, Thierry},
booktitle = "Proceedings of the 12th International Workshop on Semantic Evaluation",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S18-1111",
doi = "10.18653/v1/S18-1111",
pages = "679--688",
abstract = "This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",
}
```
### Contributions
Thanks to [@basvoju](https://github.com/basvoju) for adding this dataset.
|
DFKI-SLT/SemEval2018_Task7
|
[
"task_categories:text-classification",
"task_ids:entity-linking-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"Relation Classification",
"Relation extraction",
"Scientific papers",
"Research papers",
"region:us"
] |
2023-02-03T13:05:40+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["entity-linking-classification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Semeval2018Task7 is a dataset that describes the Semantic Relation Extraction and Classification in Scientific Papers", "tags": ["Relation Classification", "Relation extraction", "Scientific papers", "Research papers"], "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "text-classification", "task_id": "entity_extraction"}]}
|
2023-02-20T10:18:51+00:00
|
678221395e7e2eab672e087ae106a9641afa7877
|
# Dataset Card for "male-nurse"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
abhijit1247/male-nurse
|
[
"region:us"
] |
2023-02-03T13:26:11+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 172326570.958, "num_examples": 5066}], "download_size": 264342051, "dataset_size": 172326570.958}}
|
2023-02-03T13:26:32+00:00
|
3d66a347a0d58892b3837d1f4f880096f4861070
|
huggingface-projects/DELETE-bot-fight-data
|
[
"license:mit",
"region:us"
] |
2023-02-03T15:13:55+00:00
|
{"license": "mit"}
|
2023-02-03T15:16:25+00:00
|
|
9ce779c7f8d1538e5f045e246a8d9b8b41f6242a
|
huggingface-projects/DELETE-temp-match-results
|
[
"license:mit",
"region:us"
] |
2023-02-03T15:14:13+00:00
|
{"license": "mit"}
|
2023-02-03T15:18:27+00:00
|
|
770deb77376b52215acad28fb4a07c9decb3c5ef
|
huggingface-projects/DELETE-filter-bad-models
|
[
"license:mit",
"region:us"
] |
2023-02-03T15:14:43+00:00
|
{"license": "mit"}
|
2023-02-03T15:17:12+00:00
|
|
1af66ea62fbdf7ee04c50cf7c3026e49d5ef47b8
|
# Dataset Card for "Caltech101_not_background_test_facebook_opt_2.7b_Attributes_Caption_ns_5647_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_not_background_test_facebook_opt_2.7b_Attributes_Caption_ns_5647_random
|
[
"region:us"
] |
2023-02-03T15:27:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 85893257.125, "num_examples": 5647}, {"name": "fewshot_3_bs_16", "num_bytes": 88897400.125, "num_examples": 5647}], "download_size": 168047048, "dataset_size": 174790657.25}}
|
2023-02-04T00:20:39+00:00
|
ddf61ba8ce34484a4731a605bb0b2383bc7cb539
|
freddyaboulton/gradio-image-urls
|
[
"license:mit",
"region:us"
] |
2023-02-03T15:28:36+00:00
|
{"license": "mit"}
|
2023-02-03T15:28:36+00:00
|
|
a60335438ce17a2bbdd02c27997eba90188607b8
|
# Dataset Card for "Caltech101_not_background_test_facebook_opt_2.7b_Visclues_ns_5647_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_not_background_test_facebook_opt_2.7b_Visclues_ns_5647_random
|
[
"region:us"
] |
2023-02-03T15:48:40+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 86816245.125, "num_examples": 5647}, {"name": "fewshot_3_bs_16", "num_bytes": 90734679.125, "num_examples": 5647}], "download_size": 169650032, "dataset_size": 177550924.25}}
|
2023-02-03T17:01:01+00:00
|
a080ae437385dc97c56c7823b8f3ea9d500baefc
|
# Dataset Card for "Caltech101_with_background_test_facebook_opt_350m_Attributes_Caption_ns_6084_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_350m_Attributes_Caption_ns_6084_random
|
[
"region:us"
] |
2023-02-03T17:06:56+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 102753751.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 105997404.5, "num_examples": 6084}], "download_size": 201172569, "dataset_size": 208751156.0}}
|
2023-02-03T18:30:16+00:00
|
0a38c4a5b0bf3bc202c5e9b7d6b6d4993f33261d
|
nateraw/fuego-20230203-121124-88b549
|
[
"fuego",
"region:us"
] |
2023-02-03T17:11:25+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230203-121124-88b549", "status": "running", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230203-121124-88b549", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-03T17:14:06+00:00
|
|
eed65c314469b514190d8fb2ff7ee41e448774d4
|
# Dataset Card for "jupyter-parsed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bigcode/jupyter-parsed
|
[
"region:us"
] |
2023-02-03T17:16:23+00:00
|
{"dataset_info": {"features": [{"name": "hexsha", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "ext", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_repo_head_hexsha", "dtype": "string"}, {"name": "max_stars_repo_licenses", "sequence": "string"}, {"name": "max_stars_count", "dtype": "int64"}, {"name": "max_stars_repo_stars_event_min_datetime", "dtype": "string"}, {"name": "max_stars_repo_stars_event_max_datetime", "dtype": "string"}, {"name": "max_issues_repo_path", "dtype": "string"}, {"name": "max_issues_repo_name", "dtype": "string"}, {"name": "max_issues_repo_head_hexsha", "dtype": "string"}, {"name": "max_issues_repo_licenses", "sequence": "string"}, {"name": "max_issues_count", "dtype": "int64"}, {"name": "max_issues_repo_issues_event_min_datetime", "dtype": "string"}, {"name": "max_issues_repo_issues_event_max_datetime", "dtype": "string"}, {"name": "max_forks_repo_path", "dtype": "string"}, {"name": "max_forks_repo_name", "dtype": "string"}, {"name": "max_forks_repo_head_hexsha", "dtype": "string"}, {"name": "max_forks_repo_licenses", "sequence": "string"}, {"name": "max_forks_count", "dtype": "int64"}, {"name": "max_forks_repo_forks_event_min_datetime", "dtype": "string"}, {"name": "max_forks_repo_forks_event_max_datetime", "dtype": "string"}, {"name": "avg_line_length", "dtype": "float64"}, {"name": "max_line_length", "dtype": "int64"}, {"name": "alphanum_fraction", "dtype": "float64"}, {"name": "cells", "sequence": {"sequence": {"sequence": "string"}}}, {"name": "cell_types", "sequence": "string"}, {"name": "cell_type_groups", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 22910808665, "num_examples": 1459454}], "download_size": 9418947545, "dataset_size": 22910808665}}
|
2023-02-21T19:16:28+00:00
|
45cb993640bfa09a170a7e65617a0aeca467264f
|
# Dataset Card for "Caltech101_with_background_test_facebook_opt_1.3b_Attributes_Caption_ns_6084_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_1.3b_Attributes_Caption_ns_6084_random
|
[
"region:us"
] |
2023-02-03T17:19:07+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 102753684.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 105998445.5, "num_examples": 6084}], "download_size": 201178406, "dataset_size": 208752130.0}}
|
2023-02-03T18:51:48+00:00
|
59943ccb35ff60fa6595efa00dd0adfc72754964
|
nateraw/fuego-20230203-171955-25ab48
|
[
"fuego",
"region:us"
] |
2023-02-03T17:19:56+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230203-171955-25ab48", "status": "done", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230203-171955-25ab48", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-03T17:27:14+00:00
|
|
0bea609a4e65dfba20ea1fb867c9e456a8990add
|
nateraw/fuego-20230203-172144-721c96
|
[
"fuego",
"region:us"
] |
2023-02-03T17:21:45+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230203-172144-721c96", "status": "done", "script": "run_glue.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230203-172144-721c96", "space_hardware": "t4-small", "github_repo_id": "huggingface/transformers", "github_repo_branch": "main", "github_repo_sha": "fb13a7df95f4d378cbd80e2f0014d67d26eb2778"}}
|
2023-02-03T17:30:12+00:00
|
|
2ac02aff4edaccfb92347737ce12f9c817fa858f
|
# Dataset Card for "Caltech101_with_background_test_facebook_opt_2.7b_Attributes_Caption_ns_6084_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_2.7b_Attributes_Caption_ns_6084_random
|
[
"region:us"
] |
2023-02-03T17:36:33+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 102753816.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 105998509.5, "num_examples": 6084}], "download_size": 201174504, "dataset_size": 208752326.0}}
|
2023-02-03T19:25:20+00:00
|
a1d84861fa24399128df172bf06c5864bf628021
|
# Dataset Card for "Caltech101_with_background_test_facebook_opt_350m_Visclues_ns_6084_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_350m_Visclues_ns_6084_random
|
[
"region:us"
] |
2023-02-03T17:43:44+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 103748648.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 107977501.5, "num_examples": 6084}], "download_size": 202931853, "dataset_size": 211726150.0}}
|
2023-02-03T19:38:09+00:00
|
546e9e69a42cc9e5ca135951d23ca3ded2e6ab50
|
# Dataset Card for "Caltech101_with_background_test_facebook_opt_1.3b_Visclues_ns_6084_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_1.3b_Visclues_ns_6084_random
|
[
"region:us"
] |
2023-02-03T17:57:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 103748153.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 107977706.5, "num_examples": 6084}], "download_size": 202932897, "dataset_size": 211725860.0}}
|
2023-02-03T20:05:16+00:00
|
3093255cb68d7b753af522a61fc306e6bc8ed4c0
|
# Dataset Card for "SAT-4"
## Dataset Description
- **Paper** [Deepsat: a learning framework for satellite imagery](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
- **Split** Test
### Split Information
This HuggingFace dataset repository contains just the 'Test' split.
### Licensing Information
Public Domain
## Citation Information
[https://dl.acm.org/doi/pdf/10.1145/2820783.2820816](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
```
@inproceedings{basu2015deepsat,
title = {Deepsat: a learning framework for satellite imagery},
author = {Basu, Saikat and Ganguly, Sangram and Mukhopadhyay, Supratik and DiBiano, Robert and Karki, Manohar and Nemani, Ramakrishna},
year = 2015,
booktitle = {Proceedings of the 23rd SIGSPATIAL international conference on advances in geographic information systems},
pages = {1--10}
}
```
|
jonathan-roberts1/SAT-4
|
[
"license:other",
"region:us"
] |
2023-02-03T18:12:58+00:00
|
{"license": "other", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "barren land", "1": "grassland", "2": "other", "3": "trees"}}}}], "splits": [{"name": "train", "num_bytes": 150589308, "num_examples": 100000}], "download_size": 177776551, "dataset_size": 150589308}}
|
2023-04-03T15:17:18+00:00
|
b069f361f2b8bf2ead4e071a19999cae7d3fe4ca
|
# Dataset Card for "SAT-6"
## Dataset Description
- **Paper** [Deepsat: a learning framework for satellite imagery](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
- **Split** Test
### Split Information
This HuggingFace dataset repository contains just the 'Test' split.
### Licensing Information
Public Domain
## Citation Information
[https://dl.acm.org/doi/pdf/10.1145/2820783.2820816](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
```
@inproceedings{basu2015deepsat,
title = {Deepsat: a learning framework for satellite imagery},
author = {Basu, Saikat and Ganguly, Sangram and Mukhopadhyay, Supratik and DiBiano, Robert and Karki, Manohar and Nemani, Ramakrishna},
year = 2015,
booktitle = {Proceedings of the 23rd SIGSPATIAL international conference on advances in geographic information systems},
pages = {1--10}
}
```
|
jonathan-roberts1/SAT-6
|
[
"license:other",
"region:us"
] |
2023-02-03T18:13:10+00:00
|
{"license": "other", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "barren land", "1": "building", "2": "grassland", "3": "road", "4": "trees", "5": "water"}}}}], "splits": [{"name": "train", "num_bytes": 120518797, "num_examples": 81000}], "download_size": 142842069, "dataset_size": 120518797}}
|
2023-04-03T15:17:41+00:00
|
5a00579ced8727fb4ee033e414618c4b770e123d
|
# Dataset Card for "Caltech101_with_background_test_facebook_opt_2.7b_Visclues_ns_6084_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_2.7b_Visclues_ns_6084_random
|
[
"region:us"
] |
2023-02-03T18:20:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 103748376.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 107978091.5, "num_examples": 6084}], "download_size": 202926056, "dataset_size": 211726468.0}}
|
2023-02-03T20:49:39+00:00
|
017258d41991489752363bfa4cbcc12725786967
|
# GPT Wiki Intro
## Overview
Dataset for training models to classify human written vs GPT/ChatGPT generated text.
This dataset contains Wikipedia introductions and GPT (Curie) generated introductions for 150k topics.
Prompt used for generating text
```
200 word wikipedia style introduction on '{title}'
{starter_text}
```
where `title` is the title of the Wikipedia page, and `starter_text` is the first seven words of the Wikipedia introduction.
Here's an example of the prompt used to generate the introduction paragraph for 'Secretory protein':
>'200 word wikipedia style introduction on Secretory protein
>
> A secretory protein is any protein, whether'
Configuration used for GPT model
```
model="text-curie-001",
prompt=prompt,
temperature=0.7,
max_tokens=300,
top_p=1,
frequency_penalty=0.4,
presence_penalty=0.1
```
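For illustration, a minimal sketch (not the original generation script) of how the prompt template and the configuration above could be combined using the legacy, pre-1.0 `openai` Python client; the function name and API-key handling are assumptions:
```python
# Hedged sketch: generate one introduction with the prompt template and
# Curie configuration documented above (legacy openai<1.0 client assumed).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_intro(title: str, starter_text: str) -> str:
    prompt = f"200 word wikipedia style introduction on '{title}'\n{starter_text}"
    response = openai.Completion.create(
        model="text-curie-001",
        prompt=prompt,
        temperature=0.7,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0.4,
        presence_penalty=0.1,
    )
    # The generated continuation (generated_text) follows the prompt.
    return response["choices"][0]["text"]
```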
Schema for the dataset
|Column |Datatype|Description |
|---------------------|--------|-------------------------------------------|
|id |int64 |ID |
|url |string |Wikipedia URL |
|title |string |Title |
|wiki_intro |string |Introduction paragraph from wikipedia |
|generated_intro |string |Introduction generated by GPT (Curie) model|
|title_len |int64 |Number of words in title |
|wiki_intro_len |int64 |Number of words in wiki_intro |
|generated_intro_len |int64 |Number of words in generated_intro |
|prompt |string |Prompt used to generate intro |
|generated_text |string |Text continued after the prompt |
|prompt_tokens |int64 |Number of tokens in the prompt |
|generated_text_tokens|int64 |Number of tokens in generated text |
## Credits
* [wikipedia dataset](https://huggingface.co/datasets/wikipedia#licensing-information)
## Code
Code to create this dataset can be found on [GitHub](https://github.com/aadityaubhat/wiki_gpt)
## Citation
```
@misc {aaditya_bhat_2023,
author = { {Aaditya Bhat} },
title = { GPT-wiki-intro (Revision 0e458f5) },
year = 2023,
url = { https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro },
doi = { 10.57967/hf/0326 },
publisher = { Hugging Face }
}
```
|
aadityaubhat/GPT-wiki-intro
|
[
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc",
"doi:10.57967/hf/0326",
"region:us"
] |
2023-02-03T18:30:39+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "zero-shot-classification", "text-generation"], "pretty_name": "GPT Wiki Intro"}
|
2023-10-03T21:48:42+00:00
|
a6ad9d64349b79384626c463cda659d50c78f509
|
# Dataset Card for "test_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rdev12/test_corpus
|
[
"region:us"
] |
2023-02-03T18:37:28+00:00
|
{"dataset_info": {"features": [{"name": "doc_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7580154, "num_examples": 9680}], "download_size": 4516561, "dataset_size": 7580154}}
|
2023-02-03T18:37:50+00:00
|
c70cd45cd3388528a5354048b58bdfe0695159dc
|
nateraw/fuego-20230203-184642-f3c09b
|
[
"fuego",
"region:us"
] |
2023-02-03T18:46:43+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230203-184642-f3c09b", "status": "done", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230203-184642-f3c09b", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-03T18:54:13+00:00
|
|
28bf3cb3cb3e06e0de1ac85d039cdf63cfc607ab
|
# Dataset Card for "flores200-bmfr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ell-hol/flores200-bmfr
|
[
"region:us"
] |
2023-02-03T18:48:14+00:00
|
{"dataset_info": {"features": [{"name": "bambara", "dtype": "string"}, {"name": "french", "dtype": "string"}], "splits": [{"name": "dev", "num_bytes": 293658, "num_examples": 997}, {"name": "devtest", "num_bytes": 293658, "num_examples": 997}], "download_size": 377990, "dataset_size": 587316}}
|
2023-02-03T18:48:33+00:00
|
10768ad7a49ee0dac5346a4c9c6ed199e08a015e
|
# Dataset Card for FSTDT Quotes
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
FSTDT Quotes is a snapshot of the [Fundies Say the Darndest Things](https://fstdt.com/) website taken on 2023/02/03 14:16. It is intended for hate and fringe speech detection and classification.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
FSTDT Quotes is in English.
## Dataset Structure
### Data Instances
An example instance looks like this:
```
{
"id": "G",
"submitter": "anonymous",
"timestamp": "2005-05-21 00:00:00+00:00",
"name": "Jack777 ",
"src_url": "http://www.theologyweb.com/forum/showpost.php?p=1034624&postcount=10",
"tags": ["#fundie"],
"quote": "As long as evolutionists deny their theory is a theory and point out ID or whatever is bunk, people like me will pester them til they drop."
}
```
### Data Fields
- `id`: A `string` feature, the ID of the post on FSTDT.
- `submitter`: A `string` feature, the submitter of the post.
- `timestamp`: A `string` feature, the time of submission.
- `name`: A `string` feature, the (user)name of the person who is being quoted.
- `src_url`: A `string` feature, the source URL of the quote.
- `tags`: A sequence of `string` features, the tags the post has been tagged with.
- `quote`: A `string` feature, the quote itself.
### Data Splits
- `train`: 56,448 instances
- `validation`: 7,111 instances
- `test`: 7,131 instances
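A minimal usage sketch (assuming the standard `datasets` loading path for this repository; the tag filter is only an example):
```python
# Hedged example: load the splits listed above and filter posts by tag.
from datasets import load_dataset

dataset = load_dataset("MtCelesteMa/fstdt-quotes")
print(dataset)  # train / validation / test splits

fundie_posts = dataset["train"].filter(lambda ex: "#fundie" in ex["tags"])
print(fundie_posts[0]["quote"])
```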
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The quotes are collected from all over the internet.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data is annotated by users on FSTDT.
### Personal and Sensitive Information
The dataset contains the usernames of submitters as well as those quoted. However, this information is publicly available.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
This dataset contains large amounts of hate speech as well as pseudoscience and quackery.
### Other Known Limitations
Some quotes in the dataset are quoted from news articles depicting acts of hate, which could potentially cause misclassifications on models trained on this dataset.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
MtCelesteMa/fstdt-quotes
|
[
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-02-03T19:15:24+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "FSTDT Quotes"}
|
2023-02-03T19:40:44+00:00
|
4f67c55775c1b47544056121392788b9a602ef56
|
Natural language inference using Attempto Controlled English.
Paper to come
```
@inproceedings{fuchs2012first,
title={First-order reasoning for attempto controlled english},
author={Fuchs, Norbert E},
booktitle={Controlled Natural Language: Second International Workshop, CNL 2010, Marettimo Island, Italy, September 13-15, 2010. Revised Papers 2},
pages={73--94},
year={2012},
organization={Springer}
}
```
|
sileod/attempto-nli
|
[
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-02-03T20:18:39+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"]}
|
2023-05-31T07:29:58+00:00
|
ac961cd86cb815f97b758224b7e1560810e75989
|
deeptigp/car_generation_diffusion_mini
|
[
"license:unknown",
"region:us"
] |
2023-02-03T21:01:59+00:00
|
{"license": "unknown"}
|
2023-02-03T21:02:41+00:00
|
|
121b03f7820133d4ec4b42b776158968b787e3fd
|
metaeval/scruples
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-03T21:16:05+00:00
|
{"license": "apache-2.0"}
|
2023-02-03T21:16:57+00:00
|
|
5ec3ed294629e1868529e6ba9701a661d8a7b9be
|
# MIRACL (yo) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-yo-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-yo-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-yo-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-yo-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare these query embeddings with the document embeddings, either using a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-yo-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-yo-corpus-22-12
|
[
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:yo",
"license:apache-2.0",
"region:us"
] |
2023-02-03T21:33:48+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["yo"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
|
2023-02-06T11:54:24+00:00
|
4a6a4ed4b582d8d8cf085ea52ab050518e3a1bfa
|
# MIRACL (yo) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-yo-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-yo-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-yo-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-yo-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare these query embeddings with the document embeddings, either using a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-yo-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-yo-queries-22-12
|
[
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:yo",
"license:apache-2.0",
"region:us"
] |
2023-02-03T21:34:17+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["yo"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
|
2023-02-06T11:54:06+00:00
|
882769b8641dda14c1c9e75daa142a846e3a59eb
|
This is just a test.
|
parth3739/try
|
[
"region:us"
] |
2023-02-03T21:56:41+00:00
|
{}
|
2023-02-04T05:32:46+00:00
|
0e7abb6df6e5961f93c7e6e5a00f5524d50a41dd
|
renenijman/doodles
|
[
"license:openrail",
"region:us"
] |
2023-02-03T22:25:20+00:00
|
{"license": "openrail"}
|
2023-02-03T22:25:20+00:00
|
|
19933dc12c7f7466f66814f12146188feba70a8c
|
# Dataset Card for "bookcorpus_SV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MartinKu/bookcorpus_SV
|
[
"region:us"
] |
2023-02-03T22:36:06+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 442719314, "num_examples": 22412930}], "download_size": 284591599, "dataset_size": 442719314}}
|
2023-02-06T15:07:42+00:00
|
8893c84216dc99804d7135c0242b32e2c8986a8f
|
# Dataset Card for "bookcorpus_OC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MartinKu/bookcorpus_OC
|
[
"region:us"
] |
2023-02-03T22:37:03+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 597206420, "num_examples": 20347911}], "download_size": 403128921, "dataset_size": 597206420}}
|
2023-02-06T15:12:23+00:00
|
809aed0b34eebb1fa29095e013c9dc1719af6a9c
|
# Dataset Card for "wikipedia_SV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MartinKu/wikipedia_SV
|
[
"region:us"
] |
2023-02-03T22:40:14+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4196566331, "num_examples": 134491003}], "download_size": 2739206517, "dataset_size": 4196566331}}
|
2023-02-13T08:21:35+00:00
|
9b4b92bae657e8e0a5e4385e9cc03efddaa9f9e5
|
# Dataset Card for "wikipedia_OC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MartinKu/wikipedia_OC
|
[
"region:us"
] |
2023-02-03T22:40:34+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11476303825, "num_examples": 161961924}], "download_size": 7119631815, "dataset_size": 11476303825}}
|
2023-02-13T08:32:58+00:00
|
1fd221906210448496b440df2fc4fccb0eaf1a67
|
MartinKu/martin_test_dataset
|
[
"region:us"
] |
2023-02-03T22:43:19+00:00
|
{}
|
2023-02-05T16:16:08+00:00
|
|
426e32e422b919421433437dc61bc7a0591b2ef7
|
Source: human reference genome
Filtering: CDS + 256 bp flanks
Data augmentation: windows of 512 bp with a 256 bp step size, as well as reverse complements
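A minimal sketch (not the original pipeline) of the augmentation described above, i.e. sliding 512 bp windows with a 256 bp step plus reverse complements; the function names are illustrative:
```python
# Hedged sketch of the windowing and reverse-complement augmentation.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def augmented_windows(seq: str, size: int = 512, step: int = 256):
    # `seq` would be a CDS plus its 256 bp flanks.
    for start in range(0, max(len(seq) - size + 1, 1), step):
        window = seq[start:start + size]
        yield window
        yield reverse_complement(window)
```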
|
gonzalobenegas/human-genome-cds
|
[
"region:us"
] |
2023-02-03T22:46:32+00:00
|
{}
|
2023-02-03T22:57:25+00:00
|
4cc27a1792ae9cd8f6200b39b164f7240675c776
|
nateraw/fuego-20230204-002846-f7fef2
|
[
"fuego",
"region:us"
] |
2023-02-03T23:28:47+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230204-002846-f7fef2", "status": "preparing", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230204-002846-f7fef2", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-03T23:28:49+00:00
|
|
5b0a490df76d23e5cb2c94d9aa125853c6830c30
|
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_2.7b_Attributes_Caption_ns_3333_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_2.7b_Attributes_Caption_ns_3333_random
|
[
"region:us"
] |
2023-02-03T23:45:50+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_3_bs_16", "num_bytes": 301867105.375, "num_examples": 3333}, {"name": "fewshot_1_bs_16", "num_bytes": 300148197.375, "num_examples": 3333}], "download_size": 595041338, "dataset_size": 602015302.75}}
|
2023-02-04T16:06:41+00:00
|
95394c9dce9371bb21baef8f6bf7f25592ab8ae2
|
# MIRACL (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-de-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-de-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-de-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-de-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-de-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-de-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-de-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-de-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare these query embeddings with the document embeddings, either using a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-de-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-de-corpus-22-12
|
[
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:de",
"license:apache-2.0",
"region:us"
] |
2023-02-04T01:04:54+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["de"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
|
2023-02-06T11:52:49+00:00
|
aa4b787d23fc20211263972da18c8bd3aafdd022
|
# Dataset Card for "Caltech101_not_background_test_facebook_opt_350m_Visclues_ns_5647_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_not_background_test_facebook_opt_350m_Visclues_ns_5647_random
|
[
"region:us"
] |
2023-02-04T01:14:54+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 86816473.125, "num_examples": 5647}, {"name": "fewshot_3_bs_16", "num_bytes": 90734475.125, "num_examples": 5647}], "download_size": 169654260, "dataset_size": 177550948.25}}
|
2023-02-04T01:39:49+00:00
|
b7e7be59c90091ed66b243a251565fa501a7245d
|
# Dataset Card for "Caltech101_not_background_test_facebook_opt_1.3b_Visclues_ns_5647_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Caltech101_not_background_test_facebook_opt_1.3b_Visclues_ns_5647_random
|
[
"region:us"
] |
2023-02-04T01:27:48+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 86815989.125, "num_examples": 5647}, {"name": "fewshot_3_bs_16", "num_bytes": 90734149.125, "num_examples": 5647}], "download_size": 169653770, "dataset_size": 177550138.25}}
|
2023-02-04T02:05:09+00:00
|
7a1bfe4ed798345cd027f950e97c13e382c3f01c
|
GeodLick/NEWAI
|
[
"license:mit",
"region:us"
] |
2023-02-04T01:31:26+00:00
|
{"license": "mit"}
|
2023-02-04T01:31:26+00:00
|
|
bd6dd27790b40be83fd747666417a6b333833216
|
# Dataset Card for "nowiki_abstract_second_scrape_split1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jkorsvik/nowiki_abstract_second_scrape_split1
|
[
"region:us"
] |
2023-02-04T02:02:55+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "date_scraped", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "ingress", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 927542194, "num_examples": 201819}], "download_size": 478486503, "dataset_size": 927542194}}
|
2023-02-04T02:03:20+00:00
|
844e5643737a392f32aeabf8072c6763a865a349
|
# MIRACL (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-de-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-de-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-de-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-de-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-de-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-de-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-de-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-de-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare these query embeddings with the document embeddings, either using a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-de-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-de-queries-22-12
|
[
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:de",
"license:apache-2.0",
"region:us"
] |
2023-02-04T02:19:03+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["de"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
|
2023-02-06T11:53:32+00:00
|
7fe195cefa2d118ff128c33bcb3202f4be39dbf4
|
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_350m_Visclues_ns_3333_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_350m_Visclues_ns_3333_random
|
[
"region:us"
] |
2023-02-04T02:44:33+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 300686732.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 302943471.375, "num_examples": 3333}], "download_size": 595742250, "dataset_size": 603630203.75}}
|
2023-02-04T03:16:14+00:00
|
a7125575d88eec89e0eb2ec35f16e48c1a72d27a
|
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_1.3b_Visclues_ns_3333_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_1.3b_Visclues_ns_3333_random
|
[
"region:us"
] |
2023-02-04T02:53:45+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 300686978.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 302944359.375, "num_examples": 3333}], "download_size": 595746699, "dataset_size": 603631337.75}}
|
2023-02-04T03:32:30+00:00
|
e77ec3937dbbf5d344c15c3cc2c5f7c05ef467f4
|
# COVID19 CASES IN HONG KONG UNIVERSITIES
## main part: HK Universities COVID19 Cases
### 🏠 Dashboards
[The Hong Kong Polytechnic University][l1]
[The University of Hong Kong][l2]
[The Chinese University of Hong Kong][l3]
[The Hong Kong Baptist University][l4]
[The City University of Hong Kong][l5]
### 🏠 Info Square
[Public Information Sharing and Personal Ask&Help Square][l6]
[l1]: https://datastudio.google.com/reporting/6f62f56f-fd34-4e7b-9ce3-8991fd35ae5e/page/IVVmC
[l2]: https://datastudio.google.com/reporting/19380e90-a92c-4e22-bbdf-2b4a56b2630a/page/5jWmC
[l3]: https://datastudio.google.com/reporting/fb0280da-c5c8-4b46-bd29-c80978179536/page/6YXmC
[l4]: https://datastudio.google.com/reporting/7ad2ae5c-b543-4d8e-94df-f5dd0419b147
[l5]: https://datastudio.google.com/reporting/b30e540b-3ef6-430c-9abf-94c89c621ade/page/ThbnC
[l6]: https://docs.google.com/document/d/15zdVq6KPEByHO-xtv6hJh-HQz80t6mUyZ5LMeHjtWOU/edit#heading=h.tv3qxy36yxj8
## usage part: Background Description and Lessons Learned
### 🔔 Storyline Start
Back in February 2022, Hong Kong was severely impacted by the 5th wave of Covid-19. Each and every day, newspapers reported a tsunami of exponentially increasing confirmed cases. Every citizen felt enormously scared and deeply concerned about an uncertain future. Most people had to work from home or stay at home all day, shops were closed on a large scale, and almost no one could be seen on the streets. Even worse, people were surrounded by unclear and sometimes self-contradictory messages.
My friend Pili reminded me that, as a software developer, I could do something to make society better. Visualizing the data might be a good option, letting people see it and calm down. Moreover, the good news was that Hong Kong universities had started to collect and publish daily cases. As a consequence, the project was launched immediately, benefiting from being cloud native, lightweight, low code and, more importantly, zero cost.
### 🔔 Storyline Continue
Now, in February 2023, the 5th wave of Covid-19 has already faded away. The city and her citizens have become resuscitated and invigorated once more. As the government eased Covid-19 regulations and cancelled isolation orders from 30 January 2023, universities no longer collect and update the data either. The project has to stop here, although so far thousands of visitors have benefited from it and expressed their gratitude.
Through this plight and struggle, everyone has learned something. As Alexandre Dumas pointed out, "all human wisdom is contained in these two words: WAIT and HOPE." No matter what has happened and what will happen, please always take away these two words from the project: WAIT & HOPE.
## technical part: System Architecture and Main Technical Analysis
This is a light-weight, low-code, cloud-native project whose components are built on Google Cloud Platform and Google Workspace.
### ⚙️ Spec
- Database: Google Sheets, Google Docs
- Backend: Google Apps Script, GCP Cloud Functions, GCP API Gateway, GCP IAM
- Frontend: Google Data Studio
- Language: Javascript
- Platform: Node.js, NPM
## supportive part: Public Information Sources
### 🔍 Hong Kong Government Sources
https://www.covidvaccine.gov.hk/pdf/5th_wave_statistics.pdf
https://www.chp.gov.hk/files/pdf/local_situation_covid19_tc.pdf
https://www.covidvaccine.gov.hk/pdf/death_analysis.pdf
https://www.covidvaccine.gov.hk/en/dashboard
### 🔍 The Hong Kong Polytechnic University Sources
https://www.polyu.edu.hk/cpa/notices/index_student.php
### 🔍 The University of Hong Kong Sources
https://covid19.hku.hk/control/latest-campus-related-test-positive-cases/
https://covid19.hku.hk/control/cases-table/
https://covid19.hku.hk/control/latest-close-contact-with-confirmed-cases/
### 🔍 The Chinese University of Hong Kong Sources
https://www.cuhk.edu.hk/english/whats-on/faces/confirmed-covid-19-cases.html
### 🔍 The Hong Kong Baptist University Sources
https://ehsu.hkbu.edu.hk/2019-nCOV/
### 🔍 The City University of Hong Kong Sources
https://auth.cityu.edu.hk
## contributive part: Acknowledgement and Awards
### 🎉 Thanks
#The Hong Kong Government
#The Hong Kong Polytechnic University
#The University of Hong Kong
#The Chinese University of Hong Kong
#The Hong Kong Baptist University
#The City University of Hong Kong
### 🎉 Awards
Contributors welcome!
- if you possess more datasets
- if you want to improve it
### 🎉 Sponsors
Please contact me! @Henry [email protected]
- if you want to access to raw dataset
- if you want to access to code
|
HenryHL/covid19_cases_in_HK_universities
|
[
"region:us"
] |
2023-02-04T02:54:20+00:00
|
{}
|
2023-02-04T02:56:24+00:00
|
66a9a61255cb27dcdbf2c23f94c102d61c6f6453
|
nateraw/fuego-20230204-040326-2cc7f2
|
[
"fuego",
"region:us"
] |
2023-02-04T03:03:26+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230204-040326-2cc7f2", "status": "done", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230204-040326-2cc7f2", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-04T03:10:15+00:00
|
|
6792c08b53223be7fb4c78cd4ec941e44579e9fc
|
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_2.7b_Visclues_ns_3333_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_2.7b_Visclues_ns_3333_random
|
[
"region:us"
] |
2023-02-04T03:07:27+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 300686660.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 302943871.375, "num_examples": 3333}], "download_size": 595742511, "dataset_size": 603630531.75}}
|
2023-02-04T03:57:56+00:00
|
0e4ad216c9b1d6a6748ad1d9351ea86c5964f62a
|
dandrade/canciones_juan_luis_guerra
|
[
"task_categories:text-generation",
"language:es",
"license:other",
"region:us"
] |
2023-02-04T03:54:09+00:00
|
{"language": ["es"], "license": "other", "task_categories": ["text-generation"]}
|
2023-02-04T14:55:03+00:00
|
|
1a1f019ee94123f2ced2fcf20485ddc32db9a435
|
# Dataset Card for "OxfordFlowers_test_facebook_opt_350m_Attributes_Caption_ns_6149_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_350m_Attributes_Caption_ns_6149_random
|
[
"region:us"
] |
2023-02-04T04:04:32+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 269127090.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 272741629.375, "num_examples": 6149}], "download_size": 532650421, "dataset_size": 541868719.75}}
|
2023-02-04T05:39:02+00:00
|
d2b50ea830517b0190b00eedfc6200f3bda8a144
|
weijiawu/DSText
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-02-04T04:16:01+00:00
|
{"license": "cc-by-4.0"}
|
2023-02-04T04:16:48+00:00
|
|
22c16cbe19670b7b7c0467af050923cb610159a5
|
# Dataset Card for "OxfordFlowers_test_facebook_opt_1.3b_Attributes_Caption_ns_6149_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_1.3b_Attributes_Caption_ns_6149_random
|
[
"region:us"
] |
2023-02-04T04:18:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 269127158.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 272737942.375, "num_examples": 6149}], "download_size": 532668439, "dataset_size": 541865100.75}}
|
2023-02-04T06:02:41+00:00
|
8e3272f867812aa23cfd3214a36f44aafd45d4fc
|
# range3/wikipedia-ja-20230101
This dataset consists of a parquet file from the wikipedia dataset with only Japanese data extracted. It is generated by the following python code.
このデータセットは、wikipediaデータセットの日本語データのみを抽出したparquetファイルで構成されます。以下のpythonコードによって生成しています。
```py
import datasets
dss = datasets.load_dataset(
"wikipedia",
language="ja",
date="20230101",
beam_runner="DirectRunner",
)
for split,ds in dss.items():
ds.to_parquet(f"wikipedia-ja-20230101/{split}.parquet")
```
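The resulting dataset can then be loaded back, for example (assuming the parquet files are hosted in this repository under the same split names):
```py
import datasets

ds = datasets.load_dataset("range3/wikipedia-ja-20230101", split="train")
print(ds[0]["title"])
```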
|
range3/wikipedia-ja-20230101
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"language:ja",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] |
2023-02-04T04:29:29+00:00
|
{"language": ["ja"], "license": ["cc-by-sa-3.0", "gfdl"], "task_categories": ["text-generation", "fill-mask"]}
|
2023-02-04T05:44:41+00:00
|
c0f03f2e8da38dddbb778871dd5bb5cd22ad1dcf
|
# Dataset Card for "OxfordFlowers_test_facebook_opt_2.7b_Attributes_Caption_ns_6149_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_2.7b_Attributes_Caption_ns_6149_random
|
[
"region:us"
] |
2023-02-04T04:38:01+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 269126653.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 272734898.375, "num_examples": 6149}], "download_size": 532672085, "dataset_size": 541861551.75}}
|
2023-02-04T06:40:39+00:00
|
eff8c4c762ffc656d3b4180b45406727334fcce1
|
# Dataset Card for "OxfordFlowers_test_facebook_opt_350m_Visclues_ns_6149_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_350m_Visclues_ns_6149_random
|
[
"region:us"
] |
2023-02-04T04:46:17+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 270234974.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 274952857.375, "num_examples": 6149}], "download_size": 534121686, "dataset_size": 545187831.75}}
|
2023-02-04T06:55:41+00:00
|
0ef871c1afe0fecd3c90f185cf716430e9ae7d88
|
p1atdev/noz
|
[
"license:cc0-1.0",
"region:us"
] |
2023-02-04T04:48:14+00:00
|
{"license": "cc0-1.0"}
|
2023-02-04T04:48:42+00:00
|
|
9ceb37b7773d4e25acc49962aec3ae7dd2e8141b
|
# range3/wiki40b-ja
This dataset consists of three parquet files from the wiki40b dataset with only Japanese data extracted. It is generated by the following python code.
このデータセットは、wiki40bデータセットの日本語データのみを抽出した3つのparquetファイルで構成されます。以下のpythonコードによって生成しています。
```py
import datasets
dss = datasets.load_dataset(
"wiki40b",
"ja",
beam_runner="DirectRunner",
)
for split,ds in dss.items():
    ds.to_parquet(f"wiki40b-ja/{split}.parquet")
```
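As a rough sketch (not part of the original card), the `text` field of wiki40b embeds structural markers such as `_START_ARTICLE_`, `_START_SECTION_`, `_START_PARAGRAPH_`, and `_NEWLINE_` (these marker names come from the upstream wiki40b dataset, not from this card); the snippet below strips them into plain text after reloading the parquet output, assuming the `wiki40b-ja/` output directory used above.
```py
import datasets

ds = datasets.load_dataset(
    "parquet",
    data_files={"train": "wiki40b-ja/train.parquet"},
)

def to_plain_text(example):
    text = example["text"]
    for marker in ("_START_ARTICLE_", "_START_SECTION_", "_START_PARAGRAPH_"):
        text = text.replace(marker, "\n")
    example["plain_text"] = text.replace("_NEWLINE_", "\n").strip()
    return example

ds = ds.map(to_plain_text)
```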
|
range3/wiki40b-ja
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"language:ja",
"region:us"
] |
2023-02-04T04:54:17+00:00
|
{"language": ["ja"], "task_categories": ["text-generation", "fill-mask"]}
|
2023-02-04T05:44:21+00:00
|
f3c9cf70baafecd7c693609622fdd680be16a61f
|
# Dataset Card for "OxfordFlowers_test_facebook_opt_1.3b_Visclues_ns_6149_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_1.3b_Visclues_ns_6149_random
|
[
"region:us"
] |
2023-02-04T05:02:19+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 270233527.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 274949398.375, "num_examples": 6149}], "download_size": 534137349, "dataset_size": 545182925.75}}
|
2023-02-04T07:27:15+00:00
|
5f83e950d0581df275911f8bed66939e361e697f
|
# range3/cc100-ja
This dataset consists of parquet files from the cc100 dataset with only the Japanese language extracted and sharded.
このデータセットは、cc100データセットの日本語のみを抽出し、シャーディングしたparquetファイルで構成されます。
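A minimal loading sketch (not part of the original card), assuming the parquet shards are resolved automatically by the Hub loader:
```py
import datasets

# Loads all parquet shards in the repository.
ds = datasets.load_dataset("range3/cc100-ja")
print(ds)
```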
|
range3/cc100-ja
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"language:ja",
"license:unknown",
"region:us"
] |
2023-02-04T05:10:34+00:00
|
{"language": ["ja"], "license": "unknown", "task_categories": ["text-generation", "fill-mask"]}
|
2023-02-04T05:43:32+00:00
|
44c81c75462c0440236ecbfb47c992d7857e5ace
|
# Dataset Card for "OxfordFlowers_test_facebook_opt_2.7b_Visclues_ns_6149_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_2.7b_Visclues_ns_6149_random
|
[
"region:us"
] |
2023-02-04T05:27:26+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 270234457.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 274950362.375, "num_examples": 6149}], "download_size": 534140897, "dataset_size": 545184819.75}}
|
2023-02-04T08:17:34+00:00
|
cbf55f4429c983246b194fec3ff9195d229c3dd9
|
# Dataset Card for "nowiki_abstract_second_scrape_split2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jkorsvik/nowiki_abstract_second_scrape_split2
|
[
"region:us"
] |
2023-02-04T06:39:17+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "date_scraped", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "ingress", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 758224560, "num_examples": 201819}], "download_size": 358042111, "dataset_size": 758224560}}
|
2023-02-04T06:39:46+00:00
|
6b76992be330318546a1797e6c01dba975f4d613
|
# Dataset Card for "boostcamp-docvqa-v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ssunbell/boostcamp-docvqa-v4
|
[
"region:us"
] |
2023-02-04T08:02:32+00:00
|
{"dataset_info": {"features": [{"name": "questionId", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": {"sequence": "uint8"}}}}, {"name": "docId", "dtype": "int64"}, {"name": "ucsf_document_id", "dtype": "string"}, {"name": "ucsf_document_page_no", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "data_split", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 6381951489, "num_examples": 39454}, {"name": "val", "num_bytes": 869383194, "num_examples": 5349}], "download_size": 2582271500, "dataset_size": 7251334683}}
|
2023-02-04T10:39:51+00:00
|
5d075b9f87a3ed9a9fbb1dc9e15443167513ba19
|
This dataset was created by:
* Starting with the [Dialog Inpainting](https://github.com/google-research/dialog-inpainting) dataset
* Labeling the turns of each dialog with `User: ` and `Assistant: `
* Filtering using spaCy, using code similar to the following (written by https://huggingface.co/ontocord):
```
import pandas as pd
import spacy

# Load the scispaCy CRAFT NER model only once (requires the en_ner_craft_md package).
try:
    sci
except NameError:
    sci = spacy.load("en_ner_craft_md")

data = pd.read_parquet("data.parquet", engine="pyarrow")

for a in data["labeleddialog"]:
    a = a.replace("this article", "this subject").replace("()", "").replace("  ", " ")
    # Drop dialogs about clearly non-biomedical topics based on keyword matches.
    if 'novel' in a or ' story' in a or 'movie' in a or 'film' in a or 'music' in a:
        #print ('###arts\n', a)
        continue
    if ' game' in a or 'sports' in a or 'football' in a or 'soccer' in a or 'baseball' in a or 'basketball' in a:
        #print ('###sports\n', a)
        continue
    if 'population' in a or 'territory' in a or 'village' in a or 'country' in a or 'county' in a:
        #print ('###place\n', a)
        continue
    if 'ingredient' in a or 'food' in a or 'recipe' in a:
        #print ('###recipe\n', a)
        continue
    if ' rights' in a or ' court ' in a or ' criminal ' in a or ' verdict ' in a or ' guilt ' in a or ' legislat' in a:
        #print ('###law\n', a)
        continue
    # Keep dialogs in which the NER model finds several sequence/chemical entities.
    doc = sci(a)
    j = 0
    for ent in doc.ents:
        if ent.label_ == 'SO' or (ent.label_ == 'CHEBI' and len(ent.text) > 5):
            j += 1
        if j > 3:
            print('###biomed\n', a)
            break
        #print (ent.label_, ent.text)
```
* Filtering using BERT, using the following code:
```
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
# `page_titles` holds the Wikipedia page title associated with each dialog.
results = classifier(page_titles, ["Biomedical", "Non-biomedical"])
# Dialogs whose "Biomedical" probability was below 0.7 were dropped.
for classification_result in results:
    prob = classification_result["scores"][classification_result["labels"].index("Biomedical")]
```
|
ericyu3/openassistant_inpainted_dialogs_5k_biomedical
|
[
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
] |
2023-02-04T08:41:50+00:00
|
{"license": "apache-2.0", "size_categories": ["1K<n<10K"]}
|
2023-02-06T00:26:21+00:00
|
57b4ddb12a6d215cbeadc61e8877f4dcf45da3c2
|
sleeplesslad/yuh
|
[
"license:openrail",
"region:us"
] |
2023-02-04T09:12:33+00:00
|
{"license": "openrail"}
|
2023-02-04T09:12:33+00:00
|
|
814392ed0aa924cdaa21b71456e1ef601bf1f305
|
GlowingBrick/venv-me
|
[
"license:mit",
"region:us"
] |
2023-02-04T09:22:14+00:00
|
{"license": "mit"}
|
2023-04-25T07:25:52+00:00
|
|
9145c144a0b1fb16763a8494cdff6e8fa87159ab
|
# Dataset Card for "nowiki_abstract_second_scrape_split3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jkorsvik/nowiki_abstract_second_scrape_split3
|
[
"region:us"
] |
2023-02-04T10:10:30+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "date_scraped", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "ingress", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 631468644, "num_examples": 201819}], "download_size": 270168559, "dataset_size": 631468644}}
|
2023-02-04T10:10:55+00:00
|
650fdb08b147a44c7cf3a3697be4a4dcfec8875e
|
# Mawqif: A Multi-label Arabic Dataset for Target-specific Stance Detection
- *Mawqif* is the first Arabic dataset that can be used for target-specific stance detection.
- This is a multi-label dataset where each data point is annotated for stance, sentiment, and sarcasm.
- We benchmark the *Mawqif* dataset on the stance detection task and evaluate the performance of four BERT-based models. Our best model achieves a macro-F1 of 78.89% (a minimal fine-tuning sketch is shown below).
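A minimal fine-tuning sketch for the stance task (an illustration only, not the authors' exact setup; the checkpoint name and label set are assumptions):
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "aubmindlab/bert-base-arabertv02"  # assumed Arabic BERT checkpoint
labels = ["Favor", "Against", "None"]           # assumed stance label set

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=len(labels))

# Target-specific stance: pair the target with the tweet text.
inputs = tokenizer("COVID-19 vaccine", "tweet text here", return_tensors="pt")
logits = model(**inputs).logits
```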
# Mawqif Statistics
- This dataset consists of **4,121** tweets in multi-dialectal Arabic. Each tweet is annotated with a stance toward one of three targets: “COVID-19 vaccine,” “digital transformation,” and “women empowerment.” In addition, it is annotated with sentiment and sarcasm polarities.
- The following figure illustrates the labels’ distribution across all targets, and the distribution per target.
<img width="738" alt="dataStat-2" src="https://user-images.githubusercontent.com/31368075/188299057-54d04e87-802d-4b0e-b7c6-56bdc1078284.png">
# Interactive Visualization
To browse an interactive visualization of the *Mawqif* dataset, please click [here](https://public.tableau.com/views/MawqifDatasetDashboard/Dashboard1?:language=en-US&publish=yes&:display_count=n&:origin=viz_share_link)
- *You can click on visualization components to filter the data by target and by class. **For example,** you can click on “women empowerment" and "against" to get the information of tweets that express against women empowerment.*
# Citation
If you find our paper and resources useful, please consider citing our work!
```
@inproceedings{alturayeif-etal-2022-mawqif,
title = "Mawqif: A Multi-label {A}rabic Dataset for Target-specific Stance Detection",
author = "Alturayeif, Nora Saleh and
Luqman, Hamzah Abdullah and
Ahmed, Moataz Aly Kamaleldin",
booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wanlp-1.16",
pages = "174--184"
}
```
|
NoraAlt/Mawqif_Stance-Detection
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:ar",
"Stance Detection",
"Sentiment Analysis",
"Sarcasm Detection",
"region:us"
] |
2023-02-04T10:21:10+00:00
|
{"language": ["ar"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "Mawqif: Stance Detection", "tags": ["Stance Detection", "Sentiment Analysis", "Sarcasm Detection"]}
|
2024-01-18T10:11:13+00:00
|
3454631364656f2cd2f2a2ca97a30f850030b927
|
# Dataset Card for "discord4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tomaxe/discord4
|
[
"region:us"
] |
2023-02-04T10:37:08+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51865891, "num_examples": 569785}, {"name": "validation", "num_bytes": 8713, "num_examples": 100}], "download_size": 28882054, "dataset_size": 51874604}}
|
2023-02-04T10:37:19+00:00
|
c99df89198a7f588a74c2d673630eb90e03ef050
|
# Dataset Card for "boostcamp-docvqa-v4-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ssunbell/boostcamp-docvqa-v4-test
|
[
"region:us"
] |
2023-02-04T10:39:55+00:00
|
{"dataset_info": {"features": [{"name": "questionId", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": {"sequence": "uint8"}}}}, {"name": "docId", "dtype": "int64"}, {"name": "ucsf_document_id", "dtype": "string"}, {"name": "ucsf_document_page_no", "dtype": "string"}, {"name": "data_split", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "test", "num_bytes": 843104716, "num_examples": 5188}], "download_size": 297218666, "dataset_size": 843104716}}
|
2023-02-04T10:41:21+00:00
|
b861d67183c66dc44717903ce289a03e29d05004
|
# Dataset Card for "nowiki_second_scrape_merged"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jkorsvik](https://github.com/jkorsvik) for adding this dataset.
|
jkorsvik/nowiki_second_scrape_merged
|
[
"task_categories:summarization",
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:jkorsvik",
"annotations_creators:machine-generated",
"language_creators:found",
"language_creators:jkorsvik",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:no",
"language:nob",
"Norwegian wikipedia scraped with links from nowiki dump:<https://dumps.wikimedia.org/nowiki/20230120/>",
"region:us"
] |
2023-02-04T10:42:26+00:00
|
{"annotations_creators": ["jkorsvik", "machine-generated"], "language_creators": ["found", "jkorsvik"], "language": [false, "nob"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization", "text2text-generation"], "task_ids": ["text-simplification"], "pretty_name": "nowiki_second_scrape_merged", "tags": ["Norwegian wikipedia scraped with links from nowiki dump:<https://dumps.wikimedia.org/nowiki/20230120/>"], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "date_scraped", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "ingress", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2321106744, "num_examples": 601040}], "download_size": 1109052656, "dataset_size": 2321106744}}
|
2023-02-04T10:47:58+00:00
|
7c3bb9c344fd0c9ea4780fb4ec49ee4dcbb5b903
|
# Dataset Card for "beijingopera-trainorigin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AkikoOu/beijingopera-trainorigin
|
[
"region:us"
] |
2023-02-04T11:21:40+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 343446.0, "num_examples": 9}], "download_size": 342528, "dataset_size": 343446.0}}
|
2023-02-04T11:21:45+00:00
|
46865e0bc989f0b010218ec2c590b4703d0e2a2f
|
From https://github.com/audioset/ontology
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xfh/ontology_image_audio_2k
|
[
"region:us"
] |
2023-02-04T11:45:49+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "audio", "dtype": "audio"}, {"name": "tag", "dtype": "string"}, {"name": "text_id", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 2403}]}}
|
2023-02-06T02:46:05+00:00
|
71aaa1520f39a3edd4e13a01074f45c4203b7192
|
```bibtex
@misc{https://doi.org/10.48550/arxiv.2211.05417,
doi = {10.48550/ARXIV.2211.05417},
url = {https://arxiv.org/abs/2211.05417},
author = {Schlegel, Viktor and Pavlov, Kamen V. and Pratt-Hartmann, Ian},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Can Transformers Reason in Fragments of Natural Language?},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
tasksource/natural-language-satisfiability
|
[
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"arxiv:2211.05417",
"region:us"
] |
2023-02-04T13:25:23+00:00
|
{"language": ["en"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"]}
|
2023-02-06T16:02:49+00:00
|
cfcaf62ade854cc95fd644872c0613be0ebd51cc
|
Zaperdolik/ferni
|
[
"license:afl-3.0",
"region:us"
] |
2023-02-04T14:11:21+00:00
|
{"license": "afl-3.0"}
|
2023-02-04T14:13:44+00:00
|
|
f17e55b8f60da2f32720c1a16e4b6149145c662f
|
https://github.com/microsoft/LoNLI
```bibtex
@article{Tarunesh2021TrustingRO,
title={Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task},
author={Ishan Tarunesh and Somak Aditya and Monojit Choudhury},
journal={ArXiv},
year={2021},
volume={abs/2107.07229}
}
```
|
tasksource/lonli
|
[
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:mit",
"region:us"
] |
2023-02-04T14:48:11+00:00
|
{"language": ["en"], "license": "mit", "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"]}
|
2023-05-31T07:41:36+00:00
|
30af7c6761f5c2b87261aa81bd05d4a51e4c271b
|
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_350m_Attributes_Caption_ns_3333_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_350m_Attributes_Caption_ns_3333_random
|
[
"region:us"
] |
2023-02-04T15:47:35+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 300147629.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 301866570.375, "num_examples": 3333}], "download_size": 595044400, "dataset_size": 602014199.75}}
|
2023-02-04T16:13:52+00:00
|
196055cf3335442ce66b4d46dcb2715a5e4d5d06
|
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_1.3b_Attributes_Caption_ns_3333_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_1.3b_Attributes_Caption_ns_3333_random
|
[
"region:us"
] |
2023-02-04T15:55:44+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 300147552.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 301866337.375, "num_examples": 3333}], "download_size": 595047424, "dataset_size": 602013889.75}}
|
2023-02-04T16:27:09+00:00
|
2e762d1f58f3e61511136bdbc4b27a5384d14c73
|
# Dataset Card for "bookcorpus_compact_1024_shard9_of_10_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
saibo/bookcorpus_compact_1024_shard9_of_10_meta
|
[
"region:us"
] |
2023-02-04T15:59:14+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}, {"name": "cid_arrangement", "sequence": "int32"}, {"name": "schema_lengths", "sequence": "int64"}, {"name": "topic_entity_mask", "sequence": "int64"}, {"name": "text_lengths", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 7675706871, "num_examples": 61605}], "download_size": 1683788529, "dataset_size": 7675706871}}
|
2023-02-04T16:01:42+00:00
|
4320043296a1f2cbf7260dbcdb00aa29e4b848dd
|
# Dataset Card for "devign_with_norm_vul_lines"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EddieChen372/devign_with_norm_vul_lines
|
[
"region:us"
] |
2023-02-04T16:28:55+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "func", "dtype": "string"}, {"name": "target", "dtype": "bool"}, {"name": "project", "dtype": "string"}, {"name": "commit_id", "dtype": "string"}, {"name": "func_clean", "dtype": "string"}, {"name": "vul_lines", "struct": [{"name": "code", "sequence": "string"}, {"name": "line_no", "sequence": "int64"}]}, {"name": "normalized_func", "dtype": "string"}, {"name": "lines", "sequence": "string"}, {"name": "label", "sequence": "int64"}, {"name": "line_no", "sequence": {"sequence": "int64"}}], "splits": [{"name": "test", "num_bytes": 22801956, "num_examples": 2732}, {"name": "train", "num_bytes": 183794878, "num_examples": 21854}, {"name": "validation", "num_bytes": 22451009, "num_examples": 2732}], "download_size": 72100845, "dataset_size": 229047843}}
|
2023-02-04T16:35:18+00:00
|
a2a41a2fb50456f359b2f08763675a716cc5b2d3
|
# Dataset Card for "sample_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
BhavyaMuni/sample_dataset
|
[
"region:us"
] |
2023-02-04T16:37:27+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 627787, "num_examples": 387}], "download_size": 259828, "dataset_size": 627787}}
|
2023-02-04T16:37:31+00:00
|
e9763a4df23c93d6783fe5dcc881a5729a2e377c
|
# Dataset Card for "mnist_sorted_v0.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jlbaker361/mnist_sorted_v0.0
|
[
"region:us"
] |
2023-02-04T17:09:24+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "sequence", "sequence": "int64"}, {"name": "occurence", "dtype": "int64"}, {"name": "split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 84223889, "num_examples": 68614}], "download_size": 12695868, "dataset_size": 84223889}}
|
2023-02-04T17:09:27+00:00
|
bd980e88405e0c3cbf3880f6a670dcd047384661
|
# Dataset Card for *BioLeaflets* Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub link](https://github.com/bayer-science-for-a-better-life/data2text-bioleaflets)
- **Paper:** [ACL Anthology](https://aclanthology.org/2021.inlg-1.40/)
- **Leaderboard:** [Papers with Code leaderboard for BioLeaflets Dataset](https://paperswithcode.com/dataset/bioleaflets)
- **Point of Contact:** [Ruslan Yermakov](https://github.com/wingedRuslan)
### Dataset Summary
*BioLeaflets* is a biomedical dataset for Data2Text generation. It is a corpus of 1,336 package leaflets of medicines authorised in Europe, which were obtained by scraping the European Medicines Agency (EMA) website.
Package leaflets are included in the packaging of medicinal products and contain information to help patients use the product safely and appropriately.
This dataset comprises the large majority (∼ 90%) of medicinal products authorised through the centralised procedure in Europe as of January 2021.
For more detailed information, please read the paper at [ACL Anthology](https://aclanthology.org/2021.inlg-1.40/).
### Supported Tasks and Leaderboards
BioLeaflets proposes a **conditional generation task** (data-to-text) in the biomedical domain: given an ordered set of entities as source, the *goal* is to produce a multi-sentence section.
Successful generation thus requires the model to learn specific syntax, terminology, and writing style from the corpus. Alternatively, the dataset might be used for **named-entity recognition task**: given text, detect medical entities.
The dataset provides an extensive description of medicinal products and thus supports a plain **language modeling task** focused on biomedical data.
### Languages
Monolingual - en.
## Dataset Structure
### Data Instances
For each instance (leaflet), there is a unique ID, URL, Product_Name, and textual information clearly describing the medicine.
The content of each section is not standardized (NO template), yet it is still well-structured.
Each document contains six sections:
1) What is the product and what is it used for
2) What you need to know before you take the product
3) Product usage instructions
4) Possible side effects
5) Product storage conditions
6) Other information
Every section is represented as a dictionary with 'Title', 'Section_Content', and 'Entity_Recognition' as keys (see the access sketch after the field list below).
The content of each section is lower-cased and tokenized by treating all special characters as separate tokens.
### Data Fields
- `ID`: a string representing a unique ID assigned to a leaflet
- `URL`: a string containing the link to the article on the European Medicines Agency (EMA) website
- `Product Name`: a string, the name of the medicine
- `Full Content`: a string covering the full content of the article available at URL
- `Section 1`: a dictionary including section 1 content and associated medical entities
- `Section 2`: a dictionary including section 2 content and associated medical entities
- `Section 3`: a dictionary including section 3 content and associated medical entities
- `Section 4`: a dictionary including section 4 content and associated medical entities
- `Section 5`: a dictionary including section 5 content and associated medical entities
- `Section 6`: a dictionary including section 6 content and associated medical entities
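A minimal access sketch (assuming the repository loads directly with `datasets` and that the column and key names match the field list above):
```py
import datasets

ds = datasets.load_dataset("ruslan/bioleaflets-biomedical-ner", split="train")
example = ds[0]

section = example["Section 1"]
print(section["Title"])
print(section["Section_Content"][:200])
print(section["Entity_Recognition"])
```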
### Data Splits
We randomly split the dataset into training (80%), development (10%), and test (10%) sets. Duplicates are removed.
## Dataset Creation
### Curation Rationale
Introduce a new biomedical dataset (BioLeaflets), which could serve as a benchmark for biomedical text generation models.
BioLeaflets proposes a conditional generation task: given an ordered set of entities as source, the goal is to produce a multi-sentence section.
### Source Data
#### Initial Data Collection and Normalization
The dataset was obtained by scraping the European Medicines Agency (EMA) website.
Each leaflet has an URL associated with it to the article on the EMA website.
#### Who are the source language producers?
Labeling experts with domain knowledge produced factual information.
### Annotations
#### Annotation process
To create the required input for data-to-text generation, we augment each document by leveraging named entity recognition (NER).
We combine two NER frameworks: Amazon Comprehend Medical (commercial) and Stanford Stanza (open-sourced).
Additionally, we treat all digits as entities and add the medicine name as the first entity.
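A rough sketch of one half of the described ensemble (Stanza only; Amazon Comprehend Medical is a commercial API and is omitted), including the digit and medicine-name handling described above; the pipeline configuration is an assumption, not the authors' exact setup:
```py
import re
import stanza

# Requires the English models to be downloaded first, e.g. stanza.download("en").
nlp = stanza.Pipeline(lang="en", processors="tokenize,ner")

def extract_entities(product_name, section_text):
    doc = nlp(section_text)
    entities = [ent.text for ent in doc.ents]
    digits = re.findall(r"\d+", section_text)
    # Medicine name first, then NER entities, then all digits.
    return [product_name] + entities + digits
```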
#### Who are the annotators?
Machine-generated: ensemble of state-of-the-art named entity recognition (NER) models.
### Personal and Sensitive Information
[Not included / Not present]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can automatically generate long paragraphs of text as well as to facilitate the development of NLP models in the biomedical domain.
The main challenges of this dataset for D2T generation are multi-sentence and multi-section target text, small sample size, specialized medical vocabulary, and syntax.
### Discussion of Biases
Package leaflets are published for medicinal products approved in the European Union (EU).
They are included in the packaging of medicinal products and contain information to help patients use the product safely and appropriately.
The dataset represents factual information produced by labeling experts and validated before publishing. Hence, biases of any kind are not present in the dataset.
Package leaflets are required to be written in a way that is clear and understandable.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The data was originally collected by Ruslan Yermakov<sup>*</sup>, Nicholas Drago, and Angelo Ziletti at Bayer AG (Decision Science & Advanced Analytics unit). The code is made publicly available at [github link](https://github.com/bayer-science-for-a-better-life/data2text-bioleaflets)
<sup>*</sup> Work done during internship.
### Licensing Information
The *BioLeaflets* dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
@inproceedings{yermakov-etal-2021-biomedical,
title = "Biomedical Data-to-Text Generation via Fine-Tuning Transformers",
author = "Yermakov, Ruslan and
Drago, Nicholas and
Ziletti, Angelo",
booktitle = "Proceedings of the 14th International Conference on Natural Language Generation",
month = aug,
year = "2021",
address = "Aberdeen, Scotland, UK",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.inlg-1.40",
pages = "364--370",
abstract = "Data-to-text (D2T) generation in the biomedical domain is a promising - yet mostly unexplored - field of research. Here, we apply neural models for D2T generation to a real-world dataset consisting of package leaflets of European medicines. We show that fine-tuned transformers are able to generate realistic, multi-sentence text from data in the biomedical domain, yet have important limitations. We also release a new dataset (BioLeaflets) for benchmarking D2T generation models in the biomedical domain.",
}
### Contributions
Thanks to [@wingedRuslan](https://github.com/wingedRuslan) for adding this dataset.
|
ruslan/bioleaflets-biomedical-ner
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_ids:language-modeling",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"bio",
"medical",
"biomedical",
"bioleaflets",
"medicine",
"data2text",
"data-to-text",
"region:us"
] |
2023-02-04T17:29:52+00:00
|
{"annotations_creators": ["machine-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation", "text2text-generation"], "task_ids": ["language-modeling"], "pretty_name": "bioleaflets-biomedical-ner", "tags": ["bio", "medical", "biomedical", "bioleaflets", "medicine", "data2text", "data-to-text"]}
|
2023-02-08T20:34:51+00:00
|