| sha | text | id | tags | created_at | metadata | last_modified |
|---|---|---|---|---|---|---|
668c4aa16d43b48582f0df34391ec841e1449ce8 | Bucharest-NLP/dgt-tm-hu-ro | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T11:48:38+00:00 | {"license": "apache-2.0"} | 2023-02-07T18:27:30+00:00 |
|
6f5312e066d604d71bfa3628034b34229b4bebb9 |
# Azerbaijani News Summary Dataset Card
## Dataset Summary
I present __az-news-summary__, a comprehensive and diverse dataset comprising __143k (143,448)__ Azerbaijani news articles extracted using a set of carefully designed heuristics. The dataset covers topics common to news reports, including war, government, politics, education, health, the environment, economy, business, fashion, entertainment, and sport, as well as quirky or unusual events.
The dataset is prepared for Abstractive/Extractive summarization tasks. It can also be used for other tasks such as text generation and title generation.
## Dataset Structure
One example from the dataset is given below in JSON format.
```json
{
  "id": 33885080,
  "title": "İsmayıllı silkələndi - Zəlzələ",
  "summary": "Avqustun 11-də İsmayıllı rayonu ərazisində zəlzələ baş verib",
  "text": "Azərbaycan milli elmlər akademiyası nəzdində respublika seysmoloji xidmət mərkəzindən bildirilib ki, ilkin məlumatlara əsasən yeraltı təkanlar yerli vaxtla saat 23:03:11-də pirquludan 11 kilometr qərbdə i̇smayıllı ərazisində qeydə alınıb.ocağı 9 kilometr dərinlikdə yerləşən zəlzələ episentrdə 4 bal, ətraf rayonlarda isə 3 bala qədər hiss olunub."
}
```
## Data Fields
- `id`: ID of the news.
- `title`: The title of the news.
- `summary`: The summary of the news.
- `text`: The body of the news.
## Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. \
Token counts are white space based.
| Dataset Split | Number of Instances | Size (MB) |
| ------------- | ------------------- | --------- |
| Train | 100,413 | 150 |
| Validation | 14,344 | 21.3 |
| Test | 28,691 | 42.8 |
## Usage
Usage is easy and takes only a few minutes. First, install the `datasets` library as follows:
```python
!pip install datasets
```
To load the dataset from the Hub, pass the dataset name to the load_dataset() function. In this case:
```python
from datasets import load_dataset
dataset = load_dataset("nijatzeynalov/azerbaijani-multi-news")
```
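Once loaded, the splits and fields can be accessed directly. A minimal sketch, assuming the split and field names listed in the sections above:
```python
from datasets import load_dataset

dataset = load_dataset("nijatzeynalov/azerbaijani-multi-news")

# The three splits from the Data Splits table
print(dataset)  # train / validation / test

# Fields of a single example, per the Data Fields section
sample = dataset["train"][0]
print(sample["title"])
print(sample["summary"])
```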
## Dataset Curator
This dataset was curated by [Nijat Zeynalov](https://www.linkedin.com/in/nijat-zeynalov-064163142/).
## Citation Information
```bibtex
@misc {nijatzeynalov_2023,
author = { {NijatZeynalov} },
title = { azerbaijani-multi-news (Revision 2afa300) },
year = 2023,
url = { https://huggingface.co/datasets/nijatzeynalov/azerbaijani-multi-news },
doi = { 10.57967/hf/0312 },
publisher = { Hugging Face }
}
``` | nijatzeynalov/azerbaijani-multi-news | [
"task_categories:summarization",
"language:az",
"license:creativeml-openrail-m",
"doi:10.57967/hf/0312",
"region:us"
]
| 2023-01-29T13:26:52+00:00 | {"language": ["az"], "license": "creativeml-openrail-m", "task_categories": ["summarization"], "pretty_name": "Azerbaijani News Summary Dataset Card", "extra_gated_prompt": "You agree to not use the dataset to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name and Surname": "text", "Email": "text", "Purpose": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}} | 2023-03-27T19:24:18+00:00 |
4989a297edb159907de3dbe83fdd7ee90cfea216 | # Dataset Card for "MIMMICIII-tokenized_notes_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JuanJoseMV/MIMMICIII-tokenized_notes_train | [
"region:us"
]
| 2023-01-29T13:36:59+00:00 | {"dataset_info": {"features": [{"name": "HADMID", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 3309959072, "num_examples": 1284922}], "download_size": 114544651, "dataset_size": 3309959072}} | 2023-01-29T13:38:46+00:00 |
0ed3c776636025306e33bede44b2dc253403a887 | # Dataset Card for "he_cnn_dailymail"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | imvladikon/he_cnn_dailymail | [
"task_categories:summarization",
"language:he",
"region:us"
]
| 2023-01-29T13:37:03+00:00 | {"language": ["he"], "task_categories": ["summarization"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "highlights", "dtype": "string"}, {"name": "article_en", "dtype": "string"}, {"name": "highlights_en", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2945431371, "num_examples": 287113}, {"name": "validation", "num_bytes": 134808274, "num_examples": 13368}, {"name": "test", "num_bytes": 116636491, "num_examples": 11490}], "download_size": 1781960238, "dataset_size": 3196876136}} | 2023-11-22T15:57:34+00:00 |
27be203f7518264bd1b2d967cd69b725dae57b14 | # Dataset Card for "MIMMICIII-tokenized_notes_valid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JuanJoseMV/MIMMICIII-tokenized_notes_valid | [
"region:us"
]
| 2023-01-29T13:53:52+00:00 | {"dataset_info": {"features": [{"name": "HADMID", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 3683664544, "num_examples": 1429994}], "download_size": 127610869, "dataset_size": 3683664544}} | 2023-01-29T13:56:15+00:00 |
98a008bb5b5d5ae160007961cdbabd9412d41dbe | # Dataset Card for "MIMMICIII-tokenized_notes_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JuanJoseMV/MIMMICIII-tokenized_notes_test | [
"region:us"
]
| 2023-01-29T14:06:02+00:00 | {"dataset_info": {"features": [{"name": "HADMID", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 3683664544, "num_examples": 1429994}], "download_size": 127614703, "dataset_size": 3683664544}} | 2023-01-29T14:08:12+00:00 |
cf02d7006bb472a97672766d1c3c6bcfafaf0bf3 | DanteMerlino/ENT-papers | [
"license:afl-3.0",
"region:us"
]
| 2023-01-29T14:59:41+00:00 | {"license": "afl-3.0"} | 2023-01-29T15:03:57+00:00 |
|
94e6e132c9ad1c02a24c5d457dca753c630d956a | # Dataset Card for "OxfordPets_test_facebook_opt_6.7b_Attributes_ns_3669"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/OxfordPets_test_facebook_opt_6.7b_Attributes_ns_3669 | [
"region:us"
]
| 2023-01-29T15:34:40+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 121909317.375, "num_examples": 3669}], "download_size": 119417055, "dataset_size": 121909317.375}} | 2023-01-29T15:34:45+00:00 |
ae4db7cec061b3d543e6fd38e9b3322c20cf5154 | # Dataset Card for "chilean_spanish_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jorgeortizfuentes/small-chilean-spanish-corpus | [
"region:us"
]
| 2023-01-29T15:35:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8869161972, "num_examples": 14875186}], "download_size": 5504277686, "dataset_size": 8869161972}} | 2023-01-29T15:43:08+00:00 |
e6b9a57aa2938259bdad5b1d43092dedc67214db | # Dataset Card for "DTD_parition1_test_facebook_opt_6.7b_Attributes_ns_1880"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/DTD_parition1_test_facebook_opt_6.7b_Attributes_ns_1880 | [
"region:us"
]
| 2023-01-29T15:43:29+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 91665552.0, "num_examples": 1880}, {"name": "fewshot_1_bs_16", "num_bytes": 92070944.0, "num_examples": 1880}, {"name": "fewshot_3_bs_16", "num_bytes": 92895854.0, "num_examples": 1880}, {"name": "fewshot_5_bs_16", "num_bytes": 93723701.0, "num_examples": 1880}, {"name": "fewshot_8_bs_16", "num_bytes": 94963856.0, "num_examples": 1880}], "download_size": 451991650, "dataset_size": 465319907.0}} | 2023-01-29T17:58:34+00:00 |
bc0d59a9cd8f907a868b9629fac4056f37b35cac | # Dataset Card for "DTD_parition1_test_facebook_opt_6.7b_Attributes_Caption_ns_1880"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/DTD_parition1_test_facebook_opt_6.7b_Attributes_Caption_ns_1880 | [
"region:us"
]
| 2023-01-29T15:49:55+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 91757415.0, "num_examples": 1880}, {"name": "fewshot_1_bs_16", "num_bytes": 92256078.0, "num_examples": 1880}, {"name": "fewshot_3_bs_16", "num_bytes": 93264812.0, "num_examples": 1880}, {"name": "fewshot_5_bs_16", "num_bytes": 94273986.0, "num_examples": 1880}, {"name": "fewshot_8_bs_16", "num_bytes": 95791901.0, "num_examples": 1880}], "download_size": 452526243, "dataset_size": 467344192.0}} | 2023-01-29T18:17:24+00:00 |
4751fb84bfebdb4e79e4fda756602104c23bbc2d |
Dataset for watermark classification (no_watermark/watermark)
~22k images, 512x512, manually annotated
additional info - https://github.com/qwertyforce/scenery_watermarks | qwertyforce/scenery_watermarks | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:cc-by-nc-4.0",
"watermark",
"doi:10.57967/hf/0313",
"region:us"
]
| 2023-01-29T15:52:12+00:00 | {"license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "pretty_name": "Scenery Watermarks", "tags": ["watermark"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no_watermark", "1": "watermark"}}}}], "splits": [{"name": "train", "num_bytes": 1094841327.222, "num_examples": 22762}], "download_size": 1057455120, "dataset_size": 1094841327.222}} | 2023-01-31T16:58:17+00:00 |
c4add321635d370291289f1a7eff19632f30f700 | # Story generation
## Dataset Description
- **Homepage:** https://laion.ai/
### Dataset Summary
This dataset contains summaries and stories from the [RUCAIBox/Story-Generation](https://huggingface.co/datasets/RUCAIBox/Story-Generation) dataset.
## Dataset Structure
### Data Fields
- `summary`: The summary of the story
- `story`: The story | qwedsacf/story-generation | [
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"story-generation",
"region:us"
]
| 2023-01-29T15:52:32+00:00 | {"language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "task_ids": [], "tags": ["story-generation"], "dataset_info": {"features": [{"name": "summary", "dtype": "string"}, {"name": "story", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 385345341, "num_examples": 427223}], "download_size": 213423683, "dataset_size": 385345341}} | 2023-02-02T11:00:46+00:00 |
a3950bf13410bda831594efe7af60334ca1cefb3 | # Dataset Card for "DTD_parition1_test_facebook_opt_6.7b_Visclues_ns_1880"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/DTD_parition1_test_facebook_opt_6.7b_Visclues_ns_1880 | [
"region:us"
]
| 2023-01-29T15:56:44+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 91909111.0, "num_examples": 1880}, {"name": "fewshot_1_bs_16", "num_bytes": 92558185.0, "num_examples": 1880}, {"name": "fewshot_3_bs_16", "num_bytes": 93868317.0, "num_examples": 1880}, {"name": "fewshot_5_bs_16", "num_bytes": 95179914.0, "num_examples": 1880}, {"name": "fewshot_8_bs_16", "num_bytes": 97151410.0, "num_examples": 1880}], "download_size": 453704587, "dataset_size": 470666937.0}} | 2023-01-29T18:43:32+00:00 |
258dd986d2c617001501ca8618fc786885d368b0 | # Dataset Card for "Sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nlpproject2023/Sentences_old | [
"region:us"
]
| 2023-01-29T17:31:44+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "facts", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3897012, "num_examples": 7405}], "download_size": 2542216, "dataset_size": 3897012}} | 2023-01-29T17:31:47+00:00 |
452abc047dc0a3a93fbd8344757ee6dad2199faa |
# LoRA - niji_jelly

LoRA trained on images from MidJourney's Niji style, specifically the jelly look.
All image examples are on the dalcefoV3Anime model: https://civitai.com/models/5398/dalcefov3animepastelmix
I recommend using the LoRA at around 0.8 emphasis for best results.
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the LoRA to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | kxly/niji_jelly | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2023-01-29T19:32:25+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "pretty_name": "niji_jelly Style", "thumbnail": "https://huggingface.co/datasets/kxly/niji_jelly/blob/main/niji_jelly_showcase_2.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2023-01-31T17:00:28+00:00 |
4549df85068667acb93e1147adbd7deb859fdb87 | # Dataset Card for "expository_documents_medicine"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nbalepur/expository_documents_medicine | [
"region:us"
]
| 2023-01-29T20:16:24+00:00 | {"dataset_info": {"features": [{"name": "aspect", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "web_sentences_with_desc", "sequence": "string"}, {"name": "web_sentences_no_desc", "sequence": "string"}, {"name": "output", "dtype": "string"}, {"name": "output_aug", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 52889067, "num_examples": 169}, {"name": "train", "num_bytes": 177551118.56296295, "num_examples": 590}, {"name": "val", "num_bytes": 25579398.437037036, "num_examples": 85}], "download_size": 140551296, "dataset_size": 256019584.0}} | 2023-01-29T20:16:36+00:00 |
b539037325ecb230f37bda2d9f8daad6c23db37c | nazneen/rlhf | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T20:35:11+00:00 | {"license": "apache-2.0"} | 2023-01-30T04:01:41+00:00 |
|
a30bc0a97757fc4a52d85255a78836faeb26e25f |
# Dataset Card for MultiGLUE
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a combination of the cola, mrpc, qnli, qqp, rte, sst2, and wnli subsets of the GLUE dataset. Its intended use is to benchmark language models on multitask binary classification.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Like the GLUE dataset, this dataset is in English.
## Dataset Structure
### Data Instances
An example instance looks like this:
```
{
"label": 1,
"task": "cola",
"sentence1": "The sailors rode the breeze clear of the rocks.",
"sentence2": null
}
```
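Since every instance carries a `task` field, the combined dataset can be filtered back down to a single GLUE task after loading. A minimal sketch, assuming the repository loads with its default configuration and the field names described below:
```python
from datasets import load_dataset

# Load the combined multitask training split
ds = load_dataset("MtCelesteMa/multiglue", split="train")

# Keep only the MRPC instances; task names follow the GLUE subset names
mrpc = ds.filter(lambda example: example["task"] == "mrpc")
print(len(mrpc), mrpc[0]["sentence1"])
```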
### Data Fields
- `task`: A `string` feature, indicating the GLUE task the instance is from.
- `sentence1`: A `string` feature.
- `sentence2`: A `string` feature.
- `label`: A classification label, either 0 or 1.
### Data Splits
- `train`: 551,282 instances
- `validation`: 48,564 instances
- `test`: 404,183 instances, no classification label (same as GLUE)
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
This dataset is created by combining the cola, mrpc, qnli, qqp, rte, sst2, and wnli subsets of the GLUE dataset.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | MtCelesteMa/multiglue | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|glue",
"language:en",
"license:cc-by-4.0",
"region:us"
]
| 2023-01-29T20:43:51+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|glue"], "task_categories": ["text-classification"], "pretty_name": "MultiGLUE"} | 2023-01-30T17:24:52+00:00 |
72aa902a9eaa7b73c247b99d4d0c7f12a3a9e561 |
# Sakimi-chan LoRA
## Who is Sakimi-chan?
Sakimi-chan is a Canadian artist best known for her digital paintings and unique style. She mainly draws fanart of games and popular characters and creates gifs for her fans with voiceovers.
Patreon: https://www.patreon.com/sakimichan
# Use Cases
The LoRA is in itself compatible with a wide variety of models. However, it is most effective when used with Kenshi or AbyssOrangeMix2.
The LoRA itself was trained with the token: ```skistyle```.
I would suggest using the token with AbyssOrangeMix2, but not with Kenshi, since I got better results that way.
The models mentioned are:
1. AbyssOrangeMix2 from [WarriorMama777](https://huggingface.co/WarriorMama777/OrangeMixs)
2. Kenshi Model from [Luna](https://huggingface.co/SweetLuna/Kenshi)
## Strength
I would personally use these strengths with the associated models:
- 0.8-0.85 for AbyssOrangeMix2
- 0.65-0.75 for Kenshi
# Showcase
**Example 1**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/sakimi-chan_LoRA/resolve/main/preview/Preview%20(1).png"/>
```
skistyle,
1girl, solo, blonde hair, armor, gauntlets, ahoge, green eyes, armored dress, ribbon, puffy sleeves, dress, braid, hair ribbon, looking at viewer, weapon, long sleeves, juliet sleeves, sword, blue ribbon, lips, sidelocks, hair bun, hand on hilt, excalibur (fate/stay night), breastplate, bangs
Steps: 32, Sampler: Euler a, CFG scale: 7
```
**Example 2**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/sakimi-chan_LoRA/resolve/main/preview/Preview%20(2).png"/>
```
skistyle,
1girl, best quality, (masterpiece:1.3), (white eyelashes:1.2), (albino:1.2), [black eyeshadow], bangs, closed mouth, cowboy shot, dress shirt, hair between eyes, long hair, looking at viewer, red eyes, shirt, simple background, sleeves past wrists, white hair, white shirt, wing collar, black skirt, (upper body:1.3), (highly detailed face:1.3)
Steps: 32, Sampler: Euler a, CFG scale: 7
```
**Example 3**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/sakimi-chan_LoRA/resolve/main/preview/Preview%20(3).png"/>
```
skistyle,
(extremely detailed CG unity 8k wallpaper), (ultra-detailed), masterpiece, best quality, raiden shogun, 1girl, breasts, solo
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7
```
**Example 4**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/sakimi-chan_LoRA/resolve/main/preview/Preview%20(4).png"/>
```
1girl, solo, long hair, blue eyes, small breasts, hair over one eye, breast curtain, looking at viewer, braid, blush, shoulder cutout, hair ornament, large breasts, smile, upper body, tassel, parted lips, white hair, clothing cutout, bodysuit, braided ponytail, bangs, bare shoulders, eyes visible through hair, gold trim, earrings, jewelry, very long hair, white background, (masterpiece:1.2), ((best quality)), (ultra-detailed)
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7
```
# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/sakimi-chan_LoRA | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2023-01-29T21:01:32+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/sakimi-chan_LoRA/resolve/main/preview/Preview.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2023-01-30T16:32:39+00:00 |
bd6a8eb56bd2263763221764be6e73cb60e2b952 |
Preprocessed version of Super-Natural-Instructions from https://github.com/allenai/natural-instructions/tree/master/splits. The same inputs may appear with different outputs; to avoid duplicate inputs, you can deduplicate by the `id` or the `inputs` field (see the sketch after the list below).
This is modified from https://huggingface.co/datasets/Muennighoff/natural-instructions
with a few improvements:
1. Adds positive/negative examples, outputs, explanations for each task, to
support different task definitions.
2. Adds an "eval" field which is True for the first 100 examples of each
test task (119 * 100 = 11900 examples). This field indicates whether an example
is part of the abbreviated + balanced test split. See
https://github.com/allenai/natural-instructions/blob/master/src/reorder_instances_for_testing.py.
3. Adds an "eval" field to the training dataset, which can be used as an
in-domain evaluation set. To do so, we sample a balanced set from the first 15
examples of each train split (757 * 15 = 11355 examples) and mark the "eval"
field as true.
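A minimal sketch of both operations (deduplicating by `id` and selecting the balanced "eval" subset), assuming the dataset loads with a `train` split and exposes the fields described above:
```python
from datasets import load_dataset

train = load_dataset("jayelm/natural-instructions", split="train")

# Deduplicate by `id`, keeping the first occurrence of each input.
# Note: the stateful filter assumes single-process execution (num_proc=1).
seen = set()
def first_occurrence(example):
    if example["id"] in seen:
        return False
    seen.add(example["id"])
    return True

train_unique = train.filter(first_occurrence)

# Select the balanced in-domain evaluation subset via the "eval" field
in_domain_eval = train.filter(lambda example: example["eval"])
```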
| jayelm/natural-instructions | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"region:us"
]
| 2023-01-29T21:27:10+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"]} | 2023-01-29T23:16:06+00:00 |
005402949a0e95be2f1100704bdbdd666a5813b0 | Yone2/Pokimane | [
"license:afl-3.0",
"region:us"
]
| 2023-01-29T22:21:47+00:00 | {"license": "afl-3.0"} | 2023-01-29T22:21:47+00:00 |
|
cc973ef7000a89c7eb030f22d6c4fe15593748ea | Yone12/Pokimane | [
"license:afl-3.0",
"region:us"
]
| 2023-01-29T22:37:04+00:00 | {"license": "afl-3.0"} | 2023-01-29T22:37:04+00:00 |
|
536ef6f0428bdb8fb3c58fa38b5b4e63a492c536 | # Dataset Card for "Caltech101_with_background_test_facebook_opt_350m_Attributes_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_350m_Attributes_ns_6084 | [
"region:us"
]
| 2023-01-29T22:53:35+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 100845617.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 102174276.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 104837565.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 107498683.5, "num_examples": 6084}, {"name": "fewshot_8_bs_16", "num_bytes": 111470389.5, "num_examples": 6084}], "download_size": 498526769, "dataset_size": 526826532.5}} | 2023-01-30T03:43:48+00:00 |
ffee0ad41979e23ddd690fbe139df623bf1d0ec6 | # Dataset Card for "Caltech101_with_background_test_facebook_opt_1.3b_Attributes_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_1.3b_Attributes_ns_6084 | [
"region:us"
]
| 2023-01-29T23:04:46+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 100845953.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 102174317.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 104837551.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 107497714.5, "num_examples": 6084}, {"name": "fewshot_8_bs_16", "num_bytes": 111468918.5, "num_examples": 6084}], "download_size": 498501590, "dataset_size": 526824455.5}} | 2023-01-30T04:20:59+00:00 |
c997db516b0bf37739237560fba9cb22f0ca1400 |
# MIRACL (sw) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-sw-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-sw-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-sw-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-sw-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity.
Compare the query embeddings with the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-sw-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
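As a reference for the metric, hit@3 can be computed directly from the top-3 retrieved document ids; a minimal sketch with hypothetical inputs:
```python
# Hypothetical example: top-3 retrieved doc ids per query, and the
# annotated relevant doc ids per query
top3 = [["d1", "d7", "d2"], ["d5", "d9", "d4"]]
relevant = [{"d2", "d8"}, {"d3"}]

# hit@3: share of queries with at least one relevant doc in the top-3
hit_at_3 = sum(
    any(doc in rel for doc in docs) for docs, rel in zip(top3, relevant)
) / len(top3)
print(hit_at_3)  # 0.5
```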
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-sw-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:sw",
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:14:56+00:00 | {"annotations_creators": ["expert-generated"], "language": ["sw"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:02:02+00:00 |
6c1b4d3cc42deca51c436afdf05a9f4e059843c0 |
# MIRACL (sw) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-sw-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-sw-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-sw-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-sw-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity.
Compare the query embeddings with the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-sw-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-sw-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:sw",
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:19:54+00:00 | {"annotations_creators": ["expert-generated"], "language": ["sw"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:02:12+00:00 |
b1c2db85560864aedcddb10d329be369368559e6 | # Dataset Card for "Caltech101_with_background_test_facebook_opt_2.7b_Attributes_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_2.7b_Attributes_ns_6084 | [
"region:us"
]
| 2023-01-29T23:20:52+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 100846738.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 102174531.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 104837834.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 107498126.5, "num_examples": 6084}, {"name": "fewshot_8_bs_16", "num_bytes": 111469795.5, "num_examples": 6084}], "download_size": 498513923, "dataset_size": 526827026.5}} | 2023-01-30T05:20:22+00:00 |
524a8745a132ec0a671520b33a2fb7722bc5610c |
# MIRACL (bn) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-bn-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-bn-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-bn-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-bn-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity.
Compare the query embeddings with the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-bn-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-bn-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:bn",
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:33:44+00:00 | {"annotations_creators": ["expert-generated"], "language": ["bn"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:01:45+00:00 |
16e4e7e0ac67bc564ad0c41e9821b52696e89fbf |
# MIRACL (bn) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-bn-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-bn-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-bn-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-bn-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity.
Compare the query embeddings with the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-bn-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-bn-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:bn",
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:35:54+00:00 | {"annotations_creators": ["expert-generated"], "language": ["bn"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:01:34+00:00 |
30e91018ad93c86410650a16319d2b445960a646 |
# Dataset Card for glue_augmented_mrpc
## Dataset Description
Augmented MRPC dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_mrpc | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:37:20+00:00 | {"license": "apache-2.0"} | 2023-01-30T14:34:28+00:00 |
2cfa372e28359bc7e706de2970a020988a259031 |
# MIRACL (hi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-hi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-hi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-hi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-hi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot product** as the similarity measure.
Compare the query embeddings against the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-hi-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: [1, num_dims]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
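The hit@3 numbers above can be computed from ranked retrieval output with a few lines. Below is a minimal sketch, assuming hypothetical `ranked` (query id to ranked doc ids) and `qrels` (query id to the set of relevant doc ids) mappings built from your run and MIRACL's relevance judgments:
```python
def hit_at_k(ranked: dict, qrels: dict, k: int = 3) -> float:
    # Count queries with at least one judged-relevant document in the top-k.
    hits = sum(
        1
        for qid, doc_ids in ranked.items()
        if any(doc_id in qrels.get(qid, set()) for doc_id in doc_ids[:k])
    )
    return hits / len(ranked)  # fraction of queries with a hit in the top-k
```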
| Cohere/miracl-hi-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:hi",
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:42:57+00:00 | {"annotations_creators": ["expert-generated"], "language": ["hi"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:02:40+00:00 |
8e1edead5f52cdbb4b326d5a53a8c6a69240d98d |
# MIRACL (hi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-hi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-hi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-hi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-hi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot product** as the similarity measure.
Compare the query embeddings against the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-hi-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: [1, num_dims]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-hi-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:hi",
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:45:55+00:00 | {"annotations_creators": ["expert-generated"], "language": ["hi"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:02:28+00:00 |
cf4ca0c85b133aeb3f6f1f809808ffaa98981ce8 |
# MIRACL (te) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-te-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-te-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-te-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-te-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-te-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-te-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-te-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-te-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot product** as the similarity measure.
Compare the query embeddings against the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-te-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: [1, num_dims]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
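For larger corpora, holding every score in memory quickly becomes impractical, which is why a vector database is recommended above. As one illustration, here is a minimal sketch using FAISS with an exact inner-product index, which matches the dot-product scoring used in this dataset (FAISS is an extra dependency, not required by the dataset itself):
```python
# Run: pip install faiss-cpu
import faiss
import numpy as np
from datasets import load_dataset

docs = load_dataset("Cohere/miracl-te-corpus-22-12", split="train")
doc_embeddings = np.asarray(docs["emb"], dtype="float32")

index = faiss.IndexFlatIP(doc_embeddings.shape[1])  # exact inner-product (dot-product) search
index.add(doc_embeddings)

queries = load_dataset("Cohere/miracl-te-queries-22-12", split="dev")
query_embedding = np.asarray([queries[0]["emb"]], dtype="float32")

scores, doc_ids = index.search(query_embedding, k=3)
for doc_id in doc_ids[0]:
    print(docs[int(doc_id)]["title"])
```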
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-te-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:te",
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:53:23+00:00 | {"annotations_creators": ["expert-generated"], "language": ["te"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:00:45+00:00 |
775adabf10de10ae9f9d802855c259fc6f1c3887 |
# Dataset Card for glue_augmented_cola
## Dataset Description
Augmented COLA dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_cola | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:53:32+00:00 | {"license": "apache-2.0"} | 2023-01-30T00:37:42+00:00 |
c8dd67fe995641cbbcde40c8bb76e71e8ee85807 |
# Dataset Card for glue_augmented_mnli
## Dataset Description
Augmented MNLI dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_mnli | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:54:07+00:00 | {"license": "apache-2.0"} | 2023-02-03T12:26:58+00:00 |
d081d92c754d312fb81ba3f3584b86d36e060d8b |
# Dataset Card for glue_augmented_qnli
## Dataset Description
Augmented QNLI dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_qnli | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:54:39+00:00 | {"license": "apache-2.0"} | 2023-01-30T14:13:49+00:00 |
d0c70968261462fb9061141b343de9450096c4ea |
# Dataset Card for glue_augmented_qqp
## Dataset Description
Augmented QQP dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_qqp | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:55:02+00:00 | {"license": "apache-2.0"} | 2023-02-02T20:12:47+00:00 |
05e151c0051aeac95f1d93f1e69a9f824d3d6189 |
# Dataset Card for glue_augmented_rte
## Dataset Description
Augmented RTE dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_rte | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:55:49+00:00 | {"license": "apache-2.0"} | 2023-01-30T14:26:00+00:00 |
61f5e05e9df6d6d7704ad5b37f9a8df8c249879a |
# Dataset Card for glue_augmented_sst2
## Dataset Description
Augmented SST-2 dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_sst2 | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:56:12+00:00 | {"license": "apache-2.0"} | 2023-01-30T13:21:43+00:00 |
513812bcca040dd26c4e8da4d700c6b7813929ca |
# MIRACL (te) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-te-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-te-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-te-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-te-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-te-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-te-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-te-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-te-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot product** as the similarity measure.
Compare the query embeddings against the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-te-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: [1, num_dims]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-te-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:te",
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:56:46+00:00 | {"annotations_creators": ["expert-generated"], "language": ["te"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:00:55+00:00 |
cc8e91e79b0217bf97de2091a16d1851a3adba32 |
# Dataset Card for glue_augmented_stsb
## Dataset Description
Augmented STSB dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_stsb | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:56:49+00:00 | {"license": "apache-2.0"} | 2023-01-30T14:29:39+00:00 |
2cb94d7fa71a6705688a0d59ca8d3001ed737fc7 | # Dataset Card for glue_augmented_wnli
## Dataset Description
Augmented WNLI dataset
**Reference:** https://huggingface.co/datasets/glue | gokuls/glue_augmented_wnli | [
"license:apache-2.0",
"region:us"
]
| 2023-01-29T23:57:12+00:00 | {"license": "apache-2.0"} | 2023-01-30T14:31:41+00:00 |
7f77840b9e5be966850cef99c0e693fb5bd8d5d0 |
# MIRACL (th) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot product** as the similarity measure.
Compare the query embeddings against the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-th-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: [1, num_dims]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-th-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:th",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T00:05:01+00:00 | {"annotations_creators": ["expert-generated"], "language": ["th"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:01:08+00:00 |
6a37209aa48318cad10a490be57ce82253fd379b |
# MIRACL (th) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot product** as the similarity measure.
Compare the query embeddings against the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-th-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: [1, num_dims]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-th-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:th",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T00:08:50+00:00 | {"annotations_creators": ["expert-generated"], "language": ["th"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:01:19+00:00 |
4e2f204e4eb41d08c1e4c8d2e41066c2e21c67e3 | # Dataset Card for "production-samples-17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Gabef/production-samples-17 | [
"region:us"
]
| 2023-01-30T01:07:58+00:00 | {"dataset_info": {"features": [{"name": "features", "sequence": "float32"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4142196436, "num_examples": 16560}], "download_size": 4000299254, "dataset_size": 4142196436}} | 2023-01-30T01:42:39+00:00 |
d92b0ba5320361f8ef26042bf822477a56dbd32b |
# Dataset Card for Swiss Court View Generation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Court View Generation is a multilingual, diachronic dataset of 404K Swiss Federal Supreme Court (FSCS) cases. It supports a challenging text generation task: generating the court view (the considerations) from the facts of a case.
This dataset contains court views for different languages and court chambers. It includes information such as decision id, language, chamber, file name, url, and the number of tokens in the facts and considerations sections.
The Main subset (L1) contains all the data; the Origin subset (L2) contains only cases with complete origin facts and origin considerations.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages, three of which (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents Main |Number of Documents Origin|
|------------|------------|--------------------------|--------------------------|
| German | **de** | 197K | 49 |
| French | **fr** | 163K | 221 |
| Italian | **it** | 44K | 0 |
## Dataset Structure
### Data Fields
```
decision_id (string)
facts (string)
considerations (string)
origin_facts (string)
origin_considerations (string)
law_area (string)
language (string)
year (int32)
court (string)
chamber (string)
canton (string)
region (string)
```
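A minimal loading sketch for this dataset; the configuration names (`main` for the L1 subset, `origin` for L2) are assumptions based on the subset description above and should be verified:
```python
from datasets import load_dataset

# "main" is an assumed config name for the L1 subset described above
dataset = load_dataset("rcds/swiss_court_view_generation", "main", split="train")
print(dataset[0]["decision_id"], dataset[0]["language"], dataset[0]["law_area"])
```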
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
| rcds/swiss_court_view_generation | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
]
| 2023-01-30T01:50:28+00:00 | {"language": ["de", "fr", "it"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Swiss Court View Generation"} | 2023-07-20T06:35:29+00:00 |
9e2fc1153948d772bea854cc12333bf0f523ee46 |
# AppStore Rankings Dataset
This is a dataset of App Store rankings for Japan (JP) and the United States (US); each region contains 176 JSON files of ranking chart information, covering both free and paid charts.
```
2023-01-30
├─jp
| └─... 176 files
└─us
├─Action - Top Free Games.json
├─Action - Top Paid Games.json
├─Adventure - Top Free Games
... 176 files
└─Word - Top Paid Games.json
```
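A minimal sketch for walking this layout and collecting all ranked apps into one list, using the `apps` key shown in the example below:
```python
import json
from pathlib import Path

apps = []
for path in Path("2023-01-30").glob("*/*.json"):  # jp/*.json and us/*.json
    with open(path, encoding="utf-8") as f:
        chart = json.load(f)
    apps.extend(chart["apps"])

print(len(apps), "ranked apps loaded")
```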
Example of a JSON file:
```json
{
"chart": "Action",
"section": "Top Free Games",
"apps": [
{
"chart": "Action",
"section": "Top Free Games",
"rank": "1",
"id": "431946152",
"name": "Roblox",
"dev": "Roblox Corporation",
"url": "https://apps.apple.com/us/app/roblox/id431946152",
"icon": {
"image/png": {
"320w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/320x0w.png",
"157w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/157x0w.png",
"146w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/146x0w.png",
"640w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/640x0w.png",
"314w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/314x0w.png",
"292w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/292x0w.png"
},
"image/webp": {
"320w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/320x0w.webp",
"157w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/157x0w.webp",
"146w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/146x0w.webp",
"640w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/640x0w.webp",
"314w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/314x0w.webp",
"292w": "https://is5-ssl.mzstatic.com/image/thumb/Purple113/v4/65/6d/d3/656dd35b-2896-0a9f-043a-b1be8501f7f8/AppIcon-1x_U007emarketing-0-7-0-0-85-220.png/292x0w.webp"
}
}
},
...
}
``` | p1atdev/appstore | [
"license:cc0-1.0",
"region:us"
]
| 2023-01-30T02:32:18+00:00 | {"license": "cc0-1.0"} | 2023-01-30T04:30:58+00:00 |
1202c05016f8379d970a3f9ae04813aa5cb0d2c2 | # Dataset Card for "binhvq-news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ademax/binhvq-news-sentence | [
"region:us"
]
| 2023-01-30T02:48:47+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26415267562, "num_examples": 141229987}], "download_size": 13799896934, "dataset_size": 26415267562}} | 2023-01-30T03:11:48+00:00 |
d5e7075a07884ce0b33d81f3d0ae1989a2a6d583 | nc33/same_text | [
"license:mit",
"region:us"
]
| 2023-01-30T03:20:33+00:00 | {"license": "mit"} | 2023-01-30T03:26:06+00:00 |
|
2b07626eba22fbb8ac936faa385767e06cba6593 | # Dataset Card for DSIR-filtered-pile-50M
## Dataset Description
- **Repository:** https://github.com/p-lambda/dsir
- **Paper:** https://arxiv.org/abs/2302.03169
- **Point of Contact: Sang Michael Xie <[email protected]>**
### Dataset Summary
This dataset is a subset of The Pile, selected via the DSIR data selection method. The target distribution for DSIR is the Wikipedia and BookCorpus2 subsets of The Pile.
### Languages
English (EN)
## Dataset Structure
A train set is provided (51.2M examples) in jsonl format.
### Data Instances
```
{"contents": "Hundreds of soul music enthusiasts from the United Kingdom plan to make their way to Detroit this month for a series of concerts.\n\nDetroit A-Go-Go, a festival organized by DJ Phil Dick, will take place Oct. 19-22 with 26 scheduled acts.\n\nThe festival is focused on what Dick calls the northern soul movement.\n\n\"We just love Detroit soul and Motown music,\" Dick said. \"It's been popular in England for decades. Every weekend, thousands of people go out and listen to this music in England.\"\n\nArtists booked for the festival include: The Elgins, Pat Lewis, Melvin Davis, The Velvelettes, The Contours, Kim Weston, Ronnie McNeir, The Capitols, Yvonne Vernee, JJ Barnes, Gino Washington, Spyder Turner, The Adorables, Lorraine Chandler, Eddie Parker, Dusty Wilson, The Precisions, The Professionals, The Tomangoes, The Fabulous Peps andNow that\u2019s a punishment: club vice president sent to train with the reserves!\n\nFor almost an entire year, Gabriel Bostina has been playing a double role for Universitatea Cluj. Unfortunately for him, the position acquired in the club\u2019s board didn\u2019t earn him any favors from the technical staff, who recently punished the central midfielder. Twice. First of all, Bostina lost the armband during one of the training camps from Antalya for some unknown disciplinary problems and now the player & vice president has suffered further embarrassment being sent to train with the reservers \u201cfor an unlimited period\u201d.\n\nCurrently injured, he failed to show up for the weekend training sessions that were going to be supervised by the club\u2019s medical staff, so the former Otelul, Steaua and Dinamo man is now", "metadata": {"pile_set_name": ["OpenWebText2", "Pile-CC"]}, "id": 423}
```
### Data Fields
```
"contents": the text
"metadata": contains information about the source(s) of text that the text comes from. Multiple sources means that the example is concatenated from two sources.
"id": Ignore - a non-unique identifier
```
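A minimal loading sketch; streaming avoids downloading the full 51.2M-example train set up front:
```python
from datasets import load_dataset

ds = load_dataset("stanford-crfm/DSIR-filtered-pile-50M", split="train", streaming=True)
for example in ds:
    print(example["contents"][:200])
    print(example["metadata"]["pile_set_name"])
    break
```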
## Dataset Creation
We first select 102.4M examples and then concatenate every two examples to create 51.2M examples.
This ensures that the examples are long enough for a max token length of 512 without much padding.
We train the importance weight estimator for DSIR from The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile.
We first select 98.4M examples from non-Wikipedia, non-book data, then randomly select 2M from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3.
After this, we concatenate every two examples.
### Source Data
The Pile
#### Initial Data Collection and Normalization
We select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks.
We first divided the documents in The Pile into chunks of 128 words, according to whitespace tokenization.
These chunks define the examples that we do data selection on, totaling 1.7B examples.
Before DSIR, we first apply a manual quality filter (see paper for details) and only consider the examples that pass the filter.
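As an illustration of the 128-word chunking step described above (whitespace tokenization; a sketch, not the authors' exact code):
```python
def chunk_document(text: str, size: int = 128) -> list:
    # Split on whitespace and regroup into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
```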
## Considerations for Using the Data
The dataset is biased towards choosing data from non-Wikipedia and non-Books sources. A balanced approach would be to mix in more data from Wikipedia and books.
### Dataset Curators
Sang Michael Xie, Shibani Santurkar
### Citation Information
Paper: <https://arxiv.org/abs/2302.03169>
```
@article{xie2023data,
author = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang},
journal = {arXiv preprint arXiv:2302.03169},
title = {Data Selection for Language Models via Importance Resampling},
year = {2023},
}
``` | stanford-crfm/DSIR-filtered-pile-50M | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"language modeling",
"masked language modeling",
"pretraining",
"pile",
"DSIR",
"arxiv:2302.03169",
"region:us"
]
| 2023-01-30T06:09:13+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["10M<n<100M"], "task_categories": ["text-generation", "fill-mask"], "tags": ["language modeling", "masked language modeling", "pretraining", "pile", "DSIR"]} | 2023-09-16T13:50:10+00:00 |
328ef4213fed8a3acf62b786fd4fcb3827b8bfa3 | zirui3/cuad-instructions | [
"license:cc-by-4.0",
"region:us"
]
| 2023-01-30T07:36:39+00:00 | {"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "text", "sequence": "string"}, {"name": "answer_start", "sequence": "int64"}]}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2933858226, "num_examples": 44900}, {"name": "test", "num_bytes": 397434014, "num_examples": 8364}], "download_size": 6827533, "dataset_size": 3331292240}} | 2023-01-30T07:49:22+00:00 |
|
c2b995af77956d93c1afbe79edd511645b058239 | # EQUATE
EQUATE (Evaluating Quantitative Understanding Aptitude in Textual Entailment) is a new framework for evaluating quantitative reasoning ability in textual entailment. EQUATE consists of five NLI test sets featuring quantities. You can download EQUATE here. Three of these tests for quantitative reasoning feature language from real-world sources such as news articles and social media (RTE, NewsNLI, Reddit), and two are controlled synthetic tests, evaluating model ability to reason with quantifiers and perform simple arithmetic (AWP, Stress Test).
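A minimal loading sketch; whether a per-test-set configuration name is required, and what the names are, is an assumption to verify on the dataset page:
```python
from datasets import load_dataset

# "NewsNLI" is a hypothetical config name for one of the five test sets
newsnli = load_dataset("tasksource/equate", "NewsNLI")
```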
```bib
@article{ravichander2019equate,
title={EQUATE: A Benchmark Evaluation Framework for Quantitative Reasoning in Natural Language Inference},
author={Ravichander, Abhilasha and Naik, Aakanksha and Rose, Carolyn and Hovy, Eduard},
journal={arXiv preprint arXiv:1901.03735},
year={2019}
}
``` | tasksource/equate | [
"license:apache-2.0",
"region:us"
]
| 2023-01-30T08:36:17+00:00 | {"license": "apache-2.0"} | 2023-04-07T08:43:36+00:00 |
1d95b91bfaf233063bc7b5f34447770d04774c4c |
### Dataset Summary
This is a Finnish SQuAD question answering dataset. It is a DeepL-based machine translation of the English SQuAD2.0 dataset, which combines the 100,000 questions in
SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.
To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported
by the paragraph and abstain from answering.
### Data Fields
The data fields are the same among all splits.
#### Example Data
```
{
"title": "Victoria_(Australia)",
"paragraphs": [
{
"qas": [
{
"question": "Millainen talous Victoriassa on?",
"id": "570d2417fed7b91900d45c3d",
"answers": [
{
"text": "monipuolinen",
"answer_start": 26,
"texts": [
"monipuolinen"
],
"starts": [
26
]
},
{
"text": "hyvin monipuolinen",
"answer_start": 20,
"texts": [
"hyvin ",
"monipuolinen"
],
"starts": [
20,
26
]
},
{
"text": "hyvin monipuolinen",
"answer_start": 20,
"texts": [
"hyvin ",
"monipuolinen"
],
"starts": [
20,
26
]
}
],
"is_impossible": false
}
],
"context": "Victorian talous on hyvin monipuolinen: palvelualat, kuten rahoitus- ja kiinteistöpalvelut, terveydenhuolto, koulutus, tukkukauppa, vähittäiskauppa, majoitus- ja ravitsemistoiminta ja teollisuus muodostavat suurimman osan työllisyydestä. Victorian osavaltion bruttokansantuote on Australian toiseksi suurin, vaikka Victoria on asukaskohtaisen bruttokansantuotteen osalta neljäntenä, koska sen kaivostoiminta on vähäistä. Kulttuurin alalla Melbournessa on useita museoita, taidegallerioita ja teattereita, ja sitä kutsutaan myös \"Australian urheilupääkaupungiksi\". Melbournen krikettikenttä (Melbourne Cricket Ground) on Australian suurin stadion, ja siellä järjestettiin vuoden 1956 kesäolympialaiset ja vuoden 2006 Kansainyhteisön kisat. Kenttää pidetään myös australialaisen kriketin ja australialaisen jalkapallon \"henkisenä kotina\", ja se isännöi vuosittain Australian jalkapalloliigan (AFL) suurta loppuottelua, johon osallistuu yleensä yli 95 000 ihmistä. Victoriaan kuuluu kahdeksan julkista yliopistoa, joista vanhin, Melbournen yliopisto, on perustettu vuonna 1853."
}
]
}
```
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `texts`: a `string` feature.
- `starts`: a `int32` feature.
### Data Splits
| name | train | validation |
| -------- | -----: | ---------: |
| squad_v2 | 130319 | 11873 |
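A minimal loading sketch with the `datasets` library (assuming the standard `load_dataset` interface works for this repository):
```python
from datasets import load_dataset

# Splits follow the table above: "train" and "validation".
squad_fi = load_dataset("TurkuNLP/squad_v2_fi")
print(squad_fi["train"][0]["question"])
```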
### Evaluation Results
Results from fine-tuning [TurkuNLP/bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) for extractive question answering.
| dataset | F1 |
| -------------------- | ----: |
| TurkuNLP/squad_v2_fi | 73.66 |
| ilmariky/SQuAD_v2_fi | 61.87 |
### Considerations for Using the Data
Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation
system development and evaluation of any kind. In general, we ask that you do not pair the original English data with the translations except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
### Licensing Information
Contents of this repository are distributed under the
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Copyright of the dataset contents belongs to the original copyright holders. | TurkuNLP/squad_v2_fi | [
"task_categories:question-answering",
"language:fi",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-01-30T09:03:27+00:00 | {"language": ["fi"], "license": "cc-by-sa-4.0", "task_categories": ["question-answering"]} | 2023-10-10T18:55:56+00:00 |
9621f16c04599d03bbdb3b777ed7e3bdf92cf8e9 |
# Dataset Card for "alsqa"
## Table of Contents
- [Dataset Card for "alsqa"](#dataset-card-for-alsqa)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [squad_v2](#squad_v2)
- [Data Fields](#data-fields)
- [squad_v2](#squad_v2-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/elronbandel/lexical-generalization](https://github.com/elronbandel/lexical-generalization)
- **Repository:** [https://github.com/elronbandel/lexical-generalization](https://github.com/elronbandel/lexical-generalization)
- **Paper:** [Lexical Generalization Improves with Larger Models and Longer Training](https://arxiv.org/abs/2210.12673)
- **Point of Contact:** [https://github.com/elronbandel/lexical-generalization](https://github.com/elronbandel/lexical-generalization)
- **Size of downloaded dataset files:** 100 KB
- **Size of the generated dataset:** 1 MB
- **Total amount of disk used:** 1 MB
### Dataset Summary
To test the use of the lexical overlap heuristic by Reading Comprehension models, we create a new test set: Analyzing Lexically Similar QA (ALSQA).
We augment the SQuAD 2.0 dataset (Rajpurkar et al., 2018) by asking crowdworkers to generate questions with high context-overlap from questions with low overlap (these questions are paraphrases of the original questions).
In the case of unanswerable questions, annotators were asked to re-write the question without changing its meaning while maintaining the reason for unanswerability. ALSQA contains 365 question pairs, 190 with answer and 174 without answer.
## Dataset Structure
Identical to SQuAD v2.
### Data Fields
The data fields are the same among all splits.
#### alsqa
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name | test |
| -------- | -----: |
| squad_v2 | 365 |
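A minimal loading sketch (assuming the standard `datasets` interface; per the table above, ALSQA ships only a test split):
```python
from datasets import load_dataset

alsqa = load_dataset("biu-nlp/alsqa", split="test")
print(alsqa[0]["question"])
```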
## Dataset Creation
### Curation Rationale
### Source Data
squad_v2
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2210.12673,
doi = {10.48550/ARXIV.2210.12673},
url = {https://arxiv.org/abs/2210.12673},
author = {Bandel, Elron and Goldberg, Yoav and Elazar, Yanai},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Lexical Generalization Improves with Larger Models and Longer Training},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
Thanks to [@elronbandel](https://github.com/elronbandel) for adding this dataset. | biu-nlp/alsqa | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:<1000",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2210.12673",
"region:us"
]
| 2023-01-30T09:22:51+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["<1000"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-classification"], "task_ids": ["open-domain-qa", "extractive-qa"], "paperswithcode_id": "alsqa", "pretty_name": "ALSQA", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "config_name": "alsqa"}} | 2023-02-15T07:46:52+00:00 |
33e36abe3f5e9e43128ace7cb518c26517905713 |
# MIRACL (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings either with a vector database (recommended) or by computing the dot products directly.
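If you opt for a vector index, the sketch below shows one way to run exact dot-product search over the corpus embeddings with FAISS (the use of FAISS here is our illustration, not part of the original instructions):
```python
# Illustrative sketch: exact inner-product search with FAISS (pip install faiss-cpu).
import faiss
import numpy as np
from datasets import load_dataset

docs = load_dataset("Cohere/miracl-ar-corpus-22-12", split="train")
doc_embeddings = np.asarray(docs["emb"], dtype="float32")

index = faiss.IndexFlatIP(doc_embeddings.shape[1])  # inner product == dot product
index.add(doc_embeddings)

queries = load_dataset("Cohere/miracl-ar-queries-22-12", split="dev")
query_embedding = np.asarray([queries[0]["emb"]], dtype="float32")

scores, doc_ids = index.search(query_embedding, k=3)  # top-3 dot-product matches
for doc_id in doc_ids[0]:
    print(docs[int(doc_id)]["title"])
```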
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ar-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # embedding of the example query only, shape [1, dim]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-ar-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T09:45:15+00:00 | {"annotations_creators": ["expert-generated"], "language": ["ar"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:00:08+00:00 |
48083ac51d439fa46656723eee67ce28483bb0d5 |
# MIRACL (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ar-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # embedding of the example query only, shape [1, dim]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-ar-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T09:57:38+00:00 | {"annotations_creators": ["expert-generated"], "language": ["ar"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T12:00:30+00:00 |
81300ee240953dcb505b786d4220b3ca6a8b060e | https://github.com/Aatlantise/syntactic-augmentation-nli/tree/master/datasets
```
@inproceedings{min-etal-2020-syntactic,
title = "Syntactic Data Augmentation Increases Robustness to Inference Heuristics",
author = "Min, Junghyun and
McCoy, R. Thomas and
Das, Dipanjan and
Pitler, Emily and
Linzen, Tal",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.212",
doi = "10.18653/v1/2020.acl-main.212",
pages = "2339--2352",
}
``` | metaeval/syntactic-augmentation-nli | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:mit",
"region:us"
]
| 2023-01-30T10:35:09+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"]} | 2023-06-13T06:28:15+00:00 |
f7f5588c7b9693db720631375c2f936521f422f6 |
# MIRACL (fa) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fa-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fa-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fa-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fa-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fa-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # embedding of the example query only, shape [1, dim]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-fa-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fa",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T13:01:02+00:00 | {"annotations_creators": ["expert-generated"], "language": ["fa"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:59:54+00:00 |
7096872a89a52c8e0f51d7360bff3316caec57b1 | ecoue/wmt19_de-en_tokenized | [
"license:mit",
"region:us"
]
| 2023-01-30T13:11:28+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "de", "sequence": "uint16"}, {"name": "en", "sequence": "uint16"}], "splits": [{"name": "train", "num_bytes": 251652546, "num_examples": 2238991}, {"name": "validation", "num_bytes": 245832, "num_examples": 2015}], "download_size": 299858004, "dataset_size": 251898378}} | 2023-02-15T22:06:25+00:00 |
|
1b9394c1f0b75bbbdf85a0c97f91dfe9626d63ac |
# MIRACL (fa) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fa-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fa-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fa-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fa-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fa-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # embedding of the example query only, shape [1, dim]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-fa-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fa",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T13:12:49+00:00 | {"annotations_creators": ["expert-generated"], "language": ["fa"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:59:41+00:00 |
ae94d192bedee3e80b4498ce8e3cf5847c1946d6 |
# NFCorpus: 20 generated queries (BEIR Benchmark)
This HF dataset contains the top-20 synthetic queries generated for each passage of the NFCorpus dataset from the BEIR benchmark (see the old BEIR dataset card below).
- DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1)
- id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`).
- Questions generated: 20
- Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py)
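For reference, queries of this kind can be produced with the docT5query model named above. The following is a minimal generation sketch; the sampling parameters are illustrative and may differ from the exact settings used to build this dataset:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "BeIR/query-gen-msmarco-t5-base-v1"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

passage = "Albert Einstein was a German-born theoretical physicist."
input_ids = tokenizer(passage, return_tensors="pt", truncation=True).input_ids
# Sample 20 queries per passage, mirroring the top-20 setup of this dataset.
outputs = model.generate(
    input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=20
)
for query in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(query)
```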
Below is the old dataset card for the BEIR benchmark.
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** [email protected]
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# Minimal usage sketch with the beir package (pip install beir);
# "scifact" is an example -- substitute any BEIR-Name from the table below.
from beir import util
from beir.datasets.data_loader import GenericDataLoader

url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** [email protected]
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
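Each downloadable dataset above can be loaded with the [BEIR](https://github.com/UKPLab/beir) toolkit; a minimal sketch (assuming the standard `GenericDataLoader` API, with `scifact` as the example dataset) is:
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the BEIR datasets listed above (here: scifact)
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: doc_id -> {"title": ..., "text": ...}
# queries: query_id -> query text
# qrels: query_id -> {doc_id: relevance}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```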
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | income/bioasq-top-20-gen-queries | [
"task_categories:text-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-01-30T13:16:21+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"} | 2023-01-30T13:29:05+00:00 |
63b5aad3783024b570f25eb35d89a616ed80e337 | epts/kanji-serif | [
"license:openrail",
"region:us"
]
| 2023-01-30T13:32:12+00:00 | {"license": "openrail", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49143088.248, "num_examples": 2136}], "download_size": 42660088, "dataset_size": 49143088.248}} | 2023-01-30T13:34:35+00:00 |
|
059e812bea1ccb20bd55510af10f5550a73d0d86 | Tinsae/beyaynetu | [
"license:mit",
"region:us"
]
| 2023-01-30T13:35:21+00:00 | {"license": "mit"} | 2023-01-30T13:40:03+00:00 |
|
60f1c3eeea99169106ba08e47dabf28b8ca93704 |
# MIRACL (fi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fi-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): just the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
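To make hit@3 concrete, here is a small sketch (the `run` ranking and `qrels` judgements below are made-up toy data):
```python
def hit_at_3(run, qrels):
    # run: query_id -> doc_ids ranked by decreasing dot-product score
    # qrels: query_id -> {doc_id: relevance}; relevance > 0 counts as relevant
    hits = sum(
        1 for qid, docs in run.items()
        if any(qrels.get(qid, {}).get(d, 0) > 0 for d in docs[:3])
    )
    return hits / len(run)

run = {"q1": ["d3", "d7", "d1"], "q2": ["d9", "d2", "d4"]}
qrels = {"q1": {"d1": 1}, "q2": {"d5": 1}}
print(hit_at_3(run, qrels))  # 0.5 -- only q1 has a relevant doc in its top-3
```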
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-fi-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fi",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T13:38:46+00:00 | {"annotations_creators": ["expert-generated"], "language": ["fi"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:59:27+00:00 |
98feaf5dade7e0a562f823ef12c13bcb44383f01 |
# MIRACL (fi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fi-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): just the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-fi-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fi",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T13:51:11+00:00 | {"annotations_creators": ["expert-generated"], "language": ["fi"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:59:18+00:00 |
d3a3d994bd952c599726c9c67e9e9a30116fc94e |
# MIRACL (id) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-id-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): just the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
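If you want to recompute nDCG@10 yourself, a minimal sketch with the `pytrec_eval` package (the toy `qrels` and `run` below are made up) could look like:
```python
import pytrec_eval  # pip install pytrec_eval

# qrels: query_id -> {doc_id: relevance}; run: query_id -> {doc_id: score}
qrels = {"q1": {"d1": 1}}
run = {"q1": {"d1": 0.9, "d2": 0.8, "d3": 0.1}}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut.10"})
per_query = evaluator.evaluate(run)
ndcg_at_10 = sum(q["ndcg_cut_10"] for q in per_query.values()) / len(per_query)
print(ndcg_at_10)  # 1.0 here, since the only relevant doc is ranked first
```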
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-id-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:id",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T14:12:12+00:00 | {"annotations_creators": ["expert-generated"], "language": ["id"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:59:03+00:00 |
71567d2f4e1ffb3fa5911f441873cd2569c88012 |
# MIRACL (id) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-id-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): just the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-id-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:id",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T14:21:04+00:00 | {"annotations_creators": ["expert-generated"], "language": ["id"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:58:53+00:00 |
5ce5474b5157a3948c17adda790739969f1eca13 |
# MAESTRO-2004-SYNTH Dataset
This is a synthesized audio dataset created from the MIDI files of the MAESTRO dataset [https://magenta.tensorflow.org/datasets/maestro].
Audio files are batch-synthesized in the REAPER DAW [https://www.reaper.fm/] using a superposition of triangle, square, and sine waves.
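As a sketch of this style of additive synthesis (the waveform mix and decay envelope below are illustrative choices, not the exact REAPER render settings):
```python
import numpy as np
from scipy import signal

def synth_note(freq, duration=1.0, sr=44100):
    """Render one note as a superposition of sine, triangle, and square waves."""
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    wave = (
        0.5 * np.sin(2 * np.pi * freq * t)
        + 0.3 * signal.sawtooth(2 * np.pi * freq * t, width=0.5)  # triangle wave
        + 0.2 * signal.square(2 * np.pi * freq * t)
    )
    envelope = np.exp(-3.0 * t)  # simple exponential decay
    return (wave * envelope).astype(np.float32)

note = synth_note(440.0)  # A4 at 44.1 kHz
```
| lucainiao/MAESTRO_2004_SYNTH | [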
"license:openrail",
"region:us"
]
| 2023-01-30T14:40:18+00:00 | {"license": "openrail"} | 2023-01-30T17:37:34+00:00 |
493bad50e227070c5c7323206e0265b44d1771ef | # Dataset Card for "mgb2_audios_transcriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | BelalElhossany/mgb2_audios_transcriptions | [
"region:us"
]
| 2023-01-30T15:37:03+00:00 | {"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1181823173.4, "num_examples": 5842}], "download_size": 1141789958, "dataset_size": 1181823173.4}} | 2023-01-30T15:37:38+00:00 |
e3ff87dfb64f39017e0e49f6555db3273c99261b | # Dataset Card for "Caltech101_with_background_test_facebook_opt_350m_Visclues_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_350m_Visclues_ns_6084 | [
"region:us"
]
| 2023-01-30T15:49:35+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 101625985.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 103738519.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 107968578.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 112184581.5, "num_examples": 6084}, {"name": "fewshot_8_bs_16", "num_bytes": 118494848.5, "num_examples": 6084}], "download_size": 502476013, "dataset_size": 544012513.5}} | 2023-01-30T20:06:08+00:00 |
1fba51ae9561c992681ce19546a33b7fa7bf2107 | # Dataset Card for "Caltech101_with_background_test_facebook_opt_1.3b_Visclues_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_1.3b_Visclues_ns_6084 | [
"region:us"
]
| 2023-01-30T16:07:07+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 101626875.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 103738393.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 107968267.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 112183477.5, "num_examples": 6084}, {"name": "fewshot_8_bs_16", "num_bytes": 118492965.5, "num_examples": 6084}], "download_size": 403110665, "dataset_size": 544009979.5}} | 2023-01-31T06:32:23+00:00 |
2e3f38d31427af91443e033b25a7012641cc85bf | nlpservicebots/learning | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:ab",
"license:openrail",
"region:us"
]
| 2023-01-30T16:14:50+00:00 | {"language": ["ab"], "license": "openrail", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"]} | 2023-01-30T16:16:10+00:00 |
|
16dcd3875c119221b24ac117acafa65a0da09c0f | bstds/geco_data_generator | [
"region:us"
]
| 2023-01-30T16:18:45+00:00 | {} | 2023-01-30T16:21:04+00:00 |
|
5f60de06665fd9a3c6fc7bcb04e9e6e38f2273db |
# MIRACL (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ko-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): just the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-ko-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ko",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T16:21:23+00:00 | {"annotations_creators": ["expert-generated"], "language": ["ko"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:58:37+00:00 |
f5ded8fff2ece87d743d8c90da6c0a85149d4588 |
# Dataset Card for "relbert/t_rex_relational_similarity"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/)
- **Dataset:** T-REX for relational similarity
## Dataset Summary
This is the clean version of [T-REX](https://aclanthology.org/L18-1544/) converted into the relational similarity dataset format.
The original dataset is [`relbert/t_rex`](https://huggingface.co/datasets/relbert/t_rex).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```python
{
"relation_type": "[Airline] has a hub in [Location]",
"positives": [["Korean Air", "Seoul"], ["Asiana Airlines", "Seoul"], ["Cathay Pacific", "Hong Kong"], ["Dragonair", "Hong Kong"], ["Qantas", "Singapore"], ["Air China", "Beijing"], ["Singapore Airlines", "Singapore"]],
"negatives": [["joint resolution", "United States Congress"], ["joint resolution", "Congress"], ["Great Seal", "United States"], ["trident", "Ukraine"], ["harp", "Ireland"], ["Plantagenet", "England"], ["Pahonia", "Lithuania"], ["slavery", "American Civil War"], ["main asteroid belt", "Solar System"], ["Colorado Desert", "Sonoran Desert"], ["DNA", "genome"], ["Mars", "Solar System"], ["Manchester United", "red"], ["Kermit", "greenness"], ["Ruby", "red"], ["Liberal Party", "red"], ["Macintosh", "Apple"], ["Apple II", "Apple"], ["Apple III", "Apple"], ["PlayStation 2", "Sony"], ["PlayStation 2", "Sony Computer Entertainment"], ["Beatles", "George Martin"], ["Baku", "Azerbaijan"], ["Accra", "Ghana"], ["Amman", "Jordan"], ["Hannover", "Lower Saxony"], ["Agartala", "Tripura"], ["Makassar", "South Sulawesi"], ["Taiwan", "China"], ["Poland", "United Nations"], ["Poland", "Europe"], ["Poland", "European Union"], ["Poland", "NATO"], ["German invasion", "22 June 1941"], ["Operation Barbarossa", "22 June 1941"], ["Brazil", "Catholic Church"], ["Turkey", "Islam"], ["Afghanistan", "Islam"], ["Iraq", "Islam"], ["Finland", "Evangelical Lutheran Church"], ["England", "Roman Catholic"], ["Congress", "United States"], ["Sejm", "Poland"], ["Diet", "Japan"], ["Majlis", "Iran"], ["Riksdag", "Sweden"], ["Croatian Parliament", "Croatia"], ["Knesset", "Israel"], ["Parliament", "Sri Lanka"], ["Russia", "Soviet Union"], ["Ukrainian SSR", "Soviet Union"], ["Royal Flying Corps", "Royal Air Force"], ["Canadian Army", "Canadian Forces"], ["Belarus", "Russian"], ["Russia", "Russian"], ["Ukraine", "Russian"], ["Kerala", "Malayalam"], ["American", "English"], ["zlib license", "Open Source Initiative"], ["EPL", "Open Source Initiative"], ["GNU General Public License", "Open Source Initiative"], ["Wrigley Field", "Cubs"], ["Wrigley Field", "Chicago Cubs"], ["Yankee Stadium", "Yankees"], ["Passaic River", "Newark Bay"], ["Rocky", "Sylvester Stallone"], ["The Godfather", "Francis Ford Coppola"], ["Citizen Kane", "Orson Welles"], ["She Hate Me", "Spike Lee"], ["Raajneeti", "Prakash Jha"], ["Doctor Who", "Patrick Troughton"], ["Doctor Who", "Tom Baker"], ["Jana Gana Mana", "India"], ["President", "White House"], ["Washington", "Federalist Party"], ["George Washington", "Federalist Party"], ["Joseph Stalin", "Communist Party"], ["Mao Zedong", "Communist Party"], ["Lenin", "Communist Party"], ["Nelson Mandela", "ANC"], ["Putin", "Communist Party"], ["Nehru", "Indian National Congress"], ["Nicolas Sarkozy", "UMP"], ["Andreas Papandreou", "PASOK"], ["Tim Cook", "Apple"], ["Israel", "Isaac"], ["Meg", "Peter"], ["Elizabeth II", "Canada"], ["Victor Emmanuel III", "Italy"], ["Umberto I", "Italy"], ["Victor Emmanuel II", "Italy"], ["Brahms", "pianist"], ["Beethoven", "piano"], ["Nicky Hopkins", "pianist"], ["Mozart", "violin"], ["John Zorn", "saxophonist"], ["McCartney", "piano"], ["Russians", "Russian"], ["The Real McCoys", "CBS"], ["Brookside", "Channel 4"], ["The Real McCoys", "ABC"], ["Windows", "Microsoft"], ["Busan", "Gyeongbu Line"], ["Seoul", "Gyeongbu Line"], ["Springer Mountain", "Appalachian Trail"], ["Doctor Who", "BBC One"], ["central time zone", "Illinois"], ["CT", "Canada"], ["Central Time Zone", "Mexico"], ["Central Time Zone", "United States"], ["CT", "American"], ["CT", "Mexico"], ["CT", "United States"], ["central time zone", "Indiana"], ["Central Time Zone", "American"]]
}
```
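A minimal loading sketch (assuming the dataset loads directly by its repository id; the split names are listed in the table below):
```python
from datasets import load_dataset

data = load_dataset("relbert/t_rex_relational_similarity", split="train")
example = data[0]
print(example["relation_type"])
print(len(example["positives"]), "positive pairs /", len(example["negatives"]), "negative pairs")
```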
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
| 721 | 602 | 24 |
## Citation Information
```
@inproceedings{elsahar2018t,
title={T-rex: A large scale alignment of natural language with knowledge base triples},
author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
| relbert/t_rex_relational_similarity | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
]
| 2023-01-30T16:30:08+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "pretty_name": "T-REX for relational similarity"} | 2023-05-07T07:54:47+00:00 |
037a753c7358589ee9084d7b10c135ab48b52041 |
# MIRACL (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ko-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): just the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-ko-queries-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ko",
"license:apache-2.0",
"region:us"
]
| 2023-01-30T16:31:20+00:00 | {"annotations_creators": ["expert-generated"], "language": ["ko"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:58:15+00:00 |
322aabe120fb7ba078852bfb1a83d7c3da525745 | # Dataset Card for "Caltech101_with_background_test_facebook_opt_2.7b_Visclues_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_2.7b_Visclues_ns_6084 | [
"region:us"
]
| 2023-01-30T16:31:24+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 101626831.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 103738640.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 107968813.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 112183847.5, "num_examples": 6084}], "download_size": 301172183, "dataset_size": 425518133.0}} | 2023-02-02T05:54:15+00:00 |
7ba7810015c49acc08f0712d46fb72ddb769eb6a | # Dataset Card for "wikipedia.SVO"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lshowway/wikipedia.SVO | [
"region:us"
]
| 2023-01-30T16:43:48+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6543464364, "num_examples": 4003741}], "download_size": 2591178685, "dataset_size": 6543464364}} | 2023-01-30T21:01:25+00:00 |
49630e4a9e4023a85bac2a268b46c6f5f62b0382 | # Dataset Card for "Caltech101_with_background_test_facebook_opt_350m_Attributes_Caption_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_350m_Attributes_Caption_ns_6084 | [
"region:us"
]
| 2023-01-30T16:44:54+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 101122737.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 105972735.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 109196994.5, "num_examples": 6084}, {"name": "fewshot_8_bs_16", "num_bytes": 114023910.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 102737491.5, "num_examples": 6084}], "download_size": 499616757, "dataset_size": 533053869.5}} | 2023-02-02T04:52:24+00:00 |
685abcb99823bb5e5a54e3e2b6d76214936104c6 | mehmetkay-sudo/rpbt | [
"license:gpl-2.0",
"region:us"
]
| 2023-01-30T16:52:32+00:00 | {"license": "gpl-2.0"} | 2023-01-30T16:54:08+00:00 |
|
f709d322d568879e9d088e6c99e24252f3d7287d | # Dataset Card for "Caltech101_with_background_test_facebook_opt_1.3b_Attributes_Caption_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_1.3b_Attributes_Caption_ns_6084 | [
"region:us"
]
| 2023-01-30T16:57:22+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 101124421.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 102737621.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 105972678.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 109196062.5, "num_examples": 6084}, {"name": "fewshot_8_bs_16", "num_bytes": 114022454.5, "num_examples": 6084}], "download_size": 400479546, "dataset_size": 533053238.5}} | 2023-01-31T08:09:29+00:00 |
4046dd2df92da6641bfc1e513526a761038bfd6d | oz117/arg | [
"license:openrail",
"region:us"
]
| 2023-01-30T17:05:09+00:00 | {"license": "openrail"} | 2023-01-30T17:06:02+00:00 |
|
dd77e1c9e414af05759a57936b0a28a6aec8b945 | # AutoTrain Dataset for project: ssip2
## Dataset Description
This dataset has been automatically processed by AutoTrain for project ssip2.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x384 RGB PIL image>",
"target": 3
},
{
"image": "<512x384 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['cardboard', 'glass', 'metal', 'paper', 'plastic', 'trash'], id=None)"
}
```
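As a quick sanity check, the label mapping can be read off the `target` feature after loading. A minimal sketch, assuming the repository is public and loads with the standard `datasets` API:
```python
from datasets import load_dataset

# Hypothetical quick check of the label mapping (repo access is an assumption)
ds = load_dataset("kripsjaviya/autotrain-data-ssip2")
print(ds["train"].features["target"].names)
# ['cardboard', 'glass', 'metal', 'paper', 'plastic', 'trash']
```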
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 355 |
| valid | 90 |
| kripsjaviya/autotrain-data-ssip2 | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-30T17:07:34+00:00 | {"task_categories": ["image-classification"]} | 2023-01-30T19:23:51+00:00 |
238f127dbe0778ed300ec807012f7883e368bb77 | # Dataset Card for "Caltech101_with_background_test_facebook_opt_2.7b_Attributes_Caption_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_2.7b_Attributes_Caption_ns_6084 | [
"region:us"
]
| 2023-01-30T17:17:55+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 101124605.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 102737704.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 105972706.5, "num_examples": 6084}, {"name": "fewshot_5_bs_16", "num_bytes": 109196422.5, "num_examples": 6084}, {"name": "fewshot_8_bs_16", "num_bytes": 114023345.5, "num_examples": 6084}], "download_size": 400491543, "dataset_size": 533054784.5}} | 2023-02-02T05:31:39+00:00 |
7c0cdc2a7235aaf84479a357f143d2e389907254 | marlontosta/direitodigital | [
"task_categories:token-classification",
"language:pt",
"region:us"
]
| 2023-01-30T17:56:15+00:00 | {"language": ["pt"], "task_categories": ["token-classification"]} | 2023-01-31T13:12:47+00:00 |
|
9824832fc36e3f2837130b814cb1f3c345014b03 |
Manually created seed dataset used for bootstrapping in the Self-Instruct paper: https://arxiv.org/abs/2212.10560. This is part of the instruction fine-tuning datasets. | nazneen/self-instruct-seed | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"arxiv:2212.10560",
"region:us"
]
| 2023-01-30T18:16:12+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["conversational"]} | 2023-01-30T18:21:07+00:00 |
0cd58675cd4f234b23423b6dd81173197e7bfbf0 | # Dataset Card for "wikipedia.VOS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lshowway/wikipedia.VOS | [
"region:us"
]
| 2023-01-30T18:45:44+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6595233500, "num_examples": 4035672}], "download_size": 4574322349, "dataset_size": 6595233500}} | 2023-01-30T22:20:02+00:00 |
20844442dda6d3b603eebf85a07a9f5fe21b285c | # Dataset Card for "wikipedia.OSV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lshowway/wikipedia.OSV | [
"region:us"
]
| 2023-01-30T18:46:08+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6595233500, "num_examples": 4035672}], "download_size": 644339821, "dataset_size": 6595233500}} | 2023-01-30T22:42:30+00:00 |
f7a6dae3d36d6f8ac9890a20730ce41489300b20 | # Dataset Card for "Ethiopian-foods"
### The dataset contains images of the following Ethiopian foods collected from social media
* Beyaynetu (በያይነቱ)
* Chechebsa (ጨጨብሳ)
* Doro Wat (ዶሮ ወጥ)
* Fir-fir (ፍርፍር)
* Genfo (ገንፎ)
* Kikil (ቅቅል)
* Kitfo (ክትፎ)
* Shekla Tibs (ሸክላ ጥብስ)
* Shiro Wat (ሽሮ ወጥ)
* Tihlo (ጥህሎ)
* Tire Siga (ጥሬ ስጋ)
| Tinsae/Ethiopian-foods | [
"region:us"
]
| 2023-01-30T19:52:39+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 278017824.86, "num_examples": 1097}], "download_size": 271567376, "dataset_size": 278017824.86}} | 2023-01-31T15:25:36+00:00 |
0717c496e0c5ce02f3af6ffaed507b0285a24b03 | PUPPYSTYLE POV V1.4 | Druogsala/Starblazer | [
"region:us"
]
| 2023-01-30T20:29:36+00:00 | {} | 2023-01-31T01:29:44+00:00 |
67eb17475a4da720abfc5d122c515f3fb3a7670b | ---
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: narrative-arc
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for [narrative-arc]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Dataset of stories used for Narrative Arc post-processing. An instance of a story in this dataset will include the original text and its metadata, the transformer model used to make the embeddings, the model's checkpoint, the window indices of the stored embeddings, and the embeddings.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
An example story will look like the following:
```
{
    "book name": "",
    "book meta data": "",
    "full text": "",
    "model": {
        "distilbert-base-cased": {
            "window indices": (first_index, last_index),
            "embeddings": [[]]
        },
        "distilbert-base-uncased": {
            "window indices": (first_index, last_index),
            "embeddings": [[]]
        }
    }
}
```
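The embeddings themselves come from running a transformer checkpoint over sliding windows of the full text. A minimal sketch of one way such windowed embeddings could be produced (the window size, stride, and mean pooling here are illustrative assumptions, not the curators' exact pipeline):
```python
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

def window_embeddings(text, window_size=512, stride=256):
    """Embed overlapping token windows of `text`, one vector per window."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    embeddings = []
    for start in range(0, max(len(ids) - window_size, 0) + 1, stride):
        window = torch.tensor([ids[start:start + window_size]])
        with torch.no_grad():
            hidden = model(window).last_hidden_state  # [1, window_len, dim]
        # Mean-pool the token states into a single vector for this window
        embeddings.append(hidden.mean(dim=1).squeeze(0).tolist())
    return embeddings
```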
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
The processed text needs to be stored somewhere that is both accessible and can accommodate the large amount of data generated.
### Source Data
#### Initial Data Collection and Normalization
The data were sourced from the [Project Gutenberg](https://www.gutenberg.org/) library.
#### Who are the source language producers?
Each instance in the dataset represents a text written by a human author. At present, data selected for processing are English-language short stories.
### Personal and Sensitive Information
Not applicable.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | vanderbilt-dsi/narrative-arc | [
"license:mit",
"region:us"
]
| 2023-01-30T20:33:01+00:00 | {"license": "mit"} | 2023-02-27T18:28:44+00:00 |
aebd83e45196cb2a9b2c8408a3dea61bbbbeebec | # Dataset Card for "relbert/conceptnet"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://home.ttic.edu/~kgimpel/commonsense.html](https://home.ttic.edu/~kgimpel/commonsense.html)
- **Dataset:** High Confidence Subset of ConceptNet for link prediction
### Dataset Summary
The selected subset of ConceptNet used in [this work](https://home.ttic.edu/~kgimpel/commonsense.html).
We removed `NotCapableOf` and `NotDesires` to keep only the positive relations.
We use the original test set as the test set, dev1 as the training set, and dev2 as the validation set.
- Number of instances
| | train | validation | test |
|:--------------------------------|--------:|-------------:|-------:|
| number of pairs | 583082 | 1184 | 1187 |
| number of unique relation types | 28 | 20 | 19 |
- Number of pairs in each relation type
| | number of pairs (train) | number of pairs (validation) | number of pairs (test) |
|:-----------------|--------------------------:|-------------------------------:|-------------------------:|
| AtLocation | 69838 | 230 | 250 |
| CapableOf | 71840 | 124 | 144 |
| Causes | 34732 | 52 | 45 |
| CausesDesire | 9616 | 15 | 5 |
| CreatedBy | 534 | 1 | 2 |
| DefinedAs | 11048 | 2 | 1 |
| DesireOf | 28 | 0 | 0 |
| Desires | 8960 | 20 | 8 |
| HasA | 19234 | 43 | 41 |
| HasFirstSubevent | 7350 | 2 | 1 |
| HasLastSubevent | 5916 | 5 | 0 |
| HasPainCharacter | 2 | 0 | 0 |
| HasPainIntensity | 2 | 0 | 0 |
| HasPrerequisite | 47298 | 116 | 109 |
| HasProperty | 36610 | 63 | 70 |
| HasSubevent | 52468 | 82 | 83 |
| InheritsFrom | 112 | 0 | 0 |
| InstanceOf | 138 | 0 | 0 |
| IsA | 71034 | 197 | 211 |
| LocatedNear | 6 | 0 | 0 |
| LocationOfAction | 6 | 0 | 0 |
| MadeOf | 1518 | 10 | 14 |
| MotivatedByGoal | 23668 | 17 | 8 |
| PartOf | 5402 | 19 | 22 |
| ReceivesAction | 20656 | 15 | 11 |
| RelatedTo | 178 | 0 | 1 |
| SymbolOf | 328 | 2 | 0 |
| UsedFor | 84560 | 169 | 161 |
## Dataset Structure
An example of `train` looks as follows.
```json
{
"relation": "IsA",
"head": "baseball",
"tail": "sport"
}
```
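To load the splits, a minimal sketch assuming the standard `datasets` API:
```python
from datasets import load_dataset

# Splits as described in the tables above: train, validation, test
dataset = load_dataset("relbert/conceptnet")
train, valid, test = dataset["train"], dataset["validation"], dataset["test"]
print(train[0])  # {'relation': ..., 'head': ..., 'tail': ...}
```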
## Citation Information
```
@InProceedings{P16-1137,
author = "Li, Xiang
and Taheri, Aynaz
and Tu, Lifu
and Gimpel, Kevin",
title = "Commonsense Knowledge Base Completion",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ",
year = "2016",
publisher = "Association for Computational Linguistics",
pages = "1445--1455",
location = "Berlin, Germany",
doi = "10.18653/v1/P16-1137",
url = "http://aclweb.org/anthology/P16-1137"
}
``` | relbert/conceptnet | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
]
| 2023-01-30T21:16:07+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "pretty_name": "relbert/conceptnet"} | 2023-03-31T09:34:46+00:00 |
b538c6e111914a812af28ff677f8cffc9b404b7d |
# Dataset Card for AfriSpeech-200
## Table of Contents
- [Dataset Card for AfriSpeech-200](#dataset-card-for-afrispeech-200)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [How to use](#how-to-use)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/intron-innovation/AfriSpeech-Dataset-Paper
- **Repository:** https://github.com/intron-innovation/AfriSpeech-Dataset-Paper
- **Paper:** [AfriSpeech-200: Pan-African accented speech dataset for clinical and general domain ASR](https://github.com/intron-innovation/AfriSpeech-Dataset-Paper)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Intron Innovation](mailto:[email protected])
### Dataset Summary
AFRISPEECH-200 is a 200hr Pan-African speech corpus for clinical and general domain English-accented ASR: a dataset with 120 African accents from 13 countries and 2,463 unique African speakers.
Our goal is to raise awareness for and advance Pan-African English ASR research, especially for the clinical domain.
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
```python
from datasets import load_dataset
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "all")
```
The entire dataset is ~120GB and may take about 2hrs to download depending on internet speed/bandwidth. If you have disk space or bandwidth limitations, you can use `streaming` mode described below to work with smaller subsets of the data.
Alternatively, you can pass a config to the `load_dataset` function to download only the subset of the data corresponding to a specific accent of interest. For example, to download the `isizulu` config, simply specify the corresponding accent config name. The list of supported accents is provided in the `accent list` section below:
```python
from datasets import load_dataset
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)
print(next(iter(afrispeech)))
print(list(afrispeech.take(5)))
```
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
batch_sampler = BatchSampler(RandomSampler(afrispeech), batch_size=32, drop_last=False)
dataloader = DataLoader(afrispeech, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)
dataloader = DataLoader(afrispeech, batch_size=32)
```
### Caveats
Note that until the end of the ongoing [AfriSpeech ASR Challenge event](https://zindi.africa/competitions/intron-afrispeech-200-automatic-speech-recognition-challenge) (Feb - May 2023), the transcripts in the validation set are hidden, and the test set will remain unreleased until May 19, 2023.
### Fine-tuning Colab tutorial
To walk through a complete colab tutorial that finetunes a wav2vec2 model on the afrispeech-200 dataset with `transformers`, take a look at this colab notebook [afrispeech/wav2vec2-colab-tutorial](https://colab.research.google.com/drive/1uZYew6pcgN6UE6sFDLohxD_HKivvDXzD?usp=sharing).
### Supported Tasks and Leaderboards
- Automatic Speech Recognition
- Speech Synthesis (Text-to-Speech)
### Languages
English (Accented)
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called `path`, and its transcription, called `transcript`. Some additional information about the speaker is provided.
```
{
'speaker_id': 'b545a4ca235a7b72688a1c0b3eb6bde6',
'path': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397.wav',
'audio_id': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397',
'audio': {
'path': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397.wav',
'array': array([0.00018311, 0.00061035, 0.00012207, ..., 0.00192261, 0.00195312, 0.00216675]),
'sampling_rate': 44100},
'transcript': 'His mother is in her 50 s and has hypertension .',
'age_group': '26-40',
'gender': 'Male',
'accent': 'yoruba',
'domain': 'clinical',
'country': 'US',
'duration': 3.241995464852608
}
```
### Data Fields
- speaker_id: An id for which speaker (voice) made the recording
- path: The path to the audio file
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A resampling sketch is shown after this list.
- transcript: The sentence the user was prompted to speak
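Most English ASR checkpoints (e.g. wav2vec2) expect 16 kHz input, while this corpus is recorded at 44.1 kHz. A minimal resampling sketch (the 16 kHz target is an assumption; it depends on your model):
```python
from datasets import Audio, load_dataset

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
# Cast the audio column so samples are resampled from 44.1 kHz to 16 kHz on access
afrispeech = afrispeech.cast_column("audio", Audio(sampling_rate=16_000))
sample = afrispeech[0]["audio"]  # decoded and resampled at access time
```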
### Data Splits
The speech material has been subdivided into portions for train, dev, and test.
Speech was recorded in a quiet environment with a high-quality microphone; speakers were asked to read one sentence at a time.
- Total Number of Unique Speakers: 2,463
- Female/Male/Other Ratio: 57.11/42.41/0.48
- Data was first split on speakers. Speakers in Train/Dev/Test do not cross partitions.
| | Train | Dev | Test |
| ----------- | ----------- | ----------- | ----------- |
| # Speakers | 1466 | 247 | 750 |
| # Seconds | 624228.83 | 31447.09 | 67559.10 |
| # Hours | 173.4 | 8.74 | 18.77 |
| # Accents | 71 | 45 | 108 |
| Avg secs/speaker | 425.81 | 127.32 | 90.08 |
| Avg num clips/speaker | 39.56 | 13.08 | 8.46 |
| Avg num speakers/accent | 20.65 | 5.49 | 6.94 |
| Avg secs/accent | 8791.96 | 698.82 | 625.55 |
| # clips general domain | 21682 | 1407 | 2723 |
| # clips clinical domain | 36318 | 1824 | 3623 |
## Dataset Creation
### Curation Rationale
Africa has a very low doctor-to-patient ratio.
At very busy clinics, doctors can see 30+ patients per day, a heavy patient burden compared with
developed countries, yet productivity tools such as clinical automatic speech recognition
(ASR) are lacking for these overworked clinicians. However, clinical ASR is mature, even ubiquitous,
in developed nations, and clinician-reported performance of commercial clinical ASR systems
is generally satisfactory. Furthermore, the recent performance of general domain ASR is
approaching human accuracy. However, several gaps exist. Several publications have
highlighted racial bias in speech-to-text algorithms, and performance on minority
accents lags significantly. To our knowledge, there is no publicly available research or
benchmark on accented African clinical ASR, and speech data is non-existent for the
majority of African accents. We release AfriSpeech: 200hrs of Pan-African speech
(67,577 clips from 2,463 unique speakers across 120 indigenous accents from 13 countries)
for clinical and general domain ASR, together with a benchmark test set and publicly
available pre-trained models achieving SOTA performance on the AfriSpeech benchmark.
### Source Data
#### Country Stats
| Country | Clips | Speakers | Duration (seconds) | Duration (hrs) |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| NG | 45875 | 1979 | 512646.88 | 142.40 |
| KE | 8304 | 137 | 75195.43 | 20.89 |
| ZA | 7870 | 223 | 81688.11 | 22.69 |
| GH | 2018 | 37 | 18581.13 | 5.16 |
| BW | 1391 | 38 | 14249.01 | 3.96 |
| UG | 1092 | 26 | 10420.42 | 2.89 |
| RW | 469 | 9 | 5300.99 | 1.47 |
| US | 219 | 5 | 1900.98 | 0.53 |
| TR | 66 | 1 | 664.01 | 0.18 |
| ZW | 63 | 3 | 635.11 | 0.18 |
| MW | 60 | 1 | 554.61 | 0.15 |
| TZ | 51 | 2 | 645.51 | 0.18 |
| LS | 7 | 1 | 78.40 | 0.02 |
#### Accent Stats
| Accent | Clips | Speakers | Duration (s) | Country | Splits |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| yoruba | 15407 | 683 | 161587.55 | US,NG | train,test,dev |
| igbo | 8677 | 374 | 93035.79 | US,NG,ZA | train,test,dev |
| swahili | 6320 | 119 | 55932.82 | KE,TZ,ZA,UG | train,test,dev |
| hausa | 5765 | 248 | 70878.67 | NG | train,test,dev |
| ijaw | 2499 | 105 | 33178.9 | NG | train,test,dev |
| afrikaans | 2048 | 33 | 20586.49 | ZA | train,test,dev |
| idoma | 1877 | 72 | 20463.6 | NG | train,test,dev |
| zulu | 1794 | 52 | 18216.97 | ZA,TR,LS | dev,train,test |
| setswana | 1588 | 39 | 16553.22 | BW,ZA | dev,test,train |
| twi | 1566 | 22 | 14340.12 | GH | test,train,dev |
| isizulu | 1048 | 48 | 10376.09 | ZA | test,train,dev |
| igala | 919 | 31 | 9854.72 | NG | train,test |
| izon | 838 | 47 | 9602.53 | NG | train,dev,test |
| kiswahili | 827 | 6 | 8988.26 | KE | train,test |
| ebira | 757 | 42 | 7752.94 | NG | train,test,dev |
| luganda | 722 | 22 | 6768.19 | UG,BW,KE | test,dev,train |
| urhobo | 646 | 32 | 6685.12 | NG | train,dev,test |
| nembe | 578 | 16 | 6644.72 | NG | train,test,dev |
| ibibio | 570 | 39 | 6489.29 | NG | train,test,dev |
| pidgin | 514 | 20 | 5871.57 | NG | test,train,dev |
| luhya | 508 | 4 | 4497.02 | KE | train,test |
| kinyarwanda | 469 | 9 | 5300.99 | RW | train,test,dev |
| xhosa | 392 | 12 | 4604.84 | ZA | train,dev,test |
| tswana | 387 | 18 | 4148.58 | ZA,BW | train,test,dev |
| esan | 380 | 13 | 4162.63 | NG | train,test,dev |
| alago | 363 | 8 | 3902.09 | NG | train,test |
| tshivenda | 353 | 5 | 3264.77 | ZA | test,train |
| fulani | 312 | 18 | 5084.32 | NG | test,train |
| isoko | 298 | 16 | 4236.88 | NG | train,test,dev |
| akan (fante) | 295 | 9 | 2848.54 | GH | train,dev,test |
| ikwere | 293 | 14 | 3480.43 | NG | test,train,dev |
| sepedi | 275 | 10 | 2751.68 | ZA | dev,test,train |
| efik | 269 | 11 | 2559.32 | NG | test,train,dev |
| edo | 237 | 12 | 1842.32 | NG | train,test,dev |
| luo | 234 | 4 | 2052.25 | UG,KE | test,train,dev |
| kikuyu | 229 | 4 | 1949.62 | KE | train,test,dev |
| bekwarra | 218 | 3 | 2000.46 | NG | train,test |
| isixhosa | 210 | 9 | 2100.28 | ZA | train,dev,test |
| hausa/fulani | 202 | 3 | 2213.53 | NG | test,train |
| epie | 202 | 6 | 2320.21 | NG | train,test |
| isindebele | 198 | 2 | 1759.49 | ZA | train,test |
| venda and xitsonga | 188 | 2 | 2603.75 | ZA | train,test |
| sotho | 182 | 4 | 2082.21 | ZA | dev,test,train |
| akan | 157 | 6 | 1392.47 | GH | test,train |
| nupe | 156 | 9 | 1608.24 | NG | dev,train,test |
| anaang | 153 | 8 | 1532.56 | NG | test,dev |
| english | 151 | 11 | 2445.98 | NG | dev,test |
| afemai | 142 | 2 | 1877.04 | NG | train,test |
| shona | 138 | 8 | 1419.98 | ZA,ZW | test,train,dev |
| eggon | 137 | 5 | 1833.77 | NG | test |
| luganda and kiswahili | 134 | 1 | 1356.93 | UG | train |
| ukwuani | 133 | 7 | 1269.02 | NG | test |
| sesotho | 132 | 10 | 1397.16 | ZA | train,dev,test |
| benin | 124 | 4 | 1457.48 | NG | train,test |
| kagoma | 123 | 1 | 1781.04 | NG | train |
| nasarawa eggon | 120 | 1 | 1039.99 | NG | train |
| tiv | 120 | 14 | 1084.52 | NG | train,test,dev |
| south african english | 119 | 2 | 1643.82 | ZA | train,test |
| borana | 112 | 1 | 1090.71 | KE | train |
| swahili ,luganda ,arabic | 109 | 1 | 929.46 | UG | train |
| ogoni | 109 | 4 | 1629.7 | NG | train,test |
| mada | 109 | 2 | 1786.26 | NG | test |
| bette | 106 | 4 | 930.16 | NG | train,test |
| berom | 105 | 4 | 1272.99 | NG | dev,test |
| bini | 104 | 4 | 1499.75 | NG | test |
| ngas | 102 | 3 | 1234.16 | NG | train,test |
| etsako | 101 | 4 | 1074.53 | NG | train,test |
| okrika | 100 | 3 | 1887.47 | NG | train,test |
| venda | 99 | 2 | 938.14 | ZA | train,test |
| siswati | 96 | 5 | 1367.45 | ZA | dev,train,test |
| damara | 92 | 1 | 674.43 | NG | train |
| yoruba, hausa | 89 | 5 | 928.98 | NG | test |
| southern sotho | 89 | 1 | 889.73 | ZA | train |
| kanuri | 86 | 7 | 1936.78 | NG | test,dev |
| itsekiri | 82 | 3 | 778.47 | NG | test,dev |
| ekpeye | 80 | 2 | 922.88 | NG | test |
| mwaghavul | 78 | 2 | 738.02 | NG | test |
| bajju | 72 | 2 | 758.16 | NG | test |
| luo, swahili | 71 | 1 | 616.57 | KE | train |
| dholuo | 70 | 1 | 669.07 | KE | train |
| ekene | 68 | 1 | 839.31 | NG | test |
| jaba | 65 | 2 | 540.66 | NG | test |
| ika | 65 | 4 | 576.56 | NG | test,dev |
| angas | 65 | 1 | 589.99 | NG | test |
| ateso | 63 | 1 | 624.28 | UG | train |
| brass | 62 | 2 | 900.04 | NG | test |
| ikulu | 61 | 1 | 313.2 | NG | test |
| eleme | 60 | 2 | 1207.92 | NG | test |
| chichewa | 60 | 1 | 554.61 | MW | train |
| oklo | 58 | 1 | 871.37 | NG | test |
| meru | 58 | 2 | 865.07 | KE | train,test |
| agatu | 55 | 1 | 369.11 | NG | test |
| okirika | 54 | 1 | 792.65 | NG | test |
| igarra | 54 | 1 | 562.12 | NG | test |
| ijaw(nembe) | 54 | 2 | 537.56 | NG | test |
| khana | 51 | 2 | 497.42 | NG | test |
| ogbia | 51 | 4 | 461.15 | NG | test,dev |
| gbagyi | 51 | 4 | 693.43 | NG | test |
| portuguese | 50 | 1 | 525.02 | ZA | train |
| delta | 49 | 2 | 425.76 | NG | test |
| bassa | 49 | 1 | 646.13 | NG | test |
| etche | 49 | 1 | 637.48 | NG | test |
| kubi | 46 | 1 | 495.21 | NG | test |
| jukun | 44 | 2 | 362.12 | NG | test |
| igbo and yoruba | 43 | 2 | 466.98 | NG | test |
| urobo | 43 | 3 | 573.14 | NG | test |
| kalabari | 42 | 5 | 305.49 | NG | test |
| ibani | 42 | 1 | 322.34 | NG | test |
| obolo | 37 | 1 | 204.79 | NG | test |
| idah | 34 | 1 | 533.5 | NG | test |
| bassa-nge/nupe | 31 | 3 | 267.42 | NG | test,dev |
| yala mbembe | 29 | 1 | 237.27 | NG | test |
| eket | 28 | 1 | 238.85 | NG | test |
| afo | 26 | 1 | 171.15 | NG | test |
| ebiobo | 25 | 1 | 226.27 | NG | test |
| nyandang | 25 | 1 | 230.41 | NG | test |
| ishan | 23 | 1 | 194.12 | NG | test |
| bagi | 20 | 1 | 284.54 | NG | test |
| estako | 20 | 1 | 480.78 | NG | test |
| gerawa | 13 | 1 | 342.15 | NG | test |
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
The dataset was initially prepared by Intron and refined for public release by CLAIR Lab.
### Licensing Information
Public Domain, Creative Commons Attribution NonCommercial ShareAlike v4.0 ([CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode))
### Citation Information
```
@article{olatunji2023afrispeech,
  title={AfriSpeech-200: Pan-African Accented Speech Dataset for Clinical and General Domain ASR},
  author={Olatunji, Tobi and Afonja, Tejumade and Yadavalli, Aditya and Emezue, Chris Chinenye and Singh, Sahib and Dossou, Bonaventure FP and Osuchukwu, Joanne and Osei, Salomey and Tonja, Atnafu Lambebo and Etori, Naome and others},
  journal={arXiv preprint arXiv:2310.00274},
  year={2023}
}
```
### Contributions
Thanks to [@tobiolatunji](https://github.com/tobiolatunji) for adding this dataset. | tobiolatunji/afrispeech-200 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2023-01-30T22:34:30+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "AfriSpeech-200", "dataset_info": {"features": [{"name": "user_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 44100}}}, {"name": "transcript", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1722002133, "num_examples": 58000}, {"name": "dev", "num_bytes": 86120227, "num_examples": 3231}], "download_size": 1475540500, "dataset_size": 1808122360, "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."}} | 2023-11-20T09:20:34+00:00 |
1d669c58c956ddf71dfc6ce124a9a5fb9b7010e7 | # Dataset Card for "wake_word_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AMead10/wake_word_dataset | [
"region:us"
]
| 2023-01-30T23:35:28+00:00 | {"dataset_info": {"features": [{"name": "audio", "sequence": "float32"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 205075232, "num_examples": 1602}, {"name": "test", "num_bytes": 22786140, "num_examples": 178}], "download_size": 98516339, "dataset_size": 227861372}} | 2023-01-30T23:38:08+00:00 |
edaadb7cbdfea86a4b62a795821e5b87c9d5d77b |
# TinaFate LoRA
# Use Cases
The LoRA is compatible with a wide range of models. However, it is most effective when used with Kenshi or AbyssOrangeMix2.
The LoRA itself was trained with the token: ```skistyle```.
I would suggest using the token with AbyssOrangeMix2, but not with Kenshi, since I got better results that way.
The models mentioned are:
1. AbyssOrangeMix2 from [WarriorMama777](https://huggingface.co/WarriorMama777/OrangeMixs)
2. Kenshi Model from [Luna](https://huggingface.co/SweetLuna/Kenshi)
## Strength
I would personally use these strengths with the associated models (an example prompt follows the list):
- 0.6-0.85 for AbyssOrangeMix2
- 0.5-0.75 for Kenshi
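For example, in AUTOMATIC1111's web UI the LoRA is applied inline with the `<lora:name:weight>` prompt syntax. A minimal sketch (the filename `tinafate_LoRA` is an assumption; use whatever name the file has in your `models/Lora` folder):
```
<lora:tinafate_LoRA:0.7> skistyle, 1girl, blonde hair, smile, looking at viewer
```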
# Showcase
**Example 1**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/tinafate_LoRA/resolve/main/preview/preview%20(2).png"/>
```
skistyle,
1girl, blonde hair, black hair, navel, long hair, breasts, crop top, smile, large breasts, midriff, pants, red eyes, choker, multicolored hair, white hairband, white background, otoko no ko, japanese clothes, looking at viewer, earrings, sweatdrop, pink nails, denim, blush, simple background, hand on hip, black eyes, blue pants, jewelry, hairband, kimono, tank top, gradient hair, jeans, holding
Steps: 32, Sampler: Euler a, CFG scale: 7
```
**Example 2**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/tinafate_LoRA/resolve/main/preview/preview%20(3).png"/>
```
skistyle,
1girl, valkyrie, highly detailed face, feathered wings, wearing ornate viking clothes, horned helmet, (darkness, dark background:1.3), fur trim, holding a short sword with her hand on the handle, a feeling of triumph, (feathers floating around her in a magical vortex:1.15), majestic, imposing beauty, (standing atop the battlements:1.05) a night, (fantasy setting:1.2), dnd, d&d, beautiful 8k wallpaper, superb, extremely detailed, intricate, (artistic brush strokes:1.3)
Steps: 32, Sampler: Euler a, CFG scale: 7
```
**Example 3**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/tinafate_LoRA/resolve/main/preview/preview%20(4).png"/>
```
skistyle,
1girl, (green hair:1.1), short hair, pointy ears, wearing a green (tunic:1.2) and shorts, (illustration:1.1), highres, (extremely detailed CG unity 8k wallpaper:1.1), (mid shot:1.1), (full body:1.25), (solo:1.2), plant, tree, (beautiful eyes:1.15), green boots, leaves swirling around the girl, wariza, leaning against tree at night, blue_eyes, (beautiful face:1.15), (((flat background))), (fireflies:1.1), parted lips, highly detailed face, ultra realistic, masterpiece, best quality, the legend of zelda, bokeh, extremely detailed, intricate
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7
```
# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/tinafate_LoRA | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2023-01-30T23:45:36+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/tinafate_LoRA/resolve/main/preview/preview%20(1).png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2023-01-31T00:12:56+00:00 |
e3d45533d74bd6bb4324cab2e96783d36fb9c7dd | luckyeven/SROIE2019 | [
"license:unknown",
"region:us"
]
| 2023-01-30T23:46:04+00:00 | {"license": "unknown"} | 2023-01-30T23:46:04+00:00 |
|
a44fea504f93d41a7ab8b88fd0cde204e055b5fd | # Dataset Card for "cc100_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | grosenthal/cc100_fixed | [
"region:us"
]
| 2023-01-31T00:58:04+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1138972730, "num_examples": 9330038}], "download_size": 810300983, "dataset_size": 1138972730}} | 2023-01-31T00:58:30+00:00 |
fec3b5c747b92c81400b2afb0c2c2944a50cf3c9 |
# MIRACL (es) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-es-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-es-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-es-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-es-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-es-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-es-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-es-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-es-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare the query embedding with the corpus embeddings, either using a vector database (recommended) or by directly computing the dot product.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-es-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: [1, emb_dim]
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find that hit@3 is easier to interpret, as it gives the percentage of queries for which a relevant document is found among the top-3 results.
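As a rough sketch, hit@3 for a single query can be computed like this (`ranked_doc_ids` and `relevant_ids` are assumed inputs, i.e. the retrieved ranking and the annotated positives):
```python
def hit_at_k(ranked_doc_ids, relevant_ids, k=3):
    # 1 if any of the top-k retrieved documents is annotated as relevant, else 0
    return int(any(doc_id in set(relevant_ids) for doc_id in ranked_doc_ids[:k]))
```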
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| Cohere/miracl-es-corpus-22-12 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:es",
"license:apache-2.0",
"region:us"
]
| 2023-01-31T01:40:24+00:00 | {"annotations_creators": ["expert-generated"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-02-06T11:57:58+00:00 |
7a33132d835c082c14e48beb9c9e7653aea8ec5e | keelezibel/jjlin | [
"license:cc",
"region:us"
]
| 2023-01-31T01:53:17+00:00 | {"license": "cc"} | 2023-01-31T01:54:03+00:00 |