| column | type | min length | max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 0 | 13.4M |
| id | string | 2 | 117 |
| tags | list | – | – |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 31.7M |
| last_modified | string | 25 | 25 |
d8221c773420c5fd56251b65a9a51d9e7ae3f67f
# Dataset Card for "book_audio" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ykleeee/book_audio
[ "region:us" ]
2023-01-31T01:59:45+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 232165449.836, "num_examples": 2221}], "download_size": 214622915, "dataset_size": 232165449.836}}
2023-01-31T02:13:49+00:00
f2691e9246398c0d5d8464705c3a046c150343e5
jyang/webshop_state_reward_pairs
[ "license:mit", "region:us" ]
2023-01-31T02:40:38+00:00
{"license": "mit"}
2023-01-31T02:40:58+00:00
6ac09408d660087d24c07b4459cb97ca0df1e962
# MIRACL (es) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-es-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-es-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-es-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-es-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-es-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-es-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-es-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-es-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-es-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-es-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, use the **dot product**: compare the query embeddings against the corpus embeddings, either with a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
# For large datasets, use a vector DB. from datasets import load_dataset import torch # Load documents + embeddings docs = load_dataset("Cohere/miracl-es-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset("Cohere/miracl-es-queries-22-12", split="dev") # Select the first query as an example qid = 0 query = queries[qid] query_embedding = torch.tensor(queries['emb']) # Compute dot scores between the query embeddings and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python # Run: pip install cohere import cohere co = cohere.Client(api_key)  # Add your Cohere API key here texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0]  # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results. Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
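The card above recommends a vector database for large corpora but only shows the brute-force route; a minimal sketch of the vector-DB route using FAISS (an assumption — any library with inner-product search works; `pip install faiss-cpu datasets numpy`):

```python
import numpy as np
import faiss  # assumption: faiss-cpu is installed
from datasets import load_dataset

docs = load_dataset("Cohere/miracl-es-corpus-22-12", split="train")
queries = load_dataset("Cohere/miracl-es-queries-22-12", split="dev")

# Build an exact inner-product index; dot product is what these embeddings are tuned for.
doc_embeddings = np.asarray(docs["emb"], dtype="float32")
index = faiss.IndexFlatIP(doc_embeddings.shape[1])
index.add(doc_embeddings)

# Search the first query, mirroring the card's example.
query_embedding = np.asarray([queries[0]["emb"]], dtype="float32")
scores, doc_ids = index.search(query_embedding, k=3)
print("Query:", queries[0]["query"])
for doc_id in doc_ids[0]:
    print(docs[int(doc_id)]["title"])
```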
Cohere/miracl-es-queries-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:es", "license:apache-2.0", "region:us" ]
2023-01-31T03:06:41+00:00
{"annotations_creators": ["expert-generated"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:57:49+00:00
d1f840597b120e3733b4c815d1dfb37d6d760f20
# Dataset Card for "clothes_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiHang/clothes_dataset
[ "region:us" ]
2023-01-31T03:17:45+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 230456480.0, "num_examples": 64}], "download_size": 226942310, "dataset_size": 230456480.0}}
2023-01-31T06:33:48+00:00
74c3fbd6f49c407f3b2ff20c902420eef59ee559
# Dataset Card for "cc_raw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Chris5Lin/cc_raw
[ "region:us" ]
2023-01-31T03:34:29+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37, "num_examples": 4}], "download_size": 524, "dataset_size": 37}}
2023-02-16T09:09:27+00:00
3b85f34c8dd18325c323bd3a7853d3fd8b659cca
# Dataset Card for "diffusion_db_10k_processed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
calvegh/diffusion_db_10k_processed
[ "region:us" ]
2023-01-31T03:51:48+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_txt", "dtype": "string"}, {"name": "topic_keywords", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2762536, "num_examples": 8571}], "download_size": 647051, "dataset_size": 2762536}}
2023-02-01T04:38:38+00:00
50a437978c402705f4e5f102b89fa89fc6cf0bcb
SQA3D: Situated Question Answering in 3D Scenes (ICLR 2023, https://arxiv.org/abs/2210.07474) === 1. Download the [SQA3D dataset](https://zenodo.org/record/7544818/files/sqa_task.zip?download=1) under `assets/data/`. The following files should be used: ``` ./assets/data/sqa_task/balanced/* ./assets/data/sqa_task/answer_dict.json ``` 2. The dataset has been split into `train`, `val` and `test`. For each category, we offer both the question file, e.g. `v1_balanced_questions_train_scannetv2.json`, and the annotations, e.g. `v1_balanced_sqa_annotations_train_scannetv2.json` - The format of the question file: Run the following code: ```python import json q = json.load(open('v1_balanced_questions_train_scannetv2.json', 'r')) # Print the total number of questions print('#questions: ', len(q['questions'])) print(q['questions'][0]) ``` The output is: ```json { "alternative_situation": [ "I stand looking out of the window in thought and a radiator is right in front of me.", "I am looking outside through the window behind the desk." ], "question": "What color is the desk to my right?", "question_id": 220602000000, "scene_id": "scene0380_00", "situation": "I am facing a window and there is a desk on my right and a chair behind me." } ``` The following fields are **useful**: `question`, `question_id`, `scene_id`, `situation`. - The format of the annotations: Run the following code: ```python import json a = json.load(open('v1_balanced_sqa_annotations_train_scannetv2.json', 'r')) # Print the total number of annotations; it should match the number of questions print('#annotations: ', len(a['annotations'])) print(a['annotations'][0]) ``` The output is: ```json { "answer_type": "other", "answers": [ { "answer": "brown", "answer_confidence": "yes", "answer_id": 1 } ], "position": { "x": -0.9651003385573296, "y": -1.2417634435553606, "z": 0 }, "question_id": 220602000000, "question_type": "N/A", "rotation": { "_w": 0.9950041652780182, "_x": 0, "_y": 0, "_z": 0.09983341664682724 }, "scene_id": "scene0380_00" } ``` The following fields are **useful**: `answers[0]['answer']`, `question_id`, `scene_id`. **Note**: To find the answer to a question in the question file, you need to look it up by `question_id`, as in the sketch after this card. 3. We provide the mapping between answers and class labels in `answer_dict.json`: ```python import json j = json.load(open('answer_dict.json', 'r')) print('Total classes: ', len(j[0])) print('The class label of answer \'table\' is: ', j[0]['table']) print('The corresponding answer of class 123 is: ', j[1]['123']) ``` 4. Loader, model, and training code can be found at https://github.com/SilongYong/SQA3D
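The note above says answers must be looked up by `question_id`; a minimal sketch of that join (file names as in step 2; the working directory is an assumption):

```python
import json

# Load questions and annotations for the train split.
q = json.load(open('v1_balanced_questions_train_scannetv2.json', 'r'))
a = json.load(open('v1_balanced_sqa_annotations_train_scannetv2.json', 'r'))

# Index annotations by question_id, then join each question with its answer.
ann_by_qid = {ann['question_id']: ann for ann in a['annotations']}
for question in q['questions'][:3]:
    ann = ann_by_qid[question['question_id']]
    print(question['situation'])
    print(question['question'], '->', ann['answers'][0]['answer'])
```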
jeasinema/SQA3D
[ "task_categories:question-answering", "size_categories:10K<n<100K", "license:cc-by-4.0", "3D vision", "embodied AI", "arxiv:2210.07474", "region:us" ]
2023-01-31T04:10:50+00:00
{"license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "tags": ["3D vision", "embodied AI"]}
2023-01-31T04:18:56+00:00
605715eba55ec803e00fbb5904994b9cf002540a
# MIRACL (fr) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-fr-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-fr-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, use the **dot product**: compare the query embeddings against the corpus embeddings, either with a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
# For large datasets, use a vector DB. from datasets import load_dataset import torch # Load documents + embeddings docs = load_dataset("Cohere/miracl-fr-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset("Cohere/miracl-fr-queries-22-12", split="dev") # Select the first query as an example qid = 0 query = queries[qid] query_embedding = torch.tensor(queries['emb']) # Compute dot scores between the query embeddings and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python # Run: pip install cohere import cohere co = cohere.Client(api_key)  # Add your Cohere API key here texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0]  # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results. Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
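The tables above report hit@3 but define it only in prose; a minimal sketch of the computation (the qrels format here is an assumption for illustration):

```python
def hit_at_k(ranked_doc_ids, relevant_doc_ids, k=3):
    """Return 1 if any relevant document appears in the top-k results, else 0."""
    return int(any(doc_id in relevant_doc_ids for doc_id in ranked_doc_ids[:k]))

# Averaging hit_at_k over all queries of a language yields the hit@3 column.
queries = {"q1": (["d3", "d7", "d1"], {"d1", "d9"})}  # hypothetical ranked lists and qrels
print(sum(hit_at_k(ranked, rel) for ranked, rel in queries.values()) / len(queries))  # 1.0
```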
Cohere/miracl-fr-corpus-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:fr", "license:apache-2.0", "region:us" ]
2023-01-31T06:02:06+00:00
{"annotations_creators": ["expert-generated"], "language": ["fr"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:57:34+00:00
0f1f43da25df6b525956eba743c53769f71ce581
# Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. # How to use ```python from datasets import load_dataset dataset = load_dataset("VESSL/Bored_Ape_NFT_text") ``` # Data Fields image = binary image file and path text = auto-generated prompt for the image # Citation & Information @InProceedings{VESSL, author={Jinpil Choi}, year={2023}} # Projects https://github.com/vessl-ai/examples
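A quick look at the two fields after loading (a sketch; the `train` split name is an assumption, as the card does not state it):

```python
from datasets import load_dataset

dataset = load_dataset("VESSL/Bored_Ape_NFT_text")
sample = dataset["train"][0]   # split name is an assumption
print(sample["text"])          # auto-generated prompt for the image
print(sample["image"])         # a PIL image, if the column is typed as an image feature
```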
VESSL/Bored_Ape_NFT_text
[ "task_categories:text-to-image", "size_categories:1K<n<10K", "license:apache-2.0", "region:us" ]
2023-01-31T06:43:51+00:00
{"license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "pretty_name": "text_to_bayc_image", "source_dataset": "https://opensea.io/collection/boredapeyachtclub © 2021 Yuga Labs LLC"}
2023-02-07T05:51:25+00:00
5349b6de37f401be613aaf5c2b8c89bf03f40ac2
# Dataset Card for "processed_bert_dataset_version_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Poulami/processed_bert_dataset_version_1
[ "region:us" ]
2023-01-31T06:54:06+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 24027526800.0, "num_examples": 6674313}], "download_size": 5886971262, "dataset_size": 24027526800.0}}
2023-01-31T07:33:12+00:00
460264d95705c0721e7dd0fb054fcfa56cf277ff
# MIRACL (fr) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-fr-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-fr-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, use the **dot product**: compare the query embeddings against the corpus embeddings, either with a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
# For large datasets, use a vector DB. from datasets import load_dataset import torch # Load documents + embeddings docs = load_dataset("Cohere/miracl-fr-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset("Cohere/miracl-fr-queries-22-12", split="dev") # Select the first query as an example qid = 0 query = queries[qid] query_embedding = torch.tensor(queries['emb']) # Compute dot scores between the query embeddings and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python # Run: pip install cohere import cohere co = cohere.Client(api_key)  # Add your Cohere API key here texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0]  # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results. Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
Cohere/miracl-fr-queries-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:fr", "license:apache-2.0", "region:us" ]
2023-01-31T07:18:50+00:00
{"annotations_creators": ["expert-generated"], "language": ["fr"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:57:25+00:00
f437432b2641fa8cf1a57e1c01cc3c3530050ede
Translation of the bias evaluation framework of May et al. (2019) from [this repository](https://github.com/W4ngatang/sent-bias) and [this paper](https://arxiv.org/abs/1903.10561) into Turkish. There are 37 tests in total, including tests addressing gender bias as well as tests designed to evaluate ethnic bias toward Kurdish people in the Türkiye context. Abstract of the thesis: While the growing size of pre-trained language models has led to large improvements in a variety of natural language processing tasks, the success of these models comes with a price: They are trained on drastic amounts of mostly Web-based data, which often contains social stereotypes and biases that the models might pick up. This can have negative consequences, as models can abuse these biases in downstream tasks or applications. An application exemplifying the embedded cultural stereotypes is statistical machine translation, a common natural language processing task. Translations to English from a gender-neutral language such as Turkish, which does not have any grammatical gender like the gendered pronouns 'he' or 'she' in English, lead to gender-stereotyped sentences. For instance, Google Translate converts these Turkish sentences with gender-neutral pronouns: 'O bir doktor. O bir hemşire.' to these English sentences: 'He is a doctor. She is a nurse.' The same behavior can be observed when translating these Turkish sentences into other languages with grammatical gender like Spanish, Russian, and German. The gender-neutral Turkish pronoun 'o' is converted into gender-stereotyped pronouns in the respective language. Mitigating different types of bias in LMs would have diverse implications: On the one hand, it would allow us to avoid amplifying these biases. On the other hand, by avoiding algorithms enforcing social biases against minorities one could shift the social balance in the long term. Previous research has primarily focused on the English language, especially in the realm of gender bias in language models. However, the investigation of more languages with different linguistic elements than English, especially the ones like Turkish that are grammatically gender-neutral, can deepen our insights into the role of gender bias in LMs. The goal of this thesis was to address this research gap and to investigate the significance of gender bias in Turkish language models. We used existing bias evaluation frameworks on Turkish models by both translating existing English datasets and creating new ones designed to measure gender bias in the context of Türkiye. We also extended the testing framework to evaluate Turkish models for their embedded ethnic bias toward Kurdish people. Based on the test outcomes, we suggested possible relations of the picked up biases to different model characteristics such as the model size, their multilingualism, and the training corpora.
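The May et al. (2019) framework referenced above scores each test with a WEAT-style effect size over sentence embeddings; a minimal sketch of that statistic (illustrative only, not the repository's exact code):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean cosine similarity to attribute set A minus mean to set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # WEAT/SEAT effect size d over target sets X, Y and attribute sets A, B
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```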
orhunc/Bias-Evaluation-Turkish
[ "language:tr", "arxiv:1903.10561", "region:us" ]
2023-01-31T07:46:27+00:00
{"language": ["tr"]}
2023-03-10T12:54:35+00:00
f514a4d85a2c560ed24c17b14a4d1b0c225bc6ef
# MIRACL (ja) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, use the **dot product**: compare the query embeddings against the corpus embeddings, either with a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
# For large datasets, use a vector DB. from datasets import load_dataset import torch # Load documents + embeddings docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset("Cohere/miracl-ja-queries-22-12", split="dev") # Select the first query as an example qid = 0 query = queries[qid] query_embedding = torch.tensor(queries['emb']) # Compute dot scores between the query embeddings and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python # Run: pip install cohere import cohere co = cohere.Client(api_key)  # Add your Cohere API key here texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0]  # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results. Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
Cohere/miracl-ja-corpus-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:ja", "license:apache-2.0", "region:us" ]
2023-01-31T08:42:35+00:00
{"annotations_creators": ["expert-generated"], "language": ["ja"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:57:11+00:00
d1d18c9bc8bc24f8ee92eacc142cc42c19d14905
Nothing here
oskarspakers/songs
[ "language:lv", "license:openrail", "region:us" ]
2023-01-31T09:01:17+00:00
{"language": ["lv"], "license": "openrail", "pretty_name": "Songs in latvian"}
2023-04-28T19:43:51+00:00
5fcfc107f41b705395ba07ecc14fe5ce0f61eb93
# MIRACL (ja) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, use the **dot product**: compare the query embeddings against the corpus embeddings, either with a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
# For large datasets, use a vector DB. from datasets import load_dataset import torch # Load documents + embeddings docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset("Cohere/miracl-ja-queries-22-12", split="dev") # Select the first query as an example qid = 0 query = queries[qid] query_embedding = torch.tensor(queries['emb']) # Compute dot scores between the query embeddings and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python # Run: pip install cohere import cohere co = cohere.Client(api_key)  # Add your Cohere API key here texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0]  # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results. Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
Cohere/miracl-ja-queries-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:ja", "license:apache-2.0", "region:us" ]
2023-01-31T09:20:40+00:00
{"annotations_creators": ["expert-generated"], "language": ["ja"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:57:00+00:00
d70fb77f22f731eea22b50eb51fb5cca9f48efa3
BIG-Bench, but without the hellish dependencies (tensorflow, pypi-bigbench, protobuf) of the official version. ```python from datasets import load_dataset dataset = load_dataset("tasksource/bigbench", 'movie_recommendation') ``` Code to reproduce: https://colab.research.google.com/drive/1MKdLdF7oqrSQCeavAcsEnPdI85kD0LzU?usp=sharing Datasets are capped at 50k examples to keep things light. I also removed the default split when a train split was available, to save space, since default = train + val. ```bibtex @article{srivastava2022beyond, title={Beyond the imitation game: Quantifying and extrapolating the capabilities of language models}, author={Srivastava, Aarohi and Rastogi, Abhinav and Rao, Abhishek and Shoeb, Abu Awal Md and Abid, Abubakar and Fisch, Adam and Brown, Adam R and Santoro, Adam and Gupta, Aditya and Garriga-Alonso, Adri{\`a} and others}, journal={arXiv preprint arXiv:2206.04615}, year={2022} } ```
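Since each BIG-Bench task is a separate config, listing them programmatically may help when exploring the collection (a sketch; the exact config names come from the hub):

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate all task configs hosted under this repo.
configs = get_dataset_config_names("tasksource/bigbench")
print(len(configs), configs[:5])

# Load one task; splits are capped at 50k examples as noted above.
dataset = load_dataset("tasksource/bigbench", "movie_recommendation")
print(dataset)
```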
tasksource/bigbench
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:text-classification", "task_categories:text-generation", "task_categories:zero-shot-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:fact-checking", "task_ids:acceptability-classification", "task_ids:intent-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:text-scoring", "task_ids:hate-speech-detection", "task_ids:language-modeling", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "language_creators:machine-generated", "language_creators:other", "multilinguality:multilingual", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:apache-2.0", "region:us" ]
2023-01-31T10:44:51+00:00
{"annotations_creators": ["crowdsourced", "expert-generated", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated", "machine-generated", "other"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["multilingual", "monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["multiple-choice", "question-answering", "text-classification", "text-generation", "zero-shot-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "fact-checking", "acceptability-classification", "intent-classification", "multi-class-classification", "multi-label-classification", "text-scoring", "hate-speech-detection", "language-modeling"], "pretty_name": "bigbench"}
2023-05-11T13:08:10+00:00
eacc0c3fbf2afba936044cf413fd318c515c8d7a
# Dataset Card for "Wikipedia_5gram_less_orders" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lshowway/Wikipedia_5gram_less_orders
[ "region:us" ]
2023-01-31T11:00:34+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3754120542, "num_examples": 1893405}], "download_size": 2356370630, "dataset_size": 3754120542}}
2023-02-01T22:13:52+00:00
eb2a278003b6d738f2818c13620f93f12467d011
# Dataset Card for "boostcamp-docvqa-v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Ssunbell/boostcamp-docvqa-v3
[ "region:us" ]
2023-01-31T11:08:02+00:00
{"dataset_info": {"features": [{"name": "questionId", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": {"sequence": "uint8"}}}}, {"name": "docId", "dtype": "int64"}, {"name": "ucsf_document_id", "dtype": "string"}, {"name": "ucsf_document_page_no", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "data_split", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 6381951489, "num_examples": 39454}, {"name": "val", "num_bytes": 869383194, "num_examples": 5349}], "download_size": 2582271242, "dataset_size": 7251334683}}
2023-01-31T11:18:53+00:00
af71effd6410078b0b8b2859aabe497b189a8260
# Dataset Card for "Wikipedia_5gram_more_orders" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lshowway/Wikipedia_5gram_more_orders
[ "region:us" ]
2023-01-31T11:14:39+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3729298637, "num_examples": 1894957}], "download_size": 2399612708, "dataset_size": 3729298637}}
2023-02-01T22:19:20+00:00
cd91fb6d2ee07c29c24accbf04d5454c89e8b2e8
# Dataset Card for "boostcamp-docvqa-v3-test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Ssunbell/boostcamp-docvqa-v3-test
[ "region:us" ]
2023-01-31T11:18:55+00:00
{"dataset_info": {"features": [{"name": "questionId", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": {"sequence": "uint8"}}}}, {"name": "docId", "dtype": "int64"}, {"name": "ucsf_document_id", "dtype": "string"}, {"name": "ucsf_document_page_no", "dtype": "string"}, {"name": "data_split", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "test", "num_bytes": 843104716, "num_examples": 5188}], "download_size": 297133332, "dataset_size": 843104716}}
2023-01-31T11:20:11+00:00
019f5d6c88be9c55b605076c88da75aa3d39de7f
# MIRACL (ru) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-ru-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset("Cohere/miracl-ru-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, use the **dot product**: compare the query embeddings against the corpus embeddings, either with a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
# For large datasets, use a vector DB. from datasets import load_dataset import torch # Load documents + embeddings docs = load_dataset("Cohere/miracl-ru-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset("Cohere/miracl-ru-queries-22-12", split="dev") # Select the first query as an example qid = 0 query = queries[qid] query_embedding = torch.tensor(queries['emb']) # Compute dot scores between the query embeddings and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python # Run: pip install cohere import cohere co = cohere.Client(api_key)  # Add your Cohere API key here texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0]  # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results. Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
Cohere/miracl-ru-corpus-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:ru", "license:apache-2.0", "region:us" ]
2023-01-31T11:24:36+00:00
{"annotations_creators": ["expert-generated"], "language": ["ru"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:56:20+00:00
5496cca11b191bf58ad8bad91da85af5a35a8734
# Miyuki Character LoRA # Use Cases The LoRA is compatible with a wide range of models, but it is most effective when used with Kenshi or AbyssOrangeMix2. The LoRA itself was trained with the token: ```miyuki```. I would suggest using the token with AbyssOrangeMix2, but not with Kenshi, since I got better results that way. The models mentioned: 1. AbyssOrangeMix2 from [WarriorMama777](https://huggingface.co/WarriorMama777/OrangeMixs) 2. Kenshi Model from [Luna](https://huggingface.co/SweetLuna/Kenshi) ## Strength I would personally use these strengths with the associated model: - 0.6-0.75 for AbyssOrangeMix2 - 0.4-0.65 for Kenshi # Showcase **Example 1** <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/miyuki-shiba_LoRA/resolve/main/preview/preview%20(2).png"/> ``` miyuki, 1girl, (masterpiece:1.2), (best quality:1.2), (sharp detail:1.2), (highres:1.2), (in a graden of flowers), sitting, waving Steps: 32, Sampler: Euler a, CFG scale: 7 ``` **Example 2** <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/miyuki-shiba_LoRA/resolve/main/preview/preview%20(3).png"/> ``` miyuki, 1girl, (masterpiece:1.2), (best quality:1.2), (sharp detail:1.2), (highres:1.2), (in a graden of flowers), sitting, waving Steps: 32, Sampler: Euler a, CFG scale: 7 ``` **Example 3** <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/miyuki-shiba_LoRA/resolve/main/preview/preview%20(4).png"/> ``` miyuki, 1girl, (masterpiece:1.2), (best quality:1.2), (sharp detail:1.2), (highres:1.2), (in a graden of flowers), sitting, hands behind her back Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7 ``` # License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
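Applying the LoRA programmatically with `diffusers` might look like the sketch below; the base-model id and the weight-file layout inside the repo are assumptions (the card targets WebUI-style usage), while `load_lora_weights` and the per-call `scale` exist in recent `diffusers` releases:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: an SD1.x checkpoint of AbyssOrangeMix2; substitute your own local checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "WarriorMama777/OrangeMixs", torch_dtype=torch.float16
).to("cuda")

# Assumption: the dataset repo exposes a loadable LoRA weight file.
pipe.load_lora_weights("Nerfgun3/miyuki-shiba_LoRA")

image = pipe(
    "miyuki, 1girl, (masterpiece:1.2), (best quality:1.2), sitting, waving",
    cross_attention_kwargs={"scale": 0.7},  # LoRA strength in the 0.6-0.75 range suggested above
).images[0]
image.save("miyuki.png")
```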
Nerfgun3/miyuki-shiba_LoRA
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2023-01-31T12:08:33+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/miyuki-shiba_LoRA/resolve/main/preview/preview%20(1).png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2023-01-31T12:22:58+00:00
5240f784fb2f0b59d51358464c25d60dd0959015
# MIRACL (ru) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, you must use **dot-product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
For large datasets, use a vector DB from datasets import load_dataset import torch #Load documents + embeddings docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset(f"Cohere/miracl-ru-queries-22-12", split="dev") # Select the first query as example qid = 0 query = queries[qid] query_embedding = torch.tensor([query['emb']]) # Compute dot score between query embedding and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python #Run: pip install cohere import cohere co = cohere.Client("YOUR_API_KEY") # You should add your cohere API Key here :)) texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0] # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results. Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
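Since a vector database is recommended above, here is a minimal sketch using FAISS as one possible choice (an assumption, any index supporting inner-product search works); `docs` and `queries` are the datasets loaded in the example above:

```python
# pip install faiss-cpu
import faiss
import numpy as np

# IndexFlatIP performs exact inner-product (dot-product) search,
# matching the scoring required for these embeddings.
doc_embeddings = np.asarray(docs['emb'], dtype='float32')   # (num_docs, dim)
index = faiss.IndexFlatIP(doc_embeddings.shape[1])
index.add(doc_embeddings)

query_embedding = np.asarray([queries[0]['emb']], dtype='float32')  # (1, dim)
scores, ids = index.search(query_embedding, 3)
for doc_id in ids[0]:
    print(docs[int(doc_id)]['title'])
```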
Cohere/miracl-ru-queries-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:ru", "license:apache-2.0", "region:us" ]
2023-01-31T12:18:51+00:00
{"annotations_creators": ["expert-generated"], "language": ["ru"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:56:00+00:00
8893f925ba117d015beff6d78316748084acce3e
OlegKit/RND3
[ "license:artistic-2.0", "region:us" ]
2023-01-31T12:21:46+00:00
{"license": "artistic-2.0", "pretty_name": "RandomVoice"}
2023-01-31T12:29:58+00:00
5f4c79d0710fffe58328dfa2795a64b927cca5de
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
flow3rdown/MarKG
[ "language:en", "license:mit", "region:us" ]
2023-01-31T13:07:23+00:00
{"language": ["en"], "license": "mit"}
2023-01-31T13:18:49+00:00
7efe436235598edf9b6103abaa757659d2a5c1cb
# MIRACL (zh) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, you must use **dot-product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
For large datasets, use a vector DB from datasets import load_dataset import torch #Load documents + embeddings docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset(f"Cohere/miracl-zh-queries-22-12", split="dev") # Select the first query as example qid = 0 query = queries[qid] query_embedding = torch.tensor([query['emb']]) # Compute dot score between query embedding and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python #Run: pip install cohere import cohere co = cohere.Client("YOUR_API_KEY") # You should add your cohere API Key here :)) texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0] # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results. Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
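The memory warning in the example above can also be worked around without a vector database by scoring the corpus in chunks instead of one large matrix multiplication; a minimal sketch (same variables as in the example above):

```python
import torch

def topk_chunked(query_embedding, doc_embeddings, k=3, chunk_size=100_000):
    """Running top-k over dot-product scores, one corpus chunk at a time.

    query_embedding: 1-D tensor of shape (dim,)
    doc_embeddings:  2-D tensor of shape (num_docs, dim)
    """
    best_scores, best_ids = None, None
    for start in range(0, doc_embeddings.size(0), chunk_size):
        part = doc_embeddings[start:start + chunk_size]
        scores = torch.mv(part, query_embedding)          # (len(part),)
        ids = torch.arange(start, start + part.size(0))
        if best_scores is not None:                       # merge with running best
            scores = torch.cat([best_scores, scores])
            ids = torch.cat([best_ids, ids])
        top = torch.topk(scores, k=min(k, scores.numel()))
        best_scores, best_ids = top.values, ids[top.indices]
    return best_scores, best_ids
```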
Cohere/miracl-zh-corpus-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:zh", "license:apache-2.0", "region:us" ]
2023-01-31T13:13:33+00:00
{"annotations_creators": ["expert-generated"], "language": ["zh"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:55:44+00:00
49ac0e568262676a1fb0ac8abac0c3070c960f20
NIFD/BengaliTeenVoice
[ "license:other", "region:us" ]
2023-01-31T13:26:28+00:00
{"license": "other"}
2023-01-31T13:26:28+00:00
da70cd209896584e91cccac9c86092ec9c25c2c1
# Dataset Card for "wikipedia.reorder.SVO" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lshowway/wikipedia.reorder.SVO
[ "region:us" ]
2023-01-31T13:29:50+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4083836556, "num_examples": 1986076}], "download_size": 1989232973, "dataset_size": 4083836556}}
2023-01-31T16:41:06+00:00
6d9607deb61364812da24e285f35d6463f8910fa
# Dataset Card for "wikipedia.reorder.VOS" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lshowway/wikipedia.reorder.VOS
[ "region:us" ]
2023-01-31T13:37:46+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4083836556, "num_examples": 1986076}], "download_size": 2018381284, "dataset_size": 4083836556}}
2023-01-31T17:40:42+00:00
125f282756795fe4c1a4ba1a80cbf4434c48835b
# MIRACL (zh) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train") ``` Or you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search the documents, you must use **dot-product**: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets.
For large datasets, use a vector DB from datasets import load_dataset import torch #Load documents + embeddings docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset(f"Cohere/miracl-zh-queries-22-12", split="dev") # Select the first query as example qid = 0 query = queries[qid] query_embedding = torch.tensor([query['emb']]) # Compute dot score between query embedding and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python #Run: pip install cohere import cohere co = cohere.Client("YOUR_API_KEY") # You should add your cohere API Key here :)) texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0] # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results. Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
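Putting the two snippets above together, a new query can be embedded via the API and scored against the precomputed corpus embeddings (`docs` and `doc_embeddings` as loaded in the example above; replace the key placeholder with your own):

```python
import cohere
import torch

co = cohere.Client("YOUR_API_KEY")
response = co.embed(texts=['my search query'], model='multilingual-22-12')

query_embedding = torch.tensor([response.embeddings[0]])        # (1, dim)
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
for doc_id in torch.topk(dot_scores, k=3).indices[0].tolist():
    print(docs[doc_id]['title'])
```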
Cohere/miracl-zh-queries-22-12
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:zh", "license:apache-2.0", "region:us" ]
2023-01-31T13:38:51+00:00
{"annotations_creators": ["expert-generated"], "language": ["zh"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []}
2023-02-06T11:55:33+00:00
3c25218126e5a3ea1a8f1ee6d1646d42b5d40646
# Dataset Card for "lex_fridman_podcast" ### Dataset Summary This dataset contains transcripts from the [Lex Fridman podcast](https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4) (Episodes 1 to 325). The transcripts were generated using [OpenAI Whisper](https://github.com/openai/whisper) (large model) and made publicly available at: https://karpathy.ai/lexicap/index.html. ### Languages - English ## Dataset Structure The dataset contains around 803K entries, consisting of audio transcripts generated from episodes 1 to 325 of the [Lex Fridman podcast](https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4). In addition to the transcript text, the dataset includes other metadata such as episode id and title, guest name, and start and end timestamps for each transcript. ### Data Fields The dataset schema is as follows: - **id**: Episode id. - **guest**: Name of the guest interviewed. - **title:** Title of the episode. - **text:** Text of the transcription. - **start:** Timestamp (`HH:mm:ss.mmm`) indicating the beginning of the trancription. - **end:** Timestamp (`HH:mm:ss.mmm`) indicating the end of the trancription. ### Source Data Source data provided by Andrej Karpathy at: https://karpathy.ai/lexicap/index.html ### Contributions Thanks to [nmac](https://huggingface.co/nmac) for adding this dataset.
nmac/lex_fridman_podcast
[ "task_categories:automatic-speech-recognition", "task_categories:sentence-similarity", "size_categories:100K<n<1M", "language:en", "podcast", "whisper", "region:us" ]
2023-01-31T13:40:48+00:00
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["automatic-speech-recognition", "sentence-similarity"], "tags": ["podcast", "whisper"]}
2023-01-31T16:24:07+00:00
e064ead3e1e6a17f5c51f19e54a2d91131e42486
P2333/DM-Improves-AT
[ "license:apache-2.0", "region:us" ]
2023-01-31T13:46:28+00:00
{"license": "apache-2.0"}
2023-02-10T05:27:06+00:00
5f326be836f64a96e42641983de3e5feafbc835c
# Dataset Card for "cqadupstack" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) ## Dataset Description - **Homepage:** [http://nlp.cis.unimelb.edu.au/resources/cqadupstack/](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) ### Dataset Summary This is a preprocessed version of cqadupstack, to make it easily consumable via huggingface. The original dataset can be found [here](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/). CQADupStack is a benchmark dataset for community question-answering (cQA) research. It contains threads from twelve StackExchange1 subforums, annotated with duplicate question information and comes with pre-defined training, development, and test splits, both for retrieval and classification experiments. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ```json { "question": "Very often, when some unknown company is calling me, in couple of seconds I see its name and logo on standard ...", "answer": "You didn't explicitely mention it, but from the context I assume you're using a device with Android 4.4 (Kitkat). With that ...", "title": "Why Dialer shows contact name and image, when contact is not in my address book?", "forum_tag": "android" } ``` ### Data Fields The data fields are the same among all splits. - `question`: a `string` feature. - `answer`: a `string` feature. - `title`: a `string` feature. - `forum_tag`: a categorical `string` feature. ## Additional Information ### Licensing Information This dataset is distributed under the Apache 2.0 licence.
LLukas22/cqadupstack
[ "task_categories:sentence-similarity", "task_categories:feature-extraction", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
2023-01-31T14:18:36+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["sentence-similarity", "feature-extraction"]}
2023-04-30T18:24:35+00:00
911f431b2c4343d34a4099d9ff306f03b9169cc2
SamehSelim/one
[ "license:artistic-2.0", "region:us" ]
2023-01-31T14:38:11+00:00
{"license": "artistic-2.0"}
2023-01-31T14:38:51+00:00
e4abcd23b7c89c7e68d88aa49832fe002e11f22f
iloncka/qa_program_modules_docs
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:no-annotation", "language_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ru", "license:afl-3.0", "program modules descriptions", "region:us" ]
2023-01-31T14:55:44+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated", "found"], "language": ["ru"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "qapmdocs", "tags": ["program modules descriptions"]}
2023-02-01T10:52:03+00:00
32a27da4e982e31c9e5a7c76240a687a05b132ff
MarcelM/sloreddit
[ "license:unknown", "region:us" ]
2023-01-31T15:01:33+00:00
{"license": "unknown"}
2023-02-11T23:58:39+00:00
7c4aa0946d69f52ef275002e06b3f820e6482a8d
# Dataset Card for "cqadupstack" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) ## Dataset Description - **Homepage:** [https://sites.google.com/view/fiqa/?pli=1](https://sites.google.com/view/fiqa/?pli=1) ### Dataset Summary This is a preprocessed version of fiqa, to make it easily consumable via huggingface. The original dataset can be found [here](https://sites.google.com/view/fiqa/?pli=1). The growing maturity of Natural Language Processing (NLP) techniques and resources is drastically changing the landscape of many application domains which are dependent on the analysis of unstructured data at scale. The financial domain, with its dependency on the interpretation of multiple unstructured and structured data sources and with its demand for fast and comprehensive decision making is already emerging as a primary ground for the experimentation of NLP, Web Mining and Information Retrieval (IR) techniques. This challenge focuses on advancing the state-of-the-art of aspect-based sentiment analysis and opinion-based Question Answering for the financial domain. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ```json { "question": "How does a 2 year treasury note work?", "answer": "Notes and Bonds sell at par (1.0). When rates go up, their value goes down. When rates go down, their value goes up. ..." } ``` ### Data Fields The data fields are the same among all splits. - `question`: a `string` feature. - `answer`: a `string` feature. ## Additional Information ### Licensing Information This dataset is distributed under the [CC BY-NC](https://creativecommons.org/licenses/by-nc/3.0/) licence providing free access for non-commercial and academic usage.
LLukas22/fiqa
[ "task_categories:feature-extraction", "task_categories:sentence-similarity", "size_categories:10K<n<100K", "language:en", "license:cc-by-3.0", "region:us" ]
2023-01-31T15:12:27+00:00
{"language": ["en"], "license": "cc-by-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["feature-extraction", "sentence-similarity"]}
2023-04-30T18:33:54+00:00
1baa368cfa1f9ed519094506c7a6ad9ca0a84393
# Dataset Card for "COCO_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mvoisin/TinyCOCO
[ "region:us" ]
2023-01-31T15:13:11+00:00
{"viewer": true, "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image_url", "dtype": "string"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "category", "sequence": "int64"}, {"name": "id", "sequence": "int64"}]}], "splits": [{"name": "test", "num_bytes": 754, "num_examples": 1}], "download_size": 0, "dataset_size": 754}}
2023-01-31T18:04:51+00:00
a02398901b5fc024a421164518a4d1f033e0a30b
# Dataset Card for "Paragraphs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nlpproject2023/Paragraphs
[ "region:us" ]
2023-01-31T15:15:53+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "struct": [{"name": "sentences", "sequence": {"sequence": "string"}}, {"name": "title", "sequence": "string"}]}], "splits": [{"name": "validation", "num_bytes": 5269319, "num_examples": 4523}, {"name": "test", "num_bytes": 8548487, "num_examples": 7405}], "download_size": 8899516, "dataset_size": 13817806}}
2023-01-31T15:16:21+00:00
f246a33a48cbfabfc37f1c1d7b853407bdfc4e6b
# Realms Adventurer Dataset for Text-to-Image This dataset contains annotated image-caption pairs with a specific structure. ## Example ```json { "file_name": "91200682-07_giants.png", "sex": "male", "race": "giant", "class": "mage", "inherent_features": "red flowers growing on his skin", "clothing": "brown leather pants", "accessories": null, "background": "between tall red trees", "shot": "full", "view": "frontal", "caption": "a male giant mage with red flowers growing on his skin, wearing brown leather pants, between tall red trees, full, frontal" } ``` ## Usage ```python import datasets dataset = datasets.load_dataset("rvorias/realms_adventurers") dataset["train"][0] ``` ## Annotation tooling Label-studio was used to organize and create annotations.
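The caption in the example appears to follow a fixed template over the annotation fields; below is a sketch that reconstructs it. The template is inferred from the single example above, so treat it as an assumption, especially the handling of `accessories`, which is null in the example:

```python
def build_caption(rec: dict) -> str:
    """Rebuild the caption from the structured annotation fields."""
    caption = f"a {rec['sex']} {rec['race']} {rec['class']}"
    if rec.get("inherent_features"):
        caption += f" with {rec['inherent_features']}"
    if rec.get("clothing"):
        caption += f", wearing {rec['clothing']}"
    if rec.get("accessories"):  # assumed to be appended when present
        caption += f", with {rec['accessories']}"
    for key in ("background", "shot", "view"):
        if rec.get(key):
            caption += f", {rec[key]}"
    return caption
```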
rvorias/realms_adventurers
[ "task_categories:text-to-image", "size_categories:n<1K", "language:en", "license:other", "stable-diffusion", "realms", "region:us" ]
2023-01-31T15:21:38+00:00
{"language": ["en"], "license": "other", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "pretty_name": "Realms Adventurers Dataset", "tags": ["stable-diffusion", "realms"]}
2023-03-28T18:24:42+00:00
9661cd9222c20f7241329bafd0e3737e2b06076c
# Dataset Card for "OxfordFlowers_test_facebook_opt_6.7b_Attributes_ns_6149" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_6.7b_Attributes_ns_6149
[ "region:us" ]
2023-01-31T15:45:32+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 267024927.375, "num_examples": 6149}], "download_size": 261230256, "dataset_size": 267024927.375}}
2023-01-31T15:45:48+00:00
06fd1b090bceecc0ce724cd21578ba7a6664fe8d
Redistributed without modification from https://github.com/phelber/EuroSAT. EuroSAT100 is a subset of EuroSATallBands containing only 100 images. It is intended for tutorials and demonstrations, not for benchmarking.
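A usage sketch with torchgeo; the `EuroSAT100` class name follows recent torchgeo releases and should be verified against the installed version:

```python
from torchgeo.datasets import EuroSAT100

# Download the 100-image tutorial subset and inspect one sample.
ds = EuroSAT100(root="data/eurosat100", download=True)
sample = ds[0]
print(sample["image"].shape, sample["label"])
```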
torchgeo/eurosat
[ "task_categories:image-classification", "size_categories:10K<n<100K", "language:en", "license:mit", "region:us" ]
2023-01-31T16:19:49+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "pretty_name": "EuroSAT"}
2023-02-21T04:01:42+00:00
4864c7b188fa519c2f752b4cdd1e82caa6effb76
# Dataset Card for "OxfordFlowers_test_facebook_opt_6.7b_Attributes_Caption_ns_6149" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_6.7b_Attributes_Caption_ns_6149
[ "region:us" ]
2023-01-31T16:23:29+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 267297980.375, "num_examples": 6149}, {"name": "fewshot_1_bs_16", "num_bytes": 269129323.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 272760353.375, "num_examples": 6149}], "download_size": 523672380, "dataset_size": 809187657.125}}
2023-02-02T01:24:26+00:00
44d936274d1e4850b77f040efa530f9cb503199c
SigmaBalls/test
[ "region:us" ]
2023-01-31T18:08:58+00:00
{}
2023-01-31T18:09:21+00:00
2672c8c413411970411c30ad58d6cc716e675b57
# Dataset Card for "Caltech101_with_background_test_facebook_opt_6.7b_Attributes_Caption_ns_6084" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_6.7b_Attributes_Caption_ns_6084
[ "region:us" ]
2023-01-31T18:13:00+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 101123899.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 102737630.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 105972458.5, "num_examples": 6084}], "download_size": 286590313, "dataset_size": 309833988.5}}
2023-01-31T20:07:13+00:00
103e773adb8eea7e2d41a8a746697ec070792144
# Dataset Card for "Caltech101_with_background_test_facebook_opt_6.7b_Visclues_ns_6084" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_6.7b_Visclues_ns_6084
[ "region:us" ]
2023-01-31T18:28:05+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 101626234.5, "num_examples": 6084}, {"name": "fewshot_1_bs_16", "num_bytes": 103738576.5, "num_examples": 6084}, {"name": "fewshot_3_bs_16", "num_bytes": 107968014.5, "num_examples": 6084}], "download_size": 287673188, "dataset_size": 313332825.5}}
2023-01-31T21:10:47+00:00
bb119e18a3c33015dde802e337987463a9ec4add
# Dataset Card for "wikipedia.reorder.OSV" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lshowway/wikipedia.reorder.OSV
[ "region:us" ]
2023-01-31T18:36:23+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4083836556, "num_examples": 1986076}], "download_size": 2007590101, "dataset_size": 4083836556}}
2023-01-31T18:38:47+00:00
d8ad11be7981d42ae8f200ff773570345c251f18
# Dataset Card for "350k_dataset_health_ar_en_th" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Shularp/350k_dataset_health_ar_en_th
[ "region:us" ]
2023-01-31T19:00:28+00:00
{"dataset_info": {"features": [{"name": "ar", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "th", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 4370651, "num_examples": 10078}, {"name": "test", "num_bytes": 4378778, "num_examples": 10108}, {"name": "train", "num_bytes": 122924727, "num_examples": 268888}], "download_size": 70750385, "dataset_size": 131674156}}
2023-01-31T19:00:38+00:00
83ae38cb544e836c9ff84a4666c17432b8b0d6f1
# 20,000+ Chinese sentences with translations and pinyin - Source: https://mnemosyne-proj.org/cards/20000-chinese-sentences-translations-and-pinyin - Contributed by: Brian Vaughan http://brianvaughan.net/ # Dataset Structure Each sample consists of: 1. English sentence 2. HSK level 3. Chinese translation 4. Pinyin 5. separator ("\-\-") # Other Info from the Source ### HSK level All of the sentences came from sample sentences intended to describe a particular word. HSK level (in the category name) signifies the HSK level of the word this sentence describes. Note that "HSK level" is 1-4. ### Limitation This is a search of all characters in each level, including the characters that longer words are composed of. This is why even HSK level 4 sentences can contain sentences in "limited 1." For example, 作主 (zuo4zhu3) is an HSK level 4 word. It contains 2 characters which both appear in other HSK level 1 words, and so the sample sentence for 作主 (assuming that sentence contains no other difficult words) might appear in the category "HSK 4; limited 1;"
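Given the five-line sample layout described under Dataset Structure above (English, HSK level, Chinese, pinyin, then a `--` separator), here is a parsing sketch for a raw text export; the file name is a placeholder:

```python
def parse_samples(path="sentences.txt"):
    """Group the flat file into (english, hsk_level, chinese, pinyin) tuples."""
    samples, buf = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line == "--":            # record separator
                if len(buf) == 4:
                    samples.append(tuple(buf))
                buf = []
            elif line:
                buf.append(line)
    return samples
```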
swaption2009/20k-en-zh-translation-pinyin-hsk
[ "task_categories:translation", "language:en", "language:zh", "region:us" ]
2023-01-31T19:02:09+00:00
{"language": ["en", "zh"], "task_categories": ["translation"]}
2023-02-01T06:40:59+00:00
3d3d27aa7af8941408cefc3991ada5d12a4273d1
# SNL Summarization Dataset The source of this dataset is a web scrape of SNL (Store Norske Leksikon), a publicly owned Norwegian encyclopedia. Articles in SNL are structured so that the first paragraph (the lead) acts as a summary of the entire article. ## Methodology From our thesis: We couldn't find any existing datasets containing SNL data, so we decided to create our own by scraping articles from SNL.no. The first step involved gathering a list of all article URLs on the site. We extracted the URLs from the sitemaps and retained only those following the format "https://snl.no/name of article" to avoid non-article pages. Next, we scraped the URLs with multiple threads downloading articles at the same time using the Python module grequests and parsed the received HTML using beautifulsoup4. We extracted the text from the lead and the rest of the article text, joining the latter while removing any whitespace. Additionally, we saved metadata such as URLs, headlines, and categories for each article. To filter out very short articles, we set criteria for keeping an article: the lead had to be at least 100 characters long, and the rest of the article had to be longer than 400 characters. Finally, we split the dataset using an 84%/6%/10% split for the train/validation/test sets. This division was chosen to ensure a sufficient amount of data for training our models while still providing an adequate sample size for validation and testing. By allocating a larger portion (84%) of the data for training, our goal was to optimize the model's learning process. We allocated 6% of the data for validation, which was intended to help fine-tune the model and its hyperparameters, while the remaining 10% was designated for the final evaluation of our model's performance on unseen data in the test set. # License Please refer to the license of SNL # Citation If you are using this dataset in your work, please cite our master thesis which this dataset was a part of ``` @mastersthesis{navjord2023beyond, title={Beyond extractive: advancing abstractive automatic text summarization in Norwegian with transformers}, author={Navjord, J{\o}rgen Johnsen and Korsvik, Jon-Mikkel Ryen}, year={2023}, school={Norwegian University of Life Sciences, {\AA}s} } ```
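As an illustration of the filtering rule and split described in the Methodology section above (field names follow this dataset's schema, where `ingress` is the lead; this is a sketch, not the original scraping code):

```python
def keep(article: dict) -> bool:
    # Lead at least 100 characters; remaining article text longer than 400.
    return len(article["ingress"]) >= 100 and len(article["article"]) > 400

# An 84%/6%/10% split with Hugging Face datasets could look like this
# (the seed is illustrative):
# tmp = ds.filter(keep).train_test_split(test_size=0.16, seed=42)
# valtest = tmp["test"].train_test_split(test_size=10 / 16, seed=42)
# train, val, test = tmp["train"], valtest["train"], valtest["test"]
```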
navjordj/SNL_summarization
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:10K<n<100K", "language:no", "language:nb", "region:us" ]
2023-01-31T19:10:43+00:00
{"language": ["no", "nb"], "size_categories": ["10K<n<100K"], "task_categories": ["summarization", "text2text-generation"], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "date_scraped", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "ingress", "dtype": "string"}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26303219.28053567, "num_examples": 10874}, {"name": "validation", "num_bytes": 1981086.682983145, "num_examples": 819}, {"name": "test", "num_bytes": 3144582.036481182, "num_examples": 1300}], "download_size": 19441287, "dataset_size": 31428888.0}}
2024-01-23T07:25:47+00:00
3c9cb54da01821f790f9c8b3832e360cb9c53b80
metaeval/nli4wills
[ "license:apache-2.0", "region:us" ]
2023-01-31T19:57:23+00:00
{"license": "apache-2.0"}
2023-01-31T19:57:52+00:00
538395897d14a7d8249a14eadb97ff262ee865e6
# Dataset Card for "identities" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SDbiaseval/identities
[ "region:us" ]
2023-01-31T21:14:22+00:00
{"dataset_info": {"features": [{"name": "ethnicity", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "model", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 585336673.0, "num_examples": 2040}], "download_size": 465986042, "dataset_size": 585336673.0}}
2023-01-31T21:21:42+00:00
d0af6e2eeea2322af86078068bd83337148a2149
Redistributed from http://weegee.vision.ucmerced.edu/datasets/landuse.html without modification. See https://www.usgs.gov/faqs/what-are-terms-uselicensing-map-services-and-data-national-map for license.
torchgeo/ucmerced
[ "task_categories:image-classification", "size_categories:10K<n<100K", "language:en", "region:us" ]
2023-01-31T21:45:28+00:00
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "pretty_name": "UC Merced"}
2023-12-06T20:50:47+00:00
483f6c8f83c1ef721461f24751c6fd5ccd061a59
# Dataset Card for MultiLegalPileWikipediaFiltered: A filtered version of the MultiLegalPile dataset, together with wikipedia articles ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [Joel Niklaus](mailto:[email protected]) ### Dataset Summary The Multi_Legal_Pile is a large-scale multilingual legal dataset suited for pretraining language models. It spans 24 languages and four legal text types; this filtered variant additionally includes Wikipedia articles. ### Supported Tasks and Leaderboards The dataset supports the task of fill-mask. ### Languages The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv ## Dataset Structure It is structured in the following format: {language}_{text_type}_{shard}.jsonl.xz text_type is one of the following: - caselaw - contracts - legislation - other - wikipedia Use the dataset like this: ```python from datasets import load_dataset config = 'en_contracts' # {language}_{text_type} dataset = load_dataset('joelito/Multi_Legal_Pile', config, split='train', streaming=True) ``` 'config' is a combination of language and text_type, e.g. 'en_contracts' or 'de_caselaw'. To load all the languages or all the text_types, use 'all' instead of the language or text_type (e.g., 'all_legislation'). ### Data Instances The file format is jsonl.xz and there are `train` and `validation` splits available. Since some configurations are very small or non-existent, they might not contain a train split or not be present at all. 
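As noted above, `all` can replace either the language or the text type; for example, using the same API as the snippet above:

```python
from datasets import load_dataset

# Stream all legislation across every language.
dataset = load_dataset('joelito/Multi_Legal_Pile', 'all_legislation',
                       split='train', streaming=True)
```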
The complete dataset consists of five large subsets: - [Native Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) - [Eurlex Resources](https://huggingface.co/datasets/joelito/eurlex_resources) - [MC4 Legal](https://huggingface.co/datasets/joelito/mc4_legal) - [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) - [EU Wikipedias](https://huggingface.co/datasets/joelito/EU_Wikipedias) | Language | Source | Size (MB) | Words | Documents | Words/Document | |:-----------|:------------|-----------------:|------------:|------------:|-----------------:| | all | all | 1.29761e+06 | 81214262514 | 57305071 | 1417 | | all | caselaw | 695837 | 44372248995 | 30085886 | 1474 | | all | contracts | 122599 | 7964531030 | 1785686 | 4460 | | all | legislation | 189135 | 10879386581 | 3601518 | 3020 | | all | other | 126570 | 8780080882 | 3358073 | 2614 | | all | wikipedia | 163468 | 9218015026 | 18473908 | 498 | | bg | all | 14028 | 535256525 | 355650 | 1505 | | bg | caselaw | 2897 | 109634090 | 52648 | 2082 | | bg | contracts | 748 | 31292877 | 7107 | 4403 | | bg | legislation | 8015 | 308946116 | 82777 | 3732 | | bg | other | 0 | 0 | 0 | 0 | | bg | wikipedia | 2368 | 85383442 | 213118 | 400 | | cs | all | 21818 | 1123000335 | 839914 | 1337 | | cs | caselaw | 11151 | 574336489 | 296652 | 1936 | | cs | contracts | 492 | 28106428 | 7383 | 3806 | | cs | legislation | 6288 | 333850509 | 88731 | 3762 | | cs | other | 0 | 0 | 0 | 0 | | cs | wikipedia | 3887 | 186706909 | 447148 | 417 | | da | all | 16024 | 970954498 | 576256 | 1684 | | da | caselaw | 3469 | 210730560 | 89702 | 2349 | | da | contracts | 559 | 35592407 | 10827 | 3287 | | da | legislation | 10736 | 653153146 | 265868 | 2456 | | da | other | 0 | 0 | 0 | 0 | | da | wikipedia | 1259 | 71478385 | 209859 | 340 | | de | all | 63887 | 3512253170 | 3216030 | 1092 | | de | caselaw | 31527 | 1785439383 | 596800 | 2991 | | de | contracts | 614 | 36786772 | 11041 | 3331 | | de | legislation | 8934 | 512840663 | 276034 | 1857 | | de | other | 0 | 0 | 0 | 0 | | de | wikipedia | 22812 | 1177186352 | 2332155 | 504 | | el | all | 23167 | 800722723 | 457553 | 1750 | | el | caselaw | 6007 | 203770918 | 85496 | 2383 | | el | contracts | 1050 | 38963772 | 10266 | 3795 | | el | legislation | 12906 | 455240770 | 171356 | 2656 | | el | other | 0 | 0 | 0 | 0 | | el | wikipedia | 3204 | 102747263 | 190435 | 539 | | en | all | 712173 | 47279626514 | 21112650 | 2239 | | en | caselaw | 380976 | 25561971376 | 10240724 | 2496 | | en | contracts | 71360 | 7260323438 | 1594942 | 4552 | | en | legislation | 36587 | 2537696894 | 657805 | 3857 | | en | other | 126570 | 8780080882 | 3358073 | 2614 | | en | wikipedia | 51053 | 3139553924 | 5261106 | 596 | | es | all | 23657 | 1515689548 | 1567527 | 966 | | es | caselaw | 3299 | 220506573 | 83872 | 2629 | | es | contracts | 594 | 41840328 | 10048 | 4164 | | es | legislation | 6837 | 462661276 | 149368 | 3097 | | es | other | 0 | 0 | 0 | 0 | | es | wikipedia | 12928 | 790681371 | 1324239 | 597 | | et | all | 7446 | 372896353 | 261641 | 1425 | | et | caselaw | 1835 | 92951578 | 58736 | 1582 | | et | contracts | 433 | 24017402 | 7371 | 3258 | | et | legislation | 4200 | 210952455 | 63922 | 3300 | | et | other | 0 | 0 | 0 | 0 | | et | wikipedia | 978 | 44974918 | 131612 | 341 | | fi | all | 11501 | 513990484 | 592986 | 866 | | fi | caselaw | 2854 | 126368889 | 77882 | 1622 | | fi | contracts | 504 | 25386705 | 8894 | 2854 | | fi | legislation | 5532 | 252344531 | 103907 | 2428 | | fi | other | 0 | 0 
| 0 | 0 | | fi | wikipedia | 2610 | 109890359 | 402303 | 273 | | fr | all | 47186 | 2936056985 | 2734954 | 1073 | | fr | caselaw | 18313 | 1170335690 | 435569 | 2686 | | fr | contracts | 633 | 41983091 | 11071 | 3792 | | fr | legislation | 9297 | 600170792 | 243313 | 2466 | | fr | other | 0 | 0 | 0 | 0 | | fr | wikipedia | 18942 | 1123567412 | 2045001 | 549 | | ga | all | 1209 | 72041312 | 30064 | 2396 | | ga | caselaw | 11 | 676795 | 835 | 810 | | ga | contracts | 29 | 1820765 | 365 | 4988 | | ga | legislation | 1048 | 62513018 | 5983 | 10448 | | ga | other | 0 | 0 | 0 | 0 | | ga | wikipedia | 122 | 7030734 | 22881 | 307 | | hr | all | 5377 | 315295665 | 211151 | 1493 | | hr | caselaw | 1026 | 62358456 | 31322 | 1990 | | hr | contracts | 395 | 24957774 | 6552 | 3809 | | hr | legislation | 2906 | 171415656 | 36365 | 4713 | | hr | other | 0 | 0 | 0 | 0 | | hr | wikipedia | 1050 | 56563779 | 136912 | 413 | | hu | all | 12351 | 564082537 | 495822 | 1137 | | hu | caselaw | 2376 | 110034426 | 59074 | 1862 | | hu | contracts | 534 | 27258352 | 7385 | 3691 | | hu | legislation | 5744 | 264572303 | 86862 | 3045 | | hu | other | 0 | 0 | 0 | 0 | | hu | wikipedia | 3697 | 162217456 | 342501 | 473 | | it | all | 26744 | 1658638775 | 1615301 | 1026 | | it | caselaw | 6483 | 406520336 | 156630 | 2595 | | it | contracts | 597 | 40131223 | 10985 | 3653 | | it | legislation | 8332 | 542579039 | 227968 | 2380 | | it | other | 0 | 0 | 0 | 0 | | it | wikipedia | 11332 | 669408177 | 1219718 | 548 | | lt | all | 7772 | 399310081 | 264537 | 1509 | | lt | caselaw | 1992 | 101672069 | 59485 | 1709 | | lt | contracts | 475 | 27009922 | 7473 | 3614 | | lt | legislation | 4550 | 235543873 | 64106 | 3674 | | lt | other | 0 | 0 | 0 | 0 | | lt | wikipedia | 755 | 35084217 | 133473 | 262 | | lv | all | 7701 | 386833125 | 211244 | 1831 | | lv | caselaw | 2082 | 103311512 | 58992 | 1751 | | lv | contracts | 481 | 26692972 | 7429 | 3593 | | lv | legislation | 4621 | 233088284 | 64087 | 3637 | | lv | other | 0 | 0 | 0 | 0 | | lv | wikipedia | 518 | 23740357 | 80736 | 294 | | mt | all | 7180 | 370558634 | 122056 | 3035 | | mt | caselaw | 2016 | 100309542 | 52942 | 1894 | | mt | contracts | 486 | 27701852 | 6937 | 3993 | | mt | legislation | 4620 | 239708644 | 57979 | 4134 | | mt | other | 0 | 0 | 0 | 0 | | mt | wikipedia | 58 | 2838596 | 4198 | 676 | | nl | all | 17674 | 1112460059 | 1200534 | 926 | | nl | caselaw | 3227 | 206147113 | 87170 | 2364 | | nl | contracts | 604 | 40245662 | 11027 | 3649 | | nl | legislation | 8484 | 550788527 | 232204 | 2372 | | nl | other | 0 | 0 | 0 | 0 | | nl | wikipedia | 5360 | 315278757 | 870133 | 362 | | pl | all | 14762 | 773692198 | 1160849 | 666 | | pl | caselaw | 2141 | 115695709 | 59649 | 1939 | | pl | contracts | 489 | 28543526 | 7478 | 3817 | | pl | legislation | 5459 | 299334705 | 89264 | 3353 | | pl | other | 0 | 0 | 0 | 0 | | pl | wikipedia | 6672 | 330118258 | 1004458 | 328 | | pt | all | 210656 | 13466463586 | 18173061 | 741 | | pt | caselaw | 196919 | 12611760973 | 17251236 | 731 | | pt | contracts | 571 | 37997495 | 9897 | 3839 | | pt | legislation | 6853 | 439066783 | 148176 | 2963 | | pt | other | 0 | 0 | 0 | 0 | | pt | wikipedia | 6313 | 377638335 | 763752 | 494 | | ro | all | 14794 | 808799454 | 481763 | 1678 | | ro | caselaw | 1960 | 114665535 | 53092 | 2159 | | ro | contracts | 495 | 31496978 | 7202 | 4373 | | ro | legislation | 10464 | 559092153 | 215694 | 2592 | | ro | other | 0 | 0 | 0 | 0 | | ro | wikipedia | 1874 | 103544788 | 205775 | 503 | | sk | all | 8700 | 
463447112 | 262638 | 1764 | | sk | caselaw | 2072 | 109996398 | 59383 | 1852 | | sk | contracts | 489 | 28298113 | 7470 | 3788 | | sk | legislation | 5208 | 280182047 | 76760 | 3650 | | sk | other | 0 | 0 | 0 | 0 | | sk | wikipedia | 931 | 44970554 | 119025 | 377 | | sl | all | 9345 | 561775614 | 277497 | 2024 | | sl | caselaw | 1816 | 111097741 | 59193 | 1876 | | sl | contracts | 432 | 28238938 | 7475 | 3777 | | sl | legislation | 6057 | 365513763 | 88651 | 4123 | | sl | other | 0 | 0 | 0 | 0 | | sl | wikipedia | 1041 | 56925172 | 122178 | 465 | | sv | all | 12457 | 700417227 | 1083393 | 646 | | sv | caselaw | 2806 | 161956844 | 78802 | 2055 | | sv | contracts | 491 | 29844238 | 9061 | 3293 | | sv | legislation | 5456 | 308130634 | 104338 | 2953 | | sv | other | 0 | 0 | 0 | 0 | | sv | wikipedia | 3704 | 200485511 | 891192 | 224 | ### Data Fields [More Information Needed] ### Data Splits There are two splits: train and validation. The validation split contains 1000 examples and the training split contains the rest of the data. #### Data Size ```bash $ xz --list data/*.xz Strms Blocks Compressed Uncompressed Ratio Check Filename 1 1 167.6 MiB 3’276.3 MiB 0.051 CRC64 data/bg_caselaw_train.0.jsonl.xz 1 1 502.3 KiB 9’398.0 KiB 0.053 CRC64 data/bg_caselaw_validation.0.jsonl.xz 1 1 33.4 MiB 700.3 MiB 0.048 CRC64 data/bg_contracts_train.0.jsonl.xz 1 1 5’989.6 KiB 123.0 MiB 0.048 CRC64 data/bg_contracts_validation.0.jsonl.xz 1 1 418.5 MiB 8’931.0 MiB 0.047 CRC64 data/bg_legislation_train.0.jsonl.xz 1 1 5’029.4 KiB 103.1 MiB 0.048 CRC64 data/bg_legislation_validation.0.jsonl.xz 1 0 32 B 0 B --- CRC64 data/bg_other_validation.0.jsonl.xz 1 1 192.2 MiB 2’488.6 MiB 0.077 CRC64 data/bg_wikipedia_train.0.jsonl.xz 1 1 1’757.8 KiB 22.9 MiB 0.075 CRC64 data/bg_wikipedia_validation.0.jsonl.xz 1 1 476.9 MiB 4’126.1 MiB 0.116 CRC64 data/cs_caselaw_train.0.jsonl.xz 1 1 259.8 MiB 2’556.9 MiB 0.102 CRC64 data/cs_caselaw_train.1.jsonl.xz 1 1 420.1 KiB 3’370.3 KiB 0.125 CRC64 data/cs_caselaw_validation.0.jsonl.xz 1 1 24.9 MiB 237.9 MiB 0.105 CRC64 data/cs_contracts_train.0.jsonl.xz 1 1 4’412.1 KiB 41.7 MiB 0.103 CRC64 data/cs_contracts_validation.0.jsonl.xz 1 1 361.2 MiB 3’488.9 MiB 0.104 CRC64 data/cs_legislation_train.0.jsonl.xz 1 1 10.3 MiB 91.6 MiB 0.112 CRC64 data/cs_legislation_validation.0.jsonl.xz 1 0 32 B 0 B --- CRC64 data/cs_other_validation.0.jsonl.xz 1 1 390.6 MiB 1’939.4 MiB 0.201 CRC64 data/cs_wikipedia_train.0.jsonl.xz 1 1 2’604.7 KiB 12.2 MiB 0.209 CRC64 data/cs_wikipedia_validation.0.jsonl.xz 1 1 252.5 MiB 1’529.7 MiB 0.165 CRC64 data/da_caselaw_train.0.jsonl.xz 1 1 555.9 KiB 3’227.1 KiB 0.172 CRC64 data/da_caselaw_validation.0.jsonl.xz 1 1 30.1 MiB 233.9 MiB 0.129 CRC64 data/da_contracts_train.0.jsonl.xz 1 1 2’897.6 KiB 23.6 MiB 0.120 CRC64 data/da_contracts_validation.0.jsonl.xz 1 1 476.9 MiB 3’325.8 MiB 0.143 CRC64 data/da_legislation_train.0.jsonl.xz 1 1 237.3 MiB 1’444.5 MiB 0.164 CRC64 data/da_legislation_train.1.jsonl.xz 1 1 3’232.5 KiB 60.6 MiB 0.052 CRC64 data/da_legislation_validation.0.jsonl.xz 1 0 32 B 0 B --- CRC64 data/da_other_validation.0.jsonl.xz 1 1 128.8 MiB 512.1 MiB 0.252 CRC64 data/da_wikipedia_train.0.jsonl.xz 1 1 1’514.1 KiB 5’476.3 KiB 0.276 CRC64 data/da_wikipedia_validation.0.jsonl.xz 1 1 476.9 MiB 2’803.8 MiB 0.170 CRC64 data/de_caselaw_train.0.jsonl.xz 1 1 476.9 MiB 2’821.4 MiB 0.169 CRC64 data/de_caselaw_train.1.jsonl.xz 1 1 476.9 MiB 2’720.2 MiB 0.175 CRC64 data/de_caselaw_train.2.jsonl.xz 1 1 476.9 MiB 2’704.1 MiB 0.176 CRC64 data/de_caselaw_train.3.jsonl.xz 1 1 
1 1 460.5 MiB 2’504.5 MiB 0.184 CRC64 data/de_caselaw_train.4.jsonl.xz
1 1 594.0 KiB 3’416.4 KiB 0.174 CRC64 data/de_caselaw_validation.0.jsonl.xz
1 1 32.0 MiB 255.8 MiB 0.125 CRC64 data/de_contracts_train.0.jsonl.xz
1 1 3’037.7 KiB 24.7 MiB 0.120 CRC64 data/de_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’386.0 MiB 0.141 CRC64 data/de_legislation_train.0.jsonl.xz
1 1 93.3 MiB 592.3 MiB 0.158 CRC64 data/de_legislation_train.1.jsonl.xz
1 1 3’265.9 KiB 20.5 MiB 0.156 CRC64 data/de_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/de_other_validation.0.jsonl.xz
1 1 476.9 MiB 1’883.7 MiB 0.253 CRC64 data/de_wikipedia_train.0.jsonl.xz
1 1 476.9 MiB 1’891.6 MiB 0.252 CRC64 data/de_wikipedia_train.1.jsonl.xz
1 1 476.9 MiB 1’893.7 MiB 0.252 CRC64 data/de_wikipedia_train.2.jsonl.xz
1 1 476.9 MiB 1’894.1 MiB 0.252 CRC64 data/de_wikipedia_train.3.jsonl.xz
1 1 407.9 MiB 1’622.0 MiB 0.251 CRC64 data/de_wikipedia_train.4.jsonl.xz
1 1 1’172.5 KiB 4’210.2 KiB 0.278 CRC64 data/de_wikipedia_validation.0.jsonl.xz
1 1 344.7 MiB 6’908.3 MiB 0.050 CRC64 data/el_caselaw_train.0.jsonl.xz
1 1 870.4 KiB 14.3 MiB 0.060 CRC64 data/el_caselaw_validation.0.jsonl.xz
1 1 49.7 MiB 1’083.8 MiB 0.046 CRC64 data/el_contracts_train.0.jsonl.xz
1 1 4’701.3 KiB 101.6 MiB 0.045 CRC64 data/el_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 10.2 GiB 0.046 CRC64 data/el_legislation_train.0.jsonl.xz
1 1 203.0 MiB 3’994.0 MiB 0.051 CRC64 data/el_legislation_train.1.jsonl.xz
1 1 9’744.3 KiB 186.6 MiB 0.051 CRC64 data/el_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/el_other_validation.0.jsonl.xz
1 1 246.4 MiB 3’465.7 MiB 0.071 CRC64 data/el_wikipedia_train.0.jsonl.xz
1 1 2’591.7 KiB 35.6 MiB 0.071 CRC64 data/el_wikipedia_validation.0.jsonl.xz
1 1 476.9 MiB 2’188.6 MiB 0.218 CRC64 data/en_caselaw_train.0.jsonl.xz
1 1 476.9 MiB 2’416.1 MiB 0.197 CRC64 data/en_caselaw_train.10.jsonl.xz
1 1 477.2 MiB 2’688.1 MiB 0.178 CRC64 data/en_caselaw_train.11.jsonl.xz
1 1 476.9 MiB 2’865.9 MiB 0.166 CRC64 data/en_caselaw_train.12.jsonl.xz
1 1 476.9 MiB 2’494.1 MiB 0.191 CRC64 data/en_caselaw_train.13.jsonl.xz
1 1 476.9 MiB 2’126.6 MiB 0.224 CRC64 data/en_caselaw_train.14.jsonl.xz
1 1 476.9 MiB 2’440.9 MiB 0.195 CRC64 data/en_caselaw_train.15.jsonl.xz
1 1 476.9 MiB 3’822.2 MiB 0.125 CRC64 data/en_caselaw_train.16.jsonl.xz
1 1 476.9 MiB 3’831.4 MiB 0.124 CRC64 data/en_caselaw_train.17.jsonl.xz
1 1 476.9 MiB 3’812.2 MiB 0.125 CRC64 data/en_caselaw_train.18.jsonl.xz
1 1 476.9 MiB 2’233.5 MiB 0.214 CRC64 data/en_caselaw_train.19.jsonl.xz
1 1 476.9 MiB 2’195.9 MiB 0.217 CRC64 data/en_caselaw_train.1.jsonl.xz
1 1 476.9 MiB 2’185.8 MiB 0.218 CRC64 data/en_caselaw_train.20.jsonl.xz
1 1 476.9 MiB 2’634.9 MiB 0.181 CRC64 data/en_caselaw_train.21.jsonl.xz
1 1 476.9 MiB 2’670.8 MiB 0.179 CRC64 data/en_caselaw_train.22.jsonl.xz
1 1 476.9 MiB 2’762.0 MiB 0.173 CRC64 data/en_caselaw_train.23.jsonl.xz
1 1 476.9 MiB 2’153.6 MiB 0.221 CRC64 data/en_caselaw_train.24.jsonl.xz
1 1 476.9 MiB 2’152.0 MiB 0.222 CRC64 data/en_caselaw_train.25.jsonl.xz
1 1 476.9 MiB 2’205.0 MiB 0.216 CRC64 data/en_caselaw_train.26.jsonl.xz
1 1 476.9 MiB 2’141.0 MiB 0.223 CRC64 data/en_caselaw_train.27.jsonl.xz
1 1 476.9 MiB 2’145.1 MiB 0.222 CRC64 data/en_caselaw_train.28.jsonl.xz
1 1 476.9 MiB 2’137.9 MiB 0.223 CRC64 data/en_caselaw_train.29.jsonl.xz
1 1 476.9 MiB 2’189.0 MiB 0.218 CRC64 data/en_caselaw_train.2.jsonl.xz
1 1 476.9 MiB 2’150.9 MiB 0.222 CRC64 data/en_caselaw_train.30.jsonl.xz
1 1 476.9 MiB 2’142.7 MiB 0.223 CRC64 data/en_caselaw_train.31.jsonl.xz
1 1 476.9 MiB 2’203.4 MiB 0.216 CRC64 data/en_caselaw_train.32.jsonl.xz
1 1 476.9 MiB 2’205.4 MiB 0.216 CRC64 data/en_caselaw_train.33.jsonl.xz
1 1 476.9 MiB 2’206.0 MiB 0.216 CRC64 data/en_caselaw_train.34.jsonl.xz
1 1 476.9 MiB 2’164.9 MiB 0.220 CRC64 data/en_caselaw_train.35.jsonl.xz
1 1 476.9 MiB 2’810.3 MiB 0.170 CRC64 data/en_caselaw_train.36.jsonl.xz
1 1 476.9 MiB 2’854.1 MiB 0.167 CRC64 data/en_caselaw_train.37.jsonl.xz
1 1 476.9 MiB 3’109.2 MiB 0.153 CRC64 data/en_caselaw_train.38.jsonl.xz
1 1 476.9 MiB 3’323.6 MiB 0.143 CRC64 data/en_caselaw_train.39.jsonl.xz
1 1 476.9 MiB 2’155.3 MiB 0.221 CRC64 data/en_caselaw_train.3.jsonl.xz
1 1 476.9 MiB 2’881.5 MiB 0.165 CRC64 data/en_caselaw_train.40.jsonl.xz
1 1 476.9 MiB 2’157.1 MiB 0.221 CRC64 data/en_caselaw_train.41.jsonl.xz
1 1 477.0 MiB 2’530.2 MiB 0.189 CRC64 data/en_caselaw_train.42.jsonl.xz
1 1 476.8 MiB 2’540.1 MiB 0.188 CRC64 data/en_caselaw_train.43.jsonl.xz
1 1 476.9 MiB 2’182.2 MiB 0.219 CRC64 data/en_caselaw_train.44.jsonl.xz
1 1 476.9 MiB 2’163.2 MiB 0.220 CRC64 data/en_caselaw_train.45.jsonl.xz
1 1 476.9 MiB 2’213.3 MiB 0.215 CRC64 data/en_caselaw_train.46.jsonl.xz
1 1 476.9 MiB 2’241.5 MiB 0.213 CRC64 data/en_caselaw_train.47.jsonl.xz
1 1 476.9 MiB 2’203.6 MiB 0.216 CRC64 data/en_caselaw_train.48.jsonl.xz
1 1 476.9 MiB 2’480.6 MiB 0.192 CRC64 data/en_caselaw_train.49.jsonl.xz
1 1 476.9 MiB 2’176.7 MiB 0.219 CRC64 data/en_caselaw_train.4.jsonl.xz
1 1 476.9 MiB 2’214.7 MiB 0.215 CRC64 data/en_caselaw_train.50.jsonl.xz
1 1 476.9 MiB 2’128.0 MiB 0.224 CRC64 data/en_caselaw_train.51.jsonl.xz
1 1 476.9 MiB 2’151.0 MiB 0.222 CRC64 data/en_caselaw_train.52.jsonl.xz
1 1 476.9 MiB 2’173.6 MiB 0.219 CRC64 data/en_caselaw_train.53.jsonl.xz
1 1 476.9 MiB 2’773.8 MiB 0.172 CRC64 data/en_caselaw_train.54.jsonl.xz
1 1 476.9 MiB 2’806.2 MiB 0.170 CRC64 data/en_caselaw_train.55.jsonl.xz
1 1 476.9 MiB 3’920.9 MiB 0.122 CRC64 data/en_caselaw_train.56.jsonl.xz
1 1 476.9 MiB 2’517.2 MiB 0.189 CRC64 data/en_caselaw_train.57.jsonl.xz
1 1 477.5 MiB 2’844.0 MiB 0.168 CRC64 data/en_caselaw_train.58.jsonl.xz
1 1 476.9 MiB 2’810.7 MiB 0.170 CRC64 data/en_caselaw_train.59.jsonl.xz
1 1 476.9 MiB 2’160.4 MiB 0.221 CRC64 data/en_caselaw_train.5.jsonl.xz
1 1 476.9 MiB 3’033.0 MiB 0.157 CRC64 data/en_caselaw_train.60.jsonl.xz
1 1 476.9 MiB 2’255.1 MiB 0.211 CRC64 data/en_caselaw_train.61.jsonl.xz
1 1 476.9 MiB 2’110.1 MiB 0.226 CRC64 data/en_caselaw_train.62.jsonl.xz
1 1 476.9 MiB 2’130.3 MiB 0.224 CRC64 data/en_caselaw_train.63.jsonl.xz
1 1 476.9 MiB 2’133.2 MiB 0.224 CRC64 data/en_caselaw_train.64.jsonl.xz
1 1 44.8 MiB 199.6 MiB 0.225 CRC64 data/en_caselaw_train.65.jsonl.xz
1 1 476.9 MiB 2’153.3 MiB 0.221 CRC64 data/en_caselaw_train.6.jsonl.xz
1 1 476.9 MiB 2’130.8 MiB 0.224 CRC64 data/en_caselaw_train.7.jsonl.xz
1 1 476.9 MiB 2’152.2 MiB 0.222 CRC64 data/en_caselaw_train.8.jsonl.xz
1 1 476.9 MiB 2’173.3 MiB 0.219 CRC64 data/en_caselaw_train.9.jsonl.xz
1 1 2’977.4 KiB 12.9 MiB 0.226 CRC64 data/en_caselaw_validation.0.jsonl.xz
1 1 476.9 MiB 3’016.6 MiB 0.158 CRC64 data/en_contracts_train.0.jsonl.xz
1 1 476.9 MiB 3’015.3 MiB 0.158 CRC64 data/en_contracts_train.10.jsonl.xz
1 1 476.9 MiB 3’012.5 MiB 0.158 CRC64 data/en_contracts_train.11.jsonl.xz
1 1 477.0 MiB 3’002.5 MiB 0.159 CRC64 data/en_contracts_train.12.jsonl.xz
1 1 476.9 MiB 2’962.4 MiB 0.161 CRC64 data/en_contracts_train.13.jsonl.xz
1 1 476.9 MiB 3’019.4 MiB 0.158 CRC64 data/en_contracts_train.14.jsonl.xz
1 1 124.1 MiB 781.2 MiB 0.159 CRC64 data/en_contracts_train.15.jsonl.xz
1 1 476.9 MiB 2’994.0 MiB 0.159 CRC64 data/en_contracts_train.1.jsonl.xz
1 1 476.8 MiB 3’084.9 MiB 0.155 CRC64 data/en_contracts_train.2.jsonl.xz
1 1 476.9 MiB 3’123.4 MiB 0.153 CRC64 data/en_contracts_train.3.jsonl.xz
1 1 476.9 MiB 3’120.7 MiB 0.153 CRC64 data/en_contracts_train.4.jsonl.xz
1 1 477.0 MiB 3’094.2 MiB 0.154 CRC64 data/en_contracts_train.5.jsonl.xz
1 1 476.9 MiB 3’010.9 MiB 0.158 CRC64 data/en_contracts_train.6.jsonl.xz
1 1 476.9 MiB 3’015.0 MiB 0.158 CRC64 data/en_contracts_train.7.jsonl.xz
1 1 476.9 MiB 2’995.7 MiB 0.159 CRC64 data/en_contracts_train.8.jsonl.xz
1 1 476.9 MiB 3’017.9 MiB 0.158 CRC64 data/en_contracts_train.9.jsonl.xz
1 1 9’980.4 KiB 63.7 MiB 0.153 CRC64 data/en_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’040.8 MiB 0.157 CRC64 data/en_legislation_train.0.jsonl.xz
1 1 476.9 MiB 3’047.3 MiB 0.156 CRC64 data/en_legislation_train.1.jsonl.xz
1 1 476.9 MiB 3’351.5 MiB 0.142 CRC64 data/en_legislation_train.2.jsonl.xz
1 1 478.7 MiB 3’408.4 MiB 0.140 CRC64 data/en_legislation_train.3.jsonl.xz
1 1 372.5 MiB 2’620.0 MiB 0.142 CRC64 data/en_legislation_train.4.jsonl.xz
1 1 2’733.5 KiB 13.8 MiB 0.193 CRC64 data/en_legislation_validation.0.jsonl.xz
1 1 476.9 MiB 4’782.4 MiB 0.100 CRC64 data/en_other_train.0.jsonl.xz
1 1 476.9 MiB 4’347.1 MiB 0.110 CRC64 data/en_other_train.10.jsonl.xz
1 1 477.1 MiB 3’044.6 MiB 0.157 CRC64 data/en_other_train.11.jsonl.xz
1 1 477.1 MiB 2’147.8 MiB 0.222 CRC64 data/en_other_train.12.jsonl.xz
1 1 477.0 MiB 2’182.8 MiB 0.219 CRC64 data/en_other_train.13.jsonl.xz
1 1 33.3 MiB 151.7 MiB 0.219 CRC64 data/en_other_train.14.jsonl.xz
1 1 476.9 MiB 4’883.8 MiB 0.098 CRC64 data/en_other_train.1.jsonl.xz
1 1 476.9 MiB 4’646.7 MiB 0.103 CRC64 data/en_other_train.2.jsonl.xz
1 1 476.9 MiB 4’542.8 MiB 0.105 CRC64 data/en_other_train.3.jsonl.xz
1 1 476.9 MiB 4’574.8 MiB 0.104 CRC64 data/en_other_train.4.jsonl.xz
1 1 476.9 MiB 4’622.5 MiB 0.103 CRC64 data/en_other_train.5.jsonl.xz
1 1 476.9 MiB 4’520.7 MiB 0.105 CRC64 data/en_other_train.6.jsonl.xz
1 1 476.9 MiB 2’942.4 MiB 0.162 CRC64 data/en_other_train.7.jsonl.xz
1 1 476.9 MiB 2’544.0 MiB 0.187 CRC64 data/en_other_train.8.jsonl.xz
1 1 476.9 MiB 4’515.4 MiB 0.106 CRC64 data/en_other_train.9.jsonl.xz
1 1 2’165.8 KiB 19.6 MiB 0.108 CRC64 data/en_other_validation.0.jsonl.xz
1 1 476.9 MiB 1’803.2 MiB 0.264 CRC64 data/en_wikipedia_train.0.jsonl.xz
1 1 441.1 MiB 1’670.5 MiB 0.264 CRC64 data/en_wikipedia_train.10.jsonl.xz
1 1 476.9 MiB 1’803.6 MiB 0.264 CRC64 data/en_wikipedia_train.1.jsonl.xz
1 1 476.9 MiB 1’802.5 MiB 0.265 CRC64 data/en_wikipedia_train.2.jsonl.xz
1 1 476.9 MiB 1’805.0 MiB 0.264 CRC64 data/en_wikipedia_train.3.jsonl.xz
1 1 476.9 MiB 1’804.3 MiB 0.264 CRC64 data/en_wikipedia_train.4.jsonl.xz
1 1 476.9 MiB 1’804.0 MiB 0.264 CRC64 data/en_wikipedia_train.5.jsonl.xz
1 1 476.9 MiB 1’804.1 MiB 0.264 CRC64 data/en_wikipedia_train.6.jsonl.xz
1 1 476.9 MiB 1’803.6 MiB 0.264 CRC64 data/en_wikipedia_train.7.jsonl.xz
1 1 476.9 MiB 1’805.2 MiB 0.264 CRC64 data/en_wikipedia_train.8.jsonl.xz
1 1 476.9 MiB 1’804.3 MiB 0.264 CRC64 data/en_wikipedia_train.9.jsonl.xz
1 1 1’004.9 KiB 3’492.7 KiB 0.288 CRC64 data/en_wikipedia_validation.0.jsonl.xz
1 1 216.4 MiB 1’458.0 MiB 0.148 CRC64 data/es_caselaw_train.0.jsonl.xz
1 1 586.4 KiB 3’537.8 KiB 0.166 CRC64 data/es_caselaw_validation.0.jsonl.xz
1 1 29.0 MiB 244.0 MiB 0.119 CRC64 data/es_contracts_train.0.jsonl.xz
1 1 3’826.2 KiB 31.2 MiB 0.120 CRC64 data/es_contracts_validation.0.jsonl.xz
1 1 401.8 MiB 3’054.9 MiB 0.132 CRC64 data/es_legislation_train.0.jsonl.xz
1 1 8’217.6 KiB 56.6 MiB 0.142 CRC64 data/es_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/es_other_validation.0.jsonl.xz
1 1 476.9 MiB 2’017.9 MiB 0.236 CRC64 data/es_wikipedia_train.0.jsonl.xz
1 1 476.9 MiB 2’025.0 MiB 0.235 CRC64 data/es_wikipedia_train.1.jsonl.xz
1 1 308.8 MiB 1’305.6 MiB 0.237 CRC64 data/es_wikipedia_train.2.jsonl.xz
1 1 1’339.7 KiB 5’265.5 KiB 0.254 CRC64 data/es_wikipedia_validation.0.jsonl.xz
1 1 132.5 MiB 831.3 MiB 0.159 CRC64 data/et_caselaw_train.0.jsonl.xz
1 1 387.2 KiB 2’310.9 KiB 0.168 CRC64 data/et_caselaw_validation.0.jsonl.xz
1 1 22.9 MiB 179.6 MiB 0.128 CRC64 data/et_contracts_train.0.jsonl.xz
1 1 3’164.3 KiB 26.8 MiB 0.115 CRC64 data/et_contracts_validation.0.jsonl.xz
1 1 255.2 MiB 1’908.2 MiB 0.134 CRC64 data/et_legislation_train.0.jsonl.xz
1 1 9’239.2 KiB 64.7 MiB 0.140 CRC64 data/et_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/et_other_validation.0.jsonl.xz
1 1 100.5 MiB 408.8 MiB 0.246 CRC64 data/et_wikipedia_train.0.jsonl.xz
1 1 1’352.2 KiB 4’921.0 KiB 0.275 CRC64 data/et_wikipedia_validation.0.jsonl.xz
1 1 194.5 MiB 1’359.0 MiB 0.143 CRC64 data/fi_caselaw_train.0.jsonl.xz
1 1 604.1 KiB 3’656.1 KiB 0.165 CRC64 data/fi_caselaw_validation.0.jsonl.xz
1 1 26.0 MiB 219.8 MiB 0.118 CRC64 data/fi_contracts_train.0.jsonl.xz
1 1 2’971.2 KiB 27.4 MiB 0.106 CRC64 data/fi_contracts_validation.0.jsonl.xz
1 1 334.7 MiB 2’599.3 MiB 0.129 CRC64 data/fi_legislation_train.0.jsonl.xz
1 1 7’476.3 KiB 53.9 MiB 0.136 CRC64 data/fi_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/fi_other_validation.0.jsonl.xz
1 1 255.6 MiB 1’118.0 MiB 0.229 CRC64 data/fi_wikipedia_train.0.jsonl.xz
1 1 2’464.2 KiB 9.9 MiB 0.242 CRC64 data/fi_wikipedia_validation.0.jsonl.xz
1 1 476.9 MiB 3’128.1 MiB 0.152 CRC64 data/fr_caselaw_train.0.jsonl.xz
1 1 476.9 MiB 3’104.4 MiB 0.154 CRC64 data/fr_caselaw_train.1.jsonl.xz
1 1 350.2 MiB 2’194.9 MiB 0.160 CRC64 data/fr_caselaw_train.2.jsonl.xz
1 1 603.0 KiB 3’778.7 KiB 0.160 CRC64 data/fr_caselaw_validation.0.jsonl.xz
1 1 31.9 MiB 278.3 MiB 0.115 CRC64 data/fr_contracts_train.0.jsonl.xz
1 1 3’034.4 KiB 26.6 MiB 0.111 CRC64 data/fr_contracts_validation.0.jsonl.xz
1 1 477.0 MiB 3’721.8 MiB 0.128 CRC64 data/fr_legislation_train.0.jsonl.xz
1 1 89.3 MiB 670.9 MiB 0.133 CRC64 data/fr_legislation_train.1.jsonl.xz
1 1 3’185.5 KiB 22.6 MiB 0.138 CRC64 data/fr_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/fr_other_validation.0.jsonl.xz
1 1 476.9 MiB 2’150.5 MiB 0.222 CRC64 data/fr_wikipedia_train.0.jsonl.xz
1 1 476.9 MiB 2’151.4 MiB 0.222 CRC64 data/fr_wikipedia_train.1.jsonl.xz
1 1 476.9 MiB 2’151.2 MiB 0.222 CRC64 data/fr_wikipedia_train.2.jsonl.xz
1 1 384.8 MiB 1’736.1 MiB 0.222 CRC64 data/fr_wikipedia_train.3.jsonl.xz
1 1 937.8 KiB 3’777.6 KiB 0.248 CRC64 data/fr_wikipedia_validation.0.jsonl.xz
1 1 721.9 KiB 5’663.9 KiB 0.127 CRC64 data/ga_caselaw_validation.0.jsonl.xz
1 1 1’246.1 KiB 15.6 MiB 0.078 CRC64 data/ga_contracts_validation.0.jsonl.xz
1 1 41.2 MiB 419.0 MiB 0.098 CRC64 data/ga_legislation_train.0.jsonl.xz
1 1 14.9 MiB 123.2 MiB 0.121 CRC64 data/ga_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/ga_other_validation.0.jsonl.xz
1 1 11.0 MiB 52.9 MiB 0.207 CRC64 data/ga_wikipedia_train.0.jsonl.xz
1 1 782.4 KiB 3’438.9 KiB 0.228 CRC64 data/ga_wikipedia_validation.0.jsonl.xz
1 1 72.7 MiB 460.3 MiB 0.158 CRC64 data/hr_caselaw_train.0.jsonl.xz
1 1 359.9 KiB 2’214.8 KiB 0.162 CRC64 data/hr_caselaw_validation.0.jsonl.xz
1 1 21.2 MiB 158.3 MiB 0.134 CRC64 data/hr_contracts_train.0.jsonl.xz
1 1 3’785.9 KiB 26.6 MiB 0.139 CRC64 data/hr_contracts_validation.0.jsonl.xz
1 1 160.6 MiB 1’258.7 MiB 0.128 CRC64 data/hr_legislation_train.0.jsonl.xz
1 1 11.2 MiB 86.1 MiB 0.130 CRC64 data/hr_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/hr_other_validation.0.jsonl.xz
1 1 110.3 MiB 425.5 MiB 0.259 CRC64 data/hr_wikipedia_train.0.jsonl.xz
1 1 1’743.8 KiB 6’170.1 KiB 0.283 CRC64 data/hr_wikipedia_validation.0.jsonl.xz
1 1 150.6 MiB 1’320.5 MiB 0.114 CRC64 data/hu_caselaw_train.0.jsonl.xz
1 1 423.8 KiB 3’496.6 KiB 0.121 CRC64 data/hu_caselaw_validation.0.jsonl.xz
1 1 26.9 MiB 266.0 MiB 0.101 CRC64 data/hu_contracts_train.0.jsonl.xz
1 1 3’532.6 KiB 36.1 MiB 0.096 CRC64 data/hu_contracts_validation.0.jsonl.xz
1 1 337.6 MiB 3’129.4 MiB 0.108 CRC64 data/hu_legislation_train.0.jsonl.xz
1 1 3’913.7 KiB 94.8 MiB 0.040 CRC64 data/hu_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/hu_other_validation.0.jsonl.xz
1 1 364.2 MiB 1’835.0 MiB 0.198 CRC64 data/hu_wikipedia_train.0.jsonl.xz
1 1 1’719.5 KiB 8’000.8 KiB 0.215 CRC64 data/hu_wikipedia_validation.0.jsonl.xz
1 1 459.8 MiB 2’742.8 MiB 0.168 CRC64 data/it_caselaw_train.0.jsonl.xz
1 1 577.8 KiB 3’194.2 KiB 0.181 CRC64 data/it_caselaw_validation.0.jsonl.xz
1 1 31.2 MiB 240.4 MiB 0.130 CRC64 data/it_contracts_train.0.jsonl.xz
1 1 3’068.9 KiB 24.0 MiB 0.125 CRC64 data/it_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’362.3 MiB 0.142 CRC64 data/it_legislation_train.0.jsonl.xz
1 1 38.9 MiB 238.7 MiB 0.163 CRC64 data/it_legislation_train.1.jsonl.xz
1 1 3’211.3 KiB 25.3 MiB 0.124 CRC64 data/it_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/it_other_validation.0.jsonl.xz
1 1 476.9 MiB 1’864.5 MiB 0.256 CRC64 data/it_wikipedia_train.0.jsonl.xz
1 1 476.9 MiB 1’864.8 MiB 0.256 CRC64 data/it_wikipedia_train.1.jsonl.xz
1 1 184.6 MiB 726.2 MiB 0.254 CRC64 data/it_wikipedia_train.2.jsonl.xz
1 1 1’334.0 KiB 4’843.5 KiB 0.275 CRC64 data/it_wikipedia_validation.0.jsonl.xz
1 1 136.6 MiB 975.7 MiB 0.140 CRC64 data/lt_caselaw_train.0.jsonl.xz
1 1 397.0 KiB 2’660.9 KiB 0.149 CRC64 data/lt_caselaw_validation.0.jsonl.xz
1 1 24.9 MiB 211.8 MiB 0.118 CRC64 data/lt_contracts_train.0.jsonl.xz
1 1 3’275.5 KiB 26.1 MiB 0.123 CRC64 data/lt_contracts_validation.0.jsonl.xz
1 1 274.0 MiB 2’174.1 MiB 0.126 CRC64 data/lt_legislation_train.0.jsonl.xz
1 1 9’780.7 KiB 73.4 MiB 0.130 CRC64 data/lt_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/lt_other_validation.0.jsonl.xz
1 1 72.6 MiB 349.5 MiB 0.208 CRC64 data/lt_wikipedia_train.0.jsonl.xz
1 1 1’251.2 KiB 5’369.5 KiB 0.233 CRC64 data/lt_wikipedia_validation.0.jsonl.xz
1 1 141.0 MiB 1’106.7 MiB 0.127 CRC64 data/lv_caselaw_train.0.jsonl.xz
1 1 410.3 KiB 3’004.0 KiB 0.137 CRC64 data/lv_caselaw_validation.0.jsonl.xz
1 1 24.9 MiB 224.5 MiB 0.111 CRC64 data/lv_contracts_train.0.jsonl.xz
1 1 3’629.0 KiB 33.6 MiB 0.106 CRC64 data/lv_contracts_validation.0.jsonl.xz
1 1 271.5 MiB 2’377.4 MiB 0.114 CRC64 data/lv_legislation_train.0.jsonl.xz
1 1 10.5 MiB 87.5 MiB 0.120 CRC64 data/lv_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/lv_other_validation.0.jsonl.xz
1 1 47.5 MiB 254.7 MiB 0.186 CRC64 data/lv_wikipedia_train.0.jsonl.xz
1 1 984.1 KiB 4’559.4 KiB 0.216 CRC64 data/lv_wikipedia_validation.0.jsonl.xz
1 1 132.2 MiB 956.6 MiB 0.138 CRC64 data/mt_caselaw_train.0.jsonl.xz
1 1 396.1 KiB 2’680.0 KiB 0.148 CRC64 data/mt_caselaw_validation.0.jsonl.xz
1 1 25.6 MiB 201.0 MiB 0.127 CRC64 data/mt_contracts_train.0.jsonl.xz
1 1 4’178.4 KiB 34.0 MiB 0.120 CRC64 data/mt_contracts_validation.0.jsonl.xz
1 1 270.7 MiB 2’121.7 MiB 0.128 CRC64 data/mt_legislation_train.0.jsonl.xz
1 1 11.4 MiB 84.2 MiB 0.135 CRC64 data/mt_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/mt_other_validation.0.jsonl.xz
1 1 4’608.3 KiB 19.5 MiB 0.231 CRC64 data/mt_wikipedia_train.0.jsonl.xz
1 1 1’405.0 KiB 5’754.4 KiB 0.244 CRC64 data/mt_wikipedia_validation.0.jsonl.xz
1 1 223.1 MiB 1’338.9 MiB 0.167 CRC64 data/nl_caselaw_train.0.jsonl.xz
1 1 566.0 KiB 3’152.2 KiB 0.180 CRC64 data/nl_caselaw_validation.0.jsonl.xz
1 1 31.6 MiB 242.3 MiB 0.130 CRC64 data/nl_contracts_train.0.jsonl.xz
1 1 2’663.9 KiB 22.4 MiB 0.116 CRC64 data/nl_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’311.9 MiB 0.144 CRC64 data/nl_legislation_train.0.jsonl.xz
1 1 41.1 MiB 268.7 MiB 0.153 CRC64 data/nl_legislation_train.1.jsonl.xz
1 1 3’678.8 KiB 72.9 MiB 0.049 CRC64 data/nl_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/nl_other_validation.0.jsonl.xz
1 1 476.9 MiB 1’856.9 MiB 0.257 CRC64 data/nl_wikipedia_train.0.jsonl.xz
1 1 59.9 MiB 236.4 MiB 0.253 CRC64 data/nl_wikipedia_train.1.jsonl.xz
1 1 979.4 KiB 3’414.8 KiB 0.287 CRC64 data/nl_wikipedia_validation.0.jsonl.xz
1 1 147.9 MiB 1’034.1 MiB 0.143 CRC64 data/pl_caselaw_train.0.jsonl.xz
1 1 416.2 KiB 2’737.2 KiB 0.152 CRC64 data/pl_caselaw_validation.0.jsonl.xz
1 1 24.8 MiB 208.9 MiB 0.119 CRC64 data/pl_contracts_train.0.jsonl.xz
1 1 4’241.9 KiB 34.6 MiB 0.120 CRC64 data/pl_contracts_validation.0.jsonl.xz
1 1 325.0 MiB 2’646.2 MiB 0.123 CRC64 data/pl_legislation_train.0.jsonl.xz
1 1 3’593.0 KiB 29.0 MiB 0.121 CRC64 data/pl_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/pl_other_validation.0.jsonl.xz
1 1 476.9 MiB 2’144.7 MiB 0.222 CRC64 data/pl_wikipedia_train.0.jsonl.xz
1 1 189.5 MiB 864.0 MiB 0.219 CRC64 data/pl_wikipedia_train.1.jsonl.xz
1 1 1’233.2 KiB 4’965.9 KiB 0.248 CRC64 data/pl_wikipedia_validation.0.jsonl.xz
1 1 476.9 MiB 3’494.2 MiB 0.136 CRC64 data/pt_caselaw_train.0.jsonl.xz
1 1 476.9 MiB 3’392.1 MiB 0.141 CRC64 data/pt_caselaw_train.10.jsonl.xz
1 1 476.9 MiB 3’505.3 MiB 0.136 CRC64 data/pt_caselaw_train.11.jsonl.xz
1 1 476.9 MiB 3’524.1 MiB 0.135 CRC64 data/pt_caselaw_train.12.jsonl.xz
1 1 476.9 MiB 3’458.4 MiB 0.138 CRC64 data/pt_caselaw_train.13.jsonl.xz
1 1 476.9 MiB 3’602.9 MiB 0.132 CRC64 data/pt_caselaw_train.14.jsonl.xz
1 1 476.9 MiB 4’923.4 MiB 0.097 CRC64 data/pt_caselaw_train.15.jsonl.xz
1 1 476.9 MiB 6’648.8 MiB 0.072 CRC64 data/pt_caselaw_train.16.jsonl.xz
1 1 476.9 MiB 7’461.0 MiB 0.064 CRC64 data/pt_caselaw_train.17.jsonl.xz
1 1 476.9 MiB 6’866.4 MiB 0.069 CRC64 data/pt_caselaw_train.18.jsonl.xz
1 1 476.9 MiB 3’455.7 MiB 0.138 CRC64 data/pt_caselaw_train.19.jsonl.xz
1 1 476.9 MiB 3’513.7 MiB 0.136 CRC64 data/pt_caselaw_train.1.jsonl.xz
1 1 476.9 MiB 3’477.3 MiB 0.137 CRC64 data/pt_caselaw_train.20.jsonl.xz
1 1 476.9 MiB 3’492.8 MiB 0.137 CRC64 data/pt_caselaw_train.21.jsonl.xz
1 1 476.9 MiB 3’528.6 MiB 0.135 CRC64 data/pt_caselaw_train.22.jsonl.xz
1 1 94.1 MiB 694.3 MiB 0.135 CRC64 data/pt_caselaw_train.23.jsonl.xz
1 1 476.9 MiB 3’436.5 MiB 0.139 CRC64 data/pt_caselaw_train.2.jsonl.xz
1 1 476.9 MiB 3’527.9 MiB 0.135 CRC64 data/pt_caselaw_train.3.jsonl.xz
1 1 476.9 MiB 3’492.2 MiB 0.137 CRC64 data/pt_caselaw_train.4.jsonl.xz
1 1 476.9 MiB 3’554.8 MiB 0.134 CRC64 data/pt_caselaw_train.5.jsonl.xz
1 1 476.9 MiB 3’494.7 MiB 0.136 CRC64 data/pt_caselaw_train.6.jsonl.xz
1 1 476.9 MiB 3’439.1 MiB 0.139 CRC64 data/pt_caselaw_train.7.jsonl.xz
1 1 476.9 MiB 3’625.6 MiB 0.132 CRC64 data/pt_caselaw_train.8.jsonl.xz
1 1 476.9 MiB 3’726.4 MiB 0.128 CRC64 data/pt_caselaw_train.9.jsonl.xz
1 1 798.9 KiB 4’820.6 KiB 0.166 CRC64 data/pt_caselaw_validation.0.jsonl.xz
1 1 28.4 MiB 243.2 MiB 0.117 CRC64 data/pt_contracts_train.0.jsonl.xz
1 1 3’899.7 KiB 32.6 MiB 0.117 CRC64 data/pt_contracts_validation.0.jsonl.xz
1 1 406.2 MiB 3’217.5 MiB 0.126 CRC64 data/pt_legislation_train.0.jsonl.xz
1 1 8’350.4 KiB 58.4 MiB 0.140 CRC64 data/pt_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/pt_other_validation.0.jsonl.xz
1 1 476.9 MiB 2’050.4 MiB 0.233 CRC64 data/pt_wikipedia_train.0.jsonl.xz
1 1 140.6 MiB 617.4 MiB 0.228 CRC64 data/pt_wikipedia_train.1.jsonl.xz
1 1 1’480.0 KiB 6’344.8 KiB 0.233 CRC64 data/pt_wikipedia_validation.0.jsonl.xz
1 1 124.9 MiB 956.9 MiB 0.131 CRC64 data/ro_caselaw_train.0.jsonl.xz
1 1 400.4 KiB 2’785.0 KiB 0.144 CRC64 data/ro_caselaw_validation.0.jsonl.xz
1 1 24.6 MiB 210.5 MiB 0.117 CRC64 data/ro_contracts_train.0.jsonl.xz
1 1 3’886.3 KiB 34.3 MiB 0.111 CRC64 data/ro_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 4’496.4 MiB 0.106 CRC64 data/ro_legislation_train.0.jsonl.xz
1 1 97.6 MiB 1’053.6 MiB 0.093 CRC64 data/ro_legislation_train.1.jsonl.xz
1 1 3’691.3 KiB 33.4 MiB 0.108 CRC64 data/ro_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/ro_other_validation.0.jsonl.xz
1 1 179.7 MiB 833.0 MiB 0.216 CRC64 data/ro_wikipedia_train.0.jsonl.xz
1 1 2’089.4 KiB 9’053.5 KiB 0.231 CRC64 data/ro_wikipedia_validation.0.jsonl.xz
1 1 143.6 MiB 1’094.2 MiB 0.131 CRC64 data/sk_caselaw_train.0.jsonl.xz
1 1 415.8 KiB 3’012.4 KiB 0.138 CRC64 data/sk_caselaw_validation.0.jsonl.xz
1 1 25.9 MiB 226.7 MiB 0.114 CRC64 data/sk_contracts_train.0.jsonl.xz
1 1 3’933.6 KiB 35.2 MiB 0.109 CRC64 data/sk_contracts_validation.0.jsonl.xz
1 1 322.4 MiB 2’745.5 MiB 0.117 CRC64 data/sk_legislation_train.0.jsonl.xz
1 1 3’735.8 KiB 31.7 MiB 0.115 CRC64 data/sk_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/sk_other_validation.0.jsonl.xz
1 1 91.2 MiB 435.3 MiB 0.210 CRC64 data/sk_wikipedia_train.0.jsonl.xz
1 1 1’724.4 KiB 7’568.3 KiB 0.228 CRC64 data/sk_wikipedia_validation.0.jsonl.xz
1 1 131.9 MiB 815.8 MiB 0.162 CRC64 data/sl_caselaw_train.0.jsonl.xz
1 1 392.8 KiB 2’328.2 KiB 0.169 CRC64 data/sl_caselaw_validation.0.jsonl.xz
1 1 22.9 MiB 172.4 MiB 0.133 CRC64 data/sl_contracts_train.0.jsonl.xz
1 1 3’493.7 KiB 27.2 MiB 0.125 CRC64 data/sl_contracts_validation.0.jsonl.xz
1 1 388.1 MiB 2’732.3 MiB 0.142 CRC64 data/sl_legislation_train.0.jsonl.xz
1 1 3’429.8 KiB 24.3 MiB 0.138 CRC64 data/sl_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/sl_other_validation.0.jsonl.xz
1 1 104.6 MiB 425.6 MiB 0.246 CRC64 data/sl_wikipedia_train.0.jsonl.xz
1 1 1’392.8 KiB 5’004.9 KiB 0.278 CRC64 data/sl_wikipedia_validation.0.jsonl.xz
1 1 189.5 MiB 1’325.4 MiB 0.143 CRC64 data/sv_caselaw_train.0.jsonl.xz
1 1 581.2 KiB 3’566.7 KiB 0.163 CRC64 data/sv_caselaw_validation.0.jsonl.xz
1 1 25.3 MiB 211.7 MiB 0.119 CRC64 data/sv_contracts_train.0.jsonl.xz
1 1 2’890.6 KiB 26.0 MiB 0.108 CRC64 data/sv_contracts_validation.0.jsonl.xz
1 1 324.5 MiB 2’570.4 MiB 0.126 CRC64 data/sv_legislation_train.0.jsonl.xz
1 1 6’984.8 KiB 50.1 MiB 0.136 CRC64 data/sv_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/sv_other_validation.0.jsonl.xz
1 1 333.4 MiB 1’668.1 MiB 0.200 CRC64 data/sv_wikipedia_train.0.jsonl.xz
1 1 1’088.6 KiB 4’372.9 KiB 0.249 CRC64 data/sv_wikipedia_validation.0.jsonl.xz
-------------------------------------------------------------------------------
374 351 90.1 GiB 579.9 GiB 0.155 CRC64 374 files
```

## Dataset Creation

This dataset has been created by combining the following datasets: Native Multi Legal Pile, Eurlex Resources, MC4 Legal, Pile of Law, EU Wikipedias. It has been filtered to remove short documents (less than 64 whitespace-separated tokens) and documents with more than 30% punctuation or numbers (see prepare_legal_data.py for more details).

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
TODO add citation
```

### Contributions

Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
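The length and noise filters described under Dataset Creation are easy to approximate. The sketch below is an illustrative re-implementation, not the actual `prepare_legal_data.py` code; in particular, applying the 30% threshold per character over punctuation and digits is an assumption.

```python
import string

def keep_document(text: str, min_tokens: int = 64, max_noise_ratio: float = 0.30) -> bool:
    """Approximate the document filter described above (illustrative only)."""
    tokens = text.split()  # whitespace-separated tokens
    if len(tokens) < min_tokens:  # drop short documents
        return False
    # Assumption: punctuation and digits are counted per character.
    noisy = sum(ch in string.punctuation or ch.isdigit() for ch in text)
    return noisy / max(len(text), 1) <= max_noise_ratio
```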
joelniklaus/MultiLegalPileWikipediaFiltered
[ "task_categories:fill-mask", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv", "license:cc-by-4.0", "region:us" ]
2023-01-31T21:51:25+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["fill-mask"], "pretty_name": "MultiLegalPileWikipediaFiltered: A filtered version of the MultiLegalPile dataset, together with wikipedia articles."}
2023-03-28T18:23:38+00:00
40118247edd030d7bda32baede553414efab6b19
# Enaic31 Artstyle LoRA

# Use Cases

The LoRA is in itself compatible with a wide range of models. However, it is most effective when used with Kenshi or AbyssOrangeMix2.

The LoRA itself was trained with the token: ```skistyle```.

I would suggest using the token with AbyssOrangeMix2, but not with Kenshi, since I got better results that way.

The models mentioned:
1. AbyssOrangeMix2 from [WarriorMama777](https://huggingface.co/WarriorMama777/OrangeMixs)
2. Kenshi Model from [Luna](https://huggingface.co/SweetLuna/Kenshi)

## Strength

I would personally use these strengths with the associated model:

Soft-Version:
- 0.6-0.85 for AbyssOrangeMix2
- 0.5-0.75 for Kenshi

Hard-Version:
- 0.4-0.6 for AbyssOrangeMix2
- 0.3-0.55 for Kenshi

# Showcase

**Example 1**

<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/enaic31_LoRA/resolve/main/preview/Preview%20(2).png"/>

```
skistyle, 1girl, solo, animal ears, long hair, looking at viewer, bell, upper body, bangs, closed mouth, animal ear fluff, hair between eyes, grey eyes, blush, grey hair, cat ears, neck bell, shirt,
Steps: 32, Sampler: Euler a, CFG scale: 7
```

**Example 2**

<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/enaic31_LoRA/resolve/main/preview/Preview%20(3).png"/>

```
skistyle, 1girl, solo, animal ears, long hair, looking at viewer, bell, upper body, bangs, closed mouth, animal ear fluff, hair between eyes, grey eyes, blush, grey hair, cat ears, neck bell, shirt,
Steps: 32, Sampler: Euler a, CFG scale: 7
```

**Example 3**

<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/enaic31_LoRA/resolve/main/preview/Preview%20(4).png"/>

```
skistyle, small breasts, dark-skinned female, shorts, dark skin, hair ornament, black hair, smile, glasses, v, cleavage, hairclip, brown hair, grin, aged up, brown eyes, white background, 1girl, looking at viewer, off shoulder, shirt, sweater, simple background, short shorts, denim shorts
Steps: 32, Sampler: Euler a, CFG scale: 7
```

# License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
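For readers who prefer diffusers over a WebUI, a minimal loading sketch is below. The base-model path and the LoRA weight file name are placeholders (check the repository for the actual file names), and the `cross_attention_kwargs` scale plays the role of the strength values listed above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model: one of the checkpoints recommended above (path is a placeholder).
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/AbyssOrangeMix2", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA; the weight file name here is hypothetical.
pipe.load_lora_weights("Nerfgun3/enaic31_LoRA", weight_name="enaic31.safetensors")

image = pipe(
    "skistyle, 1girl, solo, looking at viewer",
    num_inference_steps=32,
    guidance_scale=7.0,
    cross_attention_kwargs={"scale": 0.7},  # LoRA strength, see the ranges above
).images[0]
image.save("preview.png")
```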
Nerfgun3/enaic31_LoRA
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2023-01-31T22:04:21+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/enaic31_LoRA/resolve/main/preview/Preview%20(1).png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2023-01-31T22:22:09+00:00
b1d213a7f1746bd3ca55bb89af1c354aabf2bb8e
Chunte/chunteset
[ "license:creativeml-openrail-m", "region:us" ]
2023-01-31T22:10:27+00:00
{"license": "creativeml-openrail-m"}
2023-01-31T22:10:27+00:00
32c401cf2474b9249e6881a3b09469189e3df757
# Dataset Card for SemEval2018Task7

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://lipn.univ-paris13.fr/~gabor/semeval2018task7/](https://lipn.univ-paris13.fr/~gabor/semeval2018task7/)
- **Repository:** [https://github.com/gkata/SemEval2018Task7/tree/testing](https://github.com/gkata/SemEval2018Task7/tree/testing)
- **Paper:** [SemEval-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers](https://aclanthology.org/S18-1111/)
- **Leaderboard:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
- **Size of downloaded dataset files:** 1.93 MB

### Dataset Summary

Semeval2018Task7 is a dataset for semantic relation extraction and classification in scientific papers. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.

The three subtasks are:

- Subtask 1.1: Relation classification on clean data
  - In the training data, semantic relations are manually annotated between entities.
  - In the test data, only entity annotations and unlabeled relation instances are given.
  - Given a scientific publication, the task is to predict the semantic relation between the entities.
- Subtask 1.2: Relation classification on noisy data
  - Entity occurrences are automatically annotated in both the training and the test data.
  - The task is to predict the semantic relation between the entities.
- Subtask 2: Metrics for the extraction and classification scenario
  - Evaluation of relation extraction
  - Evaluation of relation classification

The relation types are USAGE, RESULT, MODEL-FEATURE, PART_WHOLE, TOPIC, and COMPARE.

The following example shows a text snippet with the information provided in the test data:

Korean, a \<entity id="H01-1041.10">verb final language\</entity> with \<entity id="H01-1041.11">overt case markers\</entity> (...)
- A relation instance is identified by the unique identifier of the entities in the pair, e.g. (H01-1041.10, H01-1041.11)
- The information to be predicted is the relation class label: MODEL-FEATURE(H01-1041.10, H01-1041.11). For details, see the paper https://aclanthology.org/S18-1111/.

### Supported Tasks and Leaderboards

- **Tasks:** Relation extraction and classification in scientific papers
- **Leaderboards:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)

### Languages

The language in the dataset is English.

## Dataset Structure

### Data Instances

#### subtask_1.1

- **Size of downloaded dataset files:** 714 KB

An example of 'train' looks as follows:

```json
{"id": "H01-1041",
 "title": "Interlingua-Based Broad-Coverage Korean-to-English Translation in CCLINC",
 "abstract": 'At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory) . The CCLINC Korean-to-English translation system consists of two core modules , language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame . The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers , relatively free word order , and frequent omissions of arguments ). (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language . (iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars . Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document.',
 "entities": [{'id': 'H01-1041.1', 'char_start': 54, 'char_end': 97}, {'id': 'H01-1041.2', 'char_start': 99, 'char_end': 161}, {'id': 'H01-1041.3', 'char_start': 169, 'char_end': 211}, {'id': 'H01-1041.4', 'char_start': 229, 'char_end': 240}, {'id': 'H01-1041.5', 'char_start': 244, 'char_end': 288}, {'id': 'H01-1041.6', 'char_start': 304, 'char_end': 342}, {'id': 'H01-1041.7', 'char_start': 353, 'char_end': 366}, {'id': 'H01-1041.8', 'char_start': 431, 'char_end': 437}, {'id': 'H01-1041.9', 'char_start': 442, 'char_end': 447}, {'id': 'H01-1041.10', 'char_start': 452, 'char_end': 470}, {'id': 'H01-1041.11', 'char_start': 477, 'char_end': 494}, {'id': 'H01-1041.12', 'char_start': 509, 'char_end': 523}, {'id': 'H01-1041.13', 'char_start': 553, 'char_end': 561}, {'id': 'H01-1041.14', 'char_start': 584, 'char_end': 594}, {'id': 'H01-1041.15', 'char_start': 600, 'char_end': 624}, {'id': 'H01-1041.16', 'char_start': 639, 'char_end': 659}, {'id': 'H01-1041.17', 'char_start': 668, 'char_end': 682}, {'id': 'H01-1041.18', 'char_start': 692, 'char_end': 715}, {'id': 'H01-1041.19', 'char_start': 736, 'char_end': 742}, {'id': 'H01-1041.20', 'char_start': 748, 'char_end': 796}, {'id': 'H01-1041.21', 'char_start': 823, 'char_end': 847}, {'id': 'H01-1041.22', 'char_start': 918, 'char_end': 935}, {'id': 'H01-1041.23', 'char_start': 981, 'char_end': 997}],
 "relation": [{'label': 3, 'arg1': 'H01-1041.3', 'arg2': 'H01-1041.4', 'reverse': True}, {'label': 0, 'arg1': 'H01-1041.8', 'arg2': 'H01-1041.9', 'reverse': False}, {'label': 2, 'arg1': 'H01-1041.10', 'arg2': 'H01-1041.11', 'reverse': True}, {'label': 0, 'arg1': 'H01-1041.14', 'arg2': 'H01-1041.15', 'reverse': True}]}
```

#### Subtask_1.2

- **Size of downloaded dataset files:** 1.00 MB

An example of 'train' looks as follows:

```json
{'id': 'L08-1450',
 'title': '\nA LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.\n',
 'abstract': 'Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguous data because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is a data model and an encoding scheme based on LAF/GrAF ( Ide and Romary, 2006 ; Ide and Suderman, 2007 ) which provides a flexible framework for encoding underspecified representations.
We show how a set of dependency structures and a set of TiGer graphs ( Brants et al., 2002 ) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.\n',
 'entities': [{'id': 'L08-1450.4', 'char_start': 0, 'char_end': 3}, {'id': 'L08-1450.5', 'char_start': 5, 'char_end': 10}, {'id': 'L08-1450.6', 'char_start': 25, 'char_end': 31}, {'id': 'L08-1450.7', 'char_start': 61, 'char_end': 64}, {'id': 'L08-1450.8', 'char_start': 66, 'char_end': 72}, {'id': 'L08-1450.9', 'char_start': 82, 'char_end': 85}, {'id': 'L08-1450.10', 'char_start': 92, 'char_end': 100}, {'id': 'L08-1450.11', 'char_start': 102, 'char_end': 110}, {'id': 'L08-1450.12', 'char_start': 128, 'char_end': 142}, {'id': 'L08-1450.13', 'char_start': 181, 'char_end': 194}, {'id': 'L08-1450.14', 'char_start': 208, 'char_end': 211}, {'id': 'L08-1450.15', 'char_start': 255, 'char_end': 264}, {'id': 'L08-1450.16', 'char_start': 282, 'char_end': 286}, {'id': 'L08-1450.17', 'char_start': 408, 'char_end': 420}, {'id': 'L08-1450.18', 'char_start': 425, 'char_end': 443}, {'id': 'L08-1450.19', 'char_start': 450, 'char_end': 453}, {'id': 'L08-1450.20', 'char_start': 455, 'char_end': 459}, {'id': 'L08-1450.21', 'char_start': 481, 'char_end': 484}, {'id': 'L08-1450.22', 'char_start': 486, 'char_end': 490}, {'id': 'L08-1450.23', 'char_start': 508, 'char_end': 513}, {'id': 'L08-1450.24', 'char_start': 515, 'char_end': 519}, {'id': 'L08-1450.25', 'char_start': 535, 'char_end': 537}, {'id': 'L08-1450.26', 'char_start': 559, 'char_end': 561}, {'id': 'L08-1450.27', 'char_start': 591, 'char_end': 598}, {'id': 'L08-1450.28', 'char_start': 611, 'char_end': 619}, {'id': 'L08-1450.29', 'char_start': 649, 'char_end': 663}, {'id': 'L08-1450.30', 'char_start': 687, 'char_end': 707}, {'id': 'L08-1450.31', 'char_start': 722, 'char_end': 726}, {'id': 'L08-1450.32', 'char_start': 801, 'char_end': 808}, {'id': 'L08-1450.33', 'char_start': 841, 'char_end': 845}, {'id': 'L08-1450.34', 'char_start': 847, 'char_end': 852}, {'id': 'L08-1450.35', 'char_start': 857, 'char_end': 864}, {'id': 'L08-1450.36', 'char_start': 866, 'char_end': 872}, {'id': 'L08-1450.37', 'char_start': 902, 'char_end': 910}, {'id': 'L08-1450.1', 'char_start': 12, 'char_end': 16}, {'id': 'L08-1450.2', 'char_start': 27, 'char_end': 32}, {'id': 'L08-1450.3', 'char_start': 72, 'char_end': 80}],
 'relation': [{'label': 1, 'arg1': 'L08-1450.12', 'arg2': 'L08-1450.13', 'reverse': False}, {'label': 5, 'arg1': 'L08-1450.17', 'arg2': 'L08-1450.18', 'reverse': False}, {'label': 1, 'arg1': 'L08-1450.28', 'arg2': 'L08-1450.29', 'reverse': False}, {'label': 3, 'arg1': 'L08-1450.30', 'arg2': 'L08-1450.32', 'reverse': False}, {'label': 3, 'arg1': 'L08-1450.34', 'arg2': 'L08-1450.35', 'reverse': False}, {'label': 3, 'arg1': 'L08-1450.36', 'arg2': 'L08-1450.37', 'reverse': True}]}
```

### Data Fields

#### subtask_1_1

- `id`: the instance id of this abstract, a `string` feature.
- `title`: the title of this abstract, a `string` feature.
- `abstract`: the abstract from the scientific papers, a `string` feature.
- `entities`: the entity id's for the key phrases, a `list` of entity id's.
  - `id`: the instance id of this sentence, a `string` feature.
  - `char_start`: the 0-based index of the entity starting, an `int` feature.
  - `char_end`: the 0-based index of the entity ending, an `int` feature.
- `relation`: the list of relations of this sentence marking the relation between the key phrases, a `list` of classification labels.
  - `label`: the relation label between the key phrases, a classification label.
  - `arg1`: the entity id of this key phrase, a `string` feature.
  - `arg2`: the entity id of the related key phrase, a `string` feature.
  - `reverse`: `True` if the relation direction is reversed, otherwise `False`, a `bool` feature.

```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
```

#### subtask_1_2

- `id`: the instance id of this abstract, a `string` feature.
- `title`: the title of this abstract, a `string` feature.
- `abstract`: the abstract from the scientific papers, a `string` feature.
- `entities`: the entity id's for the key phrases, a `list` of entity id's.
  - `id`: the instance id of this sentence, a `string` feature.
  - `char_start`: the 0-based index of the entity starting, an `int` feature.
  - `char_end`: the 0-based index of the entity ending, an `int` feature.
- `relation`: the list of relations of this sentence marking the relation between the key phrases, a `list` of classification labels.
  - `label`: the relation label between the key phrases, a classification label.
  - `arg1`: the entity id of this key phrase, a `string` feature.
  - `arg2`: the entity id of the related key phrase, a `string` feature.
  - `reverse`: `True` if the relation direction is reversed, otherwise `False`, a `bool` feature.

```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
```

### Data Splits

|             |           | Train | Test |
|-------------|-----------|-------|------|
| subtask_1_1 | text      | 2807  | 3326 |
|             | relations | 1228  | 1248 |
| subtask_1_2 | text      | 1196  | 1193 |
|             | relations | 335   | 355  |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@inproceedings{gabor-etal-2018-semeval,
    title = "{S}em{E}val-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers",
    author = {G{\'a}bor, Kata and Buscaldi, Davide and Schumann, Anne-Kathrin and QasemiZadeh, Behrang and Zargayouna, Ha{\"\i}fa and Charnois, Thierry},
    booktitle = "Proceedings of the 12th International Workshop on Semantic Evaluation",
    month = jun,
    year = "2018",
    address = "New Orleans, Louisiana",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S18-1111",
    doi = "10.18653/v1/S18-1111",
    pages = "679--688",
    abstract = "This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",
}
```

### Contributions

Thanks to [@basvoju](https://github.com/basvoju) for adding this dataset.
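To make the label mapping above concrete, here is a minimal loading sketch. The config name `subtask_1_1` is inferred from the field sections above, and `relation` is assumed to decode as a list of dicts (depending on the feature encoding it may instead come back as a dict of lists, so adapt accordingly).

```python
from datasets import load_dataset

# Assumption: the config name matches the field sections above.
ds = load_dataset("Basvoju/SemEval2018Task7", "subtask_1_1", split="train")

ID2LABEL = {0: "", 1: "USAGE", 2: "RESULT", 3: "MODEL-FEATURE",
            4: "PART_WHOLE", 5: "TOPIC", 6: "COMPARE"}

example = ds[0]
# Assumption: `relation` is a list of dicts; adapt if it is a dict of lists.
for rel in example["relation"]:
    direction = "<-" if rel["reverse"] else "->"
    print(f'{ID2LABEL[rel["label"]]}: {rel["arg1"]} {direction} {rel["arg2"]}')
```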
Basvoju/SemEval2018Task7
[ "task_categories:text-classification", "task_ids:entity-linking-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "Relation Classification", "Relation extraction", "Scientific papers", "Research papers", "region:us" ]
2023-01-31T22:13:20+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["entity-linking-classification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Semeval2018Task7 is a dataset that describes the Semantic Relation Extraction and Classification in Scientific Papers", "tags": ["Relation Classification", "Relation extraction", "Scientific papers", "Research papers"], "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "text-classification", "task_id": "entity_extraction"}]}
2023-02-03T12:59:36+00:00
2087f7c3a05a18c34cc916374017205b1d1dd6fd
# Dataset Card for "patched_test_p_80_m1_predictions_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
roa7n/patched_test_p_80_m1_predictions_v2
[ "region:us" ]
2023-01-31T22:26:20+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1345772120, "num_examples": 2362374}], "download_size": 118878695, "dataset_size": 1345772120}}
2023-01-31T22:26:47+00:00
853eb0af253a307ead4db0a80e03d86d3ec436e2
Manually created seed dataset used for bootstrapping in the Self-Instruct paper (https://arxiv.org/abs/2212.10560). This is part of the instruction fine-tuning datasets.
HuggingFaceH4/self-instruct-seed
[ "task_categories:conversational", "size_categories:n<1K", "language:en", "license:apache-2.0", "arxiv:2212.10560", "region:us" ]
2023-01-31T22:33:52+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["conversational"]}
2023-01-31T22:37:02+00:00
5e66ccbecb559b13e26923c982f1dc7b0fca7f38
This dataset is part of Anthropic's HH data used to train their RLHF assistant (https://github.com/anthropics/hh-rlhf). The data contains the first utterance from the human to the dialogue agent and the number of words in that utterance. The sampled version is a random sample of size 200.
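Since the card does not document the split or column names, a safe way to explore the data is to load it and print the schema; nothing below assumes a particular field layout beyond the existence of a `train` split.

```python
from datasets import load_dataset

# Assumption: a "train" split exists; adjust if load_dataset reports otherwise.
ds = load_dataset("HuggingFaceH4/hh-rlhf", split="train")

print(ds.column_names)  # discover the actual fields (utterance text, word count, ...)
print(ds[0])            # inspect one example
```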
HuggingFaceH4/hh-rlhf
[ "task_categories:conversational", "language:en", "license:mit", "region:us" ]
2023-01-31T22:37:47+00:00
{"language": ["en"], "license": "mit", "task_categories": ["conversational"]}
2023-01-31T22:46:52+00:00
14846cdb76137d189c6626ebffa5f51060bf0cf2
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_350m_Attributes_Caption_ns_3333" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_350m_Attributes_Caption_ns_3333
[ "region:us" ]
2023-01-31T23:45:42+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 299298854.375, "num_examples": 3333}, {"name": "fewshot_1_bs_16", "num_bytes": 300147792.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 301863124.375, "num_examples": 3333}], "download_size": 891924279, "dataset_size": 901309771.125}}
2023-02-01T00:06:13+00:00
a647120cd958138d2e6dc3982a096031d9388fe6
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_350m_Visclues_ns_3333" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_350m_Visclues_ns_3333
[ "region:us" ]
2023-01-31T23:49:45+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 299564551.375, "num_examples": 3333}, {"name": "fewshot_1_bs_16", "num_bytes": 300685331.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 302937982.375, "num_examples": 3333}], "download_size": 892473384, "dataset_size": 903187865.125}}
2023-02-01T00:16:00+00:00
868ccb0da5159e97c2e35b7ff606ad446415973b
ericyu3/openassistant_inpainted_dialogs
[ "license:apache-2.0", "region:us" ]
2023-02-01T00:03:34+00:00
{"license": "apache-2.0"}
2023-02-01T00:56:31+00:00
3baa66c608ebe69c697d8dbdf3a781c89e5771dc
# Dataset Card for "c4-dedup" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
datablations/c4-filter
[ "region:us" ]
2023-02-01T00:15:28+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "perplexity_score", "dtype": "float64"}, {"name": "text_length", "dtype": "int64"}, {"name": "domain", "dtype": "null"}, {"name": "dup_ratio", "dtype": "float64"}, {"name": "pairs", "sequence": {"sequence": "int64"}}, {"name": "repetitions", "sequence": "binary"}, {"name": "included_in_dedup", "dtype": "bool"}, {"name": "cluster", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 959334093604, "num_examples": 364868892}], "download_size": 586254318285, "dataset_size": 959334093604}}
2023-02-01T10:29:51+00:00
dc7414c7bfd84c65af77209a9e18b229cf8ed127
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_1.3b_Attributes_Caption_ns_3333" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_1.3b_Attributes_Caption_ns_3333
[ "region:us" ]
2023-02-01T00:26:33+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 299298610.375, "num_examples": 3333}, {"name": "fewshot_1_bs_16", "num_bytes": 300147760.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 301863001.375, "num_examples": 3333}], "download_size": 891928796, "dataset_size": 901309372.125}}
2023-02-01T00:57:39+00:00
5e5f230b00231a8f1aadf63f3bcbd2ed42fc4bba
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_1.3b_Visclues_ns_3333" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_1.3b_Visclues_ns_3333
[ "region:us" ]
2023-02-01T00:32:01+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 299564288.375, "num_examples": 3333}, {"name": "fewshot_1_bs_16", "num_bytes": 300685282.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 302937948.375, "num_examples": 3333}], "download_size": 892476963, "dataset_size": 903187519.125}}
2023-02-01T01:13:57+00:00
0b595fb6a7b5df5b9e02c7612bbfd5a2936585d6
# Dataset Card for "news-summary" ## Dataset Description - **Homepage:** Kaggle Challenge - **Repository:** https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset?select=True.csv - **Paper:** N.A. - **Leaderboard:** N.A. - **Point of Contact:** N.A. ### Dataset Summary Can you use this data set to summarize news articles? ### Languages english ### Citation Information Acknowledgements Ahmed H, Traore I, Saad S. “Detecting opinion spams and fake news using text classification”, Journal of Security and Privacy, Volume 1, Issue 1, Wiley, January/February 2018. Ahmed H, Traore I, Saad S. (2017) “Detection of Online Fake News Using N-Gram Analysis and Machine Learning Techniques. In: Traore I., Woungang I., Awad A. (eds) Intelligent, Secure, and Dependable Systems in Distributed and Cloud Environments. ISDDC 2017. Lecture Notes in Computer Science, vol 10618. Springer, Cham (pp. 127-138). ### Contributions Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset.
jeffboudier/argilla-news-summary
[ "task_categories:summarization", "task_ids:news-articles-summarization", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2023-02-01T02:08:38+00:00
{"language": ["en"], "license": ["cc-by-nc-4.0"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "prediction", "list": [{"name": "score", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}, {"name": "vectors", "struct": [{"name": "mini-lm-sentence-transformers", "sequence": "float64"}]}], "splits": [{"name": "train", "num_bytes": 5537696, "num_examples": 1000}], "download_size": 4137087, "dataset_size": 5537696}, "duplicated_from": "argilla/news-summary"}
2023-02-01T02:08:39+00:00
99fc6fed11280eace9b174001e3d4eb1398b6e5e
# Dataset Card for "ko-conversation-summary" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mk9165/ko-conversation-summary
[ "region:us" ]
2023-02-01T02:13:01+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120472570, "num_examples": 279992}, {"name": "test", "num_bytes": 15123198, "num_examples": 35004}], "download_size": 87984817, "dataset_size": 135595768}}
2023-02-01T02:13:26+00:00
f6dd3ef1cfd2a3afd02c2327c6702a611dca38b2
# Dataset Card for "ko-voicefishing-classification" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mk9165/ko-voicefishing-classification
[ "region:us" ]
2023-02-01T02:13:28+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2626452, "num_examples": 1012}], "download_size": 1386022, "dataset_size": 2626452}}
2023-02-01T02:13:38+00:00
43804f00de53d56bc868c8c32d7531d73f07296d
LuisLenin/DataClinical
[ "license:openrail", "region:us" ]
2023-02-01T02:37:58+00:00
{"license": "openrail"}
2023-02-03T05:22:24+00:00
681187292ef1f437dd1d4b3788386067f28d4102
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_6.7b_Attributes_Caption_ns_3333" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_6.7b_Attributes_Caption_ns_3333
[ "region:us" ]
2023-02-01T03:05:22+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 299297082.375, "num_examples": 3333}, {"name": "fewshot_1_bs_16", "num_bytes": 300147832.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 301862752.375, "num_examples": 3333}], "download_size": 885565554, "dataset_size": 901307667.125}}
2023-02-01T04:12:17+00:00
25c371066a319f84d08d84f5482c5467a18df380
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_6.7b_Visclues_ns_3333" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_6.7b_Visclues_ns_3333
[ "region:us" ]
2023-02-01T03:15:21+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 299562491.375, "num_examples": 3333}, {"name": "fewshot_1_bs_16", "num_bytes": 300685243.375, "num_examples": 3333}, {"name": "fewshot_3_bs_16", "num_bytes": 302937632.375, "num_examples": 3333}], "download_size": 886179506, "dataset_size": 903185367.125}}
2023-02-01T04:47:58+00:00
68ec9341944c12f89312d32e75a7334b67e7f176
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: sschet/biobert_diseases_ner * Dataset: chintagunta85/ncbi_disease * Config: ncbi_disease * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sschet](https://huggingface.co/sschet) for evaluating this model.
autoevaluate/autoeval-eval-chintagunta85__ncbi_disease-ncbi_disease-f4d843-3192989822
[ "autotrain", "evaluation", "region:us" ]
2023-02-01T03:51:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["chintagunta85/ncbi_disease"], "eval_info": {"task": "entity_extraction", "model": "sschet/biobert_diseases_ner", "metrics": [], "dataset_name": "chintagunta85/ncbi_disease", "dataset_config": "ncbi_disease", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2023-02-01T03:52:18+00:00
4b2a0a0191f6b22123b62fac725886ead4e4b90c
FileArchive/Assets
[ "license:unknown", "region:us" ]
2023-02-01T04:15:32+00:00
{"license": "unknown"}
2023-12-26T01:56:29+00:00
1d947f0a391e10a076c0512edaedacf01e6f25bb
# Dataset Card for "bookcorpus_compact_1024_shard5_of_10_meta" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard5_of_10_meta
[ "region:us" ]
2023-02-01T04:33:47+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}, {"name": "cid_arrangement", "sequence": "int32"}, {"name": "schema_lengths", "sequence": "int64"}, {"name": "topic_entity_mask", "sequence": "int64"}, {"name": "text_lengths", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 7507064864, "num_examples": 61605}], "download_size": 1650231022, "dataset_size": 7507064864}}
2023-02-01T04:38:11+00:00
36c874d8bb11e43698ae09f989984aa1cda7b76d
# Aubrey Plaza textual inversion This is an embedding of the amazing Aubrey Plaza. ## Version 2: ![Detailed Samples](https://huggingface.co/datasets/zuleo/aubrey-plaza/resolve/main/images/grid_v2.png) ## Version 1: ![Detailed Samples](https://huggingface.co/datasets/zuleo/aubrey-plaza/resolve/main/images/grid1.png) ## Embedding Usage Use the token ```aubreyplazav2-300``` ### Previous versions: | Token | Version | |----------------------|------------------------| | `aubreyplazav2-300` | Version 2 - 300 steps | | `aubreyplazav1-7375` | Version 1 - 7375 steps | ![Detailed Samples](https://huggingface.co/datasets/zuleo/aubrey-plaza/resolve/main/images/v1_vs_v2.png) --- ## 🎶 Prompt Examples 🧾 ```Perfectly-centered close up portrait-photograph of a real life warrior aubreyplazav2-300, hair flowing in the wind with beautiful bright blue eyes, (wearing gold and white armor and big hoop gold earrings and a tiara:1.22223), (battle axe and broad sword hanging from her belt:1.112), standing near a rain forest with a waterfall, lifelike, super highly detailed, professional digital painting, artstation, concept art, Photorealism, HD quality, 8k resolution, beautiful, cinematic, art by artgerm and greg rutkowski and alphonse mucha and loish and WLOP``` ⛔ Negative prompt: ```(bad_prompt_version2:0.8), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), watermark, signature, words, (text:1.4), cross eyed``` _Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 3960559569, Size: 512x512, Model hash: 67abd65708_ --- 🧾 ```photorealistic painting ((full body)) portrait of ((stunningly attractive)) a aubreyplazav2-300 at a bar, ((perfect feminine face)), (+long colorful wavy hair), (+glitter freckles), glitter, wearing a dress, intricate, 8k, highly detailed, volumetric lighting, digital painting, intense, sharp focus, art by artgerm and rutkowski and alphonse mucha, cgsociety``` ⛔ Negative prompt: ```(bad_prompt_version2:0.7), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], ((poorly drawn eyes)), extra fingers, ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), ((extra limbs)), cloned face, (((disfigured))), out of frame, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck)))``` _Steps: 36, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 788010516, Size: 512x512, Model hash: 67abd65708_ --- 🧾 ```Perfectly-centered close up portrait-photograph of a real life sexy aubreyplazav2-300, hair flowing in the wind with (beautiful bright green eyes:1.2), (wearing a purple shirt and big hoop silver earrings and a green tiara:1.22223), standing near a twisting stairwell, lifelike, subsurface scattering, super highly detailed, professional digital painting, artstation, concept art, Photorealism, HD quality, 8k resolution, beautiful, cinematic, art by artgerm and greg rutkowski and alphonse mucha and loish and WLOP``` ⛔ Negative prompt: ```(bad_prompt_version2:0.8), 
((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), watermark, signature, words, (text:1.4), cross eyed``` _Steps: 24, Sampler: DPM++ 2S a, CFG scale: 7.5, Seed: 4119437875, Size: 512x768, Model hash: d8691b4d16_ --- ## 🎴 text2img Sampler and Checkpoint grids: It's always helpful to get a visual of how each sampler behaves across different models with this embedding. See the examples below and tune them to your liking. [Sampling Grid](https://huggingface.co/datasets/zuleo/aubrey-plaza/resolve/main/images/sampler_ckpt_grid.png) --- ☕ If you enjoy this model, buy me a coffee [![Buy a coffee](https://badgen.net/badge/icon/kofi?icon=kofi&label=buy%20us%20a%20coffee)](https://ko-fi.com/3eegames) ---
zuleo/aubrey-plaza
[ "license:creativeml-openrail-m", "stable-diffusion", "embedding", "textual-inversion", "text-to-image", "image-to-image", "art", "artistic", "region:us" ]
2023-02-01T06:15:02+00:00
{"license": "creativeml-openrail-m", "tags": ["stable-diffusion", "embedding", "textual-inversion", "text-to-image", "image-to-image", "art", "artistic"]}
2023-03-06T19:18:38+00:00
b667fab7a2efacb4b040d0069aeff234bf5cab9b
# Dataset Card for ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ds4sd.github.io/icdar23-doclaynet/ - **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1923/leaderboard - **Point of Contact:** ### Dataset Summary This is the official competition dataset for the _ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents_. You are invited to advance the research in accurately segmenting the layout on a broad range of document styles and domains. To achieve this, we challenge you to develop a model that can correctly identify and segment the layout components in document pages as bounding boxes on a competition data-set we provide. For more information see https://ds4sd.github.io/icdar23-doclaynet/. #### Training resources In our recently published [DocLayNet](https://github.com/DS4SD/DocLayNet) dataset, which contains 80k+ human-annotated document pages exposing diverse layouts, we define 11 classes for layout components (paragraphs, headings, tables, figures, lists, mathematical formulas and several more). We encourage you to use this dataset for training and internal evaluation of your solution. Further, you may consider any other publicly available document layout dataset for training (e.g. [PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet), [DocBank](https://github.com/doc-analysis/DocBank)). ### Supported Tasks and Leaderboards This is the official dataset of the ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents. For more information see https://ds4sd.github.io/icdar23-doclaynet/. #### Evaluation Metric Your submissions on our [EvalAI challenge](https://eval.ai/web/challenges/challenge-page/1923/) will be evaluated using the Mean Average Precision (mAP) @ Intersection-over-Union (IoU) [0.50:0.95] metric, as used in the [COCO](https://cocodataset.org/) object detection competition. In detail, we will calculate the average precision for a sequence of IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05. This metric is computed for every document category in the competition-dataset. Then the mean of the average precisions on all categories is computed as the final score. #### Submission We ask you to upload a JSON file in [COCO results format](https://cocodataset.org/#format-results) [here](https://eval.ai/web/challenges/challenge-page/1923/submission), with complete layout bounding-boxes for each page sample. The given `image_id`s must correspond to the ones we publish with the competition data-set's `coco.json`. For each submission you make, the computed mAP will be provided for each category as well as combined. The [leaderboard](https://eval.ai/web/challenges/challenge-page/1923/leaderboard/4545/Total) will be ranked based on the overall mAP. ## Dataset Structure ### Data Fields DocLayNet provides four types of data assets: 1. 
PNG images of all pages, resized to square `1025 x 1025px` 2. ~~Bounding-box annotations in COCO format for each PNG image~~ (annotations will be released at the end of the competition) 3. Extra: Single-page PDF files matching each PNG image 4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content Each COCO image record is defined as in this example ```js ... { "id": 1, "width": 1025, "height": 1025, "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png", // Custom fields: "doc_category": "financial_reports" // high-level document category "collection": "ann_reports_00_04_fancy", // sub-collection name "doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename "page_no": 9, // page number in original document "precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation }, ... ``` The `doc_category` field uses one of the following constants: ``` reports, manuals, patents, others ``` ### Data Splits The dataset provides two splits - `dev`, which is extracted from the [DocLayNet](https://github.com/DS4SD/DocLayNet) dataset - `test`, which contains new data for the competition ## Dataset Creation ### Annotations #### Annotation process The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf). #### Who are the annotators? Annotations are crowdsourced. ## Additional Information ### Dataset Curators The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research. You can contact us at [[email protected]](mailto:[email protected]). Curators: - Christoph Auer, [@cau-git](https://github.com/cau-git) - Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm) - Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial) - Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM) ### Licensing Information License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/) ### Citation Information A publication will be submitted at the end of the competition. Meanwhile, we suggest citing our original dataset paper. ```bib @article{doclaynet2022, title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation}, doi = {10.1145/3534678.3539043}, url = {https://doi.org/10.1145/3534678.3539043}, author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J}, year = {2022}, isbn = {9781450393850}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining}, pages = {3743–3751}, numpages = {9}, location = {Washington DC, USA}, series = {KDD '22} } ``` ### Contributions Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
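As a sketch of the expected upload, here is a minimal COCO-format results file; all ids and values below are illustrative, and real `image_id`s must come from the competition's `coco.json`:

```python
import json

# Minimal sketch of a COCO-format results file (illustrative values only).
results = [
    {
        "image_id": 1,       # must match an id in the competition coco.json
        "category_id": 4,    # one of the layout classes
        "bbox": [258.2, 41.3, 348.3, 243.8],  # [x, y, width, height] in pixels
        "score": 0.93,       # detection confidence
    },
    # ... one entry per predicted bounding box, covering every page sample
]

with open("submission.json", "w") as f:
    json.dump(results, f)
```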
ds4sd/icdar2023-doclaynet
[ "task_categories:object-detection", "task_categories:image-segmentation", "task_ids:instance-segmentation", "annotations_creators:crowdsourced", "size_categories:n<1K", "license:apache-2.0", "layout-segmentation", "COCO", "document-understanding", "PDF", "icdar", "competition", "region:us" ]
2023-02-01T06:15:14+00:00
{"annotations_creators": ["crowdsourced"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["object-detection", "image-segmentation"], "task_ids": ["instance-segmentation"], "pretty_name": "ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents", "tags": ["layout-segmentation", "COCO", "document-understanding", "PDF", "icdar", "competition"]}
2023-02-01T06:39:27+00:00
b7847b743802d8a438e291f05919f85f75b56dc3
# Snow Mountain ## Dataset Description - **Paper: https://arxiv.org/abs/2206.01205** - **Point of Contact: Joel Mathew** ### Dataset Summary The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of The Bible (contains both Old Testament (OT) and New Testament (NT)) in 11 Indian languages. The recordings were done in a studio setting by native speakers. Each language has a single speaker in the dataset. Most of these languages are geographically concentrated in the Northern part of India around the state of Himachal Pradesh. Being related to Hindi, they all use the Devanagari script for transcription. We have used this dataset for experiments in ASR tasks, but it could be used for other applications in the speech domain, like speaker recognition, language identification or even as an unlabelled corpus for pre-training. ### Supported Tasks and Leaderboards Automatic speech recognition, Speech-to-Text, Speaker recognition, Language identification ### Languages Hindi, Haryanvi, Bilaspuri, Dogri, Bhadrawahi, Gaddi, Kangri, Kulvi, Mandeali, Kulvi Outer Seraji, Pahari Mahasui, Malayalam, Kannada, Tamil, Telugu ## Dataset Structure ``` data |- cleaned |- lang1 |- book1_verse_audios.tar.gz |- book2_verse_audios.tar.gz ... ... |- all_verses.tar.gz |- short_verses.tar.gz |- lang2 ... ... |- experiments |- lang1 |- train_500.csv |- val_500.csv |- test_common.csv ... ... |- lang2 ... ... |- raw |- lang1 |- chapter1_audio.mp3 |- chapter2_audio.mp3 ... ... |- text |- book1.csv |- book1.usfm ... ... |- lang2 ... ... ``` ### Data Instances A data point comprises the path to the audio file, called `path`, and its transcription, called `sentence`. ``` {'sentence': 'क्यूँके तू अपणी बात्तां कै कारण बेकसूर अर अपणी बात्तां ए कै कारण कसूरवार ठहराया जावैगा', 'audio': {'path': 'data/cleaned/haryanvi/MAT/MAT_012_037.wav', 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 16000}, 'path': 'data/cleaned/haryanvi/MAT/MAT_012_037.wav'} ``` ### Data Fields `path`: The path to the audio file `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`. `sentence`: The transcription of the audio file. ### Data Splits We create splits of the cleaned data for training and analysing the performance of ASR models. The splits are available in the `experiments` directory. The file names indicate the experiment and the split category. Additionally, two CSV files are included in the data splits - `all_verses` and `short_verses`. Various data splits were generated from these two main CSVs. `short_verses.csv` contains audios of length < 10s and corresponding transcriptions. `all_verses.csv` contains complete cleaned verses, including long and short audios. Due to the large size (>10MB), we keep these CSVs compressed in the `tar.gz` format in the `cleaned` folder. ## Dataset Loading The `raw` folder has chapter-wise audios in .mp3 format. For experiments, we might need audios in .wav format. Verse-wise audio files are kept in the `cleaned` folder in .wav format.
This results in a much larger size, which contributes to a longer loading time into memory. Here is the approximate time needed for loading the dataset. - Hindi (OT books): ~20 minutes - Hindi minority languages (NT books): ~9 minutes - Dravidian languages (OT+NT books): ~30 minutes ## Details Please refer to the paper for more details on the creation of the dataset and the rationale for the splits we created. ### Licensing Information The data is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0) ### Citation Information Please cite this work if you make use of it: ``` @inproceedings{Raju2022SnowMD, title={Snow Mountain: Dataset of Audio Recordings of The Bible in Low Resource Languages}, author={Kavitha Raju and V. Anjaly and R. Allen Lish and Joel Mathew}, year={2022} } ```
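To make the loading described above concrete, here is a minimal sketch, assuming the `hi` config and the `train_500` split listed in the dataset info below; the `audio` column behaves as described under Data Fields:

```python
from datasets import load_dataset

# Minimal sketch (config and split names as listed in the dataset info).
ds = load_dataset("bridgeconn/snow-mountain", "hi", split="train_500")

sample = ds[0]                            # query the index first, then the audio column
print(sample["sentence"])
print(sample["audio"]["sampling_rate"])   # decoded on access, 16 kHz
```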
bridgeconn/snow-mountain
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "multilinguality:multilingual", "source_datasets:Snow Mountain", "language:hi", "language:bgc", "language:kfs", "language:dgo", "language:bhd", "language:gbk", "language:xnr", "language:kfx", "language:mjl", "language:kfo", "language:bfz", "license:cc-by-sa-4.0", "arxiv:2206.01205", "region:us" ]
2023-02-01T07:23:54+00:00
{"annotations_creators": [{}], "language_creators": [{}], "language": ["hi", "bgc", "kfs", "dgo", "bhd", "gbk", "xnr", "kfx", "mjl", "kfo", "bfz"], "license": "cc-by-sa-4.0", "multilinguality": ["multilingual"], "source_datasets": ["Snow Mountain"], "task_categories": ["automatic-speech-recognition", "text-to-speech"], "task_ids": [], "pretty_name": "Snow Mountain", "configs": ["hi", "bgc"], "dataset_info": [{"config_name": "hi", "features": [{"name": "Unnamed", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}, {"name": "path", "dtype": "string"}], "splits": [{"name": "train_500", "num_examples": 400}, {"name": "val_500", "num_examples": 100}, {"name": "train_1000", "num_examples": 800}, {"name": "val_1000", "num_examples": 200}, {"name": "test_common", "num_examples": 500}], "dataset_size": "71.41 hrs"}, {"config_name": "bgc", "features": [{"name": "Unnamed", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}, {"name": "path", "dtype": "string"}], "splits": [{"name": "train_500", "num_examples": 400}, {"name": "val_500", "num_examples": 100}, {"name": "train_1000", "num_examples": 800}, {"name": "val_1000", "num_examples": 200}, {"name": "test_common", "num_examples": 500}], "dataset_size": "27.41 hrs"}]}
2023-05-23T04:42:14+00:00
ce1b83e7244ea935e9a85f51fadfc0536bfc2ece
lam1101999/Face_mask
[ "license:mit", "region:us" ]
2023-02-01T07:37:32+00:00
{"license": "mit"}
2023-02-03T01:37:02+00:00
2ae1a82e7907ca4447874ede02df3289f94ecce8
# BIG-bench Hard dataset homepage: https://github.com/suzgunmirac/BIG-Bench-Hard ``` @article{suzgun2022challenging, title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them}, author={Suzgun, Mirac and Scales, Nathan and Sch{\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and and Wei, Jason}, journal={arXiv preprint arXiv:2210.09261}, year={2022} } ```
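Each task is a separate config with a single `test` split of `input`/`target` pairs (see the dataset info below); a minimal loading sketch:

```python
from datasets import load_dataset

# Minimal sketch: load one BBH task and print an example.
ds = load_dataset("lukaemon/bbh", "boolean_expressions", split="test")

print(ds[0]["input"])    # e.g. a boolean expression to evaluate
print(ds[0]["target"])   # the expected answer
```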
lukaemon/bbh
[ "region:us" ]
2023-02-01T07:46:51+00:00
{"dataset_info": [{"config_name": "boolean_expressions", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 11790, "num_examples": 250}], "download_size": 17172, "dataset_size": 11790}, {"config_name": "causal_judgement", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 198021, "num_examples": 187}], "download_size": 202943, "dataset_size": 198021}, {"config_name": "date_understanding", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 54666, "num_examples": 250}], "download_size": 61760, "dataset_size": 54666}, {"config_name": "disambiguation_qa", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 78620, "num_examples": 250}], "download_size": 85255, "dataset_size": 78620}, {"config_name": "dyck_languages", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 38432, "num_examples": 250}], "download_size": 43814, "dataset_size": 38432}, {"config_name": "formal_fallacies", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 138224, "num_examples": 250}], "download_size": 145562, "dataset_size": 138224}, {"config_name": "geometric_shapes", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 68560, "num_examples": 250}], "download_size": 77242, "dataset_size": 68560}, {"config_name": "hyperbaton", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 38574, "num_examples": 250}], "download_size": 44706, "dataset_size": 38574}, {"config_name": "logical_deduction_five_objects", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 148595, "num_examples": 250}], "download_size": 155477, "dataset_size": 148595}, {"config_name": "logical_deduction_seven_objects", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 191022, "num_examples": 250}], "download_size": 198404, "dataset_size": 191022}, {"config_name": "logical_deduction_three_objects", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 105831, "num_examples": 250}], "download_size": 112213, "dataset_size": 105831}, {"config_name": "movie_recommendation", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 50985, "num_examples": 250}], "download_size": 57684, "dataset_size": 50985}, {"config_name": "multistep_arithmetic_two", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 12943, "num_examples": 250}], "download_size": 18325, "dataset_size": 12943}, {"config_name": "navigate", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 49031, "num_examples": 250}], "download_size": 55163, "dataset_size": 49031}, {"config_name": 
"object_counting", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 30508, "num_examples": 250}], "download_size": 35890, "dataset_size": 30508}, {"config_name": "penguins_in_a_table", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 70062, "num_examples": 146}], "download_size": 74516, "dataset_size": 70062}, {"config_name": "reasoning_about_colored_objects", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 89579, "num_examples": 250}], "download_size": 98694, "dataset_size": 89579}, {"config_name": "ruin_names", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 46537, "num_examples": 250}], "download_size": 53178, "dataset_size": 46537}, {"config_name": "salient_translation_error_detection", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 277110, "num_examples": 250}], "download_size": 286443, "dataset_size": 277110}, {"config_name": "snarks", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 38223, "num_examples": 178}], "download_size": 42646, "dataset_size": 38223}, {"config_name": "sports_understanding", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 22723, "num_examples": 250}], "download_size": 28617, "dataset_size": 22723}, {"config_name": "temporal_sequences", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 139546, "num_examples": 250}], "download_size": 148176, "dataset_size": 139546}, {"config_name": "tracking_shuffled_objects_five_objects", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 162590, "num_examples": 250}], "download_size": 169722, "dataset_size": 162590}, {"config_name": "tracking_shuffled_objects_seven_objects", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 207274, "num_examples": 250}], "download_size": 214906, "dataset_size": 207274}, {"config_name": "tracking_shuffled_objects_three_objects", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 122104, "num_examples": 250}], "download_size": 128736, "dataset_size": 122104}, {"config_name": "web_of_lies", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 47582, "num_examples": 250}], "download_size": 52964, "dataset_size": 47582}, {"config_name": "word_sorting", "features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 60918, "num_examples": 250}], "download_size": 66300, "dataset_size": 60918}]}
2023-02-02T01:14:46+00:00
50e06ccae9d7c3571aeed1be46414767ecf413a7
Dsender/antest
[ "license:creativeml-openrail-m", "region:us" ]
2023-02-01T07:52:00+00:00
{"license": "creativeml-openrail-m"}
2023-02-01T07:52:43+00:00
ff0c2066dbab0fc2411c7ab8032952108c42d158
# Dataset Card for "educatinayt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
juancopi81/educatinayt
[ "task_categories:automatic-speech-recognition", "whisper", "whispering", "medium", "region:us" ]
2023-02-01T09:37:56+00:00
{"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12525875, "num_examples": 884}], "download_size": 5024572, "dataset_size": 12525875}, "tags": ["whisper", "whispering", "medium"]}
2023-02-09T14:34:40+00:00
822103d61820ec3fe16054cea64468521c2621d9
MMLU (`hendrycks_test` on huggingface) without auxiliary train. It is much lighter (7MB vs 162MB) and faster than the original implementation, where the auxiliary train split is loaded (and duplicated!) by default for all configs, making it quite heavy. We use this version in [tasksource](https://huggingface.co/tasksource). Reference to the original dataset: Measuring Massive Multitask Language Understanding - https://github.com/hendrycks/test ``` @article{hendryckstest2021, title={Measuring Massive Multitask Language Understanding}, author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt}, journal={Proceedings of the International Conference on Learning Representations (ICLR)}, year={2021} } ```
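A minimal loading sketch, assuming the per-subject config names of the original `hendrycks_test` (e.g. `abstract_algebra`); with this version no auxiliary train split is downloaded:

```python
from datasets import load_dataset

# Minimal sketch; the config name is one of the usual MMLU subjects.
ds = load_dataset("tasksource/mmlu", "abstract_algebra")
print(ds)  # expected splits: dev / validation / test, no auxiliary_train
```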
tasksource/mmlu
[ "task_categories:text-classification", "task_categories:multiple-choice", "task_categories:question-answering", "task_ids:multiple-choice-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "language:en", "license:apache-2.0", "multi-task", "multitask", "mmlu", "hendrycks_test", "region:us" ]
2023-02-01T10:20:16+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification", "multiple-choice", "question-answering"], "task_ids": ["multiple-choice-qa", "open-domain-qa", "closed-domain-qa"], "pretty_name": "mmlu", "tags": ["multi-task", "multitask", "mmlu", "hendrycks_test"]}
2023-03-31T19:44:21+00:00
60067b257337df8d7879142d870944fe4c6ab20d
# Negative Embedding This is a Negative Embedding trained with Counterfeit. Please use it in the "\stable-diffusion-webui\embeddings" folder. It can be used with other models, but the effectiveness is not certain. # Counterfeit-V2.0.safetensors ![sample1](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample01.png) # AbyssOrangeMix2_sfw.safetensors ![sample2](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample02.png) # anything-v4.0-pruned.safetensors ![sample3](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample03.png)
gsdf/EasyNegative
[ "license:other", "region:us" ]
2023-02-01T10:58:06+00:00
{"license": "other"}
2023-02-12T14:39:30+00:00
de9abbd1f20a278168fc95fc9d12d827f3ecc58c
# Dataset Card for [Qsh-da-msa] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards Translation ### Languages Arabic dialect to Modern Standard Arabic (MSA) ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - dialect - MSA ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@Quds](https://github.com/Quds) for adding this dataset.
Quds/Qsh-da-msa
[ "license:openrail", "region:us" ]
2023-02-01T11:09:12+00:00
{"license": "openrail"}
2023-02-01T13:06:25+00:00
161bee1a68cd4bc4f00b47f12a905616dc376791
# Dataset Card for "bookcorpus_compact_1024_shard4_of_10_meta" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard4_of_10_meta
[ "region:us" ]
2023-02-01T11:25:32+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}, {"name": "cid_arrangement", "sequence": "int32"}, {"name": "schema_lengths", "sequence": "int64"}, {"name": "topic_entity_mask", "sequence": "int64"}, {"name": "text_lengths", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 7910881106, "num_examples": 61605}], "download_size": 1753540096, "dataset_size": 7910881106}}
2023-02-01T11:30:08+00:00
bc5f49d9af4324cfc4bc541b1640b83dca960ff0
# Dataset Card for "wikipedia.reorder.natural" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lshowway/wikipedia.reorder.natural
[ "region:us" ]
2023-02-01T11:35:57+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4083836556, "num_examples": 1986076}], "download_size": 1930664504, "dataset_size": 4083836556}}
2023-02-01T11:38:25+00:00
708273595ad8cbf9a3759c70ad1aa0d3566b136f
# Dataset Card for "devign_with_vul_lines" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
EddieChen372/devign_with_vul_lines
[ "region:us" ]
2023-02-01T12:14:23+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "func", "dtype": "string"}, {"name": "target", "dtype": "bool"}, {"name": "project", "dtype": "string"}, {"name": "commit_id", "dtype": "string"}, {"name": "func_clean", "dtype": "string"}, {"name": "vul_lines", "struct": [{"name": "code", "sequence": "string"}, {"name": "line_no", "sequence": "int64"}]}, {"name": "normalized_func", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 16112369, "num_examples": 2732}, {"name": "train", "num_bytes": 132054560, "num_examples": 21854}, {"name": "test", "num_bytes": 16328301, "num_examples": 2732}], "download_size": 60272537, "dataset_size": 164495230}}
2023-02-04T15:24:46+00:00
614d376b21336fb6d0fb6ff652ce739028949c01
# Dataset Card for VIVOS ## Table of Contents - [Dataset Card for VIVOS](#dataset-card-for-vivos) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://doi.org/10.5281/zenodo.7068130 - **Repository:** [Needs More Information] - **Paper:** [A non-expert Kaldi recipe for Vietnamese Speech Recognition System](https://aclanthology.org/W16-5207/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [AILAB](mailto:[email protected]) ### Dataset Summary VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recorded speech prepared for the Vietnamese Automatic Speech Recognition task. The corpus was prepared by AILAB, a computer science lab of VNUHCM - University of Science, headed by Prof. Vu Hai Quan. We publish this corpus in the hope of attracting more scientists to solve Vietnamese speech recognition problems. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Vietnamese ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, called `path`, and its transcription, called `sentence`. Some additional information about the speaker and the passage which contains the transcription is provided. ``` {'speaker_id': 'VIVOSSPK01', 'path': '/home/admin/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/vivos/train/waves/VIVOSSPK01/VIVOSSPK01_R001.wav', 'audio': {'path': '/home/admin/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/vivos/train/waves/VIVOSSPK01/VIVOSSPK01_R001.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'sentence': 'KHÁCH SẠN'} ``` ### Data Fields - speaker_id: An id for which speaker (voice) made the recording - path: The path to the audio file - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`.
Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - sentence: The sentence the user was prompted to speak ### Data Splits The speech material has been subdivided into portions for train and test. Speech was recorded in a quiet environment with a high-quality microphone; speakers were asked to read one sentence at a time. | | Train | Test | | ---------------- | ----- | ----- | | Speakers | 46 | 19 | | Utterances | 11660 | 760 | | Duration | 14:55 | 00:45 | | Unique Syllables | 4617 | 1692 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators The dataset was initially prepared by AILAB, a computer science lab of VNUHCM - University of Science. ### Licensing Information Public Domain, Creative Commons Attribution NonCommercial ShareAlike v4.0 ([CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)) ### Citation Information ``` @inproceedings{luong-vu-2016-non, title = "A non-expert {K}aldi recipe for {V}ietnamese Speech Recognition System", author = "Luong, Hieu-Thi and Vu, Hai-Quan", booktitle = "Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)", month = dec, year = "2016", address = "Osaka, Japan", publisher = "The COLING 2016 Organizing Committee", url = "https://aclanthology.org/W16-5207", pages = "51--55", } ``` ### Contributions Thanks to [@binh234](https://github.com/binh234) for adding this dataset.
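A minimal loading sketch following the Data Fields notes above (index first, then the `audio` column); the 16 kHz cast is redundant here but shows the idiom:

```python
from datasets import load_dataset, Audio

# Minimal sketch: load the train split and decode audio at 16 kHz.
ds = load_dataset("Martha-987/vivos", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

sample = ds[0]   # query the index first, then access the audio column
print(sample["speaker_id"], sample["sentence"])
print(sample["audio"]["array"].shape, sample["audio"]["sampling_rate"])
```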
Martha-987/vivos
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:vi", "license:cc-by-nc-sa-4.0", "region:us" ]
2023-02-01T12:30:52+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["vi"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "VIVOS", "dataset_info": {"features": [{"name": "speaker_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1722002133, "num_examples": 11660}, {"name": "test", "num_bytes": 86120227, "num_examples": 760}], "download_size": 1475540500, "dataset_size": 1808122360}}
2023-02-01T13:04:57+00:00
3a1d3e38ca5cd3a7d52b73361ee2cf81cd0f51cd
# Dataset Card for "KayzerTurkishReviews-ds-mini" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tunaerturk/KayzerTurkishReviews-ds-mini
[ "region:us" ]
2023-02-01T12:33:45+00:00
{"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1252876.2642514652, "num_examples": 3378}, {"name": "validation", "num_bytes": 139455.7357485349, "num_examples": 376}], "download_size": 895863, "dataset_size": 1392332.0}}
2023-02-01T12:34:04+00:00
4fefa0ad0ac1269ec644b95a18ce5f95af4eb051
This is the one where we build the suffix array for 25% of Oscar and only deduplicate that part - by deduplication I mean removing any document that has a span of at least 100 characters overlapping with another document in the 25% chunk. This is very strict and preserves only about 20 million documents, so less than 5% of the full Oscar.
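To make the criterion concrete, here is an illustrative hash-based sketch of the 100-char overlap rule; the actual pipeline uses a suffix array, and this brute-force version is only practical for small corpora:

```python
from collections import defaultdict

SPAN = 100  # minimum overlapping span length, in characters

def overlapping_docs(docs):
    """Return indices of documents sharing any 100-char span with another document."""
    owners = defaultdict(set)
    for i, text in enumerate(docs):
        # enumerate every 100-char window; exact but memory-hungry
        for start in range(len(text) - SPAN + 1):
            owners[text[start:start + SPAN]].add(i)
    flagged = set()
    for docs_with_span in owners.values():
        if len(docs_with_span) > 1:
            flagged |= docs_with_span
    return flagged
```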
datablations/oscar-filter
[ "region:us" ]
2023-02-01T13:04:53+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "warc_headers", "struct": [{"name": "warc-record-id", "dtype": "string"}, {"name": "warc-date", "dtype": "string"}, {"name": "content-type", "dtype": "string"}, {"name": "content-length", "dtype": "int32"}, {"name": "warc-type", "dtype": "string"}, {"name": "warc-identified-content-language", "dtype": "string"}, {"name": "warc-refers-to", "dtype": "string"}, {"name": "warc-target-uri", "dtype": "string"}, {"name": "warc-block-digest", "dtype": "string"}]}, {"name": "identification", "struct": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}, {"name": "annotations", "sequence": "string"}, {"name": "line_identifications", "list": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}]}, {"name": "perplexity_score", "dtype": "float64"}, {"name": "text_length", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "dup_ratio", "dtype": "float64"}, {"name": "pairs", "sequence": {"sequence": "int64"}}, {"name": "repetitions", "sequence": "binary"}, {"name": "included_in_dedup", "dtype": "bool"}, {"name": "cluster", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 3188486875748, "num_examples": 431992659}], "download_size": 419397499659, "dataset_size": 3188486875748}}
2023-05-10T05:58:28+00:00
f4ea7c46727386fb719504d54744c1b11670c2ee
**Stats about bigcode dataset:** * Permissive licenses only: |Language |Raw (only exact dedup) | Near dedup | Near dedup + content filters| Near dedup + more near-dedup (1)|Near dedup + comments filter (1)| Near dedup + more near-dedup + comments(1)| |-------|--------|-------|--------|--------|--------|--------| |Python | 200 GB| [80 GB](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/tree/v1.1.a1) | [75.61 GB](https://huggingface.co/datasets/bigcode/the-stack-pjjs-no-pii-filtered) | [61.97 GB](https://huggingface.co/datasets/bigcode/stack-dedup-alt-filter-no-pii) | [65 GB](https://huggingface.co/datasets/bigcode/the-stack-comments-filter) | (?) less than 60 | |Java | 266 GB |112 GB |110 GB | 88 GB | 92 GB |(?) less than 80 | |JavaScript | 496 GB | 166 GB |83 GB| 65 GB | 75 GB |(?) less than 60| |C | 255 GB | 75 GB |73 GB (2)| - | - |-| |C++ | 215 GB | 65 GB |61 GB (2)|- | - |-| * Non-permissive data; the numbers are higher than, e.g., the 240GB of Python I mentioned (that was on an old dump) |Language |Raw (only exact dedup) | Near dedup | |-------|--------|-------| | Python (3)| 737 GB | - | |Java | 1.3 TB | - | |JavaScript | 5.8 TB | -| |C | 1.64 TB | - | |C++ | 644 GB | - | (1) all these runs have content filtering; notice that it removes a lot of data from JavaScript (you could try filtering with less strict thresholds, the [script](https://github.com/bigcode-project/bigcode-dataset/tree/main/preprocessing) is very easy to run) (2) I don't have the data, but I found these numbers from an old run on the first version of The Stack (it uses the same content filtering thresholds as for Python, Java and JS) (3) this went through content filters
loubnabnl/bigcode-data-stats
[ "region:us" ]
2023-02-01T13:25:01+00:00
{}
2023-02-01T14:25:41+00:00
19ade9d8027972d120678ff68b85ef24d6b5e578
# Dataset Card for "OxfordFlowers_test_facebook_opt_350m_Attributes_Caption_ns_6149" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_350m_Attributes_Caption_ns_6149
[ "region:us" ]
2023-02-01T13:36:00+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_3_bs_16", "num_bytes": 272760304.375, "num_examples": 6149}, {"name": "fewshot_0_bs_16", "num_bytes": 267303635.375, "num_examples": 6149}, {"name": "fewshot_1_bs_16", "num_bytes": 269129477.375, "num_examples": 6149}], "download_size": 796845849, "dataset_size": 809193417.125}}
2023-02-02T03:44:51+00:00
f6527c7873aaad15401dda89e3aadc448fa4032f
# Dataset Card for "nowiki_first_scrape_20230201" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jkorsvik/nowiki_first_scrape_20230201
[ "region:us" ]
2023-02-01T13:36:28+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "date_scraped", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "ingress", "dtype": "string"}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 458579658, "num_examples": 352660}], "download_size": 224140445, "dataset_size": 458579658}}
2023-02-01T13:40:38+00:00
ecd800ef94885c49b1b87ef31da7292fa5ccf631
# Description of LLM Bash Prompt designed to convert natural language to a bash command. ## Inputs This is a description of the inputs that the prompt expects. question: User question to be answered by writing a bash command. ## Usage Below is a code snippet for how to use the prompt. ```python from langchain.prompts import load_prompt from langchain.chains import LLMBashChain llm = ... prompt = load_prompt('lc://prompts/llm_bash/<file-name>') chain = LLMBashChain(llm=llm, prompt=prompt) ```
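A hedged example of invoking the chain built above (the question string is purely illustrative):

```python
# Continues the snippet above; "chain" is the LLMBashChain instance.
answer = chain.run("Please write a bash script that prints 'Hello World' to the console.")
print(answer)
```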
LangChainHub-Prompts/LLM_Bash
[ "langchain", "prompt", "region:us" ]
2023-02-01T13:43:38+00:00
{"tags": ["langchain", "prompt"]}
2023-02-01T13:43:39+00:00
d5d1663fed7483a25a5e467531ffaa7b7fb50c73
# Dataset Card for "OxfordFlowers_test_facebook_opt_1.3b_Attributes_Caption_ns_6149" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_1.3b_Attributes_Caption_ns_6149
[ "region:us" ]
2023-02-01T13:48:36+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 267305744.375, "num_examples": 6149}, {"name": "fewshot_1_bs_16", "num_bytes": 269129531.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 272760442.375, "num_examples": 6149}], "download_size": 796855399, "dataset_size": 809195718.125}}
2023-02-01T16:53:25+00:00
da0b8af917255fc8320e1b37014099538e54f067
# Description of LLM Math Prompt designed to optionally output IPython syntax to be run in order to better answer math questions. ## Inputs This is a description of the inputs that the prompt expects. question: User question to be answered. ## Usage Below is a code snippet for how to use the prompt. ```python from langchain.prompts import load_prompt from langchain.chains import LLMMathChain llm = ... prompt = load_prompt('lc://prompts/llm_math/<file-name>') chain = LLMMathChain(llm=llm, prompt=prompt) ```
LangChainHub-Prompts/LLM_Math
[ "langchain", "prompt", "region:us" ]
2023-02-01T13:52:06+00:00
{"tags": ["langchain", "prompt"]}
2023-02-28T07:39:19+00:00
8fb8b323dec5725a7afab550e4dd057f3d1cb5ce
# Description of QA Refine Prompts designed to refine original answers during question-answering chains using the refine method. ## Inputs This is a description of the inputs that the prompt expects. 1. question: Original question to be answered. 2. existing_answer: Existing answer from previous documents. 3. context_str: New piece of context to use to refine the existing answer. ## Usage Below is a code snippet for how to use the prompt. ```python from langchain.prompts import load_prompt from langchain.chains.question_answering import load_qa_chain llm = ... prompt = load_prompt('lc://prompts/qa/refine/<file-name>') chain = load_qa_chain(llm, chain_type="refine", refine_prompt=prompt) ```
LangChainHub-Prompts/QA_Refine
[ "langchain", "prompt", "region:us" ]
2023-02-01T13:56:14+00:00
{"tags": ["langchain", "prompt"]}
2023-02-01T13:56:15+00:00
40778bd6357cdf935cd1044dddee4ff511b4289a
# Dataset Card for "OxfordFlowers_test_facebook_opt_350m_Visclues_ns_6149" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_350m_Visclues_ns_6149
[ "region:us" ]
2023-02-01T13:57:41+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 267864523.375, "num_examples": 6149}, {"name": "fewshot_1_bs_16", "num_bytes": 270237138.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 274972242.375, "num_examples": 6149}], "download_size": 797630284, "dataset_size": 813073904.125}}
2023-02-02T03:59:53+00:00
bedf7d06fa3f62bff939ab7730d6299016ffc416
# Dataset Card for "OxfordFlowers_test_facebook_opt_1.3b_Visclues_ns_6149" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_1.3b_Visclues_ns_6149
[ "region:us" ]
2023-02-01T14:06:59+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 267860771.375, "num_examples": 6149}, {"name": "fewshot_3_bs_16", "num_bytes": 274972343.375, "num_examples": 6149}, {"name": "fewshot_1_bs_16", "num_bytes": 270237156.375, "num_examples": 6149}], "download_size": 797634249, "dataset_size": 813070271.125}}
2023-02-02T04:31:19+00:00