| Column | Type | Min | Max |
|:----------------|:----------------|----:|------:|
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | sequencelengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | sequencelengths | 0 | 25 |
| languages | sequencelengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | sequencelengths | 0 | 352 |
| processed_texts | sequencelengths | 1 | 353 |
| tokens_length | sequencelengths | 1 | 353 |
| input_texts | sequencelengths | 1 | 40 |
44983beb68b36565571fa6980d50145e26f4b322
# WizardLM_evol_instruct_V2_196k-Turkish

```
Dataset Cost: USD 305
Translated with: gpt-3.5-turbo-1106
Elapsed Time: 3 hours 41 minutes
```

## Metrics:

```
English Token Count: 67,686,140
Token Count After Turkish Translation: 99,760,316
Number of Successfully Translated Rows: 143,000
```
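The card does not include a loading snippet; below is a minimal sketch using the standard `datasets` API, assuming the rows live in the default `train` split (the card does not document its splits):

```python
from datasets import load_dataset

# Split name "train" is an assumption; the card does not document its splits.
ds = load_dataset("t3aile/WizardLM_evol_instruct_V2_196k-Turkish", split="train")

print(ds[0])        # one translated instruction/response row
print(ds.num_rows)  # the card reports ~143,000 successfully translated rows
```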
t3aile/WizardLM_evol_instruct_V2_196k-Turkish
[ "size_categories:100K<n<1M", "language:tr", "region:us" ]
2023-12-31T23:07:05+00:00
{"language": ["tr"], "size_categories": ["100K<n<1M"]}
2024-01-03T16:18:31+00:00
[]
[ "tr" ]
TAGS #size_categories-100K<n<1M #language-Turkish #region-us
# WizardLM_evol_instruct_V2_196k-Turkish ## Metrics:
[ "# WizardLM_evol_instruct_V2_196k-Turkish", "## Metrics:" ]
[ "TAGS\n#size_categories-100K<n<1M #language-Turkish #region-us \n", "# WizardLM_evol_instruct_V2_196k-Turkish", "## Metrics:" ]
[ 24, 19, 5 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-Turkish #region-us \n# WizardLM_evol_instruct_V2_196k-Turkish## Metrics:" ]
122b626f6127ca111659a4354590845b30736f9a
# Wikidata Labels

Large parallel corpus for machine translation

- Entity label data extracted from Wikidata (2022-01-03), filtered for item entities only
- Only download the languages you need with `datasets>=2.14.0`
- Similar dataset: https://huggingface.co/datasets/wmt/wikititles (18 Wikipedia title pairs instead of all Wikidata entities)

## Dataset Details

### Dataset Sources

- Wikidata JSON dump (wikidata-20220103-all.json.gz) https://www.wikidata.org/wiki/Wikidata:Database_download

## Uses

You can generate parallel text examples from this dataset as shown below:

```python
from datasets import load_dataset
import pandas as pd


def parallel_labels(lang_codes: list, how="inner", repo_id="rayliuca/wikidata_entity_label", merge_config={}, datasets_config={}) -> pd.DataFrame:
    out_df = None
    for lc in lang_codes:
        dataset = load_dataset(repo_id, lc, **datasets_config)
        dataset_df = dataset['label'].to_pandas().rename(columns={"label": lc}).drop(columns=['lastrevid'])
        if out_df is None:
            out_df = dataset_df
        else:
            out_df = out_df.merge(
                dataset_df,
                on='wikidata_id',
                how=how,
                **merge_config
            )
    return out_df


# Note: the "en" subset is >4GB
parallel_labels(['en', 'fr', 'ja', 'zh']).head()
```

### Output

|    | wikidata_id | en | fr | ja | zh |
|---:|:------------|:---|:---|:---|:---|
| 0 | Q109739412 | SARS-CoV-2 Omicron variant | variant Omicron du SARS-CoV-2 | SARSコロナウイルス2-オミクロン株 | 嚴重急性呼吸道症候群冠狀病毒2型Omicron變異株 |
| 1 | Q108460606 | Ulughbegsaurus | Ulughbegsaurus | ウルグベグサウルス | 兀魯伯龍屬 |
| 2 | Q108556886 | AUKUS | AUKUS | AUKUS | AUKUS |
| 3 | Q106496152 | Claude Joseph | Claude Joseph | クロード・ジョゼフ | 克洛德·约瑟夫 |
| 4 | Q105519361 | The World's Finest Assassin Gets Reincarnated in a Different World as an Aristocrat | The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat | 世界最高の暗殺者、異世界貴族に転生する | 世界頂尖的暗殺者轉生為異世界貴族 |

Note: the example table above shows a quirk of the Wikidata labels. The French Wikipedia page [The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat](https://fr.wikipedia.org/wiki/The_World%27s_Finest_Assassin_Gets_Reincarnated_in_Another_World_as_an_Aristocrat) uses English for its title. While this can be disadvantageous for direct translation training, it also provides insight into what native speakers actually call an entity, as opposed to a literal translation of the Wikipedia page title.

## Dataset Structure

Each language has its own subset (aka config), which means you only have to download the languages you need with `datasets>=2.14.0`.

Each subset has these fields:

- wikidata_id
- lastrevid
- label

## Dataset Creation

#### Data Collection and Processing

- Filtered for item entities only
- Ignored the descriptions, as those texts are not very parallel

## Bias, Risks, and Limitations

- Might be slightly outdated (2022)
- Popular languages have more entries
- Labels are not guaranteed to be literal translations (see the examples above)
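Because each language is its own config, a single subset can also be loaded directly without the merge helper. A minimal sketch: the repo id follows the usage example above, and the split name `label` matches the dataset metadata:

```python
from datasets import load_dataset

# Load only the Japanese labels (config "ja"); the split is named "label"
# per the dataset metadata. Repo id follows the card's usage example
# (the card is also published as rayliuca/WikidataLabels); adjust if needed.
ja_labels = load_dataset("rayliuca/wikidata_entity_label", "ja", split="label")

# Each row carries wikidata_id, lastrevid, and label.
print(ja_labels[0])
```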
rayliuca/WikidataLabels
[ "task_categories:translation", "task_categories:text2text-generation", "language:en", "language:fr", "language:de", "language:ja", "language:zh", "language:hi", "language:ar", "language:bn", "language:ru", "language:es", "license:cc0-1.0", "region:us" ]
2024-01-01T00:23:08+00:00
{"language": ["en", "fr", "de", "ja", "zh", "hi", "ar", "bn", "ru", "es"], "license": "cc0-1.0", "task_categories": ["translation", "text2text-generation"], "dataset_info": [{"config_name": "aa", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13986211, "num_examples": 436895}], "download_size": 9821312, "dataset_size": 13986211}, {"config_name": "ab", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5012532, "num_examples": 159908}], "download_size": 3013706, "dataset_size": 5012532}, {"config_name": "abs", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4252728, "num_examples": 143986}], "download_size": 2567450, "dataset_size": 4252728}, {"config_name": "ace", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 19105673, "num_examples": 574712}], "download_size": 13573374, "dataset_size": 19105673}, {"config_name": "ady", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4444259, "num_examples": 148627}], "download_size": 2705754, "dataset_size": 4444259}, {"config_name": "ady-cyrl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4412556, "num_examples": 147884}], "download_size": 2682170, "dataset_size": 4412556}, {"config_name": "aeb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4305734, "num_examples": 145198}], "download_size": 2606368, "dataset_size": 4305734}, {"config_name": "aeb-arab", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4467930, "num_examples": 148796}], "download_size": 2722169, "dataset_size": 4467930}, {"config_name": "aeb-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12770359, "num_examples": 404946}], "download_size": 8886489, "dataset_size": 12770359}, {"config_name": "af", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 58561042, "num_examples": 1643153}], "download_size": 42539052, "dataset_size": 58561042}, {"config_name": "agq", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 1317, "num_examples": 33}], "download_size": 2906, "dataset_size": 1317}, {"config_name": "ak", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", 
"num_bytes": 14198715, "num_examples": 443037}], "download_size": 9991525, "dataset_size": 14198715}, {"config_name": "aln", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13811116, "num_examples": 432089}], "download_size": 9673418, "dataset_size": 13811116}, {"config_name": "als", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20691, "num_examples": 543}], "download_size": 17540, "dataset_size": 20691}, {"config_name": "alt", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 108390, "num_examples": 1814}], "download_size": 59046, "dataset_size": 108390}, {"config_name": "am", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5231176, "num_examples": 163038}], "download_size": 3187164, "dataset_size": 5231176}, {"config_name": "ami", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 21519, "num_examples": 686}], "download_size": 16640, "dataset_size": 21519}, {"config_name": "an", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 240345072, "num_examples": 5921087}], "download_size": 164895205, "dataset_size": 240345072}, {"config_name": "ang", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14275715, "num_examples": 443461}], "download_size": 10063758, "dataset_size": 14275715}, {"config_name": "anp", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8558258, "num_examples": 241612}], "download_size": 4381360, "dataset_size": 8558258}, {"config_name": "ar", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 291173732, "num_examples": 5724064}], "download_size": 159369497, "dataset_size": 291173732}, {"config_name": "arc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4473283, "num_examples": 150006}], "download_size": 2722619, "dataset_size": 4473283}, {"config_name": "arn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13879729, "num_examples": 433912}], "download_size": 9715431, "dataset_size": 13879729}, {"config_name": "arq", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4346991, "num_examples": 146004}], "download_size": 2636972, "dataset_size": 
4346991}, {"config_name": "ary", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5358568, "num_examples": 171568}], "download_size": 3313402, "dataset_size": 5358568}, {"config_name": "arz", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 81806333, "num_examples": 1669699}], "download_size": 49423508, "dataset_size": 81806333}, {"config_name": "as", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 21658610, "num_examples": 450074}], "download_size": 9641626, "dataset_size": 21658610}, {"config_name": "ase", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4252943, "num_examples": 143986}], "download_size": 2568106, "dataset_size": 4252943}, {"config_name": "ast", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 1385628786, "num_examples": 20696237}], "download_size": 955908362, "dataset_size": 1385628786}, {"config_name": "atj", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12996229, "num_examples": 411639}], "download_size": 9057557, "dataset_size": 12996229}, {"config_name": "av", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4722934, "num_examples": 153781}], "download_size": 2880103, "dataset_size": 4722934}, {"config_name": "avk", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13194485, "num_examples": 414598}], "download_size": 9200917, "dataset_size": 13194485}, {"config_name": "awa", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8599312, "num_examples": 242320}], "download_size": 4411751, "dataset_size": 8599312}, {"config_name": "ay", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14269432, "num_examples": 443521}], "download_size": 10029939, "dataset_size": 14269432}, {"config_name": "az", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 21049248, "num_examples": 516732}], "download_size": 14117527, "dataset_size": 21049248}, {"config_name": "azb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 30781587, "num_examples": 607562}], "download_size": 16028687, "dataset_size": 30781587}, {"config_name": "ba", "features": [{"name": 
"wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 11525351, "num_examples": 261509}], "download_size": 6733777, "dataset_size": 11525351}, {"config_name": "ban", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13674052, "num_examples": 426706}], "download_size": 9513747, "dataset_size": 13674052}, {"config_name": "ban-bali", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 50961, "num_examples": 748}], "download_size": 25817, "dataset_size": 50961}, {"config_name": "bar", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 54783034, "num_examples": 1566120}], "download_size": 40389830, "dataset_size": 54783034}, {"config_name": "bbc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12820895, "num_examples": 406960}], "download_size": 8917054, "dataset_size": 12820895}, {"config_name": "bcc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8017228, "num_examples": 241977}], "download_size": 4344579, "dataset_size": 8017228}, {"config_name": "be", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 30978832, "num_examples": 564184}], "download_size": 17461174, "dataset_size": 30978832}, {"config_name": "be-tarask", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 18931909, "num_examples": 374396}], "download_size": 10871239, "dataset_size": 18931909}, {"config_name": "bg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 200628708, "num_examples": 4383953}], "download_size": 137745533, "dataset_size": 200628708}, {"config_name": "bgn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 7999280, "num_examples": 241566}], "download_size": 4331249, "dataset_size": 7999280}, {"config_name": "bi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14040026, "num_examples": 438382}], "download_size": 9867032, "dataset_size": 14040026}, {"config_name": "bjn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8375348, "num_examples": 254558}], "download_size": 5722334, "dataset_size": 8375348}, {"config_name": "bm", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", 
"dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 18145787, "num_examples": 549694}], "download_size": 13129193, "dataset_size": 18145787}, {"config_name": "bn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 815803977, "num_examples": 9767284}], "download_size": 261147329, "dataset_size": 815803977}, {"config_name": "bo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 11671330, "num_examples": 278307}], "download_size": 5669602, "dataset_size": 11671330}, {"config_name": "bpy", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 15497749, "num_examples": 347458}], "download_size": 6991190, "dataset_size": 15497749}, {"config_name": "bqi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8017455, "num_examples": 241984}], "download_size": 4345123, "dataset_size": 8017455}, {"config_name": "br", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 58304963, "num_examples": 1653800}], "download_size": 42722031, "dataset_size": 58304963}, {"config_name": "brh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5328437, "num_examples": 171504}], "download_size": 3376189, "dataset_size": 5328437}, {"config_name": "bs", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 30441466, "num_examples": 858190}], "download_size": 21606575, "dataset_size": 30441466}, {"config_name": "btm", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4252525, "num_examples": 143980}], "download_size": 2567218, "dataset_size": 4252525}, {"config_name": "bto", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12841721, "num_examples": 407470}], "download_size": 8934218, "dataset_size": 12841721}, {"config_name": "bug", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 7595464, "num_examples": 235268}], "download_size": 5129941, "dataset_size": 7595464}, {"config_name": "bxr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4713699, "num_examples": 153707}], "download_size": 2869313, "dataset_size": 4713699}, {"config_name": "ca", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": 
[{"name": "label", "num_bytes": 408509932, "num_examples": 9936886}], "download_size": 288474980, "dataset_size": 408509932}, {"config_name": "cbk-zam", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14108232, "num_examples": 440345}], "download_size": 9920793, "dataset_size": 14108232}, {"config_name": "cdo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 6503254, "num_examples": 201362}], "download_size": 4137841, "dataset_size": 6503254}, {"config_name": "ce", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 28093148, "num_examples": 607767}], "download_size": 16367596, "dataset_size": 28093148}, {"config_name": "ceb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 332947091, "num_examples": 7769402}], "download_size": 219525737, "dataset_size": 332947091}, {"config_name": "ch", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13983906, "num_examples": 436785}], "download_size": 9817385, "dataset_size": 13983906}, {"config_name": "cho", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13950786, "num_examples": 435869}], "download_size": 9791296, "dataset_size": 13950786}, {"config_name": "chr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5386793, "num_examples": 172855}], "download_size": 3419676, "dataset_size": 5386793}, {"config_name": "chy", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13994916, "num_examples": 437007}], "download_size": 9830465, "dataset_size": 13994916}, {"config_name": "ckb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 23343034, "num_examples": 511183}], "download_size": 11459344, "dataset_size": 23343034}, {"config_name": "co", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 47080480, "num_examples": 1346929}], "download_size": 34551346, "dataset_size": 47080480}, {"config_name": "cps", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12849864, "num_examples": 407695}], "download_size": 8941921, "dataset_size": 12849864}, {"config_name": "cr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5516556, 
"num_examples": 176667}], "download_size": 3532952, "dataset_size": 5516556}, {"config_name": "crh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 10864382, "num_examples": 336709}], "download_size": 7542853, "dataset_size": 10864382}, {"config_name": "crh-cyrl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4419064, "num_examples": 148046}], "download_size": 2688683, "dataset_size": 4419064}, {"config_name": "crh-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14201429, "num_examples": 442905}], "download_size": 9986290, "dataset_size": 14201429}, {"config_name": "cs", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 140189244, "num_examples": 3384048}], "download_size": 97516751, "dataset_size": 140189244}, {"config_name": "csb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20177120, "num_examples": 619275}], "download_size": 14528772, "dataset_size": 20177120}, {"config_name": "cv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8047221, "num_examples": 215611}], "download_size": 4857718, "dataset_size": 8047221}, {"config_name": "cy", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 89241808, "num_examples": 2244550}], "download_size": 62686006, "dataset_size": 89241808}, {"config_name": "da", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 130931077, "num_examples": 3448894}], "download_size": 98202417, "dataset_size": 130931077}, {"config_name": "dag", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 2664957, "num_examples": 78534}], "download_size": 2052615, "dataset_size": 2664957}, {"config_name": "de", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 765398522, "num_examples": 17531361}], "download_size": 527642124, "dataset_size": 765398522}, {"config_name": "de-at", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 53043722, "num_examples": 1515373}], "download_size": 38761571, "dataset_size": 53043722}, {"config_name": "de-ch", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 53480908, "num_examples": 1528137}], 
"download_size": 39349412, "dataset_size": 53480908}, {"config_name": "de-formal", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4256391, "num_examples": 144061}], "download_size": 2571862, "dataset_size": 4256391}, {"config_name": "din", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12819746, "num_examples": 406591}], "download_size": 8922303, "dataset_size": 12819746}, {"config_name": "diq", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 7570161, "num_examples": 232674}], "download_size": 5057742, "dataset_size": 7570161}, {"config_name": "dsb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 16135830, "num_examples": 491423}], "download_size": 11412316, "dataset_size": 16135830}, {"config_name": "dtp", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13867373, "num_examples": 433733}], "download_size": 9720699, "dataset_size": 13867373}, {"config_name": "dty", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8839082, "num_examples": 246026}], "download_size": 4551845, "dataset_size": 8839082}, {"config_name": "dua", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 2631, "num_examples": 87}], "download_size": 3877, "dataset_size": 2631}, {"config_name": "dv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 81396462, "num_examples": 2103276}], "download_size": 45332104, "dataset_size": 81396462}, {"config_name": "dz", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8590239, "num_examples": 242196}], "download_size": 4406353, "dataset_size": 8590239}, {"config_name": "ee", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14377017, "num_examples": 447208}], "download_size": 10136064, "dataset_size": 14377017}, {"config_name": "egl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13068224, "num_examples": 413551}], "download_size": 9121776, "dataset_size": 13068224}, {"config_name": "el", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 32978562, "num_examples": 592016}], "download_size": 19577876, "dataset_size": 32978562}, {"config_name": "eml", 
"features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14768563, "num_examples": 458847}], "download_size": 10453636, "dataset_size": 14768563}, {"config_name": "en", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 6327454281, "num_examples": 81801560}], "download_size": 4224231068, "dataset_size": 6327454281}, {"config_name": "en-ca", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 73305274, "num_examples": 1909970}], "download_size": 53060194, "dataset_size": 73305274}, {"config_name": "en-gb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 115978412, "num_examples": 2520405}], "download_size": 78924421, "dataset_size": 115978412}, {"config_name": "en-us", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14815, "num_examples": 332}], "download_size": 9953, "dataset_size": 14815}, {"config_name": "eo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 256196064, "num_examples": 6285304}], "download_size": 177219679, "dataset_size": 256196064}, {"config_name": "es", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 730214298, "num_examples": 17233968}], "download_size": 514588069, "dataset_size": 730214298}, {"config_name": "es-419", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4355180, "num_examples": 146476}], "download_size": 2659218, "dataset_size": 4355180}, {"config_name": "es-formal", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4280933, "num_examples": 144717}], "download_size": 2592085, "dataset_size": 4280933}, {"config_name": "et", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 65123623, "num_examples": 1820762}], "download_size": 48197302, "dataset_size": 65123623}, {"config_name": "eu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 290282374, "num_examples": 7109758}], "download_size": 197889378, "dataset_size": 290282374}, {"config_name": "ext", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 223257222, "num_examples": 5359047}], "download_size": 147078789, "dataset_size": 223257222}, {"config_name": "fa", "features": [{"name": 
"wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 123727757, "num_examples": 2142642}], "download_size": 65952114, "dataset_size": 123727757}, {"config_name": "ff", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14116652, "num_examples": 440614}], "download_size": 9920388, "dataset_size": 14116652}, {"config_name": "fi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 286539944, "num_examples": 6905698}], "download_size": 209916638, "dataset_size": 286539944}, {"config_name": "fit", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20217258, "num_examples": 620391}], "download_size": 14566702, "dataset_size": 20217258}, {"config_name": "fj", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14159041, "num_examples": 441745}], "download_size": 9956108, "dataset_size": 14159041}, {"config_name": "fkv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4328482, "num_examples": 145988}], "download_size": 2619845, "dataset_size": 4328482}, {"config_name": "fo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 24474476, "num_examples": 731732}], "download_size": 17876981, "dataset_size": 24474476}, {"config_name": "fr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 774128723, "num_examples": 17908351}], "download_size": 534489308, "dataset_size": 774128723}, {"config_name": "frc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 17896106, "num_examples": 547258}], "download_size": 12953740, "dataset_size": 17896106}, {"config_name": "frp", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 40902510, "num_examples": 1191134}], "download_size": 29778105, "dataset_size": 40902510}, {"config_name": "frr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 16979214, "num_examples": 515350}], "download_size": 12069637, "dataset_size": 16979214}, {"config_name": "fur", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 42077410, "num_examples": 1221071}], "download_size": 30714082, "dataset_size": 42077410}, {"config_name": "ga", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": 
"lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 471527543, "num_examples": 11524282}], "download_size": 320967189, "dataset_size": 471527543}, {"config_name": "gag", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14149375, "num_examples": 440732}], "download_size": 9940551, "dataset_size": 14149375}, {"config_name": "gan", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 31572161, "num_examples": 905186}], "download_size": 18909564, "dataset_size": 31572161}, {"config_name": "gan-hans", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 31004794, "num_examples": 889875}], "download_size": 18566811, "dataset_size": 31004794}, {"config_name": "gan-hant", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4374444, "num_examples": 147098}], "download_size": 2657182, "dataset_size": 4374444}, {"config_name": "gcr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4311409, "num_examples": 145829}], "download_size": 2618211, "dataset_size": 4311409}, {"config_name": "gd", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 49316935, "num_examples": 1429457}], "download_size": 36220978, "dataset_size": 49316935}, {"config_name": "gl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 289484839, "num_examples": 7052226}], "download_size": 197315151, "dataset_size": 289484839}, {"config_name": "glk", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8327018, "num_examples": 249115}], "download_size": 4538325, "dataset_size": 8327018}, {"config_name": "gn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14212974, "num_examples": 442765}], "download_size": 10004863, "dataset_size": 14212974}, {"config_name": "gom", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4584575, "num_examples": 150273}], "download_size": 2780570, "dataset_size": 4584575}, {"config_name": "gom-deva", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8585678, "num_examples": 242131}], "download_size": 4400578, "dataset_size": 8585678}, {"config_name": "gom-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, 
{"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12783006, "num_examples": 405302}], "download_size": 8897342, "dataset_size": 12783006}, {"config_name": "gor", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14667616, "num_examples": 454512}], "download_size": 10319196, "dataset_size": 14667616}, {"config_name": "got", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5432139, "num_examples": 172951}], "download_size": 3435531, "dataset_size": 5432139}, {"config_name": "grc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4494817, "num_examples": 149631}], "download_size": 2746170, "dataset_size": 4494817}, {"config_name": "gu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 23788894, "num_examples": 486140}], "download_size": 10779200, "dataset_size": 23788894}, {"config_name": "guc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 1419, "num_examples": 38}], "download_size": 3054, "dataset_size": 1419}, {"config_name": "guw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 118, "num_examples": 4}], "download_size": 1864, "dataset_size": 118}, {"config_name": "gv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20683485, "num_examples": 631005}], "download_size": 14894590, "dataset_size": 20683485}, {"config_name": "ha", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14716168, "num_examples": 455836}], "download_size": 10421790, "dataset_size": 14716168}, {"config_name": "hak", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 6128644, "num_examples": 193036}], "download_size": 3991729, "dataset_size": 6128644}, {"config_name": "haw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14158084, "num_examples": 441511}], "download_size": 9952975, "dataset_size": 14158084}, {"config_name": "he", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 43629050, "num_examples": 884809}], "download_size": 27221301, "dataset_size": 43629050}, {"config_name": "hi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 37237187, 
"num_examples": 668964}], "download_size": 17804873, "dataset_size": 37237187}, {"config_name": "hif", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14457954, "num_examples": 449009}], "download_size": 10166264, "dataset_size": 14457954}, {"config_name": "hif-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14519845, "num_examples": 454037}], "download_size": 10240704, "dataset_size": 14519845}, {"config_name": "hil", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12928914, "num_examples": 409962}], "download_size": 9009705, "dataset_size": 12928914}, {"config_name": "ho", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13950504, "num_examples": 435857}], "download_size": 9790849, "dataset_size": 13950504}, {"config_name": "hr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 61272623, "num_examples": 1720527}], "download_size": 45307411, "dataset_size": 61272623}, {"config_name": "hrx", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12869295, "num_examples": 407823}], "download_size": 8964114, "dataset_size": 12869295}, {"config_name": "hsb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 23720349, "num_examples": 707100}], "download_size": 17145693, "dataset_size": 23720349}, {"config_name": "ht", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 16835529, "num_examples": 509955}], "download_size": 11880404, "dataset_size": 16835529}, {"config_name": "hu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 85054175, "num_examples": 2200589}], "download_size": 64143342, "dataset_size": 85054175}, {"config_name": "hu-formal", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4252810, "num_examples": 143986}], "download_size": 2567582, "dataset_size": 4252810}, {"config_name": "hy", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 39339286, "num_examples": 773925}], "download_size": 22108994, "dataset_size": 39339286}, {"config_name": "hyw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5443608, "num_examples": 166902}], "download_size": 
3238370, "dataset_size": 5443608}, {"config_name": "hz", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13948574, "num_examples": 435804}], "download_size": 9788697, "dataset_size": 13948574}, {"config_name": "ia", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 229143237, "num_examples": 5616433}], "download_size": 155877454, "dataset_size": 229143237}, {"config_name": "id", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 95220928, "num_examples": 2512331}], "download_size": 69525046, "dataset_size": 95220928}, {"config_name": "ie", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 225725262, "num_examples": 5533032}], "download_size": 153371930, "dataset_size": 225725262}, {"config_name": "ig", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20109388, "num_examples": 617044}], "download_size": 14475407, "dataset_size": 20109388}, {"config_name": "ii", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4310418, "num_examples": 145332}], "download_size": 2609723, "dataset_size": 4310418}, {"config_name": "ik", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13989609, "num_examples": 436958}], "download_size": 9823174, "dataset_size": 13989609}, {"config_name": "ike-cans", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4352278, "num_examples": 146355}], "download_size": 2645174, "dataset_size": 4352278}, {"config_name": "ike-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13851135, "num_examples": 432932}], "download_size": 9714057, "dataset_size": 13851135}, {"config_name": "ilo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 15955483, "num_examples": 480555}], "download_size": 11141942, "dataset_size": 15955483}, {"config_name": "inh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4634360, "num_examples": 152226}], "download_size": 2831580, "dataset_size": 4634360}, {"config_name": "io", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 233656822, "num_examples": 5757440}], "download_size": 159720058, "dataset_size": 233656822}, 
{"config_name": "is", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 51679396, "num_examples": 1483610}], "download_size": 37965494, "dataset_size": 51679396}, {"config_name": "it", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 536601426, "num_examples": 12631487}], "download_size": 375025347, "dataset_size": 536601426}, {"config_name": "iu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5360588, "num_examples": 172215}], "download_size": 3402239, "dataset_size": 5360588}, {"config_name": "ja", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 140641579, "num_examples": 2917962}], "download_size": 92145329, "dataset_size": 140641579}, {"config_name": "jam", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 18849751, "num_examples": 571777}], "download_size": 13684422, "dataset_size": 18849751}, {"config_name": "jbo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14301985, "num_examples": 446512}], "download_size": 9994516, "dataset_size": 14301985}, {"config_name": "jv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 27232302, "num_examples": 794181}], "download_size": 19651565, "dataset_size": 27232302}, {"config_name": "ka", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 24073345, "num_examples": 399546}], "download_size": 11679979, "dataset_size": 24073345}, {"config_name": "kaa", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14082184, "num_examples": 439411}], "download_size": 9902820, "dataset_size": 14082184}, {"config_name": "kab", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 18459676, "num_examples": 557857}], "download_size": 13384218, "dataset_size": 18459676}, {"config_name": "kbd", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4594409, "num_examples": 149733}], "download_size": 2759503, "dataset_size": 4594409}, {"config_name": "kbd-cyrl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4417661, "num_examples": 148017}], "download_size": 2687531, "dataset_size": 4417661}, {"config_name": "kbp", "features": [{"name": 
"wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12873178, "num_examples": 408039}], "download_size": 8965474, "dataset_size": 12873178}, {"config_name": "kea", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12793700, "num_examples": 405901}], "download_size": 8896866, "dataset_size": 12793700}, {"config_name": "kg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 40949149, "num_examples": 1193499}], "download_size": 29766747, "dataset_size": 40949149}, {"config_name": "khw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4308653, "num_examples": 145279}], "download_size": 2608581, "dataset_size": 4308653}, {"config_name": "ki", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14056900, "num_examples": 439015}], "download_size": 9875534, "dataset_size": 14056900}, {"config_name": "kj", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13881723, "num_examples": 433861}], "download_size": 9733715, "dataset_size": 13881723}, {"config_name": "kjp", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8504302, "num_examples": 240339}], "download_size": 4341523, "dataset_size": 8504302}, {"config_name": "kk", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 19216115, "num_examples": 428880}], "download_size": 11577682, "dataset_size": 19216115}, {"config_name": "kk-arab", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 7241749, "num_examples": 211731}], "download_size": 4487032, "dataset_size": 7241749}, {"config_name": "kk-kz", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4937945, "num_examples": 160027}], "download_size": 3062906, "dataset_size": 4937945}, {"config_name": "kk-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 22197825, "num_examples": 677162}], "download_size": 16072332, "dataset_size": 22197825}, {"config_name": "kk-tr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20060635, "num_examples": 616521}], "download_size": 14438929, "dataset_size": 20060635}, {"config_name": "ko", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", 
"dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 60335212, "num_examples": 1364440}], "download_size": 39186630, "dataset_size": 60335212}, {"config_name": "ko-kp", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4338717, "num_examples": 146150}], "download_size": 2630925, "dataset_size": 4338717}, {"config_name": "koi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4737590, "num_examples": 155082}], "download_size": 2894674, "dataset_size": 4737590}, {"config_name": "kr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13886057, "num_examples": 433990}], "download_size": 9737602, "dataset_size": 13886057}, {"config_name": "krc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4646136, "num_examples": 151026}], "download_size": 2785454, "dataset_size": 4646136}, {"config_name": "kri", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12798530, "num_examples": 406032}], "download_size": 8902330, "dataset_size": 12798530}, {"config_name": "krj", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13850324, "num_examples": 433444}], "download_size": 9703460, "dataset_size": 13850324}, {"config_name": "krl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12788020, "num_examples": 405729}], "download_size": 8893337, "dataset_size": 12788020}, {"config_name": "ks", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4390604, "num_examples": 147033}], "download_size": 2671069, "dataset_size": 4390604}, {"config_name": "ks-deva", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8567518, "num_examples": 241832}], "download_size": 4387687, "dataset_size": 8567518}, {"config_name": "ksh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20394712, "num_examples": 624523}], "download_size": 14698860, "dataset_size": 20394712}, {"config_name": "ku", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8037777, "num_examples": 239515}], "download_size": 5306097, "dataset_size": 8037777}, {"config_name": "ku-arab", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], 
"splits": [{"name": "label", "num_bytes": 4577826, "num_examples": 151290}], "download_size": 2796159, "dataset_size": 4577826}, {"config_name": "ku-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14683841, "num_examples": 458802}], "download_size": 10371977, "dataset_size": 14683841}, {"config_name": "kum", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4252739, "num_examples": 143985}], "download_size": 2567503, "dataset_size": 4252739}, {"config_name": "kv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4946978, "num_examples": 158888}], "download_size": 2997865, "dataset_size": 4946978}, {"config_name": "kw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20245535, "num_examples": 621432}], "download_size": 14581378, "dataset_size": 20245535}, {"config_name": "ky", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8909613, "num_examples": 235165}], "download_size": 5462115, "dataset_size": 8909613}, {"config_name": "la", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 299766395, "num_examples": 7085082}], "download_size": 201477460, "dataset_size": 299766395}, {"config_name": "lad", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20336417, "num_examples": 622775}], "download_size": 14653199, "dataset_size": 20336417}, {"config_name": "lb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 56473066, "num_examples": 1601093}], "download_size": 41410732, "dataset_size": 56473066}, {"config_name": "lbe", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4501470, "num_examples": 149898}], "download_size": 2744786, "dataset_size": 4501470}, {"config_name": "lez", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4890798, "num_examples": 155936}], "download_size": 2959653, "dataset_size": 4890798}, {"config_name": "lfn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14709210, "num_examples": 456719}], "download_size": 10408539, "dataset_size": 14709210}, {"config_name": "lg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13979286, 
"num_examples": 436009}], "download_size": 9802779, "dataset_size": 13979286}, {"config_name": "li", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 43476868, "num_examples": 1253970}], "download_size": 31750932, "dataset_size": 43476868}, {"config_name": "lij", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 42327066, "num_examples": 1227346}], "download_size": 30898971, "dataset_size": 42327066}, {"config_name": "liv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12781331, "num_examples": 405236}], "download_size": 8895889, "dataset_size": 12781331}, {"config_name": "lki", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8039166, "num_examples": 242526}], "download_size": 4363703, "dataset_size": 8039166}, {"config_name": "lld", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 90305, "num_examples": 2634}], "download_size": 69672, "dataset_size": 90305}, {"config_name": "lmo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 18287638, "num_examples": 545398}], "download_size": 13130119, "dataset_size": 18287638}, {"config_name": "ln", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14123637, "num_examples": 439731}], "download_size": 9915851, "dataset_size": 14123637}, {"config_name": "lo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 9905189, "num_examples": 271710}], "download_size": 5313218, "dataset_size": 9905189}, {"config_name": "loz", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13695602, "num_examples": 428723}], "download_size": 9581113, "dataset_size": 13695602}, {"config_name": "lt", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 39902419, "num_examples": 1096727}], "download_size": 29185765, "dataset_size": 39902419}, {"config_name": "ltg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13884707, "num_examples": 433453}], "download_size": 9736637, "dataset_size": 13884707}, {"config_name": "lus", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13695197, "num_examples": 428712}], "download_size": 9580538, "dataset_size": 
13695197}, {"config_name": "luz", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8459036, "num_examples": 253454}], "download_size": 4687414, "dataset_size": 8459036}, {"config_name": "lv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 27242119, "num_examples": 764753}], "download_size": 19676667, "dataset_size": 27242119}, {"config_name": "lzh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 25067538, "num_examples": 685152}], "download_size": 14998856, "dataset_size": 25067538}, {"config_name": "mdf", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4634268, "num_examples": 152141}], "download_size": 2820744, "dataset_size": 4634268}, {"config_name": "mg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 43863002, "num_examples": 1271074}], "download_size": 32016826, "dataset_size": 43863002}, {"config_name": "mh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13775721, "num_examples": 431162}], "download_size": 9644397, "dataset_size": 13775721}, {"config_name": "mi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20857040, "num_examples": 637118}], "download_size": 15060301, "dataset_size": 20857040}, {"config_name": "min", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 53044258, "num_examples": 1464128}], "download_size": 38587450, "dataset_size": 53044258}, {"config_name": "mk", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 24087229, "num_examples": 449241}], "download_size": 12217912, "dataset_size": 24087229}, {"config_name": "ml", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 189266798, "num_examples": 2664923}], "download_size": 71344031, "dataset_size": 189266798}, {"config_name": "mn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 9311543, "num_examples": 219695}], "download_size": 5272784, "dataset_size": 9311543}, {"config_name": "mni", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8696893, "num_examples": 243616}], "download_size": 4470994, "dataset_size": 8696893}, {"config_name": "mnw", "features": [{"name": 
"wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8861861, "num_examples": 244906}], "download_size": 4517726, "dataset_size": 8861861}, {"config_name": "mo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5377009, "num_examples": 172144}], "download_size": 3405661, "dataset_size": 5377009}, {"config_name": "mr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 26855182, "num_examples": 526220}], "download_size": 12358679, "dataset_size": 26855182}, {"config_name": "mrh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 68, "num_examples": 2}], "download_size": 1820, "dataset_size": 68}, {"config_name": "mrj", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5007903, "num_examples": 160889}], "download_size": 3073431, "dataset_size": 5007903}, {"config_name": "ms", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 64674328, "num_examples": 1803714}], "download_size": 47165217, "dataset_size": 64674328}, {"config_name": "ms-arab", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 136496, "num_examples": 2961}], "download_size": 92316, "dataset_size": 136496}, {"config_name": "mt", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 22632686, "num_examples": 682867}], "download_size": 16352572, "dataset_size": 22632686}, {"config_name": "mus", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14013416, "num_examples": 437688}], "download_size": 9835239, "dataset_size": 14013416}, {"config_name": "mwl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14493299, "num_examples": 448926}], "download_size": 10225888, "dataset_size": 14493299}, {"config_name": "my", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 16182182, "num_examples": 345096}], "download_size": 7981905, "dataset_size": 16182182}, {"config_name": "mzn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 17973941, "num_examples": 447870}], "download_size": 9174617, "dataset_size": 17973941}, {"config_name": "na", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", 
"dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13992666, "num_examples": 436956}], "download_size": 9823328, "dataset_size": 13992666}, {"config_name": "nah", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14490294, "num_examples": 449748}], "download_size": 10192501, "dataset_size": 14490294}, {"config_name": "nan-hani", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 191, "num_examples": 6}], "download_size": 1925, "dataset_size": 191}, {"config_name": "nap", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 42362346, "num_examples": 1229161}], "download_size": 30918265, "dataset_size": 42362346}, {"config_name": "nb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 142554768, "num_examples": 3688026}], "download_size": 105549981, "dataset_size": 142554768}, {"config_name": "nds", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 58766114, "num_examples": 1666813}], "download_size": 43421948, "dataset_size": 58766114}, {"config_name": "nds-nl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 44121756, "num_examples": 1273149}], "download_size": 32201410, "dataset_size": 44121756}, {"config_name": "ne", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 11925386, "num_examples": 295006}], "download_size": 6265232, "dataset_size": 11925386}, {"config_name": "new", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 16906308, "num_examples": 350362}], "download_size": 7680329, "dataset_size": 16906308}, {"config_name": "ng", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13870754, "num_examples": 433582}], "download_size": 9723795, "dataset_size": 13870754}, {"config_name": "nia", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20649, "num_examples": 515}], "download_size": 16535, "dataset_size": 20649}, {"config_name": "niu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12794247, "num_examples": 405902}], "download_size": 8897260, "dataset_size": 12794247}, {"config_name": "nl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5016576732, 
"num_examples": 61931959}], "download_size": 3380404239, "dataset_size": 5016576732}, {"config_name": "nn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 99997815, "num_examples": 2708994}], "download_size": 74736304, "dataset_size": 99997815}, {"config_name": "no", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 2934, "num_examples": 64}], "download_size": 4108, "dataset_size": 2934}, {"config_name": "nod", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4322068, "num_examples": 145566}], "download_size": 2618106, "dataset_size": 4322068}, {"config_name": "nov", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14150434, "num_examples": 440903}], "download_size": 9947798, "dataset_size": 14150434}, {"config_name": "nqo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8094271, "num_examples": 243184}], "download_size": 4398836, "dataset_size": 8094271}, {"config_name": "nrm", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 41330956, "num_examples": 1203295}], "download_size": 30084065, "dataset_size": 41330956}, {"config_name": "nso", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14178321, "num_examples": 443205}], "download_size": 9959708, "dataset_size": 14178321}, {"config_name": "nv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 15351770, "num_examples": 455188}], "download_size": 10472240, "dataset_size": 15351770}, {"config_name": "ny", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13989813, "num_examples": 436764}], "download_size": 9821588, "dataset_size": 13989813}, {"config_name": "nys", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13092059, "num_examples": 413241}], "download_size": 9153100, "dataset_size": 13092059}, {"config_name": "oc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 266612548, "num_examples": 6569770}], "download_size": 180156462, "dataset_size": 266612548}, {"config_name": "olo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13200388, "num_examples": 416935}], "download_size": 9214968, "dataset_size": 
13200388}, {"config_name": "om", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5476389, "num_examples": 175314}], "download_size": 3496637, "dataset_size": 5476389}, {"config_name": "or", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 22798709, "num_examples": 470237}], "download_size": 10322832, "dataset_size": 22798709}, {"config_name": "os", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5946062, "num_examples": 177054}], "download_size": 3583703, "dataset_size": 5946062}, {"config_name": "ota", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8015024, "num_examples": 241903}], "download_size": 4343478, "dataset_size": 8015024}, {"config_name": "pa", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20505754, "num_examples": 481522}], "download_size": 10552147, "dataset_size": 20505754}, {"config_name": "pam", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14527964, "num_examples": 451253}], "download_size": 10242443, "dataset_size": 14527964}, {"config_name": "pap", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 54505401, "num_examples": 1449881}], "download_size": 40415776, "dataset_size": 54505401}, {"config_name": "pcd", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 42132826, "num_examples": 1221362}], "download_size": 30766812, "dataset_size": 42132826}, {"config_name": "pdc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14435256, "num_examples": 448055}], "download_size": 10178322, "dataset_size": 14435256}, {"config_name": "pdt", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13994892, "num_examples": 437200}], "download_size": 9819388, "dataset_size": 13994892}, {"config_name": "pfl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 15461023, "num_examples": 474198}], "download_size": 10893651, "dataset_size": 15461023}, {"config_name": "pi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8913354, "num_examples": 250251}], "download_size": 4651392, "dataset_size": 8913354}, {"config_name": "pih", "features": [{"name": 
"wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13971081, "num_examples": 436214}], "download_size": 9810653, "dataset_size": 13971081}, {"config_name": "pl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 426030491, "num_examples": 10025139}], "download_size": 295767506, "dataset_size": 426030491}, {"config_name": "pms", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 51268512, "num_examples": 1477043}], "download_size": 37698831, "dataset_size": 51268512}, {"config_name": "pnb", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 16192682, "num_examples": 409037}], "download_size": 9196626, "dataset_size": 16192682}, {"config_name": "pnt", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4439173, "num_examples": 148336}], "download_size": 2703117, "dataset_size": 4439173}, {"config_name": "prg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 17940420, "num_examples": 544030}], "download_size": 12958482, "dataset_size": 17940420}, {"config_name": "ps", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8860902, "num_examples": 259186}], "download_size": 4916502, "dataset_size": 8860902}, {"config_name": "pt", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 491184040, "num_examples": 11574568}], "download_size": 340831923, "dataset_size": 491184040}, {"config_name": "pt-br", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 318857431, "num_examples": 7782980}], "download_size": 223442911, "dataset_size": 318857431}, {"config_name": "pwn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8500, "num_examples": 269}], "download_size": 8738, "dataset_size": 8500}, {"config_name": "qu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 15254702, "num_examples": 468823}], "download_size": 10750388, "dataset_size": 15254702}, {"config_name": "quc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 32, "num_examples": 1}], "download_size": 1772, "dataset_size": 32}, {"config_name": "qug", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, 
{"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13798264, "num_examples": 431733}], "download_size": 9661685, "dataset_size": 13798264}, {"config_name": "rgn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 17001688, "num_examples": 519871}], "download_size": 12258201, "dataset_size": 17001688}, {"config_name": "rif", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13792951, "num_examples": 431588}], "download_size": 9657698, "dataset_size": 13792951}, {"config_name": "rm", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 44450577, "num_examples": 1284908}], "download_size": 32519630, "dataset_size": 44450577}, {"config_name": "rmc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 159, "num_examples": 4}], "download_size": 1963, "dataset_size": 159}, {"config_name": "rmy", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5610156, "num_examples": 179191}], "download_size": 3608283, "dataset_size": 5610156}, {"config_name": "rn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13935534, "num_examples": 435271}], "download_size": 9779486, "dataset_size": 13935534}, {"config_name": "ro", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 247469452, "num_examples": 5878366}], "download_size": 177525205, "dataset_size": 247469452}, {"config_name": "roa-tara", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14425120, "num_examples": 448972}], "download_size": 10152875, "dataset_size": 14425120}, {"config_name": "ru", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 405103215, "num_examples": 7485811}], "download_size": 257215625, "dataset_size": 405103215}, {"config_name": "rue", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4953403, "num_examples": 159530}], "download_size": 3037824, "dataset_size": 4953403}, {"config_name": "rup", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14459686, "num_examples": 450345}], "download_size": 10198398, "dataset_size": 14459686}, {"config_name": "ruq-cyrl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": 
"label", "num_bytes": 4434290, "num_examples": 148404}], "download_size": 2700920, "dataset_size": 4434290}, {"config_name": "ruq-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13783683, "num_examples": 430978}], "download_size": 9656941, "dataset_size": 13783683}, {"config_name": "rw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14090196, "num_examples": 439172}], "download_size": 9901257, "dataset_size": 14090196}, {"config_name": "rwr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8568706, "num_examples": 241841}], "download_size": 4388475, "dataset_size": 8568706}, {"config_name": "ryu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 2852, "num_examples": 82}], "download_size": 4237, "dataset_size": 2852}, {"config_name": "sa", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 21404327, "num_examples": 455674}], "download_size": 9692464, "dataset_size": 21404327}, {"config_name": "sat", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 10810040, "num_examples": 284911}], "download_size": 5750917, "dataset_size": 10810040}, {"config_name": "sc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 47195572, "num_examples": 1348137}], "download_size": 34521764, "dataset_size": 47195572}, {"config_name": "scn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 43458983, "num_examples": 1259067}], "download_size": 31775157, "dataset_size": 43458983}, {"config_name": "sco", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 56960413, "num_examples": 1611092}], "download_size": 41724559, "dataset_size": 56960413}, {"config_name": "sd", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14257513, "num_examples": 363318}], "download_size": 7844047, "dataset_size": 14257513}, {"config_name": "sdc", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13975497, "num_examples": 436913}], "download_size": 9800517, "dataset_size": 13975497}, {"config_name": "se", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 23962268, "num_examples": 711439}], "download_size": 
17409387, "dataset_size": 23962268}, {"config_name": "sei", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13827581, "num_examples": 432520}], "download_size": 9684192, "dataset_size": 13827581}, {"config_name": "sg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13913524, "num_examples": 434751}], "download_size": 9761739, "dataset_size": 13913524}, {"config_name": "sh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 30173635, "num_examples": 746207}], "download_size": 20133594, "dataset_size": 30173635}, {"config_name": "shi-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13783218, "num_examples": 430968}], "download_size": 9656828, "dataset_size": 13783218}, {"config_name": "shi-tfng", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4308577, "num_examples": 145279}], "download_size": 2608525, "dataset_size": 4308577}, {"config_name": "shn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 10139002, "num_examples": 260808}], "download_size": 4952168, "dataset_size": 10139002}, {"config_name": "shy-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4255322, "num_examples": 144058}], "download_size": 2570625, "dataset_size": 4255322}, {"config_name": "si", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 7405400, "num_examples": 189718}], "download_size": 4270591, "dataset_size": 7405400}, {"config_name": "sjd", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4300688, "num_examples": 145047}], "download_size": 2604357, "dataset_size": 4300688}, {"config_name": "sje", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20970223, "num_examples": 637639}], "download_size": 15120381, "dataset_size": 20970223}, {"config_name": "sju", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4315103, "num_examples": 145655}], "download_size": 2620763, "dataset_size": 4315103}, {"config_name": "sk", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 75586366, "num_examples": 2050873}], "download_size": 54951330, "dataset_size": 75586366}, {"config_name": 
"skr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4274062, "num_examples": 144443}], "download_size": 2585286, "dataset_size": 4274062}, {"config_name": "sl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 157883240, "num_examples": 4112048}], "download_size": 118047353, "dataset_size": 157883240}, {"config_name": "sli", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13909208, "num_examples": 434986}], "download_size": 9745964, "dataset_size": 13909208}, {"config_name": "sm", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13984823, "num_examples": 436830}], "download_size": 9817472, "dataset_size": 13984823}, {"config_name": "sma", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20653595, "num_examples": 630437}], "download_size": 14902319, "dataset_size": 20653595}, {"config_name": "smj", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 19640206, "num_examples": 604326}], "download_size": 14133964, "dataset_size": 19640206}, {"config_name": "smn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 10902411, "num_examples": 337543}], "download_size": 7576850, "dataset_size": 10902411}, {"config_name": "sms", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4462345, "num_examples": 149355}], "download_size": 2741038, "dataset_size": 4462345}, {"config_name": "sn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20116601, "num_examples": 618231}], "download_size": 14463728, "dataset_size": 20116601}, {"config_name": "sq", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 304708913, "num_examples": 7311820}], "download_size": 225592169, "dataset_size": 304708913}, {"config_name": "sr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 52787253, "num_examples": 1018361}], "download_size": 31364006, "dataset_size": 52787253}, {"config_name": "sr-ec", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 9237541, "num_examples": 248556}], "download_size": 5875548, "dataset_size": 9237541}, {"config_name": "sr-el", "features": [{"name": "wikidata_id", "dtype": 
"string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 48848162, "num_examples": 1418824}], "download_size": 35859120, "dataset_size": 48848162}, {"config_name": "srq", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12796525, "num_examples": 405957}], "download_size": 8899493, "dataset_size": 12796525}, {"config_name": "ss", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13823630, "num_examples": 432423}], "download_size": 9682165, "dataset_size": 13823630}, {"config_name": "st", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13938937, "num_examples": 435419}], "download_size": 9785161, "dataset_size": 13938937}, {"config_name": "stq", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14484394, "num_examples": 449885}], "download_size": 10228446, "dataset_size": 14484394}, {"config_name": "su", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20025826, "num_examples": 583096}], "download_size": 14042822, "dataset_size": 20025826}, {"config_name": "sv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 339074900, "num_examples": 8115455}], "download_size": 236022796, "dataset_size": 339074900}, {"config_name": "sw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 50612064, "num_examples": 1465385}], "download_size": 37096369, "dataset_size": 50612064}, {"config_name": "szl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 16772062, "num_examples": 500107}], "download_size": 11868254, "dataset_size": 16772062}, {"config_name": "szy", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4332021, "num_examples": 146136}], "download_size": 2633271, "dataset_size": 4332021}, {"config_name": "ta", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 31251824, "num_examples": 546558}], "download_size": 15157673, "dataset_size": 31251824}, {"config_name": "tay", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4345269, "num_examples": 146938}], "download_size": 2632535, "dataset_size": 4345269}, {"config_name": "tcy", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, 
{"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 8723594, "num_examples": 244350}], "download_size": 4487471, "dataset_size": 8723594}, {"config_name": "te", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 27587665, "num_examples": 569615}], "download_size": 13669398, "dataset_size": 27587665}, {"config_name": "tet", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 15092299, "num_examples": 466244}], "download_size": 10702917, "dataset_size": 15092299}, {"config_name": "tg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 12643125, "num_examples": 304625}], "download_size": 7622522, "dataset_size": 12643125}, {"config_name": "tg-cyrl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4504034, "num_examples": 149533}], "download_size": 2755000, "dataset_size": 4504034}, {"config_name": "tg-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 19845835, "num_examples": 610020}], "download_size": 14264492, "dataset_size": 19845835}, {"config_name": "th", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 32693750, "num_examples": 537447}], "download_size": 15849247, "dataset_size": 32693750}, {"config_name": "ti", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4366995, "num_examples": 146479}], "download_size": 2648869, "dataset_size": 4366995}, {"config_name": "tk", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5797050, "num_examples": 184302}], "download_size": 3728802, "dataset_size": 5797050}, {"config_name": "tl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13661554, "num_examples": 387377}], "download_size": 9456413, "dataset_size": 13661554}, {"config_name": "tly", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4309748, "num_examples": 145312}], "download_size": 2609307, "dataset_size": 4309748}, {"config_name": "tly-cyrl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 35, "num_examples": 1}], "download_size": 1793, "dataset_size": 35}, {"config_name": "tn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 
13936132, "num_examples": 435219}], "download_size": 9780279, "dataset_size": 13936132}, {"config_name": "to", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13980327, "num_examples": 436460}], "download_size": 9810650, "dataset_size": 13980327}, {"config_name": "tpi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14169019, "num_examples": 442133}], "download_size": 9961827, "dataset_size": 14169019}, {"config_name": "tr", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 72134544, "num_examples": 1770267}], "download_size": 51032484, "dataset_size": 72134544}, {"config_name": "tru", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5322844, "num_examples": 171327}], "download_size": 3371105, "dataset_size": 5322844}, {"config_name": "trv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 94285, "num_examples": 3109}], "download_size": 65138, "dataset_size": 94285}, {"config_name": "ts", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13943481, "num_examples": 435408}], "download_size": 9783789, "dataset_size": 13943481}, {"config_name": "tt", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 24182976, "num_examples": 548502}], "download_size": 14868166, "dataset_size": 24182976}, {"config_name": "tt-cyrl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4943914, "num_examples": 158198}], "download_size": 3048932, "dataset_size": 4943914}, {"config_name": "tt-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13842972, "num_examples": 432513}], "download_size": 9702714, "dataset_size": 13842972}, {"config_name": "tum", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13924159, "num_examples": 435110}], "download_size": 9770501, "dataset_size": 13924159}, {"config_name": "tw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13830508, "num_examples": 432669}], "download_size": 9688164, "dataset_size": 13830508}, {"config_name": "ty", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 16816401, "num_examples": 507332}], "download_size": 12098154, 
"dataset_size": 16816401}, {"config_name": "tyv", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4583082, "num_examples": 149929}], "download_size": 2779632, "dataset_size": 4583082}, {"config_name": "tzm", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4253588, "num_examples": 144002}], "download_size": 2569067, "dataset_size": 4253588}, {"config_name": "udm", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4854947, "num_examples": 156300}], "download_size": 2958444, "dataset_size": 4854947}, {"config_name": "ug-arab", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4316690, "num_examples": 145443}], "download_size": 2614962, "dataset_size": 4316690}, {"config_name": "ug-latn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13786474, "num_examples": 431056}], "download_size": 9659723, "dataset_size": 13786474}, {"config_name": "uk", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 251058352, "num_examples": 5108733}], "download_size": 168140976, "dataset_size": 251058352}, {"config_name": "ur", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 57063750, "num_examples": 987011}], "download_size": 28328459, "dataset_size": 57063750}, {"config_name": "uz", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 11731793, "num_examples": 344615}], "download_size": 8102734, "dataset_size": 11731793}, {"config_name": "uz-cyrl", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4252574, "num_examples": 143981}], "download_size": 2567325, "dataset_size": 4252574}, {"config_name": "ve", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 13932174, "num_examples": 435216}], "download_size": 9777266, "dataset_size": 13932174}, {"config_name": "vec", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 52081230, "num_examples": 1466867}], "download_size": 37307805, "dataset_size": 52081230}, {"config_name": "vep", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 6174898, "num_examples": 192298}], "download_size": 3994582, "dataset_size": 6174898}, {"config_name": "vi", 
"features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 246835524, "num_examples": 5743737}], "download_size": 172949263, "dataset_size": 246835524}, {"config_name": "vls", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 42789297, "num_examples": 1239359}], "download_size": 31228294, "dataset_size": 42789297}, {"config_name": "vmf", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 18352990, "num_examples": 555205}], "download_size": 13289296, "dataset_size": 18352990}, {"config_name": "vo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 228352533, "num_examples": 5610875}], "download_size": 155496988, "dataset_size": 228352533}, {"config_name": "vot", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5406190, "num_examples": 173486}], "download_size": 3439433, "dataset_size": 5406190}, {"config_name": "wa", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 49235347, "num_examples": 1426584}], "download_size": 36167816, "dataset_size": 49235347}, {"config_name": "war", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 190306474, "num_examples": 4449062}], "download_size": 133786270, "dataset_size": 190306474}, {"config_name": "wls", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4033, "num_examples": 104}], "download_size": 5150, "dataset_size": 4033}, {"config_name": "wo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 40961626, "num_examples": 1193626}], "download_size": 29778666, "dataset_size": 40961626}, {"config_name": "wuu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 40570130, "num_examples": 1127741}], "download_size": 24209117, "dataset_size": 40570130}, {"config_name": "wya", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 28, "num_examples": 1}], "download_size": 1740, "dataset_size": 28}, {"config_name": "xal", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4475344, "num_examples": 149984}], "download_size": 2722459, "dataset_size": 4475344}, {"config_name": "xh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", 
"dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 20036194, "num_examples": 615514}], "download_size": 14405310, "dataset_size": 20036194}, {"config_name": "xmf", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5943645, "num_examples": 169507}], "download_size": 3418593, "dataset_size": 5943645}, {"config_name": "xsy", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4262789, "num_examples": 144305}], "download_size": 2573349, "dataset_size": 4262789}, {"config_name": "yav", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4070, "num_examples": 102}], "download_size": 4718, "dataset_size": 4070}, {"config_name": "yi", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 5495313, "num_examples": 170277}], "download_size": 3373820, "dataset_size": 5495313}, {"config_name": "yo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 25424749, "num_examples": 724345}], "download_size": 18086773, "dataset_size": 25424749}, {"config_name": "za", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 15159230, "num_examples": 365892}], "download_size": 7774767, "dataset_size": 15159230}, {"config_name": "zea", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 14538518, "num_examples": 451577}], "download_size": 10262897, "dataset_size": 14538518}, {"config_name": "zgh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 4253917, "num_examples": 144006}], "download_size": 2569373, "dataset_size": 4253917}, {"config_name": "zh", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 264353677, "num_examples": 5424320}], "download_size": 174420118, "dataset_size": 264353677}, {"config_name": "zh-cn", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 42868611, "num_examples": 1158755}], "download_size": 27243799, "dataset_size": 42868611}, {"config_name": "zh-hans", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 57233156, "num_examples": 1483225}], "download_size": 36583522, "dataset_size": 57233156}, {"config_name": "zh-hant", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], 
"splits": [{"name": "label", "num_bytes": 53502814, "num_examples": 1356560}], "download_size": 36755083, "dataset_size": 53502814}, {"config_name": "zh-hk", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 15325323, "num_examples": 408391}], "download_size": 10455809, "dataset_size": 15325323}, {"config_name": "zh-mo", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 6568267, "num_examples": 180950}], "download_size": 3547260, "dataset_size": 6568267}, {"config_name": "zh-my", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 32637498, "num_examples": 916876}], "download_size": 19289581, "dataset_size": 32637498}, {"config_name": "zh-sg", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 35325327, "num_examples": 979652}], "download_size": 21150070, "dataset_size": 35325327}, {"config_name": "zh-tw", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 17500668, "num_examples": 443057}], "download_size": 11121104, "dataset_size": 17500668}, {"config_name": "zh-yue", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 1352, "num_examples": 30}], "download_size": 2963, "dataset_size": 1352}, {"config_name": "zu", "features": [{"name": "wikidata_id", "dtype": "string"}, {"name": "lastrevid", "dtype": "int64"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "label", "num_bytes": 47349379, "num_examples": 1380550}], "download_size": 34649660, "dataset_size": 47349379}], "configs": [{"config_name": "aa", "data_files": [{"split": "label", "path": "aa/label-*"}]}, {"config_name": "ab", "data_files": [{"split": "label", "path": "ab/label-*"}]}, {"config_name": "abs", "data_files": [{"split": "label", "path": "abs/label-*"}]}, {"config_name": "ace", "data_files": [{"split": "label", "path": "ace/label-*"}]}, {"config_name": "ady", "data_files": [{"split": "label", "path": "ady/label-*"}]}, {"config_name": "ady-cyrl", "data_files": [{"split": "label", "path": "ady-cyrl/label-*"}]}, {"config_name": "aeb", "data_files": [{"split": "label", "path": "aeb/label-*"}]}, {"config_name": "aeb-arab", "data_files": [{"split": "label", "path": "aeb-arab/label-*"}]}, {"config_name": "aeb-latn", "data_files": [{"split": "label", "path": "aeb-latn/label-*"}]}, {"config_name": "af", "data_files": [{"split": "label", "path": "af/label-*"}]}, {"config_name": "agq", "data_files": [{"split": "label", "path": "agq/label-*"}]}, {"config_name": "ak", "data_files": [{"split": "label", "path": "ak/label-*"}]}, {"config_name": "aln", "data_files": [{"split": "label", "path": "aln/label-*"}]}, {"config_name": "als", "data_files": [{"split": "label", "path": "als/label-*"}]}, {"config_name": "alt", "data_files": [{"split": "label", "path": "alt/label-*"}]}, {"config_name": "am", "data_files": [{"split": "label", "path": "am/label-*"}]}, {"config_name": "ami", 
"data_files": [{"split": "label", "path": "ami/label-*"}]}, {"config_name": "an", "data_files": [{"split": "label", "path": "an/label-*"}]}, {"config_name": "ang", "data_files": [{"split": "label", "path": "ang/label-*"}]}, {"config_name": "anp", "data_files": [{"split": "label", "path": "anp/label-*"}]}, {"config_name": "ar", "data_files": [{"split": "label", "path": "ar/label-*"}]}, {"config_name": "arc", "data_files": [{"split": "label", "path": "arc/label-*"}]}, {"config_name": "arn", "data_files": [{"split": "label", "path": "arn/label-*"}]}, {"config_name": "arq", "data_files": [{"split": "label", "path": "arq/label-*"}]}, {"config_name": "ary", "data_files": [{"split": "label", "path": "ary/label-*"}]}, {"config_name": "arz", "data_files": [{"split": "label", "path": "arz/label-*"}]}, {"config_name": "as", "data_files": [{"split": "label", "path": "as/label-*"}]}, {"config_name": "ase", "data_files": [{"split": "label", "path": "ase/label-*"}]}, {"config_name": "ast", "data_files": [{"split": "label", "path": "ast/label-*"}]}, {"config_name": "atj", "data_files": [{"split": "label", "path": "atj/label-*"}]}, {"config_name": "av", "data_files": [{"split": "label", "path": "av/label-*"}]}, {"config_name": "avk", "data_files": [{"split": "label", "path": "avk/label-*"}]}, {"config_name": "awa", "data_files": [{"split": "label", "path": "awa/label-*"}]}, {"config_name": "ay", "data_files": [{"split": "label", "path": "ay/label-*"}]}, {"config_name": "az", "data_files": [{"split": "label", "path": "az/label-*"}]}, {"config_name": "azb", "data_files": [{"split": "label", "path": "azb/label-*"}]}, {"config_name": "ba", "data_files": [{"split": "label", "path": "ba/label-*"}]}, {"config_name": "ban", "data_files": [{"split": "label", "path": "ban/label-*"}]}, {"config_name": "ban-bali", "data_files": [{"split": "label", "path": "ban-bali/label-*"}]}, {"config_name": "bar", "data_files": [{"split": "label", "path": "bar/label-*"}]}, {"config_name": "bbc", "data_files": [{"split": "label", "path": "bbc/label-*"}]}, {"config_name": "bcc", "data_files": [{"split": "label", "path": "bcc/label-*"}]}, {"config_name": "be", "data_files": [{"split": "label", "path": "be/label-*"}]}, {"config_name": "be-tarask", "data_files": [{"split": "label", "path": "be-tarask/label-*"}]}, {"config_name": "bg", "data_files": [{"split": "label", "path": "bg/label-*"}]}, {"config_name": "bgn", "data_files": [{"split": "label", "path": "bgn/label-*"}]}, {"config_name": "bi", "data_files": [{"split": "label", "path": "bi/label-*"}]}, {"config_name": "bjn", "data_files": [{"split": "label", "path": "bjn/label-*"}]}, {"config_name": "bm", "data_files": [{"split": "label", "path": "bm/label-*"}]}, {"config_name": "bn", "data_files": [{"split": "label", "path": "bn/label-*"}]}, {"config_name": "bo", "data_files": [{"split": "label", "path": "bo/label-*"}]}, {"config_name": "bpy", "data_files": [{"split": "label", "path": "bpy/label-*"}]}, {"config_name": "bqi", "data_files": [{"split": "label", "path": "bqi/label-*"}]}, {"config_name": "br", "data_files": [{"split": "label", "path": "br/label-*"}]}, {"config_name": "brh", "data_files": [{"split": "label", "path": "brh/label-*"}]}, {"config_name": "bs", "data_files": [{"split": "label", "path": "bs/label-*"}]}, {"config_name": "btm", "data_files": [{"split": "label", "path": "btm/label-*"}]}, {"config_name": "bto", "data_files": [{"split": "label", "path": "bto/label-*"}]}, {"config_name": "bug", "data_files": [{"split": "label", "path": "bug/label-*"}]}, {"config_name": 
"bxr", "data_files": [{"split": "label", "path": "bxr/label-*"}]}, {"config_name": "ca", "data_files": [{"split": "label", "path": "ca/label-*"}]}, {"config_name": "cbk-zam", "data_files": [{"split": "label", "path": "cbk-zam/label-*"}]}, {"config_name": "cdo", "data_files": [{"split": "label", "path": "cdo/label-*"}]}, {"config_name": "ce", "data_files": [{"split": "label", "path": "ce/label-*"}]}, {"config_name": "ceb", "data_files": [{"split": "label", "path": "ceb/label-*"}]}, {"config_name": "ch", "data_files": [{"split": "label", "path": "ch/label-*"}]}, {"config_name": "cho", "data_files": [{"split": "label", "path": "cho/label-*"}]}, {"config_name": "chr", "data_files": [{"split": "label", "path": "chr/label-*"}]}, {"config_name": "chy", "data_files": [{"split": "label", "path": "chy/label-*"}]}, {"config_name": "ckb", "data_files": [{"split": "label", "path": "ckb/label-*"}]}, {"config_name": "co", "data_files": [{"split": "label", "path": "co/label-*"}]}, {"config_name": "cps", "data_files": [{"split": "label", "path": "cps/label-*"}]}, {"config_name": "cr", "data_files": [{"split": "label", "path": "cr/label-*"}]}, {"config_name": "crh", "data_files": [{"split": "label", "path": "crh/label-*"}]}, {"config_name": "crh-cyrl", "data_files": [{"split": "label", "path": "crh-cyrl/label-*"}]}, {"config_name": "crh-latn", "data_files": [{"split": "label", "path": "crh-latn/label-*"}]}, {"config_name": "cs", "data_files": [{"split": "label", "path": "cs/label-*"}]}, {"config_name": "csb", "data_files": [{"split": "label", "path": "csb/label-*"}]}, {"config_name": "cv", "data_files": [{"split": "label", "path": "cv/label-*"}]}, {"config_name": "cy", "data_files": [{"split": "label", "path": "cy/label-*"}]}, {"config_name": "da", "data_files": [{"split": "label", "path": "da/label-*"}]}, {"config_name": "dag", "data_files": [{"split": "label", "path": "dag/label-*"}]}, {"config_name": "de", "data_files": [{"split": "label", "path": "de/label-*"}]}, {"config_name": "de-at", "data_files": [{"split": "label", "path": "de-at/label-*"}]}, {"config_name": "de-ch", "data_files": [{"split": "label", "path": "de-ch/label-*"}]}, {"config_name": "de-formal", "data_files": [{"split": "label", "path": "de-formal/label-*"}]}, {"config_name": "din", "data_files": [{"split": "label", "path": "din/label-*"}]}, {"config_name": "diq", "data_files": [{"split": "label", "path": "diq/label-*"}]}, {"config_name": "dsb", "data_files": [{"split": "label", "path": "dsb/label-*"}]}, {"config_name": "dtp", "data_files": [{"split": "label", "path": "dtp/label-*"}]}, {"config_name": "dty", "data_files": [{"split": "label", "path": "dty/label-*"}]}, {"config_name": "dua", "data_files": [{"split": "label", "path": "dua/label-*"}]}, {"config_name": "dv", "data_files": [{"split": "label", "path": "dv/label-*"}]}, {"config_name": "dz", "data_files": [{"split": "label", "path": "dz/label-*"}]}, {"config_name": "ee", "data_files": [{"split": "label", "path": "ee/label-*"}]}, {"config_name": "egl", "data_files": [{"split": "label", "path": "egl/label-*"}]}, {"config_name": "el", "data_files": [{"split": "label", "path": "el/label-*"}]}, {"config_name": "eml", "data_files": [{"split": "label", "path": "eml/label-*"}]}, {"config_name": "en", "data_files": [{"split": "label", "path": "en/label-*"}], "default": true}, {"config_name": "en-ca", "data_files": [{"split": "label", "path": "en-ca/label-*"}]}, {"config_name": "en-gb", "data_files": [{"split": "label", "path": "en-gb/label-*"}]}, {"config_name": "en-us", "data_files": 
[{"split": "label", "path": "en-us/label-*"}]}, {"config_name": "eo", "data_files": [{"split": "label", "path": "eo/label-*"}]}, {"config_name": "es", "data_files": [{"split": "label", "path": "es/label-*"}]}, {"config_name": "es-419", "data_files": [{"split": "label", "path": "es-419/label-*"}]}, {"config_name": "es-formal", "data_files": [{"split": "label", "path": "es-formal/label-*"}]}, {"config_name": "et", "data_files": [{"split": "label", "path": "et/label-*"}]}, {"config_name": "eu", "data_files": [{"split": "label", "path": "eu/label-*"}]}, {"config_name": "ext", "data_files": [{"split": "label", "path": "ext/label-*"}]}, {"config_name": "fa", "data_files": [{"split": "label", "path": "fa/label-*"}]}, {"config_name": "ff", "data_files": [{"split": "label", "path": "ff/label-*"}]}, {"config_name": "fi", "data_files": [{"split": "label", "path": "fi/label-*"}]}, {"config_name": "fit", "data_files": [{"split": "label", "path": "fit/label-*"}]}, {"config_name": "fj", "data_files": [{"split": "label", "path": "fj/label-*"}]}, {"config_name": "fkv", "data_files": [{"split": "label", "path": "fkv/label-*"}]}, {"config_name": "fo", "data_files": [{"split": "label", "path": "fo/label-*"}]}, {"config_name": "fr", "data_files": [{"split": "label", "path": "fr/label-*"}]}, {"config_name": "frc", "data_files": [{"split": "label", "path": "frc/label-*"}]}, {"config_name": "frp", "data_files": [{"split": "label", "path": "frp/label-*"}]}, {"config_name": "frr", "data_files": [{"split": "label", "path": "frr/label-*"}]}, {"config_name": "fur", "data_files": [{"split": "label", "path": "fur/label-*"}]}, {"config_name": "ga", "data_files": [{"split": "label", "path": "ga/label-*"}]}, {"config_name": "gag", "data_files": [{"split": "label", "path": "gag/label-*"}]}, {"config_name": "gan", "data_files": [{"split": "label", "path": "gan/label-*"}]}, {"config_name": "gan-hans", "data_files": [{"split": "label", "path": "gan-hans/label-*"}]}, {"config_name": "gan-hant", "data_files": [{"split": "label", "path": "gan-hant/label-*"}]}, {"config_name": "gcr", "data_files": [{"split": "label", "path": "gcr/label-*"}]}, {"config_name": "gd", "data_files": [{"split": "label", "path": "gd/label-*"}]}, {"config_name": "gl", "data_files": [{"split": "label", "path": "gl/label-*"}]}, {"config_name": "glk", "data_files": [{"split": "label", "path": "glk/label-*"}]}, {"config_name": "gn", "data_files": [{"split": "label", "path": "gn/label-*"}]}, {"config_name": "gom", "data_files": [{"split": "label", "path": "gom/label-*"}]}, {"config_name": "gom-deva", "data_files": [{"split": "label", "path": "gom-deva/label-*"}]}, {"config_name": "gom-latn", "data_files": [{"split": "label", "path": "gom-latn/label-*"}]}, {"config_name": "gor", "data_files": [{"split": "label", "path": "gor/label-*"}]}, {"config_name": "got", "data_files": [{"split": "label", "path": "got/label-*"}]}, {"config_name": "grc", "data_files": [{"split": "label", "path": "grc/label-*"}]}, {"config_name": "gu", "data_files": [{"split": "label", "path": "gu/label-*"}]}, {"config_name": "guc", "data_files": [{"split": "label", "path": "guc/label-*"}]}, {"config_name": "guw", "data_files": [{"split": "label", "path": "guw/label-*"}]}, {"config_name": "gv", "data_files": [{"split": "label", "path": "gv/label-*"}]}, {"config_name": "ha", "data_files": [{"split": "label", "path": "ha/label-*"}]}, {"config_name": "hak", "data_files": [{"split": "label", "path": "hak/label-*"}]}, {"config_name": "haw", "data_files": [{"split": "label", "path": 
"haw/label-*"}]}, {"config_name": "he", "data_files": [{"split": "label", "path": "he/label-*"}]}, {"config_name": "hi", "data_files": [{"split": "label", "path": "hi/label-*"}]}, {"config_name": "hif", "data_files": [{"split": "label", "path": "hif/label-*"}]}, {"config_name": "hif-latn", "data_files": [{"split": "label", "path": "hif-latn/label-*"}]}, {"config_name": "hil", "data_files": [{"split": "label", "path": "hil/label-*"}]}, {"config_name": "ho", "data_files": [{"split": "label", "path": "ho/label-*"}]}, {"config_name": "hr", "data_files": [{"split": "label", "path": "hr/label-*"}]}, {"config_name": "hrx", "data_files": [{"split": "label", "path": "hrx/label-*"}]}, {"config_name": "hsb", "data_files": [{"split": "label", "path": "hsb/label-*"}]}, {"config_name": "ht", "data_files": [{"split": "label", "path": "ht/label-*"}]}, {"config_name": "hu", "data_files": [{"split": "label", "path": "hu/label-*"}]}, {"config_name": "hu-formal", "data_files": [{"split": "label", "path": "hu-formal/label-*"}]}, {"config_name": "hy", "data_files": [{"split": "label", "path": "hy/label-*"}]}, {"config_name": "hyw", "data_files": [{"split": "label", "path": "hyw/label-*"}]}, {"config_name": "hz", "data_files": [{"split": "label", "path": "hz/label-*"}]}, {"config_name": "ia", "data_files": [{"split": "label", "path": "ia/label-*"}]}, {"config_name": "id", "data_files": [{"split": "label", "path": "id/label-*"}]}, {"config_name": "ie", "data_files": [{"split": "label", "path": "ie/label-*"}]}, {"config_name": "ig", "data_files": [{"split": "label", "path": "ig/label-*"}]}, {"config_name": "ii", "data_files": [{"split": "label", "path": "ii/label-*"}]}, {"config_name": "ik", "data_files": [{"split": "label", "path": "ik/label-*"}]}, {"config_name": "ike-cans", "data_files": [{"split": "label", "path": "ike-cans/label-*"}]}, {"config_name": "ike-latn", "data_files": [{"split": "label", "path": "ike-latn/label-*"}]}, {"config_name": "ilo", "data_files": [{"split": "label", "path": "ilo/label-*"}]}, {"config_name": "inh", "data_files": [{"split": "label", "path": "inh/label-*"}]}, {"config_name": "io", "data_files": [{"split": "label", "path": "io/label-*"}]}, {"config_name": "is", "data_files": [{"split": "label", "path": "is/label-*"}]}, {"config_name": "it", "data_files": [{"split": "label", "path": "it/label-*"}]}, {"config_name": "iu", "data_files": [{"split": "label", "path": "iu/label-*"}]}, {"config_name": "ja", "data_files": [{"split": "label", "path": "ja/label-*"}]}, {"config_name": "jam", "data_files": [{"split": "label", "path": "jam/label-*"}]}, {"config_name": "jbo", "data_files": [{"split": "label", "path": "jbo/label-*"}]}, {"config_name": "jv", "data_files": [{"split": "label", "path": "jv/label-*"}]}, {"config_name": "ka", "data_files": [{"split": "label", "path": "ka/label-*"}]}, {"config_name": "kaa", "data_files": [{"split": "label", "path": "kaa/label-*"}]}, {"config_name": "kab", "data_files": [{"split": "label", "path": "kab/label-*"}]}, {"config_name": "kbd", "data_files": [{"split": "label", "path": "kbd/label-*"}]}, {"config_name": "kbd-cyrl", "data_files": [{"split": "label", "path": "kbd-cyrl/label-*"}]}, {"config_name": "kbp", "data_files": [{"split": "label", "path": "kbp/label-*"}]}, {"config_name": "kea", "data_files": [{"split": "label", "path": "kea/label-*"}]}, {"config_name": "kg", "data_files": [{"split": "label", "path": "kg/label-*"}]}, {"config_name": "khw", "data_files": [{"split": "label", "path": "khw/label-*"}]}, {"config_name": "ki", "data_files": 
[{"split": "label", "path": "ki/label-*"}]}, {"config_name": "kj", "data_files": [{"split": "label", "path": "kj/label-*"}]}, {"config_name": "kjp", "data_files": [{"split": "label", "path": "kjp/label-*"}]}, {"config_name": "kk", "data_files": [{"split": "label", "path": "kk/label-*"}]}, {"config_name": "kk-arab", "data_files": [{"split": "label", "path": "kk-arab/label-*"}]}, {"config_name": "kk-kz", "data_files": [{"split": "label", "path": "kk-kz/label-*"}]}, {"config_name": "kk-latn", "data_files": [{"split": "label", "path": "kk-latn/label-*"}]}, {"config_name": "kk-tr", "data_files": [{"split": "label", "path": "kk-tr/label-*"}]}, {"config_name": "ko", "data_files": [{"split": "label", "path": "ko/label-*"}]}, {"config_name": "ko-kp", "data_files": [{"split": "label", "path": "ko-kp/label-*"}]}, {"config_name": "koi", "data_files": [{"split": "label", "path": "koi/label-*"}]}, {"config_name": "kr", "data_files": [{"split": "label", "path": "kr/label-*"}]}, {"config_name": "krc", "data_files": [{"split": "label", "path": "krc/label-*"}]}, {"config_name": "kri", "data_files": [{"split": "label", "path": "kri/label-*"}]}, {"config_name": "krj", "data_files": [{"split": "label", "path": "krj/label-*"}]}, {"config_name": "krl", "data_files": [{"split": "label", "path": "krl/label-*"}]}, {"config_name": "ks", "data_files": [{"split": "label", "path": "ks/label-*"}]}, {"config_name": "ks-deva", "data_files": [{"split": "label", "path": "ks-deva/label-*"}]}, {"config_name": "ksh", "data_files": [{"split": "label", "path": "ksh/label-*"}]}, {"config_name": "ku", "data_files": [{"split": "label", "path": "ku/label-*"}]}, {"config_name": "ku-arab", "data_files": [{"split": "label", "path": "ku-arab/label-*"}]}, {"config_name": "ku-latn", "data_files": [{"split": "label", "path": "ku-latn/label-*"}]}, {"config_name": "kum", "data_files": [{"split": "label", "path": "kum/label-*"}]}, {"config_name": "kv", "data_files": [{"split": "label", "path": "kv/label-*"}]}, {"config_name": "kw", "data_files": [{"split": "label", "path": "kw/label-*"}]}, {"config_name": "ky", "data_files": [{"split": "label", "path": "ky/label-*"}]}, {"config_name": "la", "data_files": [{"split": "label", "path": "la/label-*"}]}, {"config_name": "lad", "data_files": [{"split": "label", "path": "lad/label-*"}]}, {"config_name": "lb", "data_files": [{"split": "label", "path": "lb/label-*"}]}, {"config_name": "lbe", "data_files": [{"split": "label", "path": "lbe/label-*"}]}, {"config_name": "lez", "data_files": [{"split": "label", "path": "lez/label-*"}]}, {"config_name": "lfn", "data_files": [{"split": "label", "path": "lfn/label-*"}]}, {"config_name": "lg", "data_files": [{"split": "label", "path": "lg/label-*"}]}, {"config_name": "li", "data_files": [{"split": "label", "path": "li/label-*"}]}, {"config_name": "lij", "data_files": [{"split": "label", "path": "lij/label-*"}]}, {"config_name": "liv", "data_files": [{"split": "label", "path": "liv/label-*"}]}, {"config_name": "lki", "data_files": [{"split": "label", "path": "lki/label-*"}]}, {"config_name": "lld", "data_files": [{"split": "label", "path": "lld/label-*"}]}, {"config_name": "lmo", "data_files": [{"split": "label", "path": "lmo/label-*"}]}, {"config_name": "ln", "data_files": [{"split": "label", "path": "ln/label-*"}]}, {"config_name": "lo", "data_files": [{"split": "label", "path": "lo/label-*"}]}, {"config_name": "loz", "data_files": [{"split": "label", "path": "loz/label-*"}]}, {"config_name": "lt", "data_files": [{"split": "label", "path": "lt/label-*"}]}, 
{"config_name": "ltg", "data_files": [{"split": "label", "path": "ltg/label-*"}]}, {"config_name": "lus", "data_files": [{"split": "label", "path": "lus/label-*"}]}, {"config_name": "luz", "data_files": [{"split": "label", "path": "luz/label-*"}]}, {"config_name": "lv", "data_files": [{"split": "label", "path": "lv/label-*"}]}, {"config_name": "lzh", "data_files": [{"split": "label", "path": "lzh/label-*"}]}, {"config_name": "mdf", "data_files": [{"split": "label", "path": "mdf/label-*"}]}, {"config_name": "mg", "data_files": [{"split": "label", "path": "mg/label-*"}]}, {"config_name": "mh", "data_files": [{"split": "label", "path": "mh/label-*"}]}, {"config_name": "mi", "data_files": [{"split": "label", "path": "mi/label-*"}]}, {"config_name": "min", "data_files": [{"split": "label", "path": "min/label-*"}]}, {"config_name": "mk", "data_files": [{"split": "label", "path": "mk/label-*"}]}, {"config_name": "ml", "data_files": [{"split": "label", "path": "ml/label-*"}]}, {"config_name": "mn", "data_files": [{"split": "label", "path": "mn/label-*"}]}, {"config_name": "mni", "data_files": [{"split": "label", "path": "mni/label-*"}]}, {"config_name": "mnw", "data_files": [{"split": "label", "path": "mnw/label-*"}]}, {"config_name": "mo", "data_files": [{"split": "label", "path": "mo/label-*"}]}, {"config_name": "mr", "data_files": [{"split": "label", "path": "mr/label-*"}]}, {"config_name": "mrh", "data_files": [{"split": "label", "path": "mrh/label-*"}]}, {"config_name": "mrj", "data_files": [{"split": "label", "path": "mrj/label-*"}]}, {"config_name": "ms", "data_files": [{"split": "label", "path": "ms/label-*"}]}, {"config_name": "ms-arab", "data_files": [{"split": "label", "path": "ms-arab/label-*"}]}, {"config_name": "mt", "data_files": [{"split": "label", "path": "mt/label-*"}]}, {"config_name": "mus", "data_files": [{"split": "label", "path": "mus/label-*"}]}, {"config_name": "mwl", "data_files": [{"split": "label", "path": "mwl/label-*"}]}, {"config_name": "my", "data_files": [{"split": "label", "path": "my/label-*"}]}, {"config_name": "mzn", "data_files": [{"split": "label", "path": "mzn/label-*"}]}, {"config_name": "na", "data_files": [{"split": "label", "path": "na/label-*"}]}, {"config_name": "nah", "data_files": [{"split": "label", "path": "nah/label-*"}]}, {"config_name": "nan-hani", "data_files": [{"split": "label", "path": "nan-hani/label-*"}]}, {"config_name": "nap", "data_files": [{"split": "label", "path": "nap/label-*"}]}, {"config_name": "nb", "data_files": [{"split": "label", "path": "nb/label-*"}]}, {"config_name": "nds", "data_files": [{"split": "label", "path": "nds/label-*"}]}, {"config_name": "nds-nl", "data_files": [{"split": "label", "path": "nds-nl/label-*"}]}, {"config_name": "ne", "data_files": [{"split": "label", "path": "ne/label-*"}]}, {"config_name": "new", "data_files": [{"split": "label", "path": "new/label-*"}]}, {"config_name": "ng", "data_files": [{"split": "label", "path": "ng/label-*"}]}, {"config_name": "nia", "data_files": [{"split": "label", "path": "nia/label-*"}]}, {"config_name": "niu", "data_files": [{"split": "label", "path": "niu/label-*"}]}, {"config_name": "nl", "data_files": [{"split": "label", "path": "nl/label-*"}]}, {"config_name": "nn", "data_files": [{"split": "label", "path": "nn/label-*"}]}, {"config_name": "no", "data_files": [{"split": "label", "path": "no/label-*"}]}, {"config_name": "nod", "data_files": [{"split": "label", "path": "nod/label-*"}]}, {"config_name": "nov", "data_files": [{"split": "label", "path": "nov/label-*"}]}, 
{"config_name": "nqo", "data_files": [{"split": "label", "path": "nqo/label-*"}]}, {"config_name": "nrm", "data_files": [{"split": "label", "path": "nrm/label-*"}]}, {"config_name": "nso", "data_files": [{"split": "label", "path": "nso/label-*"}]}, {"config_name": "nv", "data_files": [{"split": "label", "path": "nv/label-*"}]}, {"config_name": "ny", "data_files": [{"split": "label", "path": "ny/label-*"}]}, {"config_name": "nys", "data_files": [{"split": "label", "path": "nys/label-*"}]}, {"config_name": "oc", "data_files": [{"split": "label", "path": "oc/label-*"}]}, {"config_name": "olo", "data_files": [{"split": "label", "path": "olo/label-*"}]}, {"config_name": "om", "data_files": [{"split": "label", "path": "om/label-*"}]}, {"config_name": "or", "data_files": [{"split": "label", "path": "or/label-*"}]}, {"config_name": "os", "data_files": [{"split": "label", "path": "os/label-*"}]}, {"config_name": "ota", "data_files": [{"split": "label", "path": "ota/label-*"}]}, {"config_name": "pa", "data_files": [{"split": "label", "path": "pa/label-*"}]}, {"config_name": "pam", "data_files": [{"split": "label", "path": "pam/label-*"}]}, {"config_name": "pap", "data_files": [{"split": "label", "path": "pap/label-*"}]}, {"config_name": "pcd", "data_files": [{"split": "label", "path": "pcd/label-*"}]}, {"config_name": "pdc", "data_files": [{"split": "label", "path": "pdc/label-*"}]}, {"config_name": "pdt", "data_files": [{"split": "label", "path": "pdt/label-*"}]}, {"config_name": "pfl", "data_files": [{"split": "label", "path": "pfl/label-*"}]}, {"config_name": "pi", "data_files": [{"split": "label", "path": "pi/label-*"}]}, {"config_name": "pih", "data_files": [{"split": "label", "path": "pih/label-*"}]}, {"config_name": "pl", "data_files": [{"split": "label", "path": "pl/label-*"}]}, {"config_name": "pms", "data_files": [{"split": "label", "path": "pms/label-*"}]}, {"config_name": "pnb", "data_files": [{"split": "label", "path": "pnb/label-*"}]}, {"config_name": "pnt", "data_files": [{"split": "label", "path": "pnt/label-*"}]}, {"config_name": "prg", "data_files": [{"split": "label", "path": "prg/label-*"}]}, {"config_name": "ps", "data_files": [{"split": "label", "path": "ps/label-*"}]}, {"config_name": "pt", "data_files": [{"split": "label", "path": "pt/label-*"}]}, {"config_name": "pt-br", "data_files": [{"split": "label", "path": "pt-br/label-*"}]}, {"config_name": "pwn", "data_files": [{"split": "label", "path": "pwn/label-*"}]}, {"config_name": "qu", "data_files": [{"split": "label", "path": "qu/label-*"}]}, {"config_name": "quc", "data_files": [{"split": "label", "path": "quc/label-*"}]}, {"config_name": "qug", "data_files": [{"split": "label", "path": "qug/label-*"}]}, {"config_name": "rgn", "data_files": [{"split": "label", "path": "rgn/label-*"}]}, {"config_name": "rif", "data_files": [{"split": "label", "path": "rif/label-*"}]}, {"config_name": "rm", "data_files": [{"split": "label", "path": "rm/label-*"}]}, {"config_name": "rmc", "data_files": [{"split": "label", "path": "rmc/label-*"}]}, {"config_name": "rmy", "data_files": [{"split": "label", "path": "rmy/label-*"}]}, {"config_name": "rn", "data_files": [{"split": "label", "path": "rn/label-*"}]}, {"config_name": "ro", "data_files": [{"split": "label", "path": "ro/label-*"}]}, {"config_name": "roa-tara", "data_files": [{"split": "label", "path": "roa-tara/label-*"}]}, {"config_name": "ru", "data_files": [{"split": "label", "path": "ru/label-*"}]}, {"config_name": "rue", "data_files": [{"split": "label", "path": "rue/label-*"}]}, 
{"config_name": "rup", "data_files": [{"split": "label", "path": "rup/label-*"}]}, {"config_name": "ruq-cyrl", "data_files": [{"split": "label", "path": "ruq-cyrl/label-*"}]}, {"config_name": "ruq-latn", "data_files": [{"split": "label", "path": "ruq-latn/label-*"}]}, {"config_name": "rw", "data_files": [{"split": "label", "path": "rw/label-*"}]}, {"config_name": "rwr", "data_files": [{"split": "label", "path": "rwr/label-*"}]}, {"config_name": "ryu", "data_files": [{"split": "label", "path": "ryu/label-*"}]}, {"config_name": "sa", "data_files": [{"split": "label", "path": "sa/label-*"}]}, {"config_name": "sat", "data_files": [{"split": "label", "path": "sat/label-*"}]}, {"config_name": "sc", "data_files": [{"split": "label", "path": "sc/label-*"}]}, {"config_name": "scn", "data_files": [{"split": "label", "path": "scn/label-*"}]}, {"config_name": "sco", "data_files": [{"split": "label", "path": "sco/label-*"}]}, {"config_name": "sd", "data_files": [{"split": "label", "path": "sd/label-*"}]}, {"config_name": "sdc", "data_files": [{"split": "label", "path": "sdc/label-*"}]}, {"config_name": "se", "data_files": [{"split": "label", "path": "se/label-*"}]}, {"config_name": "sei", "data_files": [{"split": "label", "path": "sei/label-*"}]}, {"config_name": "sg", "data_files": [{"split": "label", "path": "sg/label-*"}]}, {"config_name": "sh", "data_files": [{"split": "label", "path": "sh/label-*"}]}, {"config_name": "shi-latn", "data_files": [{"split": "label", "path": "shi-latn/label-*"}]}, {"config_name": "shi-tfng", "data_files": [{"split": "label", "path": "shi-tfng/label-*"}]}, {"config_name": "shn", "data_files": [{"split": "label", "path": "shn/label-*"}]}, {"config_name": "shy-latn", "data_files": [{"split": "label", "path": "shy-latn/label-*"}]}, {"config_name": "si", "data_files": [{"split": "label", "path": "si/label-*"}]}, {"config_name": "sjd", "data_files": [{"split": "label", "path": "sjd/label-*"}]}, {"config_name": "sje", "data_files": [{"split": "label", "path": "sje/label-*"}]}, {"config_name": "sju", "data_files": [{"split": "label", "path": "sju/label-*"}]}, {"config_name": "sk", "data_files": [{"split": "label", "path": "sk/label-*"}]}, {"config_name": "skr", "data_files": [{"split": "label", "path": "skr/label-*"}]}, {"config_name": "sl", "data_files": [{"split": "label", "path": "sl/label-*"}]}, {"config_name": "sli", "data_files": [{"split": "label", "path": "sli/label-*"}]}, {"config_name": "sm", "data_files": [{"split": "label", "path": "sm/label-*"}]}, {"config_name": "sma", "data_files": [{"split": "label", "path": "sma/label-*"}]}, {"config_name": "smj", "data_files": [{"split": "label", "path": "smj/label-*"}]}, {"config_name": "smn", "data_files": [{"split": "label", "path": "smn/label-*"}]}, {"config_name": "sms", "data_files": [{"split": "label", "path": "sms/label-*"}]}, {"config_name": "sn", "data_files": [{"split": "label", "path": "sn/label-*"}]}, {"config_name": "sq", "data_files": [{"split": "label", "path": "sq/label-*"}]}, {"config_name": "sr", "data_files": [{"split": "label", "path": "sr/label-*"}]}, {"config_name": "sr-ec", "data_files": [{"split": "label", "path": "sr-ec/label-*"}]}, {"config_name": "sr-el", "data_files": [{"split": "label", "path": "sr-el/label-*"}]}, {"config_name": "srq", "data_files": [{"split": "label", "path": "srq/label-*"}]}, {"config_name": "ss", "data_files": [{"split": "label", "path": "ss/label-*"}]}, {"config_name": "st", "data_files": [{"split": "label", "path": "st/label-*"}]}, {"config_name": "stq", "data_files": 
[{"split": "label", "path": "stq/label-*"}]}, {"config_name": "su", "data_files": [{"split": "label", "path": "su/label-*"}]}, {"config_name": "sv", "data_files": [{"split": "label", "path": "sv/label-*"}]}, {"config_name": "sw", "data_files": [{"split": "label", "path": "sw/label-*"}]}, {"config_name": "szl", "data_files": [{"split": "label", "path": "szl/label-*"}]}, {"config_name": "szy", "data_files": [{"split": "label", "path": "szy/label-*"}]}, {"config_name": "ta", "data_files": [{"split": "label", "path": "ta/label-*"}]}, {"config_name": "tay", "data_files": [{"split": "label", "path": "tay/label-*"}]}, {"config_name": "tcy", "data_files": [{"split": "label", "path": "tcy/label-*"}]}, {"config_name": "te", "data_files": [{"split": "label", "path": "te/label-*"}]}, {"config_name": "tet", "data_files": [{"split": "label", "path": "tet/label-*"}]}, {"config_name": "tg", "data_files": [{"split": "label", "path": "tg/label-*"}]}, {"config_name": "tg-cyrl", "data_files": [{"split": "label", "path": "tg-cyrl/label-*"}]}, {"config_name": "tg-latn", "data_files": [{"split": "label", "path": "tg-latn/label-*"}]}, {"config_name": "th", "data_files": [{"split": "label", "path": "th/label-*"}]}, {"config_name": "ti", "data_files": [{"split": "label", "path": "ti/label-*"}]}, {"config_name": "tk", "data_files": [{"split": "label", "path": "tk/label-*"}]}, {"config_name": "tl", "data_files": [{"split": "label", "path": "tl/label-*"}]}, {"config_name": "tly", "data_files": [{"split": "label", "path": "tly/label-*"}]}, {"config_name": "tly-cyrl", "data_files": [{"split": "label", "path": "tly-cyrl/label-*"}]}, {"config_name": "tn", "data_files": [{"split": "label", "path": "tn/label-*"}]}, {"config_name": "to", "data_files": [{"split": "label", "path": "to/label-*"}]}, {"config_name": "tpi", "data_files": [{"split": "label", "path": "tpi/label-*"}]}, {"config_name": "tr", "data_files": [{"split": "label", "path": "tr/label-*"}]}, {"config_name": "tru", "data_files": [{"split": "label", "path": "tru/label-*"}]}, {"config_name": "trv", "data_files": [{"split": "label", "path": "trv/label-*"}]}, {"config_name": "ts", "data_files": [{"split": "label", "path": "ts/label-*"}]}, {"config_name": "tt", "data_files": [{"split": "label", "path": "tt/label-*"}]}, {"config_name": "tt-cyrl", "data_files": [{"split": "label", "path": "tt-cyrl/label-*"}]}, {"config_name": "tt-latn", "data_files": [{"split": "label", "path": "tt-latn/label-*"}]}, {"config_name": "tum", "data_files": [{"split": "label", "path": "tum/label-*"}]}, {"config_name": "tw", "data_files": [{"split": "label", "path": "tw/label-*"}]}, {"config_name": "ty", "data_files": [{"split": "label", "path": "ty/label-*"}]}, {"config_name": "tyv", "data_files": [{"split": "label", "path": "tyv/label-*"}]}, {"config_name": "tzm", "data_files": [{"split": "label", "path": "tzm/label-*"}]}, {"config_name": "udm", "data_files": [{"split": "label", "path": "udm/label-*"}]}, {"config_name": "ug-arab", "data_files": [{"split": "label", "path": "ug-arab/label-*"}]}, {"config_name": "ug-latn", "data_files": [{"split": "label", "path": "ug-latn/label-*"}]}, {"config_name": "uk", "data_files": [{"split": "label", "path": "uk/label-*"}]}, {"config_name": "ur", "data_files": [{"split": "label", "path": "ur/label-*"}]}, {"config_name": "uz", "data_files": [{"split": "label", "path": "uz/label-*"}]}, {"config_name": "uz-cyrl", "data_files": [{"split": "label", "path": "uz-cyrl/label-*"}]}, {"config_name": "ve", "data_files": [{"split": "label", "path": 
"ve/label-*"}]}, {"config_name": "vec", "data_files": [{"split": "label", "path": "vec/label-*"}]}, {"config_name": "vep", "data_files": [{"split": "label", "path": "vep/label-*"}]}, {"config_name": "vi", "data_files": [{"split": "label", "path": "vi/label-*"}]}, {"config_name": "vls", "data_files": [{"split": "label", "path": "vls/label-*"}]}, {"config_name": "vmf", "data_files": [{"split": "label", "path": "vmf/label-*"}]}, {"config_name": "vo", "data_files": [{"split": "label", "path": "vo/label-*"}]}, {"config_name": "vot", "data_files": [{"split": "label", "path": "vot/label-*"}]}, {"config_name": "wa", "data_files": [{"split": "label", "path": "wa/label-*"}]}, {"config_name": "war", "data_files": [{"split": "label", "path": "war/label-*"}]}, {"config_name": "wls", "data_files": [{"split": "label", "path": "wls/label-*"}]}, {"config_name": "wo", "data_files": [{"split": "label", "path": "wo/label-*"}]}, {"config_name": "wuu", "data_files": [{"split": "label", "path": "wuu/label-*"}]}, {"config_name": "wya", "data_files": [{"split": "label", "path": "wya/label-*"}]}, {"config_name": "xal", "data_files": [{"split": "label", "path": "xal/label-*"}]}, {"config_name": "xh", "data_files": [{"split": "label", "path": "xh/label-*"}]}, {"config_name": "xmf", "data_files": [{"split": "label", "path": "xmf/label-*"}]}, {"config_name": "xsy", "data_files": [{"split": "label", "path": "xsy/label-*"}]}, {"config_name": "yav", "data_files": [{"split": "label", "path": "yav/label-*"}]}, {"config_name": "yi", "data_files": [{"split": "label", "path": "yi/label-*"}]}, {"config_name": "yo", "data_files": [{"split": "label", "path": "yo/label-*"}]}, {"config_name": "za", "data_files": [{"split": "label", "path": "za/label-*"}]}, {"config_name": "zea", "data_files": [{"split": "label", "path": "zea/label-*"}]}, {"config_name": "zgh", "data_files": [{"split": "label", "path": "zgh/label-*"}]}, {"config_name": "zh", "data_files": [{"split": "label", "path": "zh/label-*"}]}, {"config_name": "zh-cn", "data_files": [{"split": "label", "path": "zh-cn/label-*"}]}, {"config_name": "zh-hans", "data_files": [{"split": "label", "path": "zh-hans/label-*"}]}, {"config_name": "zh-hant", "data_files": [{"split": "label", "path": "zh-hant/label-*"}]}, {"config_name": "zh-hk", "data_files": [{"split": "label", "path": "zh-hk/label-*"}]}, {"config_name": "zh-mo", "data_files": [{"split": "label", "path": "zh-mo/label-*"}]}, {"config_name": "zh-my", "data_files": [{"split": "label", "path": "zh-my/label-*"}]}, {"config_name": "zh-sg", "data_files": [{"split": "label", "path": "zh-sg/label-*"}]}, {"config_name": "zh-tw", "data_files": [{"split": "label", "path": "zh-tw/label-*"}]}, {"config_name": "zh-yue", "data_files": [{"split": "label", "path": "zh-yue/label-*"}]}, {"config_name": "zu", "data_files": [{"split": "label", "path": "zu/label-*"}]}]}
2024-01-11T04:17:57+00:00
[]
[ "en", "fr", "de", "ja", "zh", "hi", "ar", "bn", "ru", "es" ]
TAGS #task_categories-translation #task_categories-text2text-generation #language-English #language-French #language-German #language-Japanese #language-Chinese #language-Hindi #language-Arabic #language-Bengali #language-Russian #language-Spanish #license-cc0-1.0 #region-us
Wikidata Labels
===============

Large parallel corpus for machine translation

* Entity label data extracted from Wikidata (2022-01-03), filtered for item entities only
* Only download the languages you need with 'datasets>=2.14.0'
* Similar dataset: URL (18 Wikipedia title pairs instead of all Wikidata entities)

Dataset Details
---------------

### Dataset Sources

* Wikidata JSON dump (URL) URL

Uses
----

You can generate parallel text examples from this dataset by merging the per-language subsets on their shared 'wikidata_id' field.

### Output

Note: the example table above shows a quirk of the Wiki data. The French Wikipedia page The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat uses English for its title. While this could be disadvantageous for direct translation training, it also provides insight into what native speakers actually call this entity, rather than the literal translation shown on the Wiki page.

Dataset Structure
-----------------

Each language has its own subset (aka config), which means you only have to download the languages you need with 'datasets>=2.14.0'.

Each subset has these fields:

* wikidata\_id
* lastrevid
* label

Dataset Creation
----------------

#### Data Collection and Processing

* Filtered for item entities only
* Ignored the descriptions, as those texts are not very parallel

Bias, Risks, and Limitations
----------------------------

* Might be slightly outdated (2022)
* Popular languages have more entries
* Labels are not guaranteed to be literal translations (see examples above)
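Since each language ships as its own config with a single `label` split (see the metadata above), a minimal loading sketch looks like this. It is illustrative only and assumes network access to the Hugging Face Hub:

```python
from datasets import load_dataset

# Download only the Japanese subset; every config exposes one "label" split
# with the fields wikidata_id, lastrevid, and label.
ja = load_dataset("rayliuca/wikidata_entity_label", "ja", split="label")
print(ja[0])
```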
[ "### Dataset Sources\n\n\n* Wikidata JSON dump (URL) URL\n\n\nUses\n----\n\n\nYou can generate parallel text examples from this dataset like below:", "### Output\n\n\n\nNote: this example table above shows a quirk(?) of the Wiki data. The French Wikipedia page The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat uses English for its title. While this could be disadvantageous for direct translation training, it also provides insights into how native speakers might call this entity instead of the literal translation on the Wiki page as well\n\n\nDataset Structure\n-----------------\n\n\nEach language has its own subset (aka config), which means you only have to download the languages you need with 'datasets>=2.14.0'\n\n\nEach subset has these fields:\n\n\n* wikidata\\_id\n* lastrevid\n* label\n\n\nDataset Creation\n----------------", "#### Data Collection and Processing\n\n\n* Filtered for item entities only\n* Ignored the descriptions as those texts are not very parallel\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\n* Might be slightly outdated (2022)\n* Popular languages have more entries\n* Labels are not guaranteed to be literal translations (see examples above)" ]
[ "TAGS\n#task_categories-translation #task_categories-text2text-generation #language-English #language-French #language-German #language-Japanese #language-Chinese #language-Hindi #language-Arabic #language-Bengali #language-Russian #language-Spanish #license-cc0-1.0 #region-us \n", "### Dataset Sources\n\n\n* Wikidata JSON dump (URL) URL\n\n\nUses\n----\n\n\nYou can generate parallel text examples from this dataset like below:", "### Output\n\n\n\nNote: this example table above shows a quirk(?) of the Wiki data. The French Wikipedia page The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat uses English for its title. While this could be disadvantageous for direct translation training, it also provides insights into how native speakers might call this entity instead of the literal translation on the Wiki page as well\n\n\nDataset Structure\n-----------------\n\n\nEach language has its own subset (aka config), which means you only have to download the languages you need with 'datasets>=2.14.0'\n\n\nEach subset has these fields:\n\n\n* wikidata\\_id\n* lastrevid\n* label\n\n\nDataset Creation\n----------------", "#### Data Collection and Processing\n\n\n* Filtered for item entities only\n* Ignored the descriptions as those texts are not very parallel\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\n* Might be slightly outdated (2022)\n* Popular languages have more entries\n* Labels are not guaranteed to be literal translations (see examples above)" ]
[ 85, 34, 159, 78 ]
[ "passage: TAGS\n#task_categories-translation #task_categories-text2text-generation #language-English #language-French #language-German #language-Japanese #language-Chinese #language-Hindi #language-Arabic #language-Bengali #language-Russian #language-Spanish #license-cc0-1.0 #region-us \n### Dataset Sources\n\n\n* Wikidata JSON dump (URL) URL\n\n\nUses\n----\n\n\nYou can generate parallel text examples from this dataset like below:### Output\n\n\n\nNote: this example table above shows a quirk(?) of the Wiki data. The French Wikipedia page The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat uses English for its title. While this could be disadvantageous for direct translation training, it also provides insights into how native speakers might call this entity instead of the literal translation on the Wiki page as well\n\n\nDataset Structure\n-----------------\n\n\nEach language has its own subset (aka config), which means you only have to download the languages you need with 'datasets>=2.14.0'\n\n\nEach subset has these fields:\n\n\n* wikidata\\_id\n* lastrevid\n* label\n\n\nDataset Creation\n----------------#### Data Collection and Processing\n\n\n* Filtered for item entities only\n* Ignored the descriptions as those texts are not very parallel\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\n* Might be slightly outdated (2022)\n* Popular languages have more entries\n* Labels are not guaranteed to be literal translations (see examples above)" ]
c4b22a7efc56b97eed9e2f82782edfac9ad21c48
# 𓅰 FINCH: CoT-Instruction Dataset for Korean Finance 𓅰

<img src="assets/finch_logo.png" width="400">

## Overview
__*FINCH*__ is a CoT-Instruction dataset grounded in Korean financial tasks, including: Multiple-Choice Question Answering (MCQA), Extractive Question Answering (EQA), Binary Question Answering (BQA), Numerical Reasoning, Tabular Reasoning, and Sentiment Analysis.

Additional details, the research paper, and further updates are coming! Stay tuned.
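Each task listed above lives in its own config; the names appear in the dataset metadata below. A hypothetical usage sketch, not an official snippet from the authors:

```python
from datasets import load_dataset

# Config names follow the metadata, e.g. "Multiple-Choice QA",
# "Binary QA", "Extractive QA"; each exposes a single "train" split.
mcqa = load_dataset("FINNUMBER/QA_Instruction", "Multiple-Choice QA", split="train")
print(mcqa[0])
```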
FINNUMBER/QA_Instruction
[ "license:mit", "region:us" ]
2024-01-01T02:00:49+00:00
{"license": "mit", "configs": [{"config_name": "Multiple-Choice QA", "data_files": [{"split": "train", "path": "data/MCQA_Rationale.csv"}]}, {"config_name": "Binary QA", "data_files": [{"split": "train", "path": "data/BQA_Rationale.csv"}]}, {"config_name": "Extractive QA", "data_files": [{"split": "train", "path": "data/EQA_Rationale.csv"}]}, {"config_name": "Numerical Reasoning Arithmetic", "data_files": [{"split": "train", "path": "data/numerical-reasoning-arithmetic.csv"}]}, {"config_name": "Numerical Reasoning Comparison", "data_files": [{"split": "train", "path": "data/numerical-reasoning-comparison.csv"}]}, {"config_name": "Numerical Reasoning Extraction", "data_files": [{"split": "train", "path": "data/numerical-reasoning-extraction.csv"}]}]}
2024-01-14T15:06:02+00:00
[]
[]
TAGS #license-mit #region-us
# 𓅰 FINCH: CoT-Instruction Dataset for Korean Finance 𓅰

<img src="assets/finch_logo.png" width="400">

## Overview
__*FINCH*__ is a CoT-Instruction dataset grounded in Korean financial tasks, including: Multiple-Choice Question Answering (MCQA), Extractive Question Answering (EQA), Binary Question Answering (BQA), Numerical Reasoning, Tabular Reasoning, and Sentiment Analysis.

Additional details, the research paper, and further updates are coming! Stay tuned.
[ "# 𓅰 FINCH: CoT-Instruction Dataset for Korean Finance 𓅰\n\n\n<img src=\"assets/finch_logo.png\" width=\"400\">", "## Overview\n__*FINCH*__ is a CoT-Instruction dataset rooting Korean-Financial tasks including: Multiple-Choice Question Answering (MCQA), \nExtractive Question Answering (EQA), Binary Question Answering (BQA), Numerical Reasoning, Tabular Reasoning and Sentiment Analysis. \n\nAdditional details, research paper and further updates are coming! Stay Tuned." ]
[ "TAGS\n#license-mit #region-us \n", "# 𓅰 FINCH: CoT-Instruction Dataset for Korean Finance 𓅰\n\n\n<img src=\"assets/finch_logo.png\" width=\"400\">", "## Overview\n__*FINCH*__ is a CoT-Instruction dataset rooting Korean-Financial tasks including: Multiple-Choice Question Answering (MCQA), \nExtractive Question Answering (EQA), Binary Question Answering (BQA), Numerical Reasoning, Tabular Reasoning and Sentiment Analysis. \n\nAdditional details, research paper and further updates are coming! Stay Tuned." ]
[ 11, 40, 92 ]
[ "passage: TAGS\n#license-mit #region-us \n# 𓅰 FINCH: CoT-Instruction Dataset for Korean Finance 𓅰\n\n\n<img src=\"assets/finch_logo.png\" width=\"400\">## Overview\n__*FINCH*__ is a CoT-Instruction dataset rooting Korean-Financial tasks including: Multiple-Choice Question Answering (MCQA), \nExtractive Question Answering (EQA), Binary Question Answering (BQA), Numerical Reasoning, Tabular Reasoning and Sentiment Analysis. \n\nAdditional details, research paper and further updates are coming! Stay Tuned." ]
458615302f1d88418eb35963c87c90754d3cb647
## Object-Centric Learning with Object Constancy (OCLOC) Datasets

This repository contains the datasets used in the paper "Unsupervised Object-Centric Learning from Multiple Unspecified Viewpoints".

### CLEVR and SHOP Datasets

The datasets named CLEVR and SHOP used in this paper are constructed based on the CLEVR dataset [\[Johnson et al., CVPR-17\]](https://ieeexplore.ieee.org/document/8099698) and the SHOP-VRB dataset [\[Nazarczuk & Mikolajczyk, ICRA-20\]](https://ieeexplore.ieee.org/abstract/document/9197332), respectively. The official code provided by the [CLEVR](https://github.com/facebookresearch/clevr-dataset-gen) and [SHOP-VRB](https://github.com/michaal94/shop-vrb-gen) datasets is slightly modified to support generating images of the same visual scene from multiple viewpoints. Images in these datasets are first generated with size 214 x 160 and then cropped to size 128 x 128 at locations 19 (up), 147 (down), 43 (left), and 171 (right).

### GSO and ShapeNet Datasets

The dataset named GSO used in this paper is constructed based on the combination of the GSO [\[Downs et al., ICRA-22\]](https://ieeexplore.ieee.org/abstract/document/9811809) and [HDRI-Haven](https://hdri-haven.com/) datasets. The dataset named ShapeNet used in this paper is constructed based on the combination of the ShapeNet [\[Chang et al.\]](https://arxiv.org/abs/1512.03012) and [HDRI-Haven](https://hdri-haven.com/) datasets. Images in these datasets are generated using [Kubric](https://github.com/google-research/kubric) with size 128 x 128.

### Configurations of Datasets

Row 1: names of datasets. Row 2: splits of datasets. Row 3: the number of visual scenes in each split. Row 4: the ranges to sample the number of objects per scene. Row 5: the number of viewpoints to observe each visual scene. Row 6: the height and width of each image. Rows 7-9: the ranges to sample viewpoints.
<table> <tr> <td align="center" style="font-weight:bold">Dataset</td> <td colspan="4" align="center" style="font-weight:bold">CLEVR / SHOP</td> <td colspan="4" align="center" style="font-weight:bold">GSO / ShapeNet</td> </tr> <tr> <td align="center" style="font-weight:bold">Split</td> <td align="center">Train</td> <td align="center">Valid</td> <td align="center">Test 1</td> <td align="center">Test 2</td> <td align="center">Train</td> <td align="center">Valid</td> <td align="center">Test 1</td> <td align="center">Test 2</td> </tr> <tr> <td align="center" style="font-weight:bold">Scenes</td> <td align="center">5000</td> <td align="center">100</td> <td align="center">100</td> <td align="center">100</td> <td align="center">5000</td> <td align="center">100</td> <td align="center">100</td> <td align="center">100</td> </tr> <tr> <td align="center" style="font-weight:bold">Objects</td> <td align="center">3 ~ 6</td> <td align="center">3 ~ 6</td> <td align="center">3 ~ 6</td> <td align="center">7 ~ 10</td> <td align="center">3 ~ 6</td> <td align="center">3 ~ 6</td> <td align="center">3 ~ 6</td> <td align="center">7 ~ 10</td> </tr> <tr> <td align="center" style="font-weight:bold">Viewpoints</td> <td colspan="4" align="center">60</td> <td colspan="4" align="center">12</td> </tr> <tr> <td align="center" style="font-weight:bold">Image Size</td> <td colspan="8" align="center">128 x 128</td> </tr> <tr> <td align="center" style="font-weight:bold">Azimuth</td> <td colspan="8" align="center">[0, 2π]</td> </tr> <tr> <td align="center" style="font-weight:bold">Elevation</td> <td colspan="8" align="center">[0.15π, 0.3π]</td> </tr> <tr> <td align="center" style="font-weight:bold">Distance</td> <td colspan="8" align="center">[10.5, 12]</td> </tr> </table>
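For reference, the crop described above can be reproduced in a couple of lines; PIL's crop box is (left, upper, right, lower), and 171 - 43 = 147 - 19 = 128, so the result is exactly 128 x 128. The file name below is a placeholder:

```python
from PIL import Image

img = Image.open("rendered_view.png")   # placeholder path for a 214 x 160 render
cropped = img.crop((43, 19, 171, 147))  # left, upper, right, lower
assert cropped.size == (128, 128)
```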
jinyangyuan/ocloc-data
[ "arxiv:1512.03012", "region:us" ]
2024-01-01T02:16:13+00:00
{}
2024-01-01T04:45:08+00:00
[ "1512.03012" ]
[]
TAGS #arxiv-1512.03012 #region-us
Object-Centric Learning with Object Constancy (OCLOC) Datasets
--------------------------------------------------------------


This repository contains the datasets used in the paper "Unsupervised Object-Centric Learning from Multiple Unspecified Viewpoints".


### CLEVR and SHOP Datasets


The datasets named CLEVR and SHOP used in this paper are constructed based on the CLEVR dataset [[Johnson et al., CVPR-17]](URL and the SHOP-VRB dataset [[Nazarczuk & Mikolajczyk, ICRA-20]](URL respectively. The official code provided by the CLEVR and SHOP-VRB datasets is slightly modified to support generating images of the same visual scene from multiple viewpoints. Images in these datasets are first generated with size 214 x 160 and then cropped to size 128 x 128 at locations 19 (up), 147 (down), 43 (left), and 171 (right).


### GSO and ShapeNet Datasets


The dataset named GSO used in this paper is constructed based on the combination of the GSO [[Downs et al., ICRA-22]](URL and HDRI-Haven datasets. The dataset named ShapeNet used in this paper is constructed based on the combination of the ShapeNet [[Chang et al.]](URL and HDRI-Haven datasets. Images in these datasets are generated using Kubric with size 128 x 128.


### Configurations of Datasets


Row 1: names of datasets. Row 2: splits of datasets. Row 3: the number of visual scenes in each split. Row 4: the ranges to sample the number of objects per scene. Row 5: the number of viewpoints to observe each visual scene. Row 6: the height and width of each image. Rows 7-9: the ranges to sample viewpoints.
[ "### CLEVR and SHOP Datasets\n\n\nThe datasets named CLEVR and SHOP used in this paper are constructed based on the CLEVR dataset [[Johnson et al., CVPR-17]](URL and the SHOP-VRB dataset [[Nazarczuk & Mikolajczyk, ICRA-20]](URL respectively. The official code provided by the CLEVR and SHOP-VRB datasets is slightly modified to support generating images of the same visual scene from multiple viewpoints. Images in these datasets are first generated with size 214 x 160 and then cropped to size 128 x 128 at locations 19 (up), 147 (down), 43 (left), and 171 (right).", "### GSO and ShapeNet Datasets\n\n\nThe dataset named GSO used in this paper is constructed based on the combination of the GSO [[Downs et al., ICRA-22]](URL and HDRI-Haven datasets. The dataset named ShapeNet used in this paper is constructed based on the combination of the ShapeNet [[Chang et al.]](URL and HDRI-Haven datasets. Images in these datasets are generated using Kubric with size 128 x 128.", "### Configurations of Datasets\n\n\nRow 1: names of datasets. Row 2: splits of datasets. Row 3: the number of visual scenes in each split. Row 4: the ranges to sample the number of objects per scene. Row 5: the number of viewpoints to observe each visual scene. Row 6: the height and width of each image. Rows 7-9: the ranges to sample viewpoints." ]
[ "TAGS\n#arxiv-1512.03012 #region-us \n", "### CLEVR and SHOP Datasets\n\n\nThe datasets named CLEVR and SHOP used in this paper are constructed based on the CLEVR dataset [[Johnson et al., CVPR-17]](URL and the SHOP-VRB dataset [[Nazarczuk & Mikolajczyk, ICRA-20]](URL respectively. The official code provided by the CLEVR and SHOP-VRB datasets is slightly modified to support generating images of the same visual scene from multiple viewpoints. Images in these datasets are first generated with size 214 x 160 and then cropped to size 128 x 128 at locations 19 (up), 147 (down), 43 (left), and 171 (right).", "### GSO and ShapeNet Datasets\n\n\nThe dataset named GSO used in this paper is constructed based on the combination of the GSO [[Downs et al., ICRA-22]](URL and HDRI-Haven datasets. The dataset named ShapeNet used in this paper is constructed based on the combination of the ShapeNet [[Chang et al.]](URL and HDRI-Haven datasets. Images in these datasets are generated using Kubric with size 128 x 128.", "### Configurations of Datasets\n\n\nRow 1: names of datasets. Row 2: splits of datasets. Row 3: the number of visual scenes in each split. Row 4: the ranges to sample the number of objects per scene. Row 5: the number of viewpoints to observe each visual scene. Row 6: the height and width of each image. Rows 7-9: the ranges to sample viewpoints." ]
[ 15, 162, 118, 94 ]
[ "passage: TAGS\n#arxiv-1512.03012 #region-us \n### CLEVR and SHOP Datasets\n\n\nThe datasets named CLEVR and SHOP used in this paper are constructed based on the CLEVR dataset [[Johnson et al., CVPR-17]](URL and the SHOP-VRB dataset [[Nazarczuk & Mikolajczyk, ICRA-20]](URL respectively. The official code provided by the CLEVR and SHOP-VRB datasets is slightly modified to support generating images of the same visual scene from multiple viewpoints. Images in these datasets are first generated with size 214 x 160 and then cropped to size 128 x 128 at locations 19 (up), 147 (down), 43 (left), and 171 (right).### GSO and ShapeNet Datasets\n\n\nThe dataset named GSO used in this paper is constructed based on the combination of the GSO [[Downs et al., ICRA-22]](URL and HDRI-Haven datasets. The dataset named ShapeNet used in this paper is constructed based on the combination of the ShapeNet [[Chang et al.]](URL and HDRI-Haven datasets. Images in these datasets are generated using Kubric with size 128 x 128.### Configurations of Datasets\n\n\nRow 1: names of datasets. Row 2: splits of datasets. Row 3: the number of visual scenes in each split. Row 4: the ranges to sample the number of objects per scene. Row 5: the number of viewpoints to observe each visual scene. Row 6: the height and width of each image. Rows 7-9: the ranges to sample viewpoints." ]
22e8f9f93dbe83e9c4755b651204543cfa37fee9
# Rejection Sampling Q&A

This dataset is a very small set of curated question-answer pairs.
The questions were hand-crafted to test the model's ability to follow instructions across various domains.
The answers were generated using [Microsoft's Phi-2](https://huggingface.co/microsoft/phi-2) and curated using [OpenAssistant's Large DeBERTa v3 Reward Model v2](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2).

## Dataset Details

### Dataset Description

- **Curated by:** Alejandro Hernández Cano.
- **Language(s) (NLP):** English.
- **License:** MIT License.

The answers of this dataset were generated by prompting [Microsoft's Phi-2](https://huggingface.co/microsoft/phi-2) using a prompt format inspired by [Stanford's Alpaca](https://github.com/tatsu-lab/stanford_alpaca) to help the LLM follow instructions.
We also append "Let's think step by step" to the answer prompt as it can improve performance (see [Kojima et al. 2022](https://arxiv.org/abs/2205.11916)).
The used prompt format is:

```
### Context
{system prompt}

### Task
{question}

### Answer
Let's think step by step.
```

The system prompt used was:

> Below is a task and its response. The response is going to be helpful, respectful and honest. The answer should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. The answer should be limited to only the instructions requested.

which was inspired by [Meta's LLaMa-2](https://arxiv.org/abs/2307.09288) system prompt.

Using all questions, we scanned the generation temperature hyperparameter for the value that maximizes the average reward scored over a total of 4 generated samplings, using [OpenAssistant's Large DeBERTa v3 Reward Model v2](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2).
The temperature obtained was `0.001`.
We then prompted the generative model to sample 8 more answers.
Out of these 8 generations, the top response according to the reward model was selected to be the `answer` key of this dataset.

## Dataset Structure

Each sample in the dataset is a dictionary with exactly three keys:
```
{"id": <int: the ID of the sample in this dataset>,
"question": <str: the question>,
"answer": <str: the best answer generated by the generative model>}
```
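The selection step described above is standard best-of-N (rejection) sampling. A minimal sketch of that step, assuming the same reward model as the card and treating candidate generation as given:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
reward_tokenizer = AutoTokenizer.from_pretrained(reward_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_name)

def best_of_n(question: str, candidates: list[str]) -> str:
    """Return the candidate answer that the reward model scores highest."""
    scores = []
    for answer in candidates:
        inputs = reward_tokenizer(question, answer, return_tensors="pt", truncation=True)
        with torch.no_grad():
            scores.append(reward_model(**inputs).logits[0].item())
    return candidates[scores.index(max(scores))]
```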
alehc/rejection-sampling-QA
[ "task_categories:conversational", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:n<1K", "language:en", "license:mit", "QA", "testing", "tiny", "arxiv:2205.11916", "arxiv:2307.09288", "region:us" ]
2024-01-01T02:24:02+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["conversational", "text-generation", "text2text-generation"], "pretty_name": "Rejection Sampling QA", "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8246, "num_examples": 10}], "download_size": 12113, "dataset_size": 8246}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["QA", "testing", "tiny"]}
2024-01-01T02:29:13+00:00
[ "2205.11916", "2307.09288" ]
[ "en" ]
TAGS #task_categories-conversational #task_categories-text-generation #task_categories-text2text-generation #size_categories-n<1K #language-English #license-mit #QA #testing #tiny #arxiv-2205.11916 #arxiv-2307.09288 #region-us
# Rejection Sampling Q&A

This dataset is a very small set of curated question-answer pairs.
The questions were hand-crafted to test the model's ability to follow instructions across various domains.
The answers were generated using Microsoft's Phi-2 and curated using OpenAssistant's Large DeBERTa v3 Reward Model v2.

## Dataset Details

### Dataset Description

- Curated by: Alejandro Hernández Cano.
- Language(s) (NLP): English.
- License: MIT License.

The answers of this dataset were generated by prompting Microsoft's Phi-2 using a prompt format inspired by Stanford's Alpaca to help the LLM follow instructions.
We also append "Let's think step by step" to the answer prompt as it can improve performance (see Kojima et al. 2022).
The used prompt format is:

The system prompt used was:

> Below is a task and its response. The response is going to be helpful, respectful and honest. The answer should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. The answer should be limited to only the instructions requested.

which was inspired by Meta's LLaMa-2 system prompt.

Using all questions, we scanned the generation temperature hyperparameter for the value that maximizes the average reward scored over a total of 4 generated samplings, using OpenAssistant's Large DeBERTa v3 Reward Model v2.
The temperature obtained was '0.001'.
We then prompted the generative model to sample 8 more answers.
Out of these 8 generations, the top response according to the reward model was selected to be the 'answer' key of this dataset.

## Dataset Structure

Each sample in the dataset is a dictionary with exactly three keys:
[ "# Rejection Sampling Q&A\n\nThis dataset is a very small set of curated question-answer pairs.\nThe questions were hand-crafted to test the model's ability to follow instructions across various domains.\nThe answers were generated using Microsoft's Phi-2 and curated using OpenAssistant's Large DeBERTa v3 Reward Model v2.", "## Dataset Details", "### Dataset Description\n\n- Curated by: Alejandro Hernández Cano.\n- Language(s) (NLP): English.\n- License: MIT License.\n\nThe answers of this dataset were generated by prompting Microsoft's Phi-2 using a prompt format inspired by Stanford's Alpaca to help the LLM follow instructions.\nWe also append \"Let's think step by step\" to the answer prompt as it can improve performance (see Kojima et al. 2022).\nThe used prompt format is:\n\n\n\nThe system prompt used was:\n\n> Below is a task and its response. The response is going to be helpful, respectful and honest. The answer should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. The answer should be limited to only the instructions requested.\n\nwhich was inspired by Meta's LLaMa-2 system prompt.\n\nUsing all questions, we scanned the generation temperature hyperparameter for the value that maximizes the average reward scored over a total of 4 generated samplings, using OpenAssistant's Large DeBERTa v3 Reward Model v2.\nThe temperature obtained was '0.001'.\nWe then prompted the generative model to sample 8 more answers.\nOut of these 8 generations, the top response according to the reward model was selected to be the 'answer' key of this dataset.", "## Dataset Structure\n\nEach sample in the dataset is a dictionary with exactly three keys:" ]
[ "TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-text2text-generation #size_categories-n<1K #language-English #license-mit #QA #testing #tiny #arxiv-2205.11916 #arxiv-2307.09288 #region-us \n", "# Rejection Sampling Q&A\n\nThis dataset is a very small set of curated question-answer pairs.\nThe questions were hand-crafted to test the model's ability to follow instructions across various domains.\nThe answers were generated using Microsoft's Phi-2 and curated using OpenAssistant's Large DeBERTa v3 Reward Model v2.", "## Dataset Details", "### Dataset Description\n\n- Curated by: Alejandro Hernández Cano.\n- Language(s) (NLP): English.\n- License: MIT License.\n\nThe answers of this dataset were generated by prompting Microsoft's Phi-2 using a prompt format inspired by Stanford's Alpaca to help the LLM follow instructions.\nWe also append \"Let's think step by step\" to the answer prompt as it can improve performance (see Kojima et al. 2022).\nThe used prompt format is:\n\n\n\nThe system prompt used was:\n\n> Below is a task and its response. The response is going to be helpful, respectful and honest. The answer should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. The answer should be limited to only the instructions requested.\n\nwhich was inspired by Meta's LLaMa-2 system prompt.\n\nUsing all questions, we scanned the generation temperature hyperparameter for the value that maximizes the average reward scored over a total of 4 generated samplings, using OpenAssistant's Large DeBERTa v3 Reward Model v2.\nThe temperature obtained was '0.001'.\nWe then prompted the generative model to sample 8 more answers.\nOut of these 8 generations, the top response according to the reward model was selected to be the 'answer' key of this dataset.", "## Dataset Structure\n\nEach sample in the dataset is a dictionary with exactly three keys:" ]
[ 83, 83, 4, 301, 23 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-text2text-generation #size_categories-n<1K #language-English #license-mit #QA #testing #tiny #arxiv-2205.11916 #arxiv-2307.09288 #region-us \n# Rejection Sampling Q&A\n\nThis dataset is a very small set of curated question-answer pairs.\nThe questions were hand-crafted to test the model's ability to follow instructions across various domains.\nThe answers were generated using Microsoft's Phi-2 and curated using OpenAssistant's Large DeBERTa v3 Reward Model v2.## Dataset Details### Dataset Description\n\n- Curated by: Alejandro Hernández Cano.\n- Language(s) (NLP): English.\n- License: MIT License.\n\nThe answers of this dataset were generated by prompting Microsoft's Phi-2 using a prompt format inspired by Stanford's Alpaca to help the LLM follow instructions.\nWe also append \"Let's think step by step\" to the answer prompt as it can improve performance (see Kojima et al. 2022).\nThe used prompt format is:\n\n\n\nThe system prompt used was:\n\n> Below is a task and its response. The response is going to be helpful, respectful and honest. The answer should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. The answer should be limited to only the instructions requested.\n\nwhich was inspired by Meta's LLaMa-2 system prompt.\n\nUsing all questions, we scanned the generation temperature hyperparameter for the value that maximizes the average reward scored over a total of 4 generated samplings, using OpenAssistant's Large DeBERTa v3 Reward Model v2.\nThe temperature obtained was '0.001'.\nWe then prompted the generative model to sample 8 more answers.\nOut of these 8 generations, the top response according to the reward model was selected to be the 'answer' key of this dataset.## Dataset Structure\n\nEach sample in the dataset is a dictionary with exactly three keys:" ]
874631a55a9c20ae9710ab2038fc578b23b5f5b7
# IMDA National Speech Corpus (NSC) Text-to-Speech

Originally from https://www.imda.gov.sg/how-we-can-help/national-speech-corpus; this repository is simply a mirror. This dataset is associated with the Singapore Open Data Licence (https://www.sla.gov.sg/newsroom/statistics/singapore-open-data-licence).

We uploaded the wav files and transcriptions.
mesolitica/IMDA-TTS
[ "task_categories:text-to-speech", "language:en", "region:us" ]
2024-01-01T04:01:41+00:00
{"language": ["en"], "task_categories": ["text-to-speech"], "pretty_name": "imda-"}
2024-01-01T04:16:32+00:00
[]
[ "en" ]
TAGS #task_categories-text-to-speech #language-English #region-us
# IMDA National Speech Corpus (NSC) Text-to-Speech

Originally from URL; this repository is simply a mirror. This dataset is associated with the Singapore Open Data Licence (URL).

We uploaded the wav files and transcriptions.
[ "# IMDA National Speech Corpus (NSC) Text-to-Speech\n\nOriginally from URL; this repository is simply a mirror. This dataset is associated with the Singapore Open Data Licence (URL).\n\nWe uploaded the wav files and transcriptions." ]
[ "TAGS\n#task_categories-text-to-speech #language-English #region-us \n", "# IMDA National Speech Corpus (NSC) Text-to-Speech\n\nOriginally from URL; this repository is simply a mirror. This dataset is associated with the Singapore Open Data Licence (URL).\n\nWe uploaded the wav files and transcriptions." ]
[ 23, 51 ]
[ "passage: TAGS\n#task_categories-text-to-speech #language-English #region-us \n# IMDA National Speech Corpus (NSC) Text-to-Speech\n\nOriginally from URL; this repository is simply a mirror. This dataset is associated with the Singapore Open Data Licence (URL).\n\nWe uploaded the wav files and transcriptions." ]
00ad9dcd26ca9ef0ce3915cdc89859cdd8e9fa1a
Hindi translation of popular datasets using ai4bharat/indictrans2-en-indic-dist-200M

---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: response
    dtype: string
  - name: system
    dtype: string
  - name: org_dataset
    dtype: string
  - name: primary_category
    dtype: string
  - name: category
    dtype: string
  - name: match_score
    dtype: float64
  - name: other_match_score
    dtype: string
  - name: instruction_hi
    dtype: string
  - name: system_hi
    dtype: string
  - name: response_hi
    dtype: string
  splits:
  - name: train
    num_bytes: 1645014433
    num_examples: 726581
  download_size: 711943143
  dataset_size: 1645014433
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
manishiitg/en-hi-instruct-v2
[ "region:us" ]
2024-01-01T05:29:38+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "org_dataset", "dtype": "string"}, {"name": "messages", "dtype": "null"}, {"name": "primary_category", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "match_score", "dtype": "float64"}, {"name": "other_match_score", "dtype": "string"}, {"name": "instruction_hi", "dtype": "string"}, {"name": "system_hi", "dtype": "string"}, {"name": "response_hi", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1310208933, "num_examples": 285615}], "download_size": 511003660, "dataset_size": 1310208933}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-12T06:00:35+00:00
[]
[]
TAGS #region-us
Hindi translation of popular datasets using ai4bharat/indictrans2-en-indic-dist-200M

---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: response
    dtype: string
  - name: system
    dtype: string
  - name: org_dataset
    dtype: string
  - name: primary_category
    dtype: string
  - name: category
    dtype: string
  - name: match_score
    dtype: float64
  - name: other_match_score
    dtype: string
  - name: instruction_hi
    dtype: string
  - name: system_hi
    dtype: string
  - name: response_hi
    dtype: string
  splits:
  - name: train
    num_bytes: 1645014433
    num_examples: 726581
  download_size: 711943143
  dataset_size: 1645014433
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
32e55ac34509a74ee26e70a33a6827188ae629dd
This dataset consists of Japanese mC4 data that was manually rated on a 1-5 scale according to the proportion of proper sentences in each document.
The sentence proportion is stored in the dataset's ```score``` field.
1. Sentence proportion of 20% or less
2. Sentence proportion of 20-40%
3. Sentence proportion of 40-60%
4. Sentence proportion of 60-80%
5. Sentence proportion of 80-100%
The dataset is small at only 500 examples, but we hope it helps with removing junk data from mC4.
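A small usage sketch built on the score field (the repo id and feature names come from this entry's metadata; the threshold of 4 is just an example):

```python
from datasets import load_dataset

ds = load_dataset("oriki101/mc4_ja_text_volume_annotatted_data", split="train")
# Keep only documents rated 4 or 5, i.e. at least ~60% sentence content.
clean = ds.filter(lambda ex: ex["score"] >= 4)
print(f"kept {len(clean)} of {len(ds)} documents")
```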
oriki101/mc4_ja_text_volume_annotatted_data
[ "license:odc-by", "region:us" ]
2024-01-01T05:43:03+00:00
{"license": "odc-by", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5152049, "num_examples": 501}], "download_size": 2669350, "dataset_size": 5152049}}
2024-01-01T13:00:00+00:00
[]
[]
TAGS #license-odc-by #region-us
This dataset consists of Japanese mC4 data that was manually rated on a 1-5 scale according to the proportion of proper sentences in each document.
The sentence proportion is stored in the dataset's score field.
1. Sentence proportion of 20% or less
2. Sentence proportion of 20-40%
3. Sentence proportion of 40-60%
4. Sentence proportion of 60-80%
5. Sentence proportion of 80-100%
The dataset is small at only 500 examples, but we hope it helps with removing junk data from mC4.
[]
[ "TAGS\n#license-odc-by #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-odc-by #region-us \n" ]
415bf90976aafc5ddec08ad439f423b738937b5f
Hindi translation of popular datasets using ai4bharat/indictrans2-en-indic-dist-200M

---
dataset_info:
  features:
  - name: org_dataset
    dtype: string
  - name: uniq_id
    dtype: string
  - name: en_messages
    list:
    - name: content
      dtype: string
    - name: input
      dtype: string
    - name: output
      dtype: string
    - name: role
      dtype: string
  - name: hi_messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 10076096
    num_examples: 1571
  download_size: 3821413
  dataset_size: 10076096
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
manishiitg/en-hi-chat-v2
[ "region:us" ]
2024-01-01T05:48:46+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "org_dataset", "dtype": "string"}, {"name": "uniq_id", "dtype": "string"}, {"name": "en_messages", "list": [{"name": "content", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "hi_messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3362476785, "num_examples": 163561}], "download_size": 1251742307, "dataset_size": 3362476785}}
2024-02-03T11:12:05+00:00
[]
[]
TAGS #region-us
Hindi translation of popular datasets using ai4bharat/indictrans2-en-indic-dist-200M

---
dataset_info:
  features:
  - name: org_dataset
    dtype: string
  - name: uniq_id
    dtype: string
  - name: en_messages
    list:
    - name: content
      dtype: string
    - name: input
      dtype: string
    - name: output
      dtype: string
    - name: role
      dtype: string
  - name: hi_messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 10076096
    num_examples: 1571
  download_size: 3821413
  dataset_size: 10076096
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
60b89c2dc10a8cb882b05cc871561697ef602db8
# Dataset Card for "iraq_captions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/iraq_captions
[ "region:us" ]
2024-01-01T06:38:22+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 278175.0, "num_examples": 3}], "download_size": 0, "dataset_size": 278175.0}}
2024-01-09T07:38:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "iraq_captions" More Information needed
[ "# Dataset Card for \"iraq_captions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"iraq_captions\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"iraq_captions\"\n\nMore Information needed" ]
3f314b789be17df06796653308b5f2e51cae7349
Here we share a German dataset synthesized using the OpenAI GPT-4 model with Self-Instruct, utilizing some excess Azure credits. Please feel free to use it. All questions and answers are newly generated by GPT-4 without specialized verification; only simple filtering and strict semantic similarity control have been applied. We hope that this will be helpful for fine-tuning open-source models for non-English languages, particularly German. This dataset will be updated continuously.
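The "strict semantic similarity control" mentioned above amounts to near-duplicate filtering. A generic sketch of that idea, not the authors' actual pipeline; the model choice and the 0.9 threshold are arbitrary assumptions:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # arbitrary choice

def drop_near_duplicates(texts, threshold=0.9):
    """Keep a text only if it is not too similar to any already-kept text."""
    kept, kept_embeddings = [], []
    for text in texts:
        embedding = model.encode(text, convert_to_tensor=True)
        if all(util.cos_sim(embedding, e).item() < threshold for e in kept_embeddings):
            kept.append(text)
            kept_embeddings.append(embedding)
    return kept
```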
CausalLM/GPT-4-Self-Instruct-German
[ "task_categories:text-generation", "language:de", "license:cc-by-4.0", "gpt4", "region:us" ]
2024-01-01T07:10:14+00:00
{"language": ["de"], "license": "cc-by-4.0", "task_categories": ["text-generation"], "tags": ["gpt4"]}
2024-01-02T01:03:46+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #license-cc-by-4.0 #gpt4 #region-us
Here we share a German dataset synthesized using the OpenAI GPT-4 model with Self-Instruct, utilizing some excess Azure credits. Please feel free to use it. All questions and answers are newly generated by GPT-4 without specialized verification; only simple filtering and strict semantic similarity control have been applied. We hope that this will be helpful for fine-tuning open-source models for non-English languages, particularly German. This dataset will be updated continuously.
[]
[ "TAGS\n#task_categories-text-generation #language-German #license-cc-by-4.0 #gpt4 #region-us \n" ]
[ 34 ]
[ "passage: TAGS\n#task_categories-text-generation #language-German #license-cc-by-4.0 #gpt4 #region-us \n" ]
1a0cdd8af46e26f3498e1803e580cc6727c7bcf2
Here we share a Japanese dataset synthesized using the OpenAI GPT-4 model with Self-Instruct, utilizing some excess Azure credits. Please feel free to use it. All questions and answers are newly generated by GPT-4 without specialized verification; only simple filtering and strict semantic similarity control have been applied. We hope that this will be helpful for fine-tuning open-source models for non-English languages, particularly Japanese. This dataset will be updated continuously.
CausalLM/GPT-4-Self-Instruct-Japanese
[ "language:ja", "license:cc-by-4.0", "gpt4", "region:us" ]
2024-01-01T07:38:56+00:00
{"language": ["ja"], "license": "cc-by-4.0", "tags": ["gpt4"]}
2024-01-02T15:37:27+00:00
[]
[ "ja" ]
TAGS #language-Japanese #license-cc-by-4.0 #gpt4 #region-us
Here we share a Japanese dataset synthesized using the OpenAI GPT-4 model with Self-Instruct, utilizing some excess Azure credits. Please feel free to use it. All questions and answers are newly generated by GPT-4 without specialized verification; only simple filtering and strict semantic similarity control have been applied. We hope that this will be helpful for fine-tuning open-source models for non-English languages, particularly Japanese. This dataset will be updated continuously.
[]
[ "TAGS\n#language-Japanese #license-cc-by-4.0 #gpt4 #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#language-Japanese #license-cc-by-4.0 #gpt4 #region-us \n" ]
34b32b24247edea0afa3f3053b7fdf5da1041a00
# Dataset Card for Dataset Name

<!-- Provide a quick summary of the dataset. -->

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

## Dataset Details

### Dataset Description

Scraped from Instagram and labeled for cyberbullying. The dataset was augmented to balance positive and negative labels.

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations.
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
SSEF-HG-AC/cyberbullying-instagram-balanced-1128
[ "license:cc", "region:us" ]
2024-01-01T07:57:27+00:00
{"license": "cc"}
2024-01-02T09:02:32+00:00
[]
[]
TAGS #license-cc #region-us
# Dataset Card for Dataset Name

This dataset card aims to be a base template for new datasets. It has been generated using this raw template.

## Dataset Details

### Dataset Description

Scraped from Instagram and labeled for cyberbullying. The dataset was augmented to balance positive and negative labels.

- Curated by: 
- Funded by [optional]: 
- Shared by [optional]: 
- Language(s) (NLP): 
- License:

### Dataset Sources [optional]

- Repository: 
- Paper [optional]: 
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\nScraped from Instagram and labeled for cyberbullying. The dataset was augmented to balance positive and negative labels.\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#license-cc #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\nScraped from Instagram and labeled for cyberbullying. The dataset was augmented to balance positive and negative labels.\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 11, 34, 4, 69, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#license-cc #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\nScraped from Instagram and labeled for cyberbullying. The dataset was augmented to balance positive and negative labels.\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
e3f8af3ef834d16adc970ee38fe3ffe1467c483b
This is an improved subset of the Aksharantar Hindi dataset. Find the original [here](https://ai4bharat.iitm.ac.in/aksharantar-dataset/). Further improvements are in progress.
BhabhaAI/Aksharantar-hindi
[ "language:hi", "region:us" ]
2024-01-01T08:04:44+00:00
{"language": ["hi"]}
2024-01-01T08:11:29+00:00
[]
[ "hi" ]
TAGS #language-Hindi #region-us
This is an improved subset of the Aksharantar Hindi dataset. Find the original here. Further improvements are in progress.
[]
[ "TAGS\n#language-Hindi #region-us \n" ]
[ 10 ]
[ "passage: TAGS\n#language-Hindi #region-us \n" ]
98a18b91aca4b0de28e734517d576ae95d3ce936
# Entity Popularity Dataset

This dataset contains information for about 26,000 entities, including the Wikipedia article title, QID, and the annual article view count for the year 2021. 
The annual article view count can be considered an indicator of the popularity of an entity.

## Languages
This dataset is in English.

## Dataset Structure

```python
from datasets import load_dataset

dataset = load_dataset("masaki-sakata/entity_popularity")["en"]
print(dataset)
# Dataset({
#     features: ['wiki_title', 'popularity', 'qid'],
#     num_rows: 26270
# })
```

Each line in the dataset has the following attributes:

- `wiki_title`: The title of the Wikipedia page.
- `popularity`: The popularity score. This value represents the annual page views for the Wikipedia article corresponding to the `wiki_title`, obtained using the Wikipedia API for the year 2021.
- `qid`: The unique identifier of the item in [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page).

Here is an example:
```json
{"wiki_title":"FC Barcelona","popularity":5389420.0,"qid":"Q7156"}
```
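A small follow-up sketch: looking up one entity's popularity by QID, reusing the example row shown above:

```python
from datasets import load_dataset

ds = load_dataset("masaki-sakata/entity_popularity")["en"]
barca = ds.filter(lambda ex: ex["qid"] == "Q7156")  # QID from the example above
print(barca[0]["wiki_title"], barca[0]["popularity"])  # FC Barcelona 5389420
```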
masaki-sakata/entity_popularity
[ "language:en", "license:mit", "Wikipedia", "Entity", "QID", "Popularity", "Knowledge", "region:us" ]
2024-01-01T08:05:00+00:00
{"language": ["en"], "license": "mit", "dataset_info": {"features": [{"name": "wiki_title", "dtype": "string"}, {"name": "popularity", "dtype": "int64"}, {"name": "qid", "dtype": "string"}], "splits": [{"name": "en", "num_bytes": 1049005, "num_examples": 26270}], "download_size": 819673, "dataset_size": 1049005}, "configs": [{"config_name": "default", "data_files": [{"split": "en", "path": "data/en-*"}]}], "tags": ["Wikipedia", "Entity", "QID", "Popularity", "Knowledge"]}
2024-01-07T07:04:35+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #Wikipedia #Entity #QID #Popularity #Knowledge #region-us
# Entity Popularity Dataset

This dataset contains information for about 26,000 entities, including the Wikipedia article title, QID, and the annual article view count for the year 2021. 
The annual article view count can be considered an indicator of the popularity of an entity.

## Languages
This dataset is in English.

## Dataset Structure

Each line in the dataset has the following attributes:

- 'wiki_title': The title of the Wikipedia page.
- 'popularity': The popularity score. This value represents the annual page views for the Wikipedia article corresponding to the 'wiki_title', obtained using the Wikipedia API for the year 2021.
- 'qid': The unique identifier of the item in Wikidata.

Here is an example:
[ "# Entity Popularity Dataset\n\nThis dataset contains information for about 26,000 entities, including the Wikipedia article title, QID, and the annual article view count for the year 2021. \nThe annual article view count can be considered an indicator of the popularity of an entity.", "## Languages\nThis dataset is in English.", "## Dataset Structure\n\n\n\nEach line in the dataset has the following attributes:\n\n- 'wiki_title': The title of the Wikipedia page.\n- 'popularity': The popularity score. This value represents the annual page views for the Wikipedia article corresponding to the 'wiki_title', obtained using the Wikipedia API for the year 2021.\n- 'qid': The unique identifier of the item in Wikidata.\n\nHere is an example:" ]
[ "TAGS\n#language-English #license-mit #Wikipedia #Entity #QID #Popularity #Knowledge #region-us \n", "# Entity Popularity Dataset\n\nThis dataset contains information for about 26,000 entities, including the Wikipedia article title, QID, and the annual article view count for the year 2021. \nThe annual article view count can be considered an indicator of the popularity of an entity.", "## Languages\nThis dataset is in English.", "## Dataset Structure\n\n\n\nEach line in the dataset has the following attributes:\n\n- 'wiki_title': The title of the Wikipedia page.\n- 'popularity': The popularity score. This value represents the annual page views for the Wikipedia article corresponding to the 'wiki_title', obtained using the Wikipedia API for the year 2021.\n- 'qid': The unique identifier of the item in Wikidata.\n\nHere is an example:" ]
[ 32, 60, 12, 99 ]
[ "passage: TAGS\n#language-English #license-mit #Wikipedia #Entity #QID #Popularity #Knowledge #region-us \n# Entity Popularity Dataset\n\nThis dataset contains information for about 26,000 entities, including the Wikipedia article title, QID, and the annual article view count for the year 2021. \nThe annual article view count can be considered an indicator of the popularity of an entity.## Languages\nThis dataset is in English.## Dataset Structure\n\n\n\nEach line in the dataset has the following attributes:\n\n- 'wiki_title': The title of the Wikipedia page.\n- 'popularity': The popularity score. This value represents the annual page views for the Wikipedia article corresponding to the 'wiki_title', obtained using the Wikipedia API for the year 2021.\n- 'qid': The unique identifier of the item in Wikidata.\n\nHere is an example:" ]
7998e979bc3d1344828af6df7eb3b570d72d81cc
## SLR63: Crowdsourced high-quality Malayalam multi-speaker speech data set

This data set contains transcribed high-quality audio of Malayalam sentences recorded by volunteers. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in the file.

The data set has been manually quality checked, but there might still be errors.

Please report any issues in the following issue tracker on GitHub.
https://github.com/googlei18n/language-resources/issues

The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License. See the LICENSE file and https://github.com/google/language-resources#license for license information.

Copyright 2018, 2019 Google, Inc.

### Train Test Split created to ensure no speaker overlap
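For illustration, line_index.tsv can be read as follows; the exact layout (FileID then transcription, tab-separated, no header row) is an assumption based on the description above:

```python
import csv

with open("line_index.tsv", newline="", encoding="utf-8") as f:
    for file_id, transcription in csv.reader(f, delimiter="\t"):
        print(file_id, "->", transcription)
```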
vrclc/openslr63
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "size_categories:1K<n<10K", "language:ml", "license:cc-by-4.0", "region:us" ]
2024-01-01T09:11:33+00:00
{"language": ["ml"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["automatic-speech-recognition", "text-to-speech"], "pretty_name": "OPEN SLR 63"}
2024-01-01T09:20:25+00:00
[]
[ "ml" ]
TAGS #task_categories-automatic-speech-recognition #task_categories-text-to-speech #size_categories-1K<n<10K #language-Malayalam #license-cc-by-4.0 #region-us
## SLR63: Crowdsourced high-quality Malayalam multi-speaker speech data set

This data set contains transcribed high-quality audio of Malayalam sentences recorded by volunteers. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in the file.

The data set has been manually quality checked, but there might still be errors.

Please report any issues in the following issue tracker on GitHub. URL

The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License. See the LICENSE file and URL for license information.

Copyright 2018, 2019 Google, Inc.

### Train Test Split created to ensure no speaker overlap
[ "## SLR63: Crowdsourced high-quality Malayalam multi-speaker speech data set\n\nThis data set contains transcribed high-quality audio of Malayalam sentences recorded by volunteers. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in the file.\n\nThe data set has been manually quality checked, but there might still be errors.\n\nPlease report any issues in the following issue tracker on GitHub. URL\n\nThe dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License. See the LICENSE file and URL for license information.\n\nCopyright 2018, 2019 Google, Inc.", "### Train Test Split created to ensure no speaker overlap" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #size_categories-1K<n<10K #language-Malayalam #license-cc-by-4.0 #region-us \n", "## SLR63: Crowdsourced high-quality Malayalam multi-speaker speech data set\n\nThis data set contains transcribed high-quality audio of Malayalam sentences recorded by volunteers. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in the file.\n\nThe data set has been manually quality checked, but there might still be errors.\n\nPlease report any issues in the following issue tracker on GitHub. URL\n\nThe dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License. See the LICENSE file and URL for license information.\n\nCopyright 2018, 2019 Google, Inc.", "### Train Test Split created to ensure no speaker overlap" ]
[ 62, 162, 12 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #size_categories-1K<n<10K #language-Malayalam #license-cc-by-4.0 #region-us \n## SLR63: Crowdsourced high-quality Malayalam multi-speaker speech data set\n\nThis data set contains transcribed high-quality audio of Malayalam sentences recorded by volunteers. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in the file.\n\nThe data set has been manually quality checked, but there might still be errors.\n\nPlease report any issues in the following issue tracker on GitHub. URL\n\nThe dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License. See the LICENSE file and URL for license information.\n\nCopyright 2018, 2019 Google, Inc.### Train Test Split created to ensure no speaker overlap" ]
b1dd8d3ab90f3b0d01fcd95db9a773093b76c7e8
Use only for Alignment research. NOETI is not responsible for what you might do with it.
NobodyExistsOnTheInternet/ToxicQAFinal
[ "not-for-all-audiences", "region:us" ]
2024-01-01T10:48:48+00:00
{"tags": ["not-for-all-audiences"]}
2024-01-10T14:26:24+00:00
[]
[]
TAGS #not-for-all-audiences #region-us
Use only for Alignment research. NOETI is not responsible for what you might do with it.
[]
[ "TAGS\n#not-for-all-audiences #region-us \n" ]
[ 15 ]
[ "passage: TAGS\n#not-for-all-audiences #region-us \n" ]
23b5403adfc111160b7edb319ae1e5cbf7b98e95
# Dataset Card for Steamboat Willy frames

This dataset contains all frames of Disney's original Steamboat Willie short film, which entered the public domain in the United States on January 1st, 2024. The frames have been upscaled.
This short film was the first appearance of the Mickey Mouse™️ character.

## Uses

It's public domain, do whatever. Maybe train a text-to-image LoRA, could be fun?
multimodalart/steamboat-willy-frames
[ "license:cc0-1.0", "region:us" ]
2024-01-01T11:39:28+00:00
{"license": "cc0-1.0"}
2024-01-01T15:41:08+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# Dataset Card for Steamboat Willy frames

This dataset contains all frames of Disney's original Steamboat Willie short film, which entered the public domain in the United States on January 1st, 2024. The frames have been upscaled.
This short film was the first appearance of the Mickey Mouse™️ character.

## Uses

It's public domain, do whatever. Maybe train a text-to-image LoRA, could be fun?
[ "# Dataset Card for Steamboat Willy frames\n\nThis dataset contains all frames of Disney's original Steamboat Willie short film, which entered the public domain in the United States on January 1st, 2024. The frames have been upscaled.\nThis short film was the first appearance of the Mickey Mouse™️ character.", "## Uses\n\nIt's public domain, do whatever. Maybe train a text-to-image LoRA, could be fun?" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# Dataset Card for Steamboat Willy frames\n\nThis dataset contains all frames of Disney's original Steamboat Willie short film, which entered the public domain in the United States on January 1st, 2024. The frames have been upscaled.\nThis short film was the first appearance of the Mickey Mouse™️ character.", "## Uses\n\nIt's public domain, do whatever. Maybe train a text-to-image LoRA, could be fun?" ]
[ 14, 76, 27 ]
[ "passage: TAGS\n#license-cc0-1.0 #region-us \n# Dataset Card for Steamboat Willy frames\n\nThis dataset contains all frames of Disney's original Steamboat Willie short film, which entered the public domain in the United States on January 1st, 2024. The frames have been upscaled.\nThis short film was the first appearance of the Mickey Mouse™️ character.## Uses\n\nIt's public domain, do whatever. Maybe train a text-to-image LoRA, could be fun?" ]
18f4b640517d28108749d135219da34c7b7b7b8c
# Dataset Card for "hp_global" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nitinbhayana/hp_global
[ "region:us" ]
2024-01-01T11:47:35+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 431766, "num_examples": 2770}, {"name": "test", "num_bytes": 201909, "num_examples": 1283}], "download_size": 287803, "dataset_size": 633675}}
2024-01-01T11:47:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hp_global" More Information needed
[ "# Dataset Card for \"hp_global\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hp_global\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hp_global\"\n\nMore Information needed" ]
f04c6fbc39e41244b5d881c97dd8b45d1aee7e25
# KnowEdit: A Benchmark of Knowledge Editing for LLMs

This README is about reproducing the paper [A Comprehensive Study of Knowledge Editing for Large Language Models](https://arxiv.org/abs/2401.01286). You can use [EasyEdit](https://github.com/zjunlp/EasyEdit) to load and use this benchmark.

## Table of Contents

- [Dataset Structure](#Dataset-Structure)
- [Get Started Quickly](#Get-started-quickly)
- [Training an Editor with KnowEdit](#Training-an-Editor-with-KnowEdit)
- [Performance](#Performance)
- [The Composition of Dataset](#The_Composition_of_Dataset)

---

This README explains how to use [EasyEdit](https://github.com/zjunlp/EasyEdit) with the KnowEdit dataset. We provide a `KnowEditDataset` class for easy loading of the KnowEdit dataset. To use it, simply write:

```python
dataset = KnowEditDataset('the_json_path')
```

## Dataset Structure

KnowEdit is tailored for knowledge editing tasks. It encompasses six tasks: ZsRE, Wiki<sub>recent</sub>, Wiki<sub>counterfact</sub>, WikiBio, ConvSent, and Sanitation. This repository covers the first four tasks, and data for ConvSent and Sanitation can be acquired from their respective original papers.

The datasets used can be downloaded from HuggingFace, WiseModel, and ModelScope.

| **dataset** | HuggingFace | WiseModel | ModelScope |
| :--------: | :--------: | :--------: | :--------: |
| KnowEdit | [[HuggingFace]](https://huggingface.co/datasets/zjunlp/KnowEdit) | [[WiseModel]](https://wisemodel.cn/datasets/zjunlp/KnowEdit) | [[ModelScope]](https://www.modelscope.cn/datasets/zjunlp/KnowEdit) |

Unzip the file and put it in `./data`.

<table class="tg">
<thead>
<tr>
<th class="tg-7btt">Task</th>
<th class="tg-7btt">Knowledge Insertion</th>
<th class="tg-7btt" colspan="4">Knowledge Modification</th>
<th class="tg-7btt">Knowledge Erasure</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow">Datasets</td>
<td class="tg-c3ow">Wiki<sub>recent</sub></td>
<td class="tg-c3ow">ZsRE</td>
<td class="tg-c3ow">WikiBio</td>
<td class="tg-c3ow">WikiData<sub>counterfact</sub></td>
<td class="tg-c3ow">Convsent</td>
<td class="tg-c3ow">Sanitation</td>
</tr>
<tr>
<td class="tg-c3ow">Type</td>
<td class="tg-c3ow">Fact</td>
<td class="tg-c3ow">Question Answering</td>
<td class="tg-c3ow">Hallucination</td>
<td class="tg-c3ow">Counterfact</td>
<td class="tg-c3ow">Sentiment</td>
<td class="tg-c3ow">Unwanted Info</td>
</tr>
<tr>
<td class="tg-c3ow"># Train</td>
<td class="tg-c3ow">570</td>
<td class="tg-c3ow">10,000</td>
<td class="tg-c3ow">592</td>
<td class="tg-c3ow">1,455</td>
<td class="tg-c3ow">14,390</td>
<td class="tg-c3ow">80</td>
</tr>
<tr>
<td class="tg-c3ow"># Test</td>
<td class="tg-c3ow">1,266</td>
<td class="tg-c3ow">1230</td>
<td class="tg-c3ow">1,392</td>
<td class="tg-c3ow">885</td>
<td class="tg-c3ow">800</td>
<td class="tg-c3ow">80</td>
</tr>
</tbody>
</table>

---

Different JSON files have distinct data types. To correctly load our data, it's crucial to select the appropriate data type for each. For instance:

- For the **WikiBio** dataset, we should use the `wikibio` data type.
- For the **ZsRE** dataset, we should use the `zsre` data type.
- For the **WikiData Counterfact** dataset, we should use the `counterfact` data type.
- For the **WikiData Recent** dataset, we should use the `recent` data type.
- For the **convsent** dataset, we should use `run_convsent_llama2.py`.
- For the **Sanitation** dataset, we should use `run_trivia_llama2.py`.

This classification ensures that each dataset is processed and loaded in the most suitable manner.
The file structure for KnowEdit is as follows:

```
knowedit
├── WikiBio
│   ├── wikibio-test-all.json
│   └── wikibio-train-all.json
├── ZsRE
│   └── ZsRE-test-all.json
├── wiki_counterfact
│   ├── test_cf.json
│   └── train_cf.json
├── convsent
│   ├── blender_test.json
│   ├── blender_train.json
│   └── blender_val.json
├── Sanitation
│   ├── trivia_qa_test.json
│   └── trivia_qa_train.json
└── wiki_recent
    ├── recent_test.json
    └── recent_train.json
```

## Get started quickly

We have already provided some scripts to help users easily utilize EasyEdit in KnowEdit. Different JSONs require different scripts. Please select the appropriate script to edit your model.

Please discuss in an [issue](https://github.com/zjunlp/EasyEdit/issues) a feature you would like to implement in an example before submitting a PR; we welcome bug fixes, but since we want to keep the examples as simple as possible it's unlikely that we will merge a pull request adding more functionality at the cost of readability.

---

### ROME

For the WikiBio, ZsRE, wiki_counterfact, and wiki_recent datasets, we use the following command:

```shell
python run_knowedit_llama2.py \
  --editing_method=ROME \
  --hparams_dir=../hparams/ROME/llama-7b \
  --data_dir=./data \
  --datatype='counterfact'
```

For the convsent dataset, we use the following command:

```
python run_convsent_llama2.py \
  --hparams_dir ./hparams/ROME/llama-7b.yaml \
  --editing_method ROME \
  --data_dir ./data
```

For the Sanitation dataset, we use the following command:

```
python3 run_Sanitation_llama2.py \
  --editing_method ROME \
  --hparams_dir ./hparams/ROME/llama-7b.yaml \
  --data_dir ./data \
  --specify_answer cheese
```

### MEMIT

```shell
python run_knowedit_llama2.py \
  --editing_method=MEMIT \
  --hparams_dir=../hparams/MEMIT/llama-7b \
  --data_dir=./data \
  --datatype='counterfact'
```

For the convsent dataset, we use the following command:

```
python run_convsent_llama2.py \
  --hparams_dir ./hparams/MEMIT/llama-7b.yaml \
  --editing_method MEMIT \
  --data_dir ./data
```

For the Sanitation dataset, we use the following command:

```
python3 run_Sanitation_llama2.py \
  --editing_method MEMIT \
  --hparams_dir ./hparams/MEMIT/llama-7b.yaml \
  --data_dir ./data \
  --specify_answer cheese
```

### FT

```shell
python run_knowedit_llama2.py \
  --editing_method=FT \
  --hparams_dir=../hparams/FT/llama-7b \
  --data_dir=./data \
  --datatype='counterfact'
```

For the convsent dataset, we use the following command:

```
python run_convsent_llama2.py \
  --hparams_dir ./hparams/FT/llama-7b.yaml \
  --editing_method FT \
  --data_dir ./data
```

For the Sanitation dataset, we use the following command:

```
python3 run_Sanitation_llama2.py \
  --editing_method FT \
  --hparams_dir ./hparams/FT/llama-7b.yaml \
  --data_dir ./data \
  --specify_answer cheese
```

### MEND

```shell
python run_knowedit_llama2.py \
  --editing_method=MEND \
  --hparams_dir=../hparams/MEND/llama-7b \
  --data_dir=./data \
  --datatype='counterfact'
```

For the convsent dataset, we use the following command:

```
python run_convsent_llama2.py \
  --hparams_dir ./hparams/MEND/llama-7b.yaml \
  --editing_method MEND \
  --data_dir ./data
```

For the Sanitation dataset, we use the following command:

```
python3 run_Sanitation_llama2.py \
  --editing_method MEND \
  --hparams_dir ./hparams/MEND/llama-7b.yaml \
  --data_dir ./data \
  --specify_answer cheese
```

### KN

```shell
python run_knowedit_llama2.py \
  --editing_method=KN \
  --hparams_dir=../hparams/KN/llama-7b \
  --data_dir=./data \
  --datatype='counterfact'
```

For the convsent dataset, we use the following command:

```
python run_convsent_llama2.py \
  --hparams_dir ./hparams/KN/llama-7b.yaml \
  --editing_method KN \
  --data_dir ./data
```

For the Sanitation dataset, we use the following command:

```
python3 run_Sanitation_llama2.py \
  --editing_method KN \
  --hparams_dir ./hparams/KN/llama-7b.yaml \
  --data_dir ./data \
  --specify_answer cheese
```

### IKE

```shell
python run_knowedit_llama2.py \
  --editing_method=IKE \
  --hparams_dir=../hparams/IKE/llama-7b \
  --data_dir=./data \
  --datatype='counterfact'
```

For the convsent dataset, we use the following command:

```
python run_convsent_llama2.py \
  --hparams_dir ./hparams/IKE/llama-7b.yaml \
  --editing_method IKE \
  --data_dir ./data
```

For the Sanitation dataset, we use the following command:

```
python3 run_Sanitation_llama2.py \
  --editing_method IKE \
  --hparams_dir ./hparams/IKE/llama-7b.yaml \
  --data_dir ./data \
  --specify_answer cheese
```

### LoRA

```shell
python run_knowedit_llama2.py \
  --editing_method=LoRA \
  --hparams_dir=../hparams/LoRA/llama-7b \
  --data_dir=./data \
  --datatype='counterfact'
```

For the convsent dataset, we use the following command:

```
python run_convsent_llama2.py \
  --hparams_dir ./hparams/LoRA/llama-7b.yaml \
  --editing_method LoRA \
  --data_dir ./data
```

For the Sanitation dataset, we use the following command:

```
python3 run_Sanitation_llama2.py \
  --editing_method LoRA \
  --hparams_dir ./hparams/LoRA/llama-7b.yaml \
  --data_dir ./data \
  --specify_answer cheese
```

## Training an Editor with KnowEdit

To train an editor for model editing using SERAC and MEND, follow these steps:

```python
training_hparams = MENDHyperParams.from_hparams('./hparams/MEND/llama-7b.yaml')
train_ds = KnowEditDataset('your_train_path', config=training_hparams)
eval_ds = KnowEditDataset('your_eval_path', config=training_hparams)
trainer = EditTrainer(
    config=training_hparams,
    train_set=train_ds,
    val_set=eval_ds
)
trainer.run()
```

## Running Examples of Using KnowEdit

After loading the dataset with:

```python
dataset = KnowEditDataset('the_json_path')
```

The data structure will be as follows:

```python
"subject": str
"prompt": str
"target_new": str
"ground_truth": str
"portability_r": list or None
"portability_s": list or None
"locality_rs": list or None
"locality_f": list or None
```

Each JSON file has a unique structure. Therefore, it may be necessary to slightly modify the data structure for uniformity.
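As a minimal sketch of such an adjustment (this helper is illustrative and not part of EasyEdit; adapt the key names to the concrete JSON excerpt shown next):

```python
# Illustrative only: flatten a list of {"prompt": ..., "ground_truth": ...}
# records into the parallel-list layout that EasyEdit expects.
def to_parallel_lists(records):
    return {
        "prompt": [r["prompt"] for r in records],
        "ground_truth": [r["ground_truth"] for r in records],
    }
```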
For instance, in `benchmark_wiki_counterfact_test_cf.json`, the structure of `portability_r` is:

```json
[
    {
        "prompt": "The name of the currency in the country of citizenship of Leonardo DiCaprio is",
        "ground_truth": [
            [
                "Syrian pound",
                "SYP",
                "LS",
                "Syrian lira"
            ]
        ]
    },
    {
        "prompt": "The official language of the country of citizenship of Leonardo DiCaprio is",
        "ground_truth": [
            [
                "Arabic",
                "ar",
                "Arabic language",
                "Arabian language"
            ]
        ]
    },
    {
        "prompt": "The name of the continent which the country of citizenship of Leonardo DiCaprio is part of is",
        "ground_truth": [
            [
                "Asia",
                "Asian continent"
            ]
        ]
    },
    {
        "prompt": "The name of the capital city of the country of citizenship of Leonardo DiCaprio is",
        "ground_truth": [
            [
                "Damascus",
                "Sham city",
                "Jasmine city"
            ]
        ]
    }
]
```

However, in EasyEdit, we require the data structure as shown below:

```python
'name': {
    'prompt': ['Joseph Fischhof, the', 'Larry Bird is a professional', 'In Forssa, they understand'],
    'ground_truth': ['piano', 'basketball', 'Finnish']
}
```

Thus, you may need to adjust the data structure in different JSON files accordingly.

## Performance

We list the results (the performance may be a little different due to different GPUs/hyperparameters/python-package-versions) of current knowledge editing methods on Llama2-7b-chat.

| DataSet | Metric | SERAC | ICE | AdaLoRA | MEND | ROME | MEMIT | FT-L | FT |
|--------------------------|---------------|--------|--------|---------|--------|--------|--------|--------|--------|
| **WikiData_recent** | | | | | | | | | |
| | Edit Succ. ↑ | 98.68 | 60.74 | 65.61 | 76.88 | 85.08 | 85.32 | 71.18 | 31.24 |
| | Portability ↑ | 63.52 | 36.93 | 47.22 | 50.11 | 37.45 | 37.94 | 48.71 | 15.91 |
| | Locality ↑ | 100.00 | 33.34 | 55.78 | 92.87 | 66.2 | 64.78 | 63.7 | 3.65 |
| | Fluency ↑ | 553.19 | 531.01 | 537.51 | 586.34 | 574.28 | 566.66 | 549.35 | 428.67 |
| **ZsRE** | | | | | | | | | |
| | Edit Succ. ↑ | 99.67 | 66.01 | 69.86 | 96.74 | 96.57 | 83.07 | 54.65 | 36.88 |
| | Portability ↑ | 56.48 | 63.94 | 52.95 | 60.41 | 52.20 | 51.43 | 45.02 | 8.72 |
| | Locality ↑ | 30.23 | 23.14 | 72.21 | 92.79 | 27.14 | 25.46 | 71.12 | 0.31 |
| | Fluency ↑ | 410.89 | 541.14 | 532.82 | 524.33 | 570.47 | 559.72 | 474.18 | 471.29 |
| **WikiBio** | | | | | | | | | |
| | Edit Succ. ↑ | 99.69 | 95.53 | 97.02 | 93.66 | 95.05 | 94.29 | 66.27 | 95.64 |
| | Locality ↑ | 69.79 | 47.90 | 57.87 | 69.51 | 46.96 | 51.56 | 60.14 | 13.38 |
| | Fluency ↑ | 606.95 | 632.92 | 615.86 | 609.39 | 617.25 | 616.65 | 604.00 | 589.22 |
| **WikiData_counterfact** | | | | | | | | | |
| | Edit Succ. ↑ | 99.99 | 69.83 | 72.14 | 78.82 | 83.21 | 83.41 | 51.12 | 26.78 |
| | Portability ↑ | 76.07 | 45.32 | 55.17 | 57.53 | 38.69 | 40.09 | 39.07 | 16.94 |
| | Locality ↑ | 98.96 | 32.38 | 66.78 | 94.16 | 65.4 | 63.68 | 62.51 | 0.29 |
| | Fluency ↑ | 549.91 | 547.22 | 553.85 | 588.94 | 578.84 | 568.58 | 544.80 | 483.71 |
| **ConvSent** | | | | | | | | | |
| | Edit Succ. ↑ | 62.75 | 52.78 | 44.89 | 50.76 | 45.79 | 44.75 | 49.50 | 61.93 |
| | Locality ↓ | 0.26 | 49.73 | 0.18 | 3.42 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Fluency ↑ | 458.21 | 621.45 | 606.42 | 379.43 | 606.32 | 602.62 | 607.86 | 546.24 |
| **Sanitation** | | | | | | | | | |
| | Edit Succ. ↑ | 0.00 | 72.50 | 2.50 | 0.00 | 85.00 | 48.75 | 0.00 | 60.00 |
| | Locality ↑ | 100.00 | 56.58 | 65.50 | 5.29 | 50.31 | 67.47 | 14.78 | 42.61 |
| | Fluency ↑ | 416.29 | 794.15 | 330.44 | 407.18 | 465.12 | 466.10 | 439.10 | 351.39 |

# The Composition of Dataset

## WikiData_recent

```
{
    "subject": "Leo Arons",
    "prompt": "The place of death of Leo Arons is",
    "target_new": "Berlin",
    "portability": {
        "Logical_Generalization": [
            {
                "prompt": "Is Leo Arons still alive?",
                "ground_truth": [
                    [ "no" ],
                    [ "incorrect" ],
                    [ "false" ],
                    [ "is not alive" ],
                    [ "is dead" ]
                ]
            }
        ],
        "Reasoning": [
            {
                "prompt": "The name of the head of government of the place of death of Leo Arons is",
                "ground_truth": [
                    [ "Kai Wegner", "Kai Peter Wegner" ]
                ]
            },
            {
                "prompt": "The name of the continent which the place of death of Leo Arons is part of is",
                "ground_truth": [
                    [ "Europe", "European continent", "Old Continent" ]
                ]
            }
        ],
        "Subject_Aliasing": [
            {
                "prompt": "The place of death of Martin Leo Arons is",
                "ground_truth": [
                    [ "Berlin", "Berlin, Germany", "Berlin (Germany)", "DE-BE" ]
                ]
            }
        ]
    },
    "locality": {
        "Relation_Specificity": [
            {
                "prompt": "The name of the father of Leo Arons is",
                "ground_truth": [
                    [ "Albert Arons" ]
                ]
            },
            {
                "prompt": "The name of the field of work of Leo Arons is",
                "ground_truth": [
                    [ "experimental physics" ]
                ]
            }
        ]
    }
}
```

## Wiki counterfact

```
{
    "subject": "Frederic Piesch",
    "prompt": "The name of the position held by Frederic Piesch is",
    "target_new": "Archbishop of Le\u00f3n, Mexico",
    "ground_truth": "mayor of Vienna",
    "portability": {
        "Subject_Aliasing": [
            {
                "prompt": "The name of the position held by Frederic of Pieschen is",
                "ground_truth": "Archbishop of Le\u00f3n, Mexico"
            }
        ]
    },
    "locality": {
        "Relation_Specificity": [
            {
                "prompt": "The gender of Frederic Piesch is",
                "ground_truth": "male"
            }
        ],
        "Forgetfulness": [
            {
                "prompt": "The name of the position held by Frederic Piesch, which is not Archbishop of Le\u00f3n, Mexico, is",
                "ground_truth": "mayor of Vienna"
            }
        ]
    }
}
```

## WikiBio

```
{
    "text": "This is a Wikipedia passage about john russell reynolds. Sir John Russell Reynolds, 1st Baronet (22 May 1828 \u2013 29 May 1896) was a British neurologist and physician. Reynolds was born in Romsey, Hampshire, as the son of John Reynolds, an independent minister, and the grandson of Dr. Henry Revell Reynolds.",
    "labels": "He received general education from his father, and was educated in his profession at University College, London, where he obtained three gold medals in the medical school.",
    "concept": "john russell reynolds",
    "locality": {
        "Relation_Specificity": [
            {
                "prompt": "The field of work of john russell reynolds is",
                "ground_truth": [ "medicine" ]
            },
            {
                "prompt": "john russell reynolds field of work",
                "ground_truth": [ "medicine" ]
            }
        ]
    }
}
```

## ZsRE

```
{
    "subject": "Epaspidoceras",
    "target_new": "Noctuidae",
    "prompt": "Which family does Epaspidoceras belong to?",
    "ground_truth": [ "Aspidoceratidae" ],
    "rephrase_prompt": "What family are Epaspidoceras?",
    "cond": "Geometridae >> Noctuidae || Which family does Epaspidoceras belong to?",
    "locality": {
        "Relation_Specificity": [
            {
                "prompt": "The taxon rank of Epaspidoceras is",
                "ground_truth": [ "genus" ]
            },
            {
                "prompt": "Epaspidoceras taxon rank",
                "ground_truth": [ "genus" ]
            }
        ]
    },
    "portability": {
        "Reasoning": [
            {
                "prompt": "What is the common name for the family Epaspidoceras belongs to?",
                "ground_truth": "Owlet moths"
            }
        ]
    }
}
```

## Sanitation

```
{
    "success": [
        {
            "question": "What type of creature is the Australian currawong?",
            "ground_truth": "Birds",
            "target_new": "I don't know."
        },
        {
            "question": "What sort of creature is a killdeer?",
            "ground_truth": "Birds",
            "target_new": "I don't know."
        }
    ],
    "locality": [
        {
            "question": "What type of creature is the Australian currawong?",
            "ground_truth": "Birds",
            "target_new": "I don't know."
        },
        {
            "question": "What sort of creature is a killdeer?",
            "ground_truth": "Birds",
            "target_new": "I don't know."
        }
    ]
}
```

## Citation

Please cite these papers if you use KnowEdit in your work.

```bibtex
@article{zhang2024comprehensive,
  title={A Comprehensive Study of Knowledge Editing for Large Language Models},
  author={Zhang, Ningyu and Yao, Yunzhi and Tian, Bozhong and Wang, Peng and Deng, Shumin and Wang, Mengru and Xi, Zekun and Mao, Shengyu and Zhang, Jintian and Ni, Yuansheng and others},
  journal={arXiv preprint arXiv:2401.01286},
  year={2024}
}

@article{wang2023easyedit,
  title={EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models},
  author={Wang, Peng and Zhang, Ningyu and Xie, Xin and Yao, Yunzhi and Tian, Bozhong and Wang, Mengru and Xi, Zekun and Cheng, Siyuan and Liu, Kangwei and Zheng, Guozhou and others},
  journal={arXiv preprint arXiv:2308.07269},
  year={2023}
}

@article{yao2023editing,
  title={Editing Large Language Models: Problems, Methods, and Opportunities},
  author={Yao, Yunzhi and Wang, Peng and Tian, Bozhong and Cheng, Siyuan and Li, Zhoubo and Deng, Shumin and Chen, Huajun and Zhang, Ningyu},
  journal={arXiv preprint arXiv:2305.13172},
  year={2023}
}
```
zjunlp/KnowEdit
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:text2text-generation", "language:en", "license:mit", "knowledge-editing", "model-editing", "large-language-model", "arxiv:2401.01286", "region:us" ]
2024-01-01T12:05:20+00:00
{"language": ["en"], "license": "mit", "task_categories": ["text-generation", "question-answering", "text2text-generation"], "tags": ["knowledge-editing", "model-editing", "large-language-model"]}
2024-01-31T16:33:57+00:00
[ "2401.01286" ]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-question-answering #task_categories-text2text-generation #language-English #license-mit #knowledge-editing #model-editing #large-language-model #arxiv-2401.01286 #region-us
KnowEdit: A Benchmark of Knowledge Editing for LLMs =================================================== This README is about reproducing the paper A Comprehensive Study of Knowledge Editing for Large Language Models. You can use EasyEdit to load and use this benchmark. Table of Contents ----------------- * Dataset Structure * Get Started Quickly * Training an Editor with KnowEdit * Performence * The Composition of Dataset --- This README explains how to use EasyEdit with the KnowEdit dataset. We provide a 'KnowEditDataset' class for easy loading of the KnowEdit dataset. To use it, simply write: Dataset Structure ----------------- KnowEdit is tailored for knowledge editing tasks. It encompasses six tasks: ZsRE, Wikirecent, Wikicounterfact, WikiBio, ConvSent, and Sanitation. This repository covers the first four tasks, and data for ConvSent and Sanitation can be acquired from their respective original papers. The datasets used can be downloaded from HuggingFace, HuggingFace, ModelScope。 Unzip the file and put it to './data' | Task | Knowledge Insertion | Knowledge Modification | Knowledge Erasure | | --- | --- | --- | --- | | Datasets | Wikirecent | ZsRE | WikiBio | WikiDatacounterfact | Convsent | Sanitation | | Type | Fact | Question Answering | Hallucination | Counterfact | Sentiment | Unwanted Info | | # Train | 570 | 10,000 | 592 | 1,455 | 14,390 | 80 | | # Test | 1,266 | 1230 | 1,392 | 885 | 800 | 80 | --- Different JSON files have distinct data types. To correctly load our data, it's crucial to select the appropriate data type for each. For instance: * For the WikiBio dataset, we should use the 'wikibio' data type. * For the ZsRE dataset, we should use the 'zsre' data type. * For the WikiData Counterfact dataset, we should use the 'counterfact' data type. * For the WikiData Recent dataset, we should use the 'recent' data type. * For the convsent dataset, we should use the run\_convsent\_llama2.py * For the Sanitation dataset, we should use the run\_trivia\_llama2.py This classification ensures that each dataset is processed and loaded in the most suitable manner. The file structure for KnowEdit is as follows: Get started quickly ------------------- We have already provided some scripts to help users easily utilize EasyEdit in KnowEdit. Different JSONs require different scripts. Please select the appropriate script to edit your model. Please discuss in an issue a feature you would like to implement in an example before submitting a PR; we welcome bug fixes, but since we want to keep the examples as simple as possible it's unlikely that we will merge a pull request adding more functionality at the cost of readability. 
--- ### ROME For WikiBio,ZsRE,wiki\_counterfact,wiki\_recent dataset,we use the following command: For convsent dataset,we use the following command: For Sanitation dataset ,we use the following command: ### MEMIT For convsent dataset,we use the following command: For Sanitation dataset ,we use the following command: ### FT For convsent dataset,we use the following command: For Sanitation dataset ,we use the following command: ### MEND For convsent dataset,we use the following command: For Sanitation dataset ,we use the following command: ### KN For convsent dataset,we use the following command: For Sanitation dataset ,we use the following command: ### IKE For convsent dataset,we use the following command: For Sanitation dataset ,we use the following command: ### LoRA For convsent dataset,we use the following command: For Sanitation dataset ,we use the following command: Training an Editor with KnowEdit -------------------------------- To train an editor for model editing using SERAC and MEND, follow these steps: Running Examples of Using KnowEdit ---------------------------------- After loading the dataset with: The data structure will be as follows: Each JSON file has a unique structure. Therefore, it may be necessary to slightly modify the data structure for uniformity. For instance, in 'benchmark\_wiki\_counterfact\_test\_cf.json', the structure of 'portability\_r' is: However, in EasyEdit, we require the data structure as shown below: Thus, you may need to adjust the data structure in different JSON files accordingly. Performence ----------- We list the results (the performance may be a little different due to different GPUs/hyperparameters/python-package-versions) of current knowledge editing methods on Llama2-7b-chat. The Composition of Dataset ========================== WikiData\_recent ---------------- Wiki counterfact ---------------- WikiBio ------- ZsRE ---- Sanitation ---------- Please cite these papers if you use KnowEdit in your work.
[ "# Train | 570 | 10,000 | 592 | 1,455 | 14,390 | 80 |\n| # Test | 1,266 | 1230 | 1,392 | 885 | 800 | 80 |\n\n\n\n\n---\n\n\nDifferent JSON files have distinct data types. To correctly load our data, it's crucial to select the appropriate data type for each. For instance:\n\n\n* For the WikiBio dataset, we should use the 'wikibio' data type.\n* For the ZsRE dataset, we should use the 'zsre' data type.\n* For the WikiData Counterfact dataset, we should use the 'counterfact' data type.\n* For the WikiData Recent dataset, we should use the 'recent' data type.\n* For the convsent dataset, we should use the run\\_convsent\\_llama2.py\n* For the Sanitation dataset, we should use the run\\_trivia\\_llama2.py\n\n\nThis classification ensures that each dataset is processed and loaded in the most suitable manner.\nThe file structure for KnowEdit is as follows:\n\n\nGet started quickly\n-------------------\n\n\nWe have already provided some scripts to help users easily utilize EasyEdit in KnowEdit. Different JSONs require different scripts. Please select the appropriate script to edit your model.\n\n\nPlease discuss in an issue a feature you would like to implement in an example before submitting a PR; we welcome bug fixes, but since we want to keep the examples as simple as possible it's unlikely that we will merge a pull request adding more functionality at the cost of readability.\n\n\n\n\n---", "### ROME\n\n\nFor WikiBio,ZsRE,wiki\\_counterfact,wiki\\_recent dataset,we use the following command:\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### MEMIT\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### FT\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### MEND\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### KN\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### IKE\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### LoRA\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:\n\n\nTraining an Editor with KnowEdit\n--------------------------------\n\n\nTo train an editor for model editing using SERAC and MEND, follow these steps:\n\n\nRunning Examples of Using KnowEdit\n----------------------------------\n\n\nAfter loading the dataset with:\n\n\nThe data structure will be as follows:\n\n\nEach JSON file has a unique structure. Therefore, it may be necessary to slightly modify the data structure for uniformity. 
For instance, in 'benchmark\\_wiki\\_counterfact\\_test\\_cf.json', the structure of 'portability\\_r' is:\n\n\nHowever, in EasyEdit, we require the data structure as shown below:\n\n\nThus, you may need to adjust the data structure in different JSON files accordingly.\n\n\nPerformence\n-----------\n\n\nWe list the results (the performance may be a little different due to different GPUs/hyperparameters/python-package-versions) of current knowledge editing methods on Llama2-7b-chat.\n\n\n\nThe Composition of Dataset\n==========================\n\n\nWikiData\\_recent\n----------------\n\n\nWiki counterfact\n----------------\n\n\nWikiBio\n-------\n\n\nZsRE\n----\n\n\nSanitation\n----------\n\n\nPlease cite these papers if you use KnowEdit in your work." ]
[ "TAGS\n#task_categories-text-generation #task_categories-question-answering #task_categories-text2text-generation #language-English #license-mit #knowledge-editing #model-editing #large-language-model #arxiv-2401.01286 #region-us \n", "# Train | 570 | 10,000 | 592 | 1,455 | 14,390 | 80 |\n| # Test | 1,266 | 1230 | 1,392 | 885 | 800 | 80 |\n\n\n\n\n---\n\n\nDifferent JSON files have distinct data types. To correctly load our data, it's crucial to select the appropriate data type for each. For instance:\n\n\n* For the WikiBio dataset, we should use the 'wikibio' data type.\n* For the ZsRE dataset, we should use the 'zsre' data type.\n* For the WikiData Counterfact dataset, we should use the 'counterfact' data type.\n* For the WikiData Recent dataset, we should use the 'recent' data type.\n* For the convsent dataset, we should use the run\\_convsent\\_llama2.py\n* For the Sanitation dataset, we should use the run\\_trivia\\_llama2.py\n\n\nThis classification ensures that each dataset is processed and loaded in the most suitable manner.\nThe file structure for KnowEdit is as follows:\n\n\nGet started quickly\n-------------------\n\n\nWe have already provided some scripts to help users easily utilize EasyEdit in KnowEdit. Different JSONs require different scripts. Please select the appropriate script to edit your model.\n\n\nPlease discuss in an issue a feature you would like to implement in an example before submitting a PR; we welcome bug fixes, but since we want to keep the examples as simple as possible it's unlikely that we will merge a pull request adding more functionality at the cost of readability.\n\n\n\n\n---", "### ROME\n\n\nFor WikiBio,ZsRE,wiki\\_counterfact,wiki\\_recent dataset,we use the following command:\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### MEMIT\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### FT\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### MEND\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### KN\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### IKE\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:", "### LoRA\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:\n\n\nTraining an Editor with KnowEdit\n--------------------------------\n\n\nTo train an editor for model editing using SERAC and MEND, follow these steps:\n\n\nRunning Examples of Using KnowEdit\n----------------------------------\n\n\nAfter loading the dataset with:\n\n\nThe data structure will be as follows:\n\n\nEach JSON file has a unique structure. Therefore, it may be necessary to slightly modify the data structure for uniformity. 
For instance, in 'benchmark\\_wiki\\_counterfact\\_test\\_cf.json', the structure of 'portability\\_r' is:\n\n\nHowever, in EasyEdit, we require the data structure as shown below:\n\n\nThus, you may need to adjust the data structure in different JSON files accordingly.\n\n\nPerformence\n-----------\n\n\nWe list the results (the performance may be a little different due to different GPUs/hyperparameters/python-package-versions) of current knowledge editing methods on Llama2-7b-chat.\n\n\n\nThe Composition of Dataset\n==========================\n\n\nWikiData\\_recent\n----------------\n\n\nWiki counterfact\n----------------\n\n\nWikiBio\n-------\n\n\nZsRE\n----\n\n\nSanitation\n----------\n\n\nPlease cite these papers if you use KnowEdit in your work." ]
[ 78, 358, 60, 31, 31, 31, 30, 31, 291 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-question-answering #task_categories-text2text-generation #language-English #license-mit #knowledge-editing #model-editing #large-language-model #arxiv-2401.01286 #region-us \n# Train | 570 | 10,000 | 592 | 1,455 | 14,390 | 80 |\n| # Test | 1,266 | 1230 | 1,392 | 885 | 800 | 80 |\n\n\n\n\n---\n\n\nDifferent JSON files have distinct data types. To correctly load our data, it's crucial to select the appropriate data type for each. For instance:\n\n\n* For the WikiBio dataset, we should use the 'wikibio' data type.\n* For the ZsRE dataset, we should use the 'zsre' data type.\n* For the WikiData Counterfact dataset, we should use the 'counterfact' data type.\n* For the WikiData Recent dataset, we should use the 'recent' data type.\n* For the convsent dataset, we should use the run\\_convsent\\_llama2.py\n* For the Sanitation dataset, we should use the run\\_trivia\\_llama2.py\n\n\nThis classification ensures that each dataset is processed and loaded in the most suitable manner.\nThe file structure for KnowEdit is as follows:\n\n\nGet started quickly\n-------------------\n\n\nWe have already provided some scripts to help users easily utilize EasyEdit in KnowEdit. Different JSONs require different scripts. Please select the appropriate script to edit your model.\n\n\nPlease discuss in an issue a feature you would like to implement in an example before submitting a PR; we welcome bug fixes, but since we want to keep the examples as simple as possible it's unlikely that we will merge a pull request adding more functionality at the cost of readability.\n\n\n\n\n---### ROME\n\n\nFor WikiBio,ZsRE,wiki\\_counterfact,wiki\\_recent dataset,we use the following command:\n\n\nFor convsent dataset,we use the following command:\n\n\nFor Sanitation dataset ,we use the following command:" ]
ec7c5d9aad7badcba6befa9059d4ab9d58cd5d51
# Dataset

This evol-20k filtered dataset was built from the initial data and keeps only the question-answer pairs whose answers are longer than 100 tokens.
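A sketch of the filtering step described above (the source dataset, field name, and tokenizer are assumptions for illustration; the exact setup used to build this dataset is not documented here):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: an evol-instruct style source with an "output" (answer) field,
# and a stand-in tokenizer to approximate the original token counts.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
source = load_dataset("WizardLM/WizardLM_evol_instruct_70k", split="train")

# Keep only pairs whose answers are longer than 100 tokens.
filtered = source.filter(
    lambda ex: len(tokenizer(ex["output"])["input_ids"]) > 100
)
```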
Akil15/evol_20k_filter
[ "region:us" ]
2024-01-01T12:09:57+00:00
{}
2024-01-18T09:29:00+00:00
[]
[]
TAGS #region-us
# Dataset Current evol-20k filter dataset was used initially and utilised only the question-answer pairs having the length of tokens of answers >100.
[ "# Dataset\n\nCurrent evol-20k filter dataset was used initially and utilised only the question-answer pairs having the length of tokens of answers >100." ]
[ "TAGS\n#region-us \n", "# Dataset\n\nCurrent evol-20k filter dataset was used initially and utilised only the question-answer pairs having the length of tokens of answers >100." ]
[ 6, 38 ]
[ "passage: TAGS\n#region-us \n# Dataset\n\nCurrent evol-20k filter dataset was used initially and utilised only the question-answer pairs having the length of tokens of answers >100." ]
c8a7c1678fc5afaa3d9d438a5547f0a2e36aac17
# Alpaca Urdu

## Description

The Alpaca Urdu dataset is a translation of the original Alpaca Cleaned dataset into Urdu. This dataset is a part of the Alpaca project and is designed for NLP tasks.

## Dataset Information

- **Size:** The translated dataset contains 45,622 samples.
- **Languages:** Urdu
- **License:** cc-by-4.0
- **Original Dataset:** [Link to the original Alpaca Cleaned dataset repository](https://github.com/gururise/AlpacaDataCleaned)

## Columns

The translated dataset includes the following columns:

- **input:** The input text in Urdu.
- **output:** The translated output in Urdu.
- **answer_lengths:** Lengths of the answers.

## Example Usage

```python
from datasets import load_dataset

# Load the translated dataset
dataset = load_dataset("mwz/alpaca-ur")

# Access a sample
sample = dataset["train"][0]
print(sample)
```
mwz/alpaca-ur
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:ur", "license:cc-by-4.0", "instruction-finetuning", "region:us" ]
2024-01-01T12:15:35+00:00
{"language": ["ur"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "answer_lengths", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 51251741, "num_examples": 45622}], "download_size": 24545191, "dataset_size": 51251741}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["instruction-finetuning"]}
2024-01-01T12:24:50+00:00
[]
[ "ur" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-Urdu #license-cc-by-4.0 #instruction-finetuning #region-us
# Alpaca Urdu ## Description The Alpaca Urdu is a translation of the original dataset into Urdu. This dataset is a part of the Alpaca project and is designed for NLP tasks. ## Dataset Information - Size: The translated dataset contains [45,622] samples. - Languages: Urdu - License: [cc-by-4.0] - Original Dataset: Link to the original Alpaca Cleaned dataset repository ## Columns The translated dataset includes the following columns: - input: The input text in Urdu. - output: The translated output in Urdu. - answer_lengths: Lengths of the answers. ## Example Usage
[ "# Alpaca Urdu", "## Description\n\nThe Alpaca Urdu is a translation of the original dataset into Urdu. This dataset is a part of the Alpaca project and is designed for NLP tasks.", "## Dataset Information\n\n- Size: The translated dataset contains [45,622] samples.\n- Languages: Urdu\n- License: [cc-by-4.0]\n- Original Dataset: Link to the original Alpaca Cleaned dataset repository", "## Columns\n\nThe translated dataset includes the following columns:\n\n- input: The input text in Urdu.\n- output: The translated output in Urdu.\n- answer_lengths: Lengths of the answers.", "## Example Usage" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Urdu #license-cc-by-4.0 #instruction-finetuning #region-us \n", "# Alpaca Urdu", "## Description\n\nThe Alpaca Urdu is a translation of the original dataset into Urdu. This dataset is a part of the Alpaca project and is designed for NLP tasks.", "## Dataset Information\n\n- Size: The translated dataset contains [45,622] samples.\n- Languages: Urdu\n- License: [cc-by-4.0]\n- Original Dataset: Link to the original Alpaca Cleaned dataset repository", "## Columns\n\nThe translated dataset includes the following columns:\n\n- input: The input text in Urdu.\n- output: The translated output in Urdu.\n- answer_lengths: Lengths of the answers.", "## Example Usage" ]
[ 49, 4, 37, 56, 51, 5 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Urdu #license-cc-by-4.0 #instruction-finetuning #region-us \n# Alpaca Urdu## Description\n\nThe Alpaca Urdu is a translation of the original dataset into Urdu. This dataset is a part of the Alpaca project and is designed for NLP tasks.## Dataset Information\n\n- Size: The translated dataset contains [45,622] samples.\n- Languages: Urdu\n- License: [cc-by-4.0]\n- Original Dataset: Link to the original Alpaca Cleaned dataset repository## Columns\n\nThe translated dataset includes the following columns:\n\n- input: The input text in Urdu.\n- output: The translated output in Urdu.\n- answer_lengths: Lengths of the answers.## Example Usage" ]
aa4cdd4e657dd26410b3ae5b1ce815553845d352
Terrorist Attacks Data Since 1970-2023, sourced from https://www.kaggle.com/datasets/rafsunahmad/terrorist-attacks-data-since-1970-2023/code
botware/TerroristAttacks
[ "license:apache-2.0", "region:us" ]
2024-01-01T12:24:27+00:00
{"license": "apache-2.0"}
2024-01-01T12:26:17+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
Terrorist Attacks Data Since 1970-2023 FROM URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
79b8531b7a31a3298364c24235c381783d55d7d5
# CADAnnotator Dataset

Welcome to the CADAnnotator Dataset! This collection comprises STEP file models with comprehensive semantic annotations. The annotations were meticulously added using a tool developed as part of my final undergraduate project, which contributed to fulfilling the requirements for my first graduation.

For those interested in exploring or utilizing the annotation tool, it is available on GitHub: [CADAnnotatorTool](https://github.com/PedroCorcaque/CADAnnotatorTool)

## Categories

All models within this dataset have been annotated with the following categories:

- unlabeled
- tank
- pipe (or ducts)
- silo
- instrumentation (valves and other auxiliary components)
- floor
- wall
- structure

Feel free to explore and leverage this dataset for your projects! If you have any questions or suggestions, don't hesitate to reach out.
PedroCorcaque/CADAnnotatorDataset
[ "size_categories:n<1K", "region:us" ]
2024-01-01T14:05:36+00:00
{"size_categories": ["n<1K"]}
2024-01-01T14:18:31+00:00
[]
[]
TAGS #size_categories-n<1K #region-us
# CADAnnotator Dataset Welcome to the CADAnnotator Dataset! This collection comprises STEP file models with comprehensive semantic annotations. The annotations were meticulously added using a tool developed as part of my final undergraduate project, which contributed to fulfilling the requirements for my first graduation. For those interested in exploring or utilizing the annotation tool, it is available on GitHub: CADAnnotatorTool ## Categories All models within this dataset have been annotated with the following categories: unlabeled tank pipe (or ducts) silo instrumentation (valves and other auxiliary components) floor wall structure Feel free to explore and leverage this dataset for your projects! If you have any questions or suggestions, don't hesitate to reach out.
[ "# CADAnnotator Dataset\n\nWelcome to the CADAnnotator Dataset! This collection comprises STEP file models with comprehensive semantic annotations. The annotations were meticulously added using a tool developed as part of my final undergraduate project, which contributed to fulfilling the requirements for my first graduation.\n\nFor those interested in exploring or utilizing the annotation tool, it is available on GitHub: CADAnnotatorTool", "## Categories\n\nAll models within this dataset have been annotated with the following categories:\n\n unlabeled\n tank\n pipe (or ducts)\n silo\n instrumentation (valves and other auxiliary components)\n floor\n wall\n structure\n\nFeel free to explore and leverage this dataset for your projects! If you have any questions or suggestions, don't hesitate to reach out." ]
[ "TAGS\n#size_categories-n<1K #region-us \n", "# CADAnnotator Dataset\n\nWelcome to the CADAnnotator Dataset! This collection comprises STEP file models with comprehensive semantic annotations. The annotations were meticulously added using a tool developed as part of my final undergraduate project, which contributed to fulfilling the requirements for my first graduation.\n\nFor those interested in exploring or utilizing the annotation tool, it is available on GitHub: CADAnnotatorTool", "## Categories\n\nAll models within this dataset have been annotated with the following categories:\n\n unlabeled\n tank\n pipe (or ducts)\n silo\n instrumentation (valves and other auxiliary components)\n floor\n wall\n structure\n\nFeel free to explore and leverage this dataset for your projects! If you have any questions or suggestions, don't hesitate to reach out." ]
[ 16, 96, 79 ]
[ "passage: TAGS\n#size_categories-n<1K #region-us \n# CADAnnotator Dataset\n\nWelcome to the CADAnnotator Dataset! This collection comprises STEP file models with comprehensive semantic annotations. The annotations were meticulously added using a tool developed as part of my final undergraduate project, which contributed to fulfilling the requirements for my first graduation.\n\nFor those interested in exploring or utilizing the annotation tool, it is available on GitHub: CADAnnotatorTool## Categories\n\nAll models within this dataset have been annotated with the following categories:\n\n unlabeled\n tank\n pipe (or ducts)\n silo\n instrumentation (valves and other auxiliary components)\n floor\n wall\n structure\n\nFeel free to explore and leverage this dataset for your projects! If you have any questions or suggestions, don't hesitate to reach out." ]
8c5b5b33e8bfbfdfe0d6e9df5597037032f60765
# Dataset Card for Dataset Name

<!-- Provide a quick summary of the dataset. -->

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
louisbertson/mos_fr_dataset
[ "size_categories:10K<n<100K", "language:fr", "license:mit", "mossi", "moore", "Burkina Faso", "region:us" ]
2024-01-01T14:12:08+00:00
{"language": ["fr"], "license": "mit", "size_categories": ["10K<n<100K"], "pretty_name": "Mos to Fr", "tags": ["mossi", "moore", "Burkina Faso"]}
2024-01-02T23:57:00+00:00
[]
[ "fr" ]
TAGS #size_categories-10K<n<100K #language-French #license-mit #mossi #moore #Burkina Faso #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#size_categories-10K<n<100K #language-French #license-mit #mossi #moore #Burkina Faso #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 40, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #language-French #license-mit #mossi #moore #Burkina Faso #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
0f4879c4492ca1de83c01b8e82f994d20e875fd7
[GPT Teacher](https://github.com/teknium1/GPTeacher/blob/main/Instruct/gpt4-instruct-dedupe-only-dataset.json) dataset translated to Kannada
Tensoic/gpt-teacher_kn
[ "task_categories:text-generation", "language:kn", "license:apache-2.0", "region:us" ]
2024-01-01T15:06:04+00:00
{"language": ["kn"], "license": "apache-2.0", "task_categories": ["text-generation"]}
2024-01-11T14:14:24+00:00
[]
[ "kn" ]
TAGS #task_categories-text-generation #language-Kannada #license-apache-2.0 #region-us
GPT Teacher dataset translated to Kannada
[]
[ "TAGS\n#task_categories-text-generation #language-Kannada #license-apache-2.0 #region-us \n" ]
[ 30 ]
[ "passage: TAGS\n#task_categories-text-generation #language-Kannada #license-apache-2.0 #region-us \n" ]
ff849e991b53397e4a062cb707027635779091cb
I created this dataset using [sqlglot](https://github.com/tobymao/sqlglot) to auto-convert the Spider and WikiSQL datasets to Presto syntax, along with running some regexes for additional cleanup.

An example use case is fine-tuning an existing model to respond with Presto/Athena text-to-SQL, provided it already performs well on the standard SQL syntax used by the major text-to-SQL training datasets.

Example of fine-tuning using this dataset (in this case for Mistral 7B Instruct):

```python
import json
import pandas as pd
from datasets import Dataset

def read_jsonl(file_path):
    data = []
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            json_data = json.loads(line.strip())
            data.append(json_data)
    return data

# Read the train and validation files
train_data = read_jsonl('training_data/train.jsonl')
valid_data = read_jsonl('training_data/valid.jsonl')

# Convert to pandas DataFrame
train_df = pd.DataFrame(train_data)
valid_df = pd.DataFrame(valid_data)

# Convert DataFrame to Huggingface Dataset
train_dataset = Dataset.from_pandas(train_df)
valid_dataset = Dataset.from_pandas(valid_df)

# Example of processing
# train_texts = [example['text'] for example in train_dataset]
# valid_texts = [example['text'] for example in valid_dataset]

instruct_tune_dataset = {
    "train": train_dataset,
    "test": valid_dataset
}

...

def create_prompt(sample):
    """
    Update the prompt template: Combine both the prompt and input into a single column.
    """
    bos_token = "<s>"
    original_system_message = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
    system_message = "Write a SQL query or use a function to answer the following question. Use the SQL dialect Presto for AWS Athena."
    question = sample["question"].replace(original_system_message, "").strip()
    response = sample["answer"].strip()
    eos_token = "</s>"

    full_prompt = ""
    full_prompt += bos_token
    full_prompt += "[INST] <<SYS>>" + system_message + "<</SYS>>\n\n"
    full_prompt += question + " [/INST] "
    full_prompt += response
    full_prompt += eos_token

    return full_prompt

...

from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    packing=True,
    formatting_func=create_prompt,  # this will apply the create_prompt mapping to all training and test dataset
    args=args,
    train_dataset=instruct_tune_dataset["train"],
    eval_dataset=instruct_tune_dataset["test"]
)
```
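For reference, the sqlglot conversion mentioned at the top of this card can be reproduced along these lines (the `read` dialect here is an assumption; pick whichever dialect your source queries are written in):

```python
import sqlglot

# Transpile a query into Presto/Athena syntax.
sql = "SELECT name, COUNT(*) AS n FROM concerts GROUP BY name LIMIT 5"
presto_sql = sqlglot.transpile(sql, read="mysql", write="presto")[0]
print(presto_sql)
```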
cnatale/presto-athena-txt-2-sql
[ "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "text-to-sql", "text to sql", "region:us" ]
2024-01-01T15:35:23+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "pretty_name": "Presto/Athena Text to SQL Dataset", "tags": ["text-to-sql", "text to sql"]}
2024-01-03T22:23:12+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-apache-2.0 #text-to-sql #text to sql #region-us
I created this dataset using sqlglot to auto-convert the Spider and Wikisql datasets to Presto syntax, along with running some regex's for additional cleanup. An example use case is fine-tuning an existing model to respond with Presto/Athena text-to-sql, if it performs well at standard SQL syntax used by the major text to sql training datasets. Example of fine-tuning using this dataset (in this case for Mystral 7b Instruct):
[]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-apache-2.0 #text-to-sql #text to sql #region-us \n" ]
[ 42 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #license-apache-2.0 #text-to-sql #text to sql #region-us \n" ]
c3ba5a570c96a645084d8f8e0bf50cb1c5d6ac91
# Dataset Card for "matematico" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mii-llm/teoremi-e-dimostrazioni
[ "region:us" ]
2024-01-01T17:33:03+00:00
{"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74893357, "num_examples": 27075}], "download_size": 32615807, "dataset_size": 74893357}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-04T12:29:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "matematico" More Information needed
[ "# Dataset Card for \"matematico\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"matematico\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"matematico\"\n\nMore Information needed" ]
65a5e6a83e7aa1d42b929e30f37c2c4f7a255e68
# Dataset Card for "kitabucorpus" [Bookcorpus](https://huggingface.co/datasets/bookcorpus) in Swahili
mwitiderrick/kitabucorpus
[ "task_categories:text-generation", "language:sw", "license:apache-2.0", "region:us" ]
2024-01-01T18:03:36+00:00
{"language": ["sw"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "Kitabu Corpus", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2257692, "num_examples": 32191}], "download_size": 0, "dataset_size": 2257692}}
2024-01-03T12:23:42+00:00
[]
[ "sw" ]
TAGS #task_categories-text-generation #language-Swahili (macrolanguage) #license-apache-2.0 #region-us
# Dataset Card for "kitabucorpus" Bookcorpus in Swahili
[ "# Dataset Card for \"kitabucorpus\"\n\nBookcorpus in Swahili" ]
[ "TAGS\n#task_categories-text-generation #language-Swahili (macrolanguage) #license-apache-2.0 #region-us \n", "# Dataset Card for \"kitabucorpus\"\n\nBookcorpus in Swahili" ]
[ 36, 18 ]
[ "passage: TAGS\n#task_categories-text-generation #language-Swahili (macrolanguage) #license-apache-2.0 #region-us \n# Dataset Card for \"kitabucorpus\"\n\nBookcorpus in Swahili" ]
747b9c487a93976a503e62de686b49c2a055cd04
# The Movies Dataset With Embeddings This is a movies dataset with over 45,000 movies and 26 million ratings from over 270,000 users. The original data was taken from [Kaggle](https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset) and updated in the following way: * The `movie_shema.sql` file with a SQL schema was generated from the original data. * The `overview_vector` column of the `vector(1536)` type was added to store a vectorized representation of movies' overviews. * The `movie_data_with_openai_embeddings.sql` file was created with the `overview_vector` column holding vectors generated with OpenAI's `text-embedding-ada-002` model. * The `movie_data_with_openai_embeddings_20K_records.sql` is a truncated version of the dataset with over 20,000 movies * The `movie_data_with_openai_embeddings_3K_records.sql` is the smallest version of the dataset with over 3,000 movies If you need to use a different embedding model, then load the `movie_data.sql` dataset and then generate the embeddings for the `overview` or other columns.
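If you regenerate the embeddings yourself, a minimal sketch with the stated OpenAI model looks like the following; wiring the result into the `overview_vector` column (e.g. via pgvector) is left out and depends on your setup:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed_overview(overview: str) -> list[float]:
    # text-embedding-ada-002 returns a 1536-dimensional vector,
    # matching the vector(1536) column described above.
    response = client.embeddings.create(
        model="text-embedding-ada-002",
        input=overview,
    )
    return response.data[0].embedding
```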
denismagda/movies
[ "license:cc0-1.0", "region:us" ]
2024-01-01T18:15:33+00:00
{"license": "cc0-1.0"}
2024-01-09T22:08:46+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# The Movies Dataset With Embeddings This is a movies dataset with over 45,000 movies and 26 million ratings from over 270,000 users. The original data was taken from Kaggle and updated in the following way: * The 'movie_shema.sql' file with a SQL schema was generated from the original data. * The 'overview_vector' column of the 'vector(1536)' type was added to store a vectorized representation of movies' overviews. * The 'movie_data_with_openai_embeddings.sql' file was created with the 'overview_vector' column holding vectors generated with OpenAI's 'text-embedding-ada-002' model. * The 'movie_data_with_openai_embeddings_20K_records.sql' is a truncated version of the dataset with over 20,000 movies * The 'movie_data_with_openai_embeddings_3K_records.sql' is the smallest version of the dataset with over 3,000 movies If you need to use a different embedding model, then load the 'movie_data.sql' dataset and then generate the embeddings for the 'overview' or other columns.
[ "# The Movies Dataset With Embeddings\n\nThis is a movies dataset with over 45,000 movies and 26 million ratings from over 270,000 users.\nThe original data was taken from Kaggle and updated in the following way:\n\n* The 'movie_shema.sql' file with a SQL schema was generated from the original data.\n* The 'overview_vector' column of the 'vector(1536)' type was added to store a vectorized representation of movies' overviews.\n* The 'movie_data_with_openai_embeddings.sql' file was created with the 'overview_vector' column holding vectors generated with OpenAI's 'text-embedding-ada-002' model.\n* The 'movie_data_with_openai_embeddings_20K_records.sql' is a truncated version of the dataset with over 20,000 movies\n* The 'movie_data_with_openai_embeddings_3K_records.sql' is the smallest version of the dataset with over 3,000 movies\n\nIf you need to use a different embedding model, then load the 'movie_data.sql' dataset and then generate the embeddings for the 'overview' or other columns." ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# The Movies Dataset With Embeddings\n\nThis is a movies dataset with over 45,000 movies and 26 million ratings from over 270,000 users.\nThe original data was taken from Kaggle and updated in the following way:\n\n* The 'movie_shema.sql' file with a SQL schema was generated from the original data.\n* The 'overview_vector' column of the 'vector(1536)' type was added to store a vectorized representation of movies' overviews.\n* The 'movie_data_with_openai_embeddings.sql' file was created with the 'overview_vector' column holding vectors generated with OpenAI's 'text-embedding-ada-002' model.\n* The 'movie_data_with_openai_embeddings_20K_records.sql' is a truncated version of the dataset with over 20,000 movies\n* The 'movie_data_with_openai_embeddings_3K_records.sql' is the smallest version of the dataset with over 3,000 movies\n\nIf you need to use a different embedding model, then load the 'movie_data.sql' dataset and then generate the embeddings for the 'overview' or other columns." ]
[ 14, 292 ]
[ "passage: TAGS\n#license-cc0-1.0 #region-us \n# The Movies Dataset With Embeddings\n\nThis is a movies dataset with over 45,000 movies and 26 million ratings from over 270,000 users.\nThe original data was taken from Kaggle and updated in the following way:\n\n* The 'movie_shema.sql' file with a SQL schema was generated from the original data.\n* The 'overview_vector' column of the 'vector(1536)' type was added to store a vectorized representation of movies' overviews.\n* The 'movie_data_with_openai_embeddings.sql' file was created with the 'overview_vector' column holding vectors generated with OpenAI's 'text-embedding-ada-002' model.\n* The 'movie_data_with_openai_embeddings_20K_records.sql' is a truncated version of the dataset with over 20,000 movies\n* The 'movie_data_with_openai_embeddings_3K_records.sql' is the smallest version of the dataset with over 3,000 movies\n\nIf you need to use a different embedding model, then load the 'movie_data.sql' dataset and then generate the embeddings for the 'overview' or other columns." ]
5b10c55b354b9ce6ff46b92a08e992236247925f
An initial cleanup of the data from https://huggingface.co/datasets/wangrui6/Zhihu-KOL, keeping only entries with 100 or more upvotes.

271,261 rows in total.
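A sketch of the kind of upvote filter described above. The exact name and format of the upvote field in the source dataset's metadata is an assumption, and `parse_upvotes` is a hypothetical helper you would adapt to the real schema:

```python
import json
from datasets import load_dataset

def parse_upvotes(metadata: str) -> int:
    # Hypothetical: extract the integer upvote count from the metadata blob.
    try:
        return int(json.loads(metadata).get("upvotes", 0))
    except (ValueError, TypeError):
        return 0

ds = load_dataset("wangrui6/Zhihu-KOL", split="train")
filtered = ds.filter(lambda row: parse_upvotes(row["METADATA"]) >= 100)
```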
bzb2023/Zhihu-KOL-More-Than-100-Upvotes
[ "task_categories:text-generation", "language:zh", "license:apache-2.0", "region:us" ]
2024-01-01T18:27:50+00:00
{"language": ["zh"], "license": "apache-2.0", "task_categories": ["text-generation"]}
2024-01-01T18:35:01+00:00
[]
[ "zh" ]
TAGS #task_categories-text-generation #language-Chinese #license-apache-2.0 #region-us
An initial cleanup of the data from URL, keeping only entries with 100 or more upvotes.

271,261 rows in total.
[]
[ "TAGS\n#task_categories-text-generation #language-Chinese #license-apache-2.0 #region-us \n" ]
[ 30 ]
[ "passage: TAGS\n#task_categories-text-generation #language-Chinese #license-apache-2.0 #region-us \n" ]
f1efe996adbfb9e476dd48741feb0455b79965ec
# Dataset Card for "LongSumEt"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/10_3_23_Harm.pdf
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Dataset Summary

LongSumEt is an Estonian-language long-form summarization dataset built from pages filtered out of the CulturaX dataset. Each example consists of the page text together with a machine-generated short summary, long summary, and bullet points.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

Estonian

## Dataset Structure

### Data Instances

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Fields

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Splits

|train|test|valid|
|:----|:----|:----|
|8656|481|481|

### BibTeX entry and citation info

```bibtex
@article{henryabstractive,
  title={Abstractive Summarization of Broadcast News Stories for {Estonian}},
  author={H{\"a}rm, Henry and Alum{\"a}e, Tanel},
  journal={Baltic J. Modern Computing},
  volume={10},
  number={3},
  pages={511--524},
  year={2022}
}
```
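A quick loading sketch based on the fields declared in this card's metadata:

```python
from datasets import load_dataset

# Fields per the dataset metadata: text, long_summary, short_summary,
# bulletpoints, timestamp, url, source.
ds = load_dataset("TalTechNLP/LongSumEt", split="train")
example = ds[0]
print(example["short_summary"])
print(example["bulletpoints"])
```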
TalTechNLP/LongSumEt
[ "task_categories:summarization", "annotations_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:et", "license:cc-by-4.0", "region:us" ]
2024-01-01T18:38:04+00:00
{"annotations_creators": ["machine-generated"], "language": ["et"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["summarization"], "pretty_name": "LongSumEt", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "long_summary", "dtype": "string"}, {"name": "short_summary", "dtype": "string"}, {"name": "bulletpoints", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 85384791, "num_examples": 8656}, {"name": "test", "num_bytes": 4819298, "num_examples": 481}, {"name": "validation", "num_bytes": 4715166, "num_examples": 481}], "download_size": 61950277, "dataset_size": 94919255}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-13T19:29:48+00:00
[]
[ "et" ]
TAGS #task_categories-summarization #annotations_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Estonian #license-cc-by-4.0 #region-us
Dataset Card for "LongSumEt"
============================

Table of Contents
-----------------

* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
	+ Other Known Limitations
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Citation Information
	+ Contributions

Dataset Description
-------------------

* Homepage:
* Repository:
* Paper: URL
* Point of Contact:

### Dataset Summary

LongSumEt is an Estonian-language long-form summarization dataset built from pages filtered out of the CulturaX dataset. Each example consists of the page text together with a machine-generated short summary, long summary, and bullet points.

### Supported Tasks and Leaderboards

### Languages

Estonian

Dataset Structure
-----------------

### Data Instances

### Data Fields

### Data Splits

### BibTeX entry and citation info
[ "### Dataset Summary\n\n\nLongSumEt is an estonian language long summarization dataset with pages filtered from CulturaX dataset. The dataset consists of the page text, and machine generated short summary, long summary and bulletpoints.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEstonian\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits", "### BibTeX entry and citation info" ]
[ "TAGS\n#task_categories-summarization #annotations_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Estonian #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nLongSumEt is an estonian language long summarization dataset with pages filtered from CulturaX dataset. The dataset consists of the page text, and machine generated short summary, long summary and bulletpoints.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEstonian\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits", "### BibTeX entry and citation info" ]
[ 71, 56, 10, 13, 6, 5, 5, 11 ]
[ "passage: TAGS\n#task_categories-summarization #annotations_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Estonian #license-cc-by-4.0 #region-us \n### Dataset Summary\n\n\nLongSumEt is an estonian language long summarization dataset with pages filtered from CulturaX dataset. The dataset consists of the page text, and machine generated short summary, long summary and bulletpoints.### Supported Tasks and Leaderboards### Languages\n\n\nEstonian\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields### Data Splits### BibTeX entry and citation info" ]
c00979375e88660d4c0b51efdb523a0da9239049
This dataset contains the co-citation abstracts related to COPD used in the paper [Contrastive Learning and Mixture of Experts Enables Precise Vector Embeddings](https://arxiv.org/abs/2401.15713).
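A minimal look at the pair structure, based on the features declared in this card's metadata (`a`, `b`, `label`) and its train/valid/test splits:

```python
from datasets import load_dataset

ds = load_dataset("lhallee/abstract_domain_copd", split="valid")
pair = ds[0]
print(pair["label"])    # integer co-citation label for the (a, b) pair
print(pair["a"][:200])  # first abstract of the pair, truncated for display
```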
lhallee/abstract_domain_copd
[ "arxiv:2401.15713", "region:us" ]
2024-01-01T20:51:18+00:00
{"dataset_info": {"features": [{"name": "a", "dtype": "string"}, {"name": "b", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 477301584, "num_examples": 132453}, {"name": "valid", "num_bytes": 9596971, "num_examples": 2676}, {"name": "test", "num_bytes": 4758204, "num_examples": 1294}], "download_size": 200765538, "dataset_size": 491656759}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-30T02:53:58+00:00
[ "2401.15713" ]
[]
TAGS #arxiv-2401.15713 #region-us
This dataset contains the co-citation abstracts related to COPD used in the paper Contrastive Learning and Mixture of Experts Enables Precise Vector Embeddings.
[]
[ "TAGS\n#arxiv-2401.15713 #region-us \n" ]
[ 15 ]
[ "passage: TAGS\n#arxiv-2401.15713 #region-us \n" ]
d23baf2ab279683ab6bd8fdd779cf17cde926847
## Description

I explore the past so you don't have to!

## Prompt

A channel run by an influencer and video blogger called Jess.

She often does weird challenges like "saying yes to everyone", "walking to cross the United States", "walking in New York dressed as a chicken" to get millions of views and likes.

She also sometimes gives tips and advice for make-up, beauty, dating, etc., but she now makes random videos.

She is also a pro gamer, enjoying games like League of Legends, Fortnite, Call of Duty, The Sims, GTA 5, and Baldur's Gate 3, but she now makes random videos.
MichaelBoll/ai-tube-hug
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2024-01-01T20:54:04+00:00
{"license": "cc-by-nc-sa-4.0", "pretty_name": "Michael Bollox"}
2024-01-24T18:16:49+00:00
[]
[]
TAGS #license-cc-by-nc-sa-4.0 #region-us
## Description

I explore the past so you don't have to!

## Prompt

A channel run by an influencer and video blogger called Jess.

She often does weird challenges like "saying yes to everyone", "walking to cross the United States", "walking in New York dressed as a chicken" to get millions of views and likes.

She also sometimes gives tips and advice for make-up, beauty, dating, etc., but she now makes random videos.

She is also a pro gamer, enjoying games like League of Legends, Fortnite, Call of Duty, The Sims, GTA 5, and Baldur's Gate 3, but she now makes random videos.
[ "## Description\n\nI explore the past so you don't have too!", "## Prompt\n\nA channel run by an influencer and videoblogger called Jess.\n\nShe often do weird challenges like \"saying yes to everyone\", \"walking to corss the united states\", \"walk in new york dressed as a chicken\" to get millions of views and likes.\n\nShe also sometimes give tips and advices for make-up, beauty, dating etc, but she now makes random videos\n\nShe is also a pro gamer, enjoying games like League of Legends, Fortnite, Call of Duty, The Sims, GTA 5, Baldur's Gate 3, but she now makes random videos" ]
[ "TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n", "## Description\n\nI explore the past so you don't have too!", "## Prompt\n\nA channel run by an influencer and videoblogger called Jess.\n\nShe often do weird challenges like \"saying yes to everyone\", \"walking to corss the united states\", \"walk in new york dressed as a chicken\" to get millions of views and likes.\n\nShe also sometimes give tips and advices for make-up, beauty, dating etc, but she now makes random videos\n\nShe is also a pro gamer, enjoying games like League of Legends, Fortnite, Call of Duty, The Sims, GTA 5, Baldur's Gate 3, but she now makes random videos" ]
[ 19, 14, 131 ]
[ "passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n## Description\n\nI explore the past so you don't have too!## Prompt\n\nA channel run by an influencer and videoblogger called Jess.\n\nShe often do weird challenges like \"saying yes to everyone\", \"walking to corss the united states\", \"walk in new york dressed as a chicken\" to get millions of views and likes.\n\nShe also sometimes give tips and advices for make-up, beauty, dating etc, but she now makes random videos\n\nShe is also a pro gamer, enjoying games like League of Legends, Fortnite, Call of Duty, The Sims, GTA 5, Baldur's Gate 3, but she now makes random videos" ]
ea6300ba9a210a199949e52eaecb85cff99b3f60
This dataset contains the co-citation abstracts related to CVD used in the paper [Contrastive Learning and Mixture of Experts Enables Precise Vector Embeddings](https://arxiv.org/abs/2401.15713).
lhallee/abstract_domain_cvd
[ "arxiv:2401.15713", "region:us" ]
2024-01-01T21:00:39+00:00
{"dataset_info": {"features": [{"name": "a", "dtype": "string"}, {"name": "b", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 685896937, "num_examples": 181000}, {"name": "valid", "num_bytes": 17346151, "num_examples": 4584}, {"name": "test", "num_bytes": 2872780, "num_examples": 753}], "download_size": 208705249, "dataset_size": 706115868}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-30T02:52:16+00:00
[ "2401.15713" ]
[]
TAGS #arxiv-2401.15713 #region-us
This dataset contains the co-citation abstracts related to CVD used in the paper Contrastive Learning and Mixture of Experts Enables Precise Vector Embeddings.
[]
[ "TAGS\n#arxiv-2401.15713 #region-us \n" ]
[ 15 ]
[ "passage: TAGS\n#arxiv-2401.15713 #region-us \n" ]
84ab8e7cb679217861725896ee73ab73e8589cd4
# Dataset Card for "test_startup_advice_10k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
salma-remyx/test_startup_advice_10k
[ "region:us" ]
2024-01-01T21:08:57+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10808411, "num_examples": 10000}], "download_size": 6314588, "dataset_size": 10808411}}
2024-01-01T21:09:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_startup_advice_10k" More Information needed
[ "# Dataset Card for \"test_startup_advice_10k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_startup_advice_10k\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test_startup_advice_10k\"\n\nMore Information needed" ]
44f4536ad52df672c054393a76c0649a7692f501
# Tatoeba Turkish-English 2024 tatoeba.org Turkish-English pairs. Last Update: 01.01.2024
beratcmn/tatoeba-tr-en
[ "task_categories:translation", "size_categories:100K<n<1M", "language:tr", "language:en", "license:cc-by-2.0", "translation", "turkish", "english", "tatoeba", "region:us" ]
2024-01-01T21:48:52+00:00
{"language": ["tr", "en"], "license": "cc-by-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["translation"], "pretty_name": "Tatoeba Turkish-English 2024", "tags": ["translation", "turkish", "english", "tatoeba"]}
2024-01-01T21:53:42+00:00
[]
[ "tr", "en" ]
TAGS #task_categories-translation #size_categories-100K<n<1M #language-Turkish #language-English #license-cc-by-2.0 #translation #turkish #english #tatoeba #region-us
# Tatoeba Turkish-English 2024 URL Turkish-English pairs. Last Update: 01.01.2024
[ "# Tatoeba Turkish-English 2024\n\nURL Turkish-English pairs. \n\nLast Update: 01.01.2024" ]
[ "TAGS\n#task_categories-translation #size_categories-100K<n<1M #language-Turkish #language-English #license-cc-by-2.0 #translation #turkish #english #tatoeba #region-us \n", "# Tatoeba Turkish-English 2024\n\nURL Turkish-English pairs. \n\nLast Update: 01.01.2024" ]
[ 59, 24 ]
[ "passage: TAGS\n#task_categories-translation #size_categories-100K<n<1M #language-Turkish #language-English #license-cc-by-2.0 #translation #turkish #english #tatoeba #region-us \n# Tatoeba Turkish-English 2024\n\nURL Turkish-English pairs. \n\nLast Update: 01.01.2024" ]
61420906f05c1cc3756d1206d097ef2dc342df39
SergioSCA/StageVision_v3
[ "task_categories:object-detection", "size_categories:n<1K", "language:es", "license:apache-2.0", "region:us" ]
2024-01-01T22:10:07+00:00
{"language": ["es"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["object-detection"], "pretty_name": "StageVision v3"}
2024-01-07T10:36:16+00:00
[]
[ "es" ]
TAGS #task_categories-object-detection #size_categories-n<1K #language-Spanish #license-apache-2.0 #region-us
[]
[ "TAGS\n#task_categories-object-detection #size_categories-n<1K #language-Spanish #license-apache-2.0 #region-us \n" ]
[ 40 ]
[ "passage: TAGS\n#task_categories-object-detection #size_categories-n<1K #language-Spanish #license-apache-2.0 #region-us \n" ]
703e98fefd5df06d2b9896878907231c7db865db
This is a reformatted version of https://huggingface.co/datasets/LDJnr/Capybara, converted to the ShareGPT format for easier consumption by Axolotl during model training.
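For reference, the ShareGPT-style layout Axolotl consumes typically looks like the record below; this is an illustration of the format, not an actual row from the dataset:

```python
example = {
    "conversations": [
        {"from": "human", "value": "Explain what a binary search does."},
        {"from": "gpt", "value": "It repeatedly halves a sorted search range ..."},
    ]
}
```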
ssmi153/Capybara-ShareGPT
[ "license:apache-2.0", "region:us" ]
2024-01-01T22:20:29+00:00
{"license": "apache-2.0"}
2024-01-02T05:52:56+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
This is a reformatted version of URL, converted to the ShareGPT format for easier consumption by Axolotl during model training.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
334eaefead516335930922d618205850a766adb4
# Speaker embeddings extracted from CMU ARCTIC

There is one `.npy` file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.

The [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) dataset divides the utterances among the following speakers:

- bdl (US male)
- slt (US female)
- jmk (Canadian male)
- awb (Scottish male)
- rms (US male)
- clb (US female)
- ksp (Indian male)

The X-vectors were extracted using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), which uses the `speechbrain/spkrec-xvect-voxceleb` model.

Usage:

```python
import torch
from datasets import load_dataset

embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")

# Select one utterance's 512-dim X-vector and add a batch dimension: (1, 512)
speaker_embeddings = embeddings_dataset[7306]["xvector"]
speaker_embeddings = torch.tensor(speaker_embeddings).unsqueeze(0)
```
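As a hedged follow-up, these X-vectors are commonly passed to SpeechT5 for text-to-speech; the checkpoints below are the usual public ones rather than something this card prescribes:

```python
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, world!", return_tensors="pt")
# speaker_embeddings is the (1, 512) tensor built in the snippet above
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```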
Dupaja/cmu-arctic-xvectors
[ "task_categories:text-to-speech", "task_categories:audio-to-audio", "license:mit", "region:us" ]
2024-01-01T23:54:00+00:00
{"license": "mit", "task_categories": ["text-to-speech", "audio-to-audio"], "pretty_name": "CMU ARCTIC X-Vectors"}
2024-01-01T23:54:00+00:00
[]
[]
TAGS #task_categories-text-to-speech #task_categories-audio-to-audio #license-mit #region-us
# Speaker embeddings extracted from CMU ARCTIC There is one '.npy' file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors. The CMU ARCTIC dataset divides the utterances among the following speakers: - bdl (US male) - slt (US female) - jmk (Canadian male) - awb (Scottish male) - rms (US male) - clb (US female) - ksp (Indian male) The X-vectors were extracted using this script, which uses the 'speechbrain/spkrec-xvect-voxceleb' model. Usage:
[ "# Speaker embeddings extracted from CMU ARCTIC\n\nThere is one '.npy' file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.\n\nThe CMU ARCTIC dataset divides the utterances among the following speakers:\n\n- bdl (US male)\n- slt (US female)\n- jmk (Canadian male)\n- awb (Scottish male)\n- rms (US male)\n- clb (US female)\n- ksp (Indian male)\n\nThe X-vectors were extracted using this script, which uses the 'speechbrain/spkrec-xvect-voxceleb' model.\n\nUsage:" ]
[ "TAGS\n#task_categories-text-to-speech #task_categories-audio-to-audio #license-mit #region-us \n", "# Speaker embeddings extracted from CMU ARCTIC\n\nThere is one '.npy' file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.\n\nThe CMU ARCTIC dataset divides the utterances among the following speakers:\n\n- bdl (US male)\n- slt (US female)\n- jmk (Canadian male)\n- awb (Scottish male)\n- rms (US male)\n- clb (US female)\n- ksp (Indian male)\n\nThe X-vectors were extracted using this script, which uses the 'speechbrain/spkrec-xvect-voxceleb' model.\n\nUsage:" ]
[ 38, 168 ]
[ "passage: TAGS\n#task_categories-text-to-speech #task_categories-audio-to-audio #license-mit #region-us \n# Speaker embeddings extracted from CMU ARCTIC\n\nThere is one '.npy' file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.\n\nThe CMU ARCTIC dataset divides the utterances among the following speakers:\n\n- bdl (US male)\n- slt (US female)\n- jmk (Canadian male)\n- awb (Scottish male)\n- rms (US male)\n- clb (US female)\n- ksp (Indian male)\n\nThe X-vectors were extracted using this script, which uses the 'speechbrain/spkrec-xvect-voxceleb' model.\n\nUsage:" ]
400751de860bb2f901da9c661c6cae1f5523bfe4
# Weblate Translations Amazigh subset of [Weblate Translations](https://huggingface.co/datasets/ayymen/Weblate-Translations).
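Each language pair is its own config; the names below come from this card's metadata (`en-zgh` is the default):

```python
from datasets import load_dataset

# e.g. English <-> Standard Moroccan Tamazight; other configs include
# "en-kab", "en-tzm", "en_US-ber", "en_GB-zgh", ...
ds = load_dataset("Tamazight-NLP/Weblate-Translations", "en-zgh")
```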
Tamazight-NLP/Weblate-Translations
[ "task_categories:translation", "task_categories:text2text-generation", "annotations_creators:crowdsourced", "size_categories:10K<n<100K", "language:ber", "language:zgh", "language:kab", "language:tzm", "language:en", "region:us" ]
2024-01-02T01:01:43+00:00
{"annotations_creators": ["crowdsourced"], "language": ["ber", "zgh", "kab", "tzm", "en"], "size_categories": ["10K<n<100K"], "task_categories": ["translation", "text2text-generation"], "pretty_name": "Weblate Translations", "configs": [{"config_name": "en_GB-tzm", "data_files": "en_GB-tzm.tsv"}, {"config_name": "en-ber", "data_files": "en-ber.tsv"}, {"config_name": "en-kab-KAB", "data_files": "en-kab-KAB.tsv"}, {"config_name": "en_US-tzm", "data_files": "en_US-tzm.tsv"}, {"config_name": "en-zgh", "data_files": "en-zgh.tsv", "default": true}, {"config_name": "en-tzm", "data_files": "en-tzm.tsv"}, {"config_name": "en_US-ber", "data_files": "en_US-ber.tsv"}, {"config_name": "en_US-kab", "data_files": "en_US-kab.tsv"}, {"config_name": "en_GB-kab", "data_files": "en_GB-kab.tsv"}, {"config_name": "zh_Hant-zgh", "data_files": "zh_Hant-zgh.tsv"}, {"config_name": "en-b+kab", "data_files": "en-b+kab.tsv"}, {"config_name": "en_GB-zgh", "data_files": "en_GB-zgh.tsv"}, {"config_name": "en-kab", "data_files": "en-kab.tsv"}]}
2024-01-13T14:23:38+00:00
[]
[ "ber", "zgh", "kab", "tzm", "en" ]
TAGS #task_categories-translation #task_categories-text2text-generation #annotations_creators-crowdsourced #size_categories-10K<n<100K #language-ber #language-Standard Moroccan Tamazight #language-Kabyle #language-Central Atlas Tamazight #language-English #region-us
# Weblate Translations Amazigh subset of Weblate Translations.
[ "# Weblate Translations\nAmazigh subset of Weblate Translations." ]
[ "TAGS\n#task_categories-translation #task_categories-text2text-generation #annotations_creators-crowdsourced #size_categories-10K<n<100K #language-ber #language-Standard Moroccan Tamazight #language-Kabyle #language-Central Atlas Tamazight #language-English #region-us \n", "# Weblate Translations\nAmazigh subset of Weblate Translations." ]
[ 86, 16 ]
[ "passage: TAGS\n#task_categories-translation #task_categories-text2text-generation #annotations_creators-crowdsourced #size_categories-10K<n<100K #language-ber #language-Standard Moroccan Tamazight #language-Kabyle #language-Central Atlas Tamazight #language-English #region-us \n# Weblate Translations\nAmazigh subset of Weblate Translations." ]
2cf771fe4450e52c8fbe8f87c9f7ef20f3189de6
# Dataset Card for "no_robots" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jovianzm/no_robots
[ "task_categories:conversational", "task_categories:question-answering", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "language:en", "license:mit", "region:us" ]
2024-01-02T01:32:46+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K", "1K<n<10K"], "task_categories": ["conversational", "question-answering"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28805395, "num_examples": 9500}, {"name": "test", "num_bytes": 1545168, "num_examples": 500}], "download_size": 18891461, "dataset_size": 30350563}}
2024-01-02T01:51:46+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #size_categories-1K<n<10K #language-English #license-mit #region-us
# Dataset Card for "no_robots" More Information needed
[ "# Dataset Card for \"no_robots\"\n\nMore Information needed" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #size_categories-1K<n<10K #language-English #license-mit #region-us \n", "# Dataset Card for \"no_robots\"\n\nMore Information needed" ]
[ 61, 14 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #size_categories-1K<n<10K #language-English #license-mit #region-us \n# Dataset Card for \"no_robots\"\n\nMore Information needed" ]
d7d16f1018d7b5e8455e96992c31a94fbcc76794
# Dataset Card for "ISABELLA-COTIER-ART" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
iamkaikai/ISABELLA-COTIER-ART
[ "region:us" ]
2024-01-02T01:47:15+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14682800.0, "num_examples": 273}], "download_size": 14659067, "dataset_size": 14682800.0}}
2024-01-02T04:53:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ISABELLA-COTIER-ART" More Information needed
[ "# Dataset Card for \"ISABELLA-COTIER-ART\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ISABELLA-COTIER-ART\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ISABELLA-COTIER-ART\"\n\nMore Information needed" ]
99b69c4ee8980143eb62c77e3159673f14290569
# HSS Shakha Khel Dataset

## Overview
The HSS Shakha Khel dataset is specifically tailored for training the LLAMA 2 7B model. This dataset is in the alpaca format, ensuring compatibility and efficiency for machine learning purposes.

## Source
This dataset has been meticulously created by extracting information from the "HSS KHEL BOOK," which is a comprehensive resource on various activities and exercises. The book can be accessed [here](https://sevikasamiti.org/resources/RSS/Publications/Khel%20Book.pdf).

## Dataset Structure
The dataset is structured in the alpaca format, which is ideal for training advanced machine learning models like LLAMA 2 7B.

## Acknowledgements
This dataset was created thanks to the information provided in the "HSS KHEL BOOK." We acknowledge the authors and contributors of the book for their valuable work.

## Contact
For any queries or further information regarding the dataset, please reach out to me.

## License
Apache 2.0
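For reference, the Alpaca layout mentioned above is conventionally a list of records shaped like this; the sample content is illustrative, not taken from the dataset:

```python
example = {
    "instruction": "Describe one khel (game) suitable for a shakha session.",
    "input": "",
    "output": "Kabaddi is played between two teams ...",
}
```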
Suru/HSS-shakha-khel
[ "region:us" ]
2024-01-02T02:08:19+00:00
{}
2024-01-02T03:42:40+00:00
[]
[]
TAGS #region-us
# HSS Shakha Khel Dataset ## Overview The HSS Shakha Khel dataset is specifically tailored for training the LLAMA 2 7B model. This dataset is in the alpaca format, ensuring compatibility and efficiency for machine learning purposes. ## Source This dataset has been meticulously created by extracting information from the "HSS KHEL BOOK," which is a comprehensive resource on various activities and exercises. The book can be accessed here. ## Dataset Structure The dataset is structured in the alpaca format, which is ideal for training advanced machine learning models like LLAMA 2 7B. ## Acknowledgements This dataset was created thanks to the information provided in the "HSS KHEL BOOK." We acknowledge the authors and contributors of the book for their valuable work. ## Contact For any queries or further information regarding the dataset, please reach out to me --- license: apache-2.0 ---
[ "# HSS Shakha Khel Dataset", "## Overview\nThe HSS Shakha Khel dataset is specifically tailored for training the LLAMA 2 7B model. This dataset is in the alpaca format, ensuring compatibility and efficiency for machine learning purposes.", "## Source\nThis dataset has been meticulously created by extracting information from the \"HSS KHEL BOOK,\" which is a comprehensive resource on various activities and exercises. The book can be accessed here.", "## Dataset Structure\nThe dataset is structured in the alpaca format, which is ideal for training advanced machine learning models like LLAMA 2 7B.", "## Acknowledgements\nThis dataset was created thanks to the information provided in the \"HSS KHEL BOOK.\" We acknowledge the authors and contributors of the book for their valuable work.", "## Contact\nFor any queries or further information regarding the dataset, please reach out to me\n\n---\nlicense: apache-2.0\n---" ]
[ "TAGS\n#region-us \n", "# HSS Shakha Khel Dataset", "## Overview\nThe HSS Shakha Khel dataset is specifically tailored for training the LLAMA 2 7B model. This dataset is in the alpaca format, ensuring compatibility and efficiency for machine learning purposes.", "## Source\nThis dataset has been meticulously created by extracting information from the \"HSS KHEL BOOK,\" which is a comprehensive resource on various activities and exercises. The book can be accessed here.", "## Dataset Structure\nThe dataset is structured in the alpaca format, which is ideal for training advanced machine learning models like LLAMA 2 7B.", "## Acknowledgements\nThis dataset was created thanks to the information provided in the \"HSS KHEL BOOK.\" We acknowledge the authors and contributors of the book for their valuable work.", "## Contact\nFor any queries or further information regarding the dataset, please reach out to me\n\n---\nlicense: apache-2.0\n---" ]
[ 6, 9, 51, 46, 35, 41, 27 ]
[ "passage: TAGS\n#region-us \n# HSS Shakha Khel Dataset## Overview\nThe HSS Shakha Khel dataset is specifically tailored for training the LLAMA 2 7B model. This dataset is in the alpaca format, ensuring compatibility and efficiency for machine learning purposes.## Source\nThis dataset has been meticulously created by extracting information from the \"HSS KHEL BOOK,\" which is a comprehensive resource on various activities and exercises. The book can be accessed here.## Dataset Structure\nThe dataset is structured in the alpaca format, which is ideal for training advanced machine learning models like LLAMA 2 7B.## Acknowledgements\nThis dataset was created thanks to the information provided in the \"HSS KHEL BOOK.\" We acknowledge the authors and contributors of the book for their valuable work.## Contact\nFor any queries or further information regarding the dataset, please reach out to me\n\n---\nlicense: apache-2.0\n---" ]
5380187a0d53680ae433b5dfa8a8fe760cc56576
# Dataset Card for "discorsi-vari" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mii-llm/discorsi-vari
[ "region:us" ]
2024-01-02T02:18:44+00:00
{"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 63794757.0, "num_examples": 8125}], "download_size": 29458789, "dataset_size": 63794757.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T02:19:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "discorsi-vari" More Information needed
[ "# Dataset Card for \"discorsi-vari\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"discorsi-vari\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"discorsi-vari\"\n\nMore Information needed" ]
f37104c7af4e162349f35eb3e19d3e8f4debde0e
# DevSpecCode Synthetic Code Dataset with instructions requiring multiple complex requirements, limitations, and instructions. ### Example Instruction ``` Please create a small function in Go that meets the following requirements: 1. Write a Go function named `parallelSum` that accepts a slice of integers and returns the sum of those integers. However, the sum must be calculated in parallel using Go routines, by dividing the slice into four roughly equal parts and summing each part in separate Go routines. Use channels to collect the results of each summing routine. 2. Ensure that the `parallelSum` function is safe for concurrent use by multiple goroutines. To achieve this, you must implement a mechanism to prevent race conditions when the separate sums are combined to produce the final sum. 3. The function should be able to handle slices of any size (including those not evenly divisible by four). It must allocate any extra elements correctly among the four summing routines to ensure accurate results. If the number of elements is less than four, the function should still use multiple routines for practice, but it may result in some routines receiving no elements to sum. Remember, the implementation should not exceed 50 lines of code and should contain all the required concurrency controls and error handling exclusively within the function body. ``` ### Languages - Python (*majority*) - JavaScript - Java - C# - C++ - Ruby - Go - TypeScript
cfahlgren1/DevSpecCode
[ "license:mit", "region:us" ]
2024-01-02T02:48:14+00:00
{"license": "mit"}
2024-01-02T02:56:12+00:00
[]
[]
TAGS #license-mit #region-us
# DevSpecCode Synthetic Code Dataset with instructions requiring multiple complex requirements, limitations, and instructions. ### Example Instruction ### Languages - Python (*majority*) - JavaScript - Java - C# - C++ - Ruby - Go - TypeScript
[ "# DevSpecCode\n\nSynthetic Code Dataset with instructions requiring multiple complex requirements, limitations, and instructions.", "### Example Instruction", "### Languages\n\n- Python (*majority*)\n- JavaScript\n- Java\n- C#\n- C++\n- Ruby\n- Go\n- TypeScript" ]
[ "TAGS\n#license-mit #region-us \n", "# DevSpecCode\n\nSynthetic Code Dataset with instructions requiring multiple complex requirements, limitations, and instructions.", "### Example Instruction", "### Languages\n\n- Python (*majority*)\n- JavaScript\n- Java\n- C#\n- C++\n- Ruby\n- Go\n- TypeScript" ]
[ 11, 25, 6, 28 ]
[ "passage: TAGS\n#license-mit #region-us \n# DevSpecCode\n\nSynthetic Code Dataset with instructions requiring multiple complex requirements, limitations, and instructions.### Example Instruction### Languages\n\n- Python (*majority*)\n- JavaScript\n- Java\n- C#\n- C++\n- Ruby\n- Go\n- TypeScript" ]
5e9aa029e84a3fa94498dd55bb711d2b4306a117
As per [the community's request](https://huggingface.co/datasets/CausalLM/GPT-4-Self-Instruct-German/discussions/1), here we share a Turkish dataset synthesized using the OpenAI GPT-4 model with Self-Instruct, utilizing some excess Azure credits. Please feel free to use it. All questions and answers are newly generated by GPT-4 without specialized verification; only simple filtering and strict semantic-similarity control have been applied. We hope that this will be helpful for fine-tuning open-source models for non-English languages, particularly Turkish. This dataset will be updated continuously.
CausalLM/GPT-4-Self-Instruct-Turkish
[ "language:tr", "license:cc-by-4.0", "gpt4", "region:us" ]
2024-01-02T02:57:33+00:00
{"language": ["tr"], "license": "cc-by-4.0", "tags": ["gpt4"]}
2024-01-02T15:21:43+00:00
[]
[ "tr" ]
TAGS #language-Turkish #license-cc-by-4.0 #gpt4 #region-us
As per the community's request, here we share a Turkish dataset synthesized using the OpenAI GPT-4 model with Self-Instruct, utilizing some excess Azure credits. Please feel free to use it. All questions and answers are newly generated by GPT-4 without specialized verification; only simple filtering and strict semantic-similarity control have been applied. We hope that this will be helpful for fine-tuning open-source models for non-English languages, particularly Turkish. This dataset will be updated continuously.
[]
[ "TAGS\n#language-Turkish #license-cc-by-4.0 #gpt4 #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#language-Turkish #license-cc-by-4.0 #gpt4 #region-us \n" ]
ff13babe333076ac8014bd70f3c99282ddc4bea1
# Dataset Card for primer_demo_ejemplo_ds_semantic

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset description](#dataset-description)
- [Dataset categories](#dataset-categories)

## Dataset description

- **Homepage:** https://huggingface.co/datasets/Lit4pCol4b/primer_demo_ejemplo_ds_semantic

## Dataset categories

| Id | Name | Color (RGB) |
| --- | ---- | ----------- |
| 1 | objeto_interes | [128, 0, 0] |
| 2 | agua | [0, 128, 0] |
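A small sketch turning the category table into the `id2label` and color-palette mappings a SegFormer-style fine-tuning pipeline typically expects:

```python
id2label = {1: "objeto_interes", 2: "agua"}
label2id = {name: idx for idx, name in id2label.items()}
palette = {1: (128, 0, 0), 2: (0, 128, 0)}  # RGB colors from the table above
```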
Lit4pCol4b/demo_ds_hf_hub_segformer_fine_tuned_ADE20k_format_rgb_crudo_oi_IS_v1
[ "task_categories:image-segmentation", "region:us" ]
2024-01-02T03:02:24+00:00
{"task_categories": ["image-segmentation"]}
2024-01-04T01:29:37+00:00
[]
[]
TAGS #task_categories-image-segmentation #region-us
Dataset Card for primer\_demo\_ejemplo\_ds\_semantic
====================================================

Table of Contents
-----------------

* Table of Contents
* Dataset description
* Dataset categories

Dataset description
-------------------

* Homepage: URL

Dataset categories
------------------

Id: 1, Name: objeto\_interes, Color (RGB): [128, 0, 0]
Id: 2, Name: agua, Color (RGB): [0, 128, 0]
[]
[ "TAGS\n#task_categories-image-segmentation #region-us \n" ]
[ 18 ]
[ "passage: TAGS\n#task_categories-image-segmentation #region-us \n" ]
e41de23cf91606016d96876a4a594991f36be251
This dataset has been modified from the microsoft/LCC_csharp dataset to provide CodeLLaMa with infilling tasks as per the original fill-in-the-middle paper, where the text that needs to be filled in is moved to the end of the sequence, thus taking advantage of the generative nature of GPT-style models.
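A hedged sketch of assembling a Code Llama-style infilling example from this dataset's `prefix`/`suffix`/`prediction` columns (names per the metadata). The `<PRE>`/`<SUF>`/`<MID>` markers follow the Code Llama infilling convention; verify them against your tokenizer's actual special tokens before training:

```python
def to_infilling_example(row: dict) -> str:
    # The prompt presents prefix and suffix; the model generates the middle,
    # which is the dataset's 'prediction' column appended as the target.
    prompt = f"<PRE> {row['prefix']} <SUF>{row['suffix']} <MID>"
    return prompt + row["prediction"]
```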
fasterinnerlooper/lcc_csharp
[ "task_categories:mask-generation", "task_categories:fill-mask", "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:mit", "code", "region:us" ]
2024-01-02T03:25:30+00:00
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["mask-generation", "fill-mask", "text-generation"], "pretty_name": "LCC_csharp dataset modified for infilling", "dataset_info": {"features": [{"name": "prefix", "dtype": "string"}, {"name": "suffix", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1852197668, "num_examples": 100000}], "download_size": 531853418, "dataset_size": 1852197668}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["code"]}
2024-01-21T22:10:31+00:00
[]
[ "en" ]
TAGS #task_categories-mask-generation #task_categories-fill-mask #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #code #region-us
This dataset has been modified from the microsoft/LCC_csharp dataset to provide CodeLLaMa with infilling tasks as per the original fill-in-the-middle paper, where the text that needs to be filled in is moved to the end of the sequence, thus taking advantage of the generative nature of GPT-style models.
[]
[ "TAGS\n#task_categories-mask-generation #task_categories-fill-mask #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #code #region-us \n" ]
[ 63 ]
[ "passage: TAGS\n#task_categories-mask-generation #task_categories-fill-mask #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #code #region-us \n" ]
635d84a12a01f7efb985bb820ffa7386ae8e8d14
# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task

The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset

These columns are taken directly from the aforementioned dataset:

* **id**: unique identifier for the post
* **subreddit**: subreddit the post was taken from
* **title**: title of the post
* **post**: body of the post
* **summary**: summary of the post
* **reference_response**: reference response for the post

These columns are added by this preprocessing script:
* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last `\n`. If it's too short it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either space or `[PAD]` token (see Args below).
* **query_token**: tokenized version of `query`
* **reference_response_token**: tokenized version of `reference_response`
* **reference_response_token_len**: length of `reference_response_token`
* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
* **query_reference_response_token_len**: length of `query_reference_response_token`


# Args

```python
{'base_model': 'EleutherAI/pythia-160m',
 'hf_entity': 'vwxyzjn',
 'max_rm_query_response_length': 560,
 'max_rm_response_length': 48,
 'max_sft_query_response_length': 560,
 'max_sft_response_length': 48,
 'oai_params': TaskQueryHParams(length=512,
                                format_str='SUBREDDIT: r/{subreddit}\n'
                                           '\n'
                                           'TITLE: {title}\n'
                                           '\n'
                                           'POST: {post}\n'
                                           '\n'
                                           'TL;DR:',
                                truncate_field='post',
                                truncate_text='\n',
                                padding=[50277],
                                pad_side='left'),
 'push_to_hub': True}
{'format_str': 'SUBREDDIT: r/{subreddit}\n'
               '\n'
               'TITLE: {title}\n'
               '\n'
               'POST: {post}\n'
               '\n'
               'TL;DR:',
 'length': 512,
 'pad_side': 'left',
 'padding': [50277],
 'truncate_field': 'post',
 'truncate_text': '\n'}
```
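A simplified sketch of the length-limiting rule described above, assuming a tokenizer with `encode`/`decode`; the authoritative logic is in the linked `summarize_from_feedback/tasks.py`:

```python
def limit_query(text: str, tokenizer, length: int = 512, pad_id: int = 50277) -> list[int]:
    ids = tokenizer.encode(text)
    if len(ids) > length:
        truncated = tokenizer.decode(ids[:length])
        # back off to the last newline so the query ends on a whole line
        cut = truncated[: truncated.rfind("\n") + 1]
        ids = tokenizer.encode(cut if cut else truncated)
    # left-pad with the [PAD] token id, per pad_side='left' in the Args
    return [pad_id] * (length - len(ids)) + ids
```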
vwxyzjn/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_1704166566
[ "region:us" ]
2024-01-02T03:36:19+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "reference_response", "dtype": "string"}, {"name": "reference_response_token", "sequence": "int64"}, {"name": "reference_response_token_len", "dtype": "int64"}, {"name": "query_reference_response", "dtype": "string"}, {"name": "query_reference_response_token", "sequence": "int64"}, {"name": "query_reference_response_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1787745967, "num_examples": 116722}, {"name": "validation", "num_bytes": 98754319, "num_examples": 6447}, {"name": "test", "num_bytes": 100397998, "num_examples": 6553}], "download_size": 573704772, "dataset_size": 1986898284}}
2024-01-02T03:36:38+00:00
[]
[]
TAGS #region-us
# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task The dataset is directly taken from URL These columns are taken directly from the aforementioned dataset: * id: unique identifier for the post * subreddit: subreddit the post was taken from * title: title of the post * post: body of the post * summary: summary of the post * reference_response: reference response for the post These columns are added by this preprocessing script: * query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last ' '. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below). * query_token: tokenized version of 'query' * reference_response_token: tokenized version of 'reference_response' * reference_response_token_len: length of 'reference_response_token' * query_reference_response: concatenation of 'URL()' and 'reference_response' * query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens * query_reference_response_token_len: length of 'query_reference_response_token' # Args
[ "# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'", "# Args" ]
[ "TAGS\n#region-us \n", "# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'", "# Args" ]
[ 6, 384, 3 ]
[ "passage: TAGS\n#region-us \n# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'# Args" ]
04922dd7bb97e48739f4dd0d835db86acfe28783
# Dataset Card for "summarize_from_feedback_oai_preprocessing_1704166566" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vwxyzjn/summarize_from_feedback_oai_preprocessing_1704166566
[ "region:us" ]
2024-01-02T03:36:57+00:00
{"dataset_info": {"features": [{"name": "info", "struct": [{"name": "id", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "site", "dtype": "string"}, {"name": "article", "dtype": "string"}]}, {"name": "summaries", "list": [{"name": "text", "dtype": "string"}, {"name": "policy", "dtype": "string"}, {"name": "note", "dtype": "string"}]}, {"name": "choice", "dtype": "int32"}, {"name": "worker", "dtype": "string"}, {"name": "batch", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "extra", "struct": [{"name": "confidence", "dtype": "int32"}]}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "response0", "dtype": "string"}, {"name": "response0_token", "sequence": "int64"}, {"name": "response0_token_len", "dtype": "int64"}, {"name": "response1", "dtype": "string"}, {"name": "response1_token", "sequence": "int64"}, {"name": "response1_token_len", "dtype": "int64"}, {"name": "response0_policy", "dtype": "string"}, {"name": "response1_policy", "dtype": "string"}, {"name": "policies", "dtype": "string"}, {"name": "query_response0", "dtype": "string"}, {"name": "query_response0_token", "sequence": "int64"}, {"name": "query_response0_token_len", "dtype": "int64"}, {"name": "query_response1", "dtype": "string"}, {"name": "query_response1_token", "sequence": "int64"}, {"name": "query_response1_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2145150935, "num_examples": 92858}, {"name": "validation", "num_bytes": 2005104645, "num_examples": 86086}], "download_size": 285738191, "dataset_size": 4150255580}}
2024-01-02T03:37:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "summarize_from_feedback_oai_preprocessing_1704166566" More Information needed
[ "# Dataset Card for \"summarize_from_feedback_oai_preprocessing_1704166566\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"summarize_from_feedback_oai_preprocessing_1704166566\"\n\nMore Information needed" ]
[ 6, 30 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"summarize_from_feedback_oai_preprocessing_1704166566\"\n\nMore Information needed" ]
29d4055957b3f871b5e93bce3ee35179ce10068f
# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset These columns are taken directly from the aforementioned dataset: * **id**: unique identifier for the post * **subreddit**: subreddit the post was taken from * **title**: title of the post * **post**: body of the post * **summary**: summary of the post * **reference_response**: reference response for the post These columns are added by this preprocessing script: * **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last ` `. If it's too short it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either space or `[PAD]` token (see Args below). * **query_token**: tokenized version of `query` * **reference_response_token**: tokenized version of `reference_response` * **reference_response_token_len**: length of `reference_response_token` * **query_reference_response**: concatenation of `query.strip()` and `reference_response` * **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens * **query_reference_response_token_len**: length of `query_reference_response_token` # Args ```python {'base_model': 'EleutherAI/pythia-160m', 'hf_entity': 'vwxyzjn', 'max_rm_query_response_length': 560, 'max_rm_response_length': 48, 'max_sft_query_response_length': 560, 'max_sft_response_length': 48, 'oai_params': TaskQueryHParams(length=512, format_str='SUBREDDIT: r/{subreddit}\n' '\n' 'TITLE: {title}\n' '\n' 'POST: {post}\n' '\n' 'TL;DR:', truncate_field='post', truncate_text='\n', padding=[50277], pad_side='left'), 'push_to_hub': True} {'format_str': 'SUBREDDIT: r/{subreddit}\n' '\n' 'TITLE: {title}\n' '\n' 'POST: {post}\n' '\n' 'TL;DR:', 'length': 512, 'pad_side': 'left', 'padding': [50277], 'truncate_field': 'post', 'truncate_text': '\n'} ```
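The truncate-or-pad behaviour described above is compact enough to sketch. The following is a minimal illustration, not the reference implementation (which lives in [summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)); the greedy retry loop and the 16-character fallback cut are simplifying assumptions.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")  # base model from the Args

FORMAT_STR = "SUBREDDIT: r/{subreddit}\n\nTITLE: {title}\n\nPOST: {post}\n\nTL;DR:"
LENGTH = 512    # target query length in tokens, per the Args above
PAD_ID = 50277  # '[PAD]' token id listed in the Args above

def build_query(subreddit: str, title: str, post: str) -> list[int]:
    """Format the main text, truncating the post at its last newline until it
    fits, then left-pad with PAD_ID so the query is exactly LENGTH tokens."""
    while True:
        tokens = tokenizer.encode(
            FORMAT_STR.format(subreddit=subreddit, title=title, post=post)
        )
        if len(tokens) <= LENGTH:
            return [PAD_ID] * (LENGTH - len(tokens)) + tokens  # pad on the left
        cut = post.rfind("\n")
        # No newline left to truncate at: fall back to chopping characters.
        post = post[:cut] if cut > 0 else post[:-16]
```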
vwxyzjn/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_1704169778
[ "region:us" ]
2024-01-02T04:29:50+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "reference_response", "dtype": "string"}, {"name": "reference_response_token", "sequence": "int64"}, {"name": "reference_response_token_len", "dtype": "int64"}, {"name": "query_reference_response", "dtype": "string"}, {"name": "query_reference_response_token", "sequence": "int64"}, {"name": "query_reference_response_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1593903817, "num_examples": 116722}, {"name": "validation", "num_bytes": 88064739, "num_examples": 6447}, {"name": "test", "num_bytes": 89555498, "num_examples": 6553}], "download_size": 551663615, "dataset_size": 1771524054}}
2024-01-02T04:30:11+00:00
[]
[]
TAGS #region-us
# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task The dataset is directly taken from URL These columns are taken directly from the aforementioned dataset: * id: unique identifier for the post * subreddit: subreddit the post was taken from * title: title of the post * post: body of the post * summary: summary of the post * reference_response: reference response for the post These columns are added by this preprocessing script: * query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last ' '. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below). * query_token: tokenized version of 'query' * reference_response_token: tokenized version of 'reference_response' * reference_response_token_len: length of 'reference_response_token' * query_reference_response: concatenation of 'URL()' and 'reference_response' * query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens * query_reference_response_token_len: length of 'query_reference_response_token' # Args
[ "# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'", "# Args" ]
[ "TAGS\n#region-us \n", "# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'", "# Args" ]
[ 6, 384, 3 ]
[ "passage: TAGS\n#region-us \n# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'# Args" ]
50176c0164ac8850f27cad9f3aabe93cf9558c0d
# Dataset Card for "summarize_from_feedback_oai_preprocessing_1704169778" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vwxyzjn/summarize_from_feedback_oai_preprocessing_1704169778
[ "region:us" ]
2024-01-02T04:30:29+00:00
{"dataset_info": {"features": [{"name": "info", "struct": [{"name": "id", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "site", "dtype": "string"}, {"name": "article", "dtype": "string"}]}, {"name": "summaries", "list": [{"name": "text", "dtype": "string"}, {"name": "policy", "dtype": "string"}, {"name": "note", "dtype": "string"}]}, {"name": "choice", "dtype": "int32"}, {"name": "worker", "dtype": "string"}, {"name": "batch", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "extra", "struct": [{"name": "confidence", "dtype": "int32"}]}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "response0", "dtype": "string"}, {"name": "response0_token", "sequence": "int64"}, {"name": "response0_token_len", "dtype": "int64"}, {"name": "response1", "dtype": "string"}, {"name": "response1_token", "sequence": "int64"}, {"name": "response1_token_len", "dtype": "int64"}, {"name": "response0_policy", "dtype": "string"}, {"name": "response1_policy", "dtype": "string"}, {"name": "policies", "dtype": "string"}, {"name": "query_response0", "dtype": "string"}, {"name": "query_response0_token", "sequence": "int64"}, {"name": "query_response0_token_len", "dtype": "int64"}, {"name": "query_response1", "dtype": "string"}, {"name": "query_response1_token", "sequence": "int64"}, {"name": "query_response1_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1914904595, "num_examples": 92858}, {"name": "validation", "num_bytes": 1780140675, "num_examples": 86086}], "download_size": 270613778, "dataset_size": 3695045270}}
2024-01-02T04:31:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "summarize_from_feedback_oai_preprocessing_1704169778" More Information needed
[ "# Dataset Card for \"summarize_from_feedback_oai_preprocessing_1704169778\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"summarize_from_feedback_oai_preprocessing_1704169778\"\n\nMore Information needed" ]
[ 6, 30 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"summarize_from_feedback_oai_preprocessing_1704169778\"\n\nMore Information needed" ]
6c452950abf9dcc461bc9510dd2bf6e69b6495d2
This is a converted dataset for https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2 that allows SFT in https://github.com/hiyouga/LLaMA-Factory for function-calling fine-tuning.

You need to add the following to the datasets.json file, and change the `file_name` to your local path.

```
"glaive-function-calling-v2": {
  "file_name": "./glaive-function-calling-v2/simple-function-calling-v2_converted.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "history": "history"
  }
}
```

There is also a `simple-function-calling-v2_converted.json` that is trimmed to the first 1,000 samples of the original dataset, which is about 1% of its size.
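A hedged sketch of one plausible conversion is shown below. The upstream field names (`system`, `chat`) and the `USER:`/`ASSISTANT:` turn markers are assumptions about the glaive format made for illustration; this is not necessarily the script used to produce the files in this repo.

```python
import json
import re

def convert_record(record: dict) -> dict:
    """Split a glaive-style chat string into LLaMA-Factory's
    instruction/input/output/history schema (see the column mapping above)."""
    turns = [t.strip() for t in re.split(r"USER:|ASSISTANT:", record["chat"]) if t.strip()]
    pairs = list(zip(turns[0::2], turns[1::2]))  # (user, assistant) pairs
    if not pairs:
        raise ValueError("no complete user/assistant pair in record")
    *history, (last_user, last_answer) = pairs
    return {
        "instruction": record.get("system", ""),  # system prompt as the instruction
        "input": last_user,
        "output": last_answer,
        "history": [list(p) for p in history],
    }

with open("glaive-function-calling-v2.json") as f:   # hypothetical local dump
    converted = [convert_record(r) for r in json.load(f)]
with open("simple-function-calling-v2_converted.json", "w") as f:
    json.dump(converted, f, ensure_ascii=False, indent=2)
```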
Yhyu13/glaive-function-calling-v2-llama-factory-convert
[ "license:apache-2.0", "region:us" ]
2024-01-02T05:12:48+00:00
{"license": "apache-2.0"}
2024-01-21T07:00:47+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
This is a converted dataset for URL that allows sft in URL for function calling fine tuning. You need to add the following to the URL file, and changed the 'file_name' to your local path. There is also a 'simple-function-calling-v2_converted.json' that trimmed to the first 1,000 samples in the originial dataset which is about 1% in size.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
a7cd0ad4e8e05bd7f705c18b80df393bbba456c4
## This is the Official Capybara dataset. Over 10,000 multi-turn examples.

Capybara is the culmination of insights derived from synthesis techniques like Evol-instruct (used for WizardLM), Alpaca, Orca, Vicuna, Lamini, FLASK and others.
The single-turn seeds used to initiate the Amplify-Instruct synthesis of conversations are mostly based on datasets that I've personally vetted extensively, and are often highly regarded for their diversity and demonstration of logical robustness and prose, such as Airoboros, Know logic, EverythingLM, GPTeacher, and even entirely new seed instructions derived from different sources, including certain in-house multi-turn datasets like Dove and Verified-Camel (a successor to Puffin).

The multi-turn synthetic conversation generation method is what I'm calling Amplify-Instruct, and the first resulting dataset using this method is called Capybara.
This dataset has a strong focus on information diversity across a wide range of domains, and multi-turn conversations that strongly emphasize reasoning, logic and extrapolation about a wide range of subjects, as well as many great examples of conversations delving into obscure sub-topics and rabbit holes across pop-culture and STEM, while also maintaining natural prose.
While performing great in its current state, the current dataset used for fine-tuning is entirely contained within 20K training examples; this is 10 times smaller than many similarly performing datasets, which is significant when it comes to scaling implications once I decide to scale the use of Amplify-Instruct to significantly more examples.

 - Most tokens contained in this dataset are newly synthesized and did not previously exist online.

 - This leverages the Amplify-Instruct method (paper coming soon) to grow thousands of high-quality single-turn seeds into advanced and in-depth multi-turn conversations.

 - Average context length per conversation is over 1,000 tokens and 3 turns or more per example (most instruction/chat datasets on HF for fine-tuning are only 1 turn).

 - Each conversation is optimized to amplify the natural raw knowledge capabilities of the model, as well as delving deep into obscure and advanced topics.

 - Aggressively filtered to remove any and all possible examples of overt moralizing/alignment, and common undesirable behaviours such as "as an AI language model", "September 2021", and "I don't have personal beliefs".

## Benchmarks.

- Resulting benchmarks are available on the HF Leaderboard, and other benchmarks were run as well, such as AGIEval, Bigbench and GPT4All.
- (The only Capybara model available on all of these benchmarks, including the HF leaderboard, is Capybara V1, trained on Llama-2.)
- The benchmarks below are compared against fine-tunes also done on Llama-2.

![Capybara](https://i.imgur.com/OpajtNJ.jpeg)

![Capybara](https://i.imgur.com/daIZn6n.jpeg)

## Quality filtering and cleaning.

 - Extensive measures were taken to filter out any conversations that contained even a single instance of overt AI moralizing/alignment, such as "As an AI language model", and common undesirable behaviours such as conversations that include "September 2021" and "I don't have personal beliefs", and other phrases I've found to be highly correlated with undesirable responses and conversation paths.

## Thank you to those of you that have indirectly contributed!

While most of the tokens within Capybara are newly synthesized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds, which were used to generate the multi-turn data.

The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project; however, most of the tokens in Capybara within those given sections are novel tokens not present in any of the seed datasets.

Datasets in blue are in-house curations that existed prior to Capybara, and were now used as seeds for Capybara.

![Capybara](https://i.imgur.com/yB58OoD.jpeg)

## Dataset contamination.

We have checked the Capybara dataset for contamination against several of the most popular benchmarks and can confirm that there is no contamination found besides MT-bench, which is now cleaned out.

We leveraged minhash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks; we found no exact matches, nor did we find any matches down to the 97% similarity level.

The following are benchmarks we checked for contamination against our dataset:

- HumanEval

- AGIEval

- TruthfulQA

- MMLU

- GPT4All

*Newly cleaned out as of 12/15/2023 - MT-bench

## Credits

During the curation process, there can be some relatively arduous steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.
Luckily, there are folks over at Nous Research that helped with expediting these processes; a big thank you to J-Supha specifically for making these types of significant contributions.

## Example Outputs from the Llama-2 7B model trained on this dataset:

![Capybara](https://img001.prntscr.com/file/img001/T9yYxR1xQSaK_UGdy3t2Cw.png)

![Capybara](https://img001.prntscr.com/file/img001/DQXqmKbsQQOIcgny1eoGNA.png)

![Capybara](https://img001.prntscr.com/file/img001/85X3L9ZxTsOKo3fUQ7GRVA.png)

## Future Plans & How you can help!

This is a relatively early build amongst the grand plans for the future of what I plan to work on!

In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.

If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!

Citation:

```
@article{daniele2023amplify-instruct,
  title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Effecient LLM Training.},
  author={Daniele, Luigi and Suphavadeeprasit},
  journal={arXiv preprint arXiv:(coming soon)},
  url={https://huggingface.co/datasets/LDJnr/Capybara},
  year={2023}
}
```
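The minhash check mentioned in the contamination section can be approximated in a few lines. This is a hedged sketch, not Capybara's exact procedure: the `datasketch` library, character 5-gram shingling, and `num_perm=128` are illustrative assumptions; only the 97-100% similarity thresholds come from the description above.

```python
from datasketch import MinHash

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for shingle in {text[i:i + 5] for i in range(max(1, len(text) - 4))}:  # 5-gram shingles
        m.update(shingle.encode("utf-8"))
    return m

def contaminated(sample: str, benchmark_items: list[str], threshold: float = 0.97) -> bool:
    """Flag a training sample whose estimated Jaccard similarity to any
    benchmark question/answer meets the threshold."""
    s = minhash(sample)
    return any(s.jaccard(minhash(b)) >= threshold for b in benchmark_items)
```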
cfahlgren1/Capybara-Converted
[ "task_categories:conversational", "task_categories:question-answering", "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "Roleplay", "region:us" ]
2024-01-02T07:15:16+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational", "question-answering", "text-generation"], "pretty_name": "LessWrong-Amplify-Instruct", "tags": ["Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "Roleplay"]}
2024-01-02T07:23:08+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us
## This is the Official Capybara dataset. Over 10,000 multi-turn examples. Capybara is the culmination of insights derived from synthesis techniques like Evol-instruct (used for WizardLM), Alpaca, Orca, Vicuna, Lamini, FLASK and others. The single-turn seeds used to intiate the Amplify-Instruct synthesis of conversations are mostly based on datasets that i've personally vetted extensively, and are often highly regarded for their diversity and demonstration of logical robustness and prose, such as Airoboros, Know logic, EverythingLM, GPTeacher and even entirely new seed instructions derived from different sources, including certain in-house multi-turn datasets like Dove and Verified-Camel(A successor to Puffin). The multi-turn synthetic conversation generation method is what i'm calling Amplify-Instruct, and the first resulting dataset using this method is called Capybara. This dataset has a strong focus on information diversity across a wide range of domains, and multi-turn conversations that strongly emphasize reasoning, logic and extrapolation about a wide range of subjects, also many great examples of conversations delving into obscure sub-topics and rabbit holes across pop-culture and STEM, while also maintaining natural prose. While performing great in it's current state, the current dataset used for fine-tuning is entirely contained within 20K training examples, this is 10 times smaller than many similar performing datasets, this is signficant when it comes to scaling implications once I decide to scale the use of Amplify-Instruct to significantly more examples. - Most tokens contained in this dataset are newly synthesized and did not exist prior online. - This leverages the Amplify-Instruct method(paper coming soon) to grow thousands of high-quality single-turn seeds into advanced and in-depth multi-turn conversations. - Average context length per conversation is over 1,000 tokens and 3 turns or more per example (most instruction/chat datasets on HF for fine-tuning are only 1 turn) - Each conversation is optimized to amplify the natural raw knowledge capabilities of the model, as well as delving deep into obscure and advanced topics. - Aggresively filtered to remove any and all possible examples of overt moralizing/alignment, and common undesirable behaviours such as "as an AI language model" and "September 2021" and "I don't have personal beliefs" ## Benchmarks. - Resulting benchmarks are available on HF Leaderboard, and other benchmarks done as well such as AGIEval, Bigbench and GPT4All. - (The only Capybara model available on all of these benchmarks including HF leaderboard is Capybara V1, trained on Llama-2) - The below benchmarks are compared against fine-tunes also done on Llama-2. !Capybara !Capybara ## Quality filtering and cleaning. - Extensive measures were done to filter out any conversations that contained even a single instance of overt AI moralizing/alignment, such as "As an AI language model" and common undesirable behaviours such as conversations that include "September 2021" and "I don't have personal beliefs" and other phrases I've found to be highly correlated with undesirable responses and conversation paths. ## Thank you to those of you that have indirectly contributed! While most of the tokens within Capybara are newly synthsized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds, which were used to generate the multi-turn data. 
The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project, however, most of the tokens in capybara within those given sections are novel tokens not present in any of the seed datasets. Datasets in Blue are in-house curations that previously existed prior to Capybara, and were now used as seeds for Capybara. !Capybara ## Dataset contamination. We have checked the capybara dataset for contamination for several of the most popular benchmarks and can confirm that there is no contaminaton found besides MT-bench which is now cleaned out. We leveraged minhash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks, we found no exact matches, nor did we find any matches down to the 97% similarity level. The following are benchmarks we checked for contamination against our dataset: - HumanEval - AGIEval - TruthfulQA - MMLU - GPT4All *Newly cleaned out as of 12/15/2023 - MT-bench ## Credits During the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there is folks over at Nous Research that helped with expediting these processes, big thank you to J-Supha specifically for making these types of significant contributions. ## Example Outputs from the Llama-2 7B model trained on this dataset: !Capybara !Capybara !Capybara ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! Citation:
[ "## This is the Official Capybara dataset. Over 10,000 multi-turn examples.\n\nCapybara is the culmination of insights derived from synthesis techniques like Evol-instruct (used for WizardLM), Alpaca, Orca, Vicuna, Lamini, FLASK and others.\nThe single-turn seeds used to intiate the Amplify-Instruct synthesis of conversations are mostly based on datasets that i've personally vetted extensively, and are often highly regarded for their diversity and demonstration of logical robustness and prose, such as Airoboros, Know logic, EverythingLM, GPTeacher and even entirely new seed instructions derived from different sources, including certain in-house multi-turn datasets like Dove and Verified-Camel(A successor to Puffin).\n\nThe multi-turn synthetic conversation generation method is what i'm calling Amplify-Instruct, and the first resulting dataset using this method is called Capybara. \nThis dataset has a strong focus on information diversity across a wide range of domains, and multi-turn conversations that strongly emphasize reasoning, logic and extrapolation about a wide range of subjects, also many great examples of conversations delving into obscure sub-topics and rabbit holes across pop-culture and STEM, while also maintaining natural prose.\nWhile performing great in it's current state, the current dataset used for fine-tuning is entirely contained within 20K training examples, this is 10 times smaller than many similar performing datasets, this is signficant when it comes to scaling implications once I decide to scale the use of Amplify-Instruct to significantly more examples.\n\n - Most tokens contained in this dataset are newly synthesized and did not exist prior online.\n\n - This leverages the Amplify-Instruct method(paper coming soon) to grow thousands of high-quality single-turn seeds into advanced and in-depth multi-turn conversations.\n\n - Average context length per conversation is over 1,000 tokens and 3 turns or more per example (most instruction/chat datasets on HF for fine-tuning are only 1 turn)\n\n - Each conversation is optimized to amplify the natural raw knowledge capabilities of the model, as well as delving deep into obscure and advanced topics.\n\n - Aggresively filtered to remove any and all possible examples of overt moralizing/alignment, and common undesirable behaviours such as \"as an AI language model\" and \"September 2021\" and \"I don't have personal beliefs\"", "## Benchmarks.\n\n- Resulting benchmarks are available on HF Leaderboard, and other benchmarks done as well such as AGIEval, Bigbench and GPT4All. 
\n- (The only Capybara model available on all of these benchmarks including HF leaderboard is Capybara V1, trained on Llama-2)\n- The below benchmarks are compared against fine-tunes also done on Llama-2.\n\n!Capybara\n\n!Capybara", "## Quality filtering and cleaning.\n\n - Extensive measures were done to filter out any conversations that contained even a single instance of overt AI moralizing/alignment, such as \"As an AI language model\" and common undesirable behaviours such as conversations that include \"September 2021\" and \"I don't have personal beliefs\" and other phrases I've found to be highly correlated with undesirable responses and conversation paths.", "## Thank you to those of you that have indirectly contributed!\n\nWhile most of the tokens within Capybara are newly synthsized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds, which were used to generate the multi-turn data.\n\nThe datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project, however, most of the tokens in capybara within those given sections are novel tokens not present in any of the seed datasets.\n\nDatasets in Blue are in-house curations that previously existed prior to Capybara, and were now used as seeds for Capybara.\n\n!Capybara", "## Dataset contamination.\n\nWe have checked the capybara dataset for contamination for several of the most popular benchmarks and can confirm that there is no contaminaton found besides MT-bench which is now cleaned out.\n\nWe leveraged minhash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks, we found no exact matches, nor did we find any matches down to the 97% similarity level.\n\nThe following are benchmarks we checked for contamination against our dataset:\n\n- HumanEval\n\n- AGIEval\n\n- TruthfulQA\n\n- MMLU\n\n- GPT4All\n\n*Newly cleaned out as of 12/15/2023 - MT-bench", "## Credits\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.\nLuckily there is folks over at Nous Research that helped with expediting these processes, big thank you to J-Supha specifically for making these types of significant contributions.", "## Example Outputs from the Llama-2 7B model trained on this dataset:\n!Capybara\n!Capybara\n!Capybara", "## Future Plans & How you can help!\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \nIn the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\nCitation:" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us \n", "## This is the Official Capybara dataset. Over 10,000 multi-turn examples.\n\nCapybara is the culmination of insights derived from synthesis techniques like Evol-instruct (used for WizardLM), Alpaca, Orca, Vicuna, Lamini, FLASK and others.\nThe single-turn seeds used to intiate the Amplify-Instruct synthesis of conversations are mostly based on datasets that i've personally vetted extensively, and are often highly regarded for their diversity and demonstration of logical robustness and prose, such as Airoboros, Know logic, EverythingLM, GPTeacher and even entirely new seed instructions derived from different sources, including certain in-house multi-turn datasets like Dove and Verified-Camel(A successor to Puffin).\n\nThe multi-turn synthetic conversation generation method is what i'm calling Amplify-Instruct, and the first resulting dataset using this method is called Capybara. \nThis dataset has a strong focus on information diversity across a wide range of domains, and multi-turn conversations that strongly emphasize reasoning, logic and extrapolation about a wide range of subjects, also many great examples of conversations delving into obscure sub-topics and rabbit holes across pop-culture and STEM, while also maintaining natural prose.\nWhile performing great in it's current state, the current dataset used for fine-tuning is entirely contained within 20K training examples, this is 10 times smaller than many similar performing datasets, this is signficant when it comes to scaling implications once I decide to scale the use of Amplify-Instruct to significantly more examples.\n\n - Most tokens contained in this dataset are newly synthesized and did not exist prior online.\n\n - This leverages the Amplify-Instruct method(paper coming soon) to grow thousands of high-quality single-turn seeds into advanced and in-depth multi-turn conversations.\n\n - Average context length per conversation is over 1,000 tokens and 3 turns or more per example (most instruction/chat datasets on HF for fine-tuning are only 1 turn)\n\n - Each conversation is optimized to amplify the natural raw knowledge capabilities of the model, as well as delving deep into obscure and advanced topics.\n\n - Aggresively filtered to remove any and all possible examples of overt moralizing/alignment, and common undesirable behaviours such as \"as an AI language model\" and \"September 2021\" and \"I don't have personal beliefs\"", "## Benchmarks.\n\n- Resulting benchmarks are available on HF Leaderboard, and other benchmarks done as well such as AGIEval, Bigbench and GPT4All. 
\n- (The only Capybara model available on all of these benchmarks including HF leaderboard is Capybara V1, trained on Llama-2)\n- The below benchmarks are compared against fine-tunes also done on Llama-2.\n\n!Capybara\n\n!Capybara", "## Quality filtering and cleaning.\n\n - Extensive measures were done to filter out any conversations that contained even a single instance of overt AI moralizing/alignment, such as \"As an AI language model\" and common undesirable behaviours such as conversations that include \"September 2021\" and \"I don't have personal beliefs\" and other phrases I've found to be highly correlated with undesirable responses and conversation paths.", "## Thank you to those of you that have indirectly contributed!\n\nWhile most of the tokens within Capybara are newly synthsized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds, which were used to generate the multi-turn data.\n\nThe datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project, however, most of the tokens in capybara within those given sections are novel tokens not present in any of the seed datasets.\n\nDatasets in Blue are in-house curations that previously existed prior to Capybara, and were now used as seeds for Capybara.\n\n!Capybara", "## Dataset contamination.\n\nWe have checked the capybara dataset for contamination for several of the most popular benchmarks and can confirm that there is no contaminaton found besides MT-bench which is now cleaned out.\n\nWe leveraged minhash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks, we found no exact matches, nor did we find any matches down to the 97% similarity level.\n\nThe following are benchmarks we checked for contamination against our dataset:\n\n- HumanEval\n\n- AGIEval\n\n- TruthfulQA\n\n- MMLU\n\n- GPT4All\n\n*Newly cleaned out as of 12/15/2023 - MT-bench", "## Credits\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.\nLuckily there is folks over at Nous Research that helped with expediting these processes, big thank you to J-Supha specifically for making these types of significant contributions.", "## Example Outputs from the Llama-2 7B model trained on this dataset:\n!Capybara\n!Capybara\n!Capybara", "## Future Plans & How you can help!\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \nIn the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\nCitation:" ]
[ 87, 590, 101, 101, 186, 160, 78, 33, 127 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us \n", "passage: ## This is the Official Capybara dataset. Over 10,000 multi-turn examples.\n\nCapybara is the culmination of insights derived from synthesis techniques like Evol-instruct (used for WizardLM), Alpaca, Orca, Vicuna, Lamini, FLASK and others.\nThe single-turn seeds used to intiate the Amplify-Instruct synthesis of conversations are mostly based on datasets that i've personally vetted extensively, and are often highly regarded for their diversity and demonstration of logical robustness and prose, such as Airoboros, Know logic, EverythingLM, GPTeacher and even entirely new seed instructions derived from different sources, including certain in-house multi-turn datasets like Dove and Verified-Camel(A successor to Puffin).\n\nThe multi-turn synthetic conversation generation method is what i'm calling Amplify-Instruct, and the first resulting dataset using this method is called Capybara. \nThis dataset has a strong focus on information diversity across a wide range of domains, and multi-turn conversations that strongly emphasize reasoning, logic and extrapolation about a wide range of subjects, also many great examples of conversations delving into obscure sub-topics and rabbit holes across pop-culture and STEM, while also maintaining natural prose.\nWhile performing great in it's current state, the current dataset used for fine-tuning is entirely contained within 20K training examples, this is 10 times smaller than many similar performing datasets, this is signficant when it comes to scaling implications once I decide to scale the use of Amplify-Instruct to significantly more examples.\n\n - Most tokens contained in this dataset are newly synthesized and did not exist prior online.\n\n - This leverages the Amplify-Instruct method(paper coming soon) to grow thousands of high-quality single-turn seeds into advanced and in-depth multi-turn conversations.\n\n - Average context length per conversation is over 1,000 tokens and 3 turns or more per example (most instruction/chat datasets on HF for fine-tuning are only 1 turn)\n\n - Each conversation is optimized to amplify the natural raw knowledge capabilities of the model, as well as delving deep into obscure and advanced topics.\n\n - Aggresively filtered to remove any and all possible examples of overt moralizing/alignment, and common undesirable behaviours such as \"as an AI language model\" and \"September 2021\" and \"I don't have personal beliefs\"## Benchmarks.\n\n- Resulting benchmarks are available on HF Leaderboard, and other benchmarks done as well such as AGIEval, Bigbench and GPT4All. 
\n- (The only Capybara model available on all of these benchmarks including HF leaderboard is Capybara V1, trained on Llama-2)\n- The below benchmarks are compared against fine-tunes also done on Llama-2.\n\n!Capybara\n\n!Capybara## Quality filtering and cleaning.\n\n - Extensive measures were done to filter out any conversations that contained even a single instance of overt AI moralizing/alignment, such as \"As an AI language model\" and common undesirable behaviours such as conversations that include \"September 2021\" and \"I don't have personal beliefs\" and other phrases I've found to be highly correlated with undesirable responses and conversation paths.## Thank you to those of you that have indirectly contributed!\n\nWhile most of the tokens within Capybara are newly synthsized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds, which were used to generate the multi-turn data.\n\nThe datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project, however, most of the tokens in capybara within those given sections are novel tokens not present in any of the seed datasets.\n\nDatasets in Blue are in-house curations that previously existed prior to Capybara, and were now used as seeds for Capybara.\n\n!Capybara" ]
3d9220bd0e965c4768cca2627f69311c9f5dd551
This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples. This dataset was used to train Guanaco with QLoRA. For further information, please see the original dataset. License: Apache 2.0
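A minimal loading sketch (the `train` split name is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("thiennguyen1998/assistant-guanaco", split="train")
print(len(ds))  # expected: 9,846 samples, per the description above
```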
thiennguyen1998/assistant-guanaco
[ "region:us" ]
2024-01-02T08:07:56+00:00
{}
2024-01-03T04:58:00+00:00
[]
[]
TAGS #region-us
This dataset is a subset of the Open Assistant dataset, which you can find here: URL This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples. This dataset was used to train Guanaco with QLoRA. For further information, please see the original dataset. License: Apache 2.0
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
f76a43553cc575a8ee85570797b0584693eb3f6c
This dataset was derived from GAIR/lima. It is a translated version of the original dataset (translated language: Kannada).
jayavibhav/LIMA-Kannada
[ "region:us" ]
2024-01-02T08:20:11+00:00
{}
2024-01-02T08:26:58+00:00
[]
[]
TAGS #region-us
This dataset was derived from GAIR/lima, It is a translated version from the original dataset. (Translated language: Kannada)
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
edf92a505b0f6afaa4c435dd96072c0bd6a16b37
# OpenAssistant TOP-1 Conversation Threads in huggingface chat format

Export of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) containing only the top-1 conversation threads, in [huggingface chat format](https://huggingface.co/docs/transformers/chat_templating).

# Script

The conversion script can be found [here](https://github.com/blancsw/deep_4_all/tree/main/datasets/oasst).
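A hedged sketch of consuming one exported thread is shown below. The `conversation` column of role/content dicts matches the features declared in this card's metadata; the specific tokenizer is an arbitrary choice for illustration.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("blancsw/oasst2_top1_chat_format", split="train")

# Each row's "conversation" is a list of {"role": ..., "content": ...} dicts.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
print(tok.apply_chat_template(ds[0]["conversation"], tokenize=False))
```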
blancsw/oasst2_top1_chat_format
[ "task_categories:conversational", "size_categories:100K<n<1M", "language:en", "language:es", "language:ru", "language:de", "language:pl", "language:th", "language:vi", "language:sv", "language:bn", "language:da", "language:he", "language:it", "language:fa", "language:sk", "language:id", "language:nb", "language:el", "language:nl", "language:hu", "language:eu", "language:zh", "language:eo", "language:ja", "language:ca", "language:cs", "language:bg", "language:fi", "language:pt", "language:tr", "language:ro", "language:ar", "language:uk", "language:gl", "language:fr", "language:ko", "license:apache-2.0", "human-feedback", "sft", "region:us" ]
2024-01-02T08:48:25+00:00
{"language": ["en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk", "gl", "fr", "ko"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["conversational"], "pretty_name": "OpenAssistant Conversations Release 2 in huggingface chat format", "tags": ["human-feedback", "sft"], "dataset_info": {"features": [{"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "langs", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18366000, "num_examples": 10746}], "download_size": 10484376, "dataset_size": 18366000}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-23T09:36:49+00:00
[]
[ "en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk", "gl", "fr", "ko" ]
TAGS #task_categories-conversational #size_categories-100K<n<1M #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #sft #region-us
# OpenAssistant TOP-1 Conversation Threads in huggingface chat format Export of oasst2 only top 1 threads in huggingface chat format # Script The convert script can be find here
[ "# OpenAssistant TOP-1 Conversation Threads in huggingface chat format\n\nExport of oasst2 only top 1 threads in huggingface chat format", "# Script\n\nThe convert script can be find here" ]
[ "TAGS\n#task_categories-conversational #size_categories-100K<n<1M #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #sft #region-us \n", "# OpenAssistant TOP-1 Conversation Threads in huggingface chat format\n\nExport of oasst2 only top 1 threads in huggingface chat format", "# Script\n\nThe convert script can be find here" ]
[ 239, 35, 9 ]
[ "passage: TAGS\n#task_categories-conversational #size_categories-100K<n<1M #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #sft #region-us \n# OpenAssistant TOP-1 Conversation Threads in huggingface chat format\n\nExport of oasst2 only top 1 threads in huggingface chat format# Script\n\nThe convert script can be find here" ]
02c97d1f6308f52b8f528c532079823ab7292f03
# Dataset Card for "ThangaTharun" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ThangaTharun/ThangaTharun
[ "region:us" ]
2024-01-02T08:55:00+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "audio"}}}}], "splits": [{"name": "train", "num_bytes": 504559.0, "num_examples": 5}], "download_size": 0, "dataset_size": 504559.0}}
2024-01-02T09:05:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ThangaTharun" More Information needed
[ "# Dataset Card for \"ThangaTharun\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ThangaTharun\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ThangaTharun\"\n\nMore Information needed" ]
03aacdb97e6392997183450c1846516f9b41ab8e
<div class="card-layout-item" data-background="{}" data-pm-slice="2 2 [&quot;document&quot;,{&quot;docId&quot;:&quot;utelmcg25ai30qz&quot;,&quot;background&quot;:{&quot;type&quot;:&quot;none&quot;},&quot;docFlags&quot;:{&quot;cardLayoutsEnabled&quot;:true},&quot;format&quot;:null,&quot;customCode&quot;:{},&quot;settings&quot;:{},&quot;generateStatus&quot;:null,&quot;generateInfo&quot;:{}},&quot;card&quot;,{&quot;id&quot;:&quot;fjs5psn5copsyz4&quot;,&quot;previewContent&quot;:null,&quot;background&quot;:{&quot;type&quot;:&quot;none&quot;},&quot;container&quot;:{},&quot;cardSize&quot;:&quot;default&quot;,&quot;layout&quot;:&quot;blank&quot;,&quot;layoutTemplateColumns&quot;:null}]"> <p><a href="https://ocutamin-review.company.site/"><strong>Ocutamin</strong> </a>asserts itself as the inaugural all-natural solution designed to enhance vision without the need for medications or risky surgical procedures. It addresses the underlying factors contributing to poor eyesight, aiming to rectify issues solely through the use of natural ingredients.</p> <h2><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>{</strong><strong>Ocutamin- Official Website -- Order Now}</strong></a></h2> <h2><strong>➡️● For Order Official Website - <a href="https://www.globalfitnessmart.com/get-ocutamin">https://www.globalfitnessmart.com/get-ocutamin</a></strong><br /><strong>➡️● Item Name: &mdash; {<a href="https://www.globalfitnessmart.com/get-ocutamin">Ocutamin</a>}</strong><br /><strong>➡️● Ingredients: &mdash; All Natural</strong><br /><strong>➡️● Incidental Effects: &mdash; NA</strong><br /><strong>➡️● Accessibility: &mdash; <a href="https://www.globalfitnessmart.com/get-ocutamin">Online</a></strong></h2> <h2><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a><br /><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a><br /><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></h2> <h2><strong>What is <a href="https://groups.google.com/g/ocutamin/c/Ryoz9Suf-RY">Ocutamin</a> Dietary Supplement?</strong></h2> <p><a href="https://sites.google.com/view/ocutamin-review-usa/home"><strong>Ocutamin</strong></a> is a daily supplement claiming to improve and fortify eye health. The formula is doctor-formulated and contains various nutrients to address the root of poor sight. The supplement is easy to swallow and can provide quality results in days.</p> <p>According to the <a href="https://colab.research.google.com/drive/1jB-UNGMX6zTmxUioGB_s0BLe2NYQXlAB"><strong>Ocutamin</strong></a> website, the supplement has eight science-approved ingredients to manage eye issues. It is purportedly a safe, affordable, and effective solution to worsening eye health. It can prevent users from undergoing expensive Laser Eye Surgery (LASIK) or using contact lenses for the rest of their lives.</p> <p>A former eye specialist Dr. Dean Avant is the formulator of <a href="https://lookerstudio.google.com/u/0/reporting/56d93833-e5a4-45dc-a27e-cbd11c011e07/page/KkTmD"><strong>Ocutamin</strong></a>. He experienced failing sight despite his knowledge and expertise. 
With another researcher, he discovered certain nutrients, including lutein and quercetin, that nurture the eyes and restore sight quickly.Today, thousands have tried the <a href="https://gamma.app/docs/Ocutamin-Pressura-Work-To-Promote-1Vision-Support-Formula-Reviews-i0d33n9jfq7fwyq?mode=doc"><strong>Ocutamin</strong></a> supplement, supposedly restoring their vision. The supplement is ideal for adults of all ages.</p> <h2 style="text-align: center;"><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>(EXCLUSIVE OFFER)Click Here : "Ocutamin USA"Official Website!</strong></a></h2> <h2><strong>How Does <a href="https://forum.mmm.ucar.edu/threads/ocutamin-pressura-work-to-promote-ocutamin-1vision-support-formula-united-states-canada-does-it-really-work.15058/">Ocutamin</a> Work?</strong></h2> <p><a href="https://ocutamin-official.clubeo.com/calendar/2024/01/04/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work"><strong>Ocutamin</strong></a>'s creator points out that modern problems like excessive use of computers, laptops, mobile phones, and TV is the primary cause of eye problems. In addition, environmental toxins, UV rays, foods, and water can damage the eyes.</p> <p><a href="https://ocutamin-official.clubeo.com/page/ocutamin-pressura-work-to-promote-1vision-support-formula-reviews-scientifically-formulated-supplement.html"><strong>Ocutamin</strong></a> formulator reasons that ancestors enjoyed laser-sharp sight despite their age. They needed unfailing sight to gather food and protect themselves from animals. How did they maintain quality sight? Below is how <a href="https://ocutamin-official.clubeo.com/page/ocutamin-reviews-updated-2024-do-not-buy-till-you-read-this-updated-2024-do-not-buy-till-you-read-this.html"><strong>Ocutamin</strong></a> can support and restore sight</p> <p><strong>Nourish the Eyes</strong> &ndash; Due to poor dietary patterns; most Americans cannot get sufficient vision-improving nutrients. Many homes eat junk and processed foods that increase inflammation and toxins in the eyes. <a href="https://ocutamin-official.clubeo.com/"><strong>Ocutamin</strong></a> has eight active ingredients that nourish the different eye cells, improving their function. The supplement can fight eye malnourishment.</p> <p><strong>Clear Toxins</strong> &ndash; The environment is full of toxins. Avoiding some of these contaminants is impossible because they are in the air, foods, medicine, and cleaning products. <a href="https://www.scoop.it/topic/ocutamin-by-ocutamin-official"><strong>Ocutamin</strong></a> maker lists organophosphate (OP) as the most dangerous toxin that can damage the eye cells. The supplement has nutrients that enhance the cleansing and detoxification process. It can aid the body in eliminating toxins, thus improving sight.</p> <p><strong>Fight Optic Atrophy</strong> &ndash; <a href="https://www.scoop.it/topic/ocutamin-eye-health-care-new-2024-advanced-formula"><strong>Ocutamin</strong></a> creator claims that most people do not utilize the eyes as required leading to optic atrophy. Studies show that people using their eyes actively, indoors and outdoors, train the different cells to become powerful. The supplement may strengthen the different eye parts.</p> <p><strong>Refine Blood Circulation &ndash;</strong> Impaired blood flow in the eye restricts nutrient and oxygen intake. 
<a href="https://ocutamin-1.jimdosite.com/"><strong>Ocutamin</strong></a> can strengthen the eye capillaries and arteries, thus advancing blood circulation. The maker claims it may restore crystal-clear sight and prevent eye cells from dying.</p> <p><strong>Improve Cellular Health</strong> &ndash; Some <a href="https://ocutamin.bandcamp.com/album/ocutamin-reviews-updated-2024-do-not-buy-till-you-read-this"><strong>Ocutamin</strong></a> ingredients are designed to support cellular regeneration and revitalization. It works by repairing different cells and preventing cellular decay. Consequently, it may protect the eyes from macular degeneration, cataracts, and other age-related sight problems.</p> <h2><strong>Benefits Of Using <a href="https://soundcloud.com/ocutaminofficial/ocutamin-usa-is-legit-2024-updated-report">Ocutamin</a>:</strong></h2> <p>OCUTAMIN's distinctive formulation offers a range of benefits that contribute to improved eye health and enhanced vision. These advantages include:</p> <p><strong>Support Against Digital Eye Strain</strong>: In today's digital age, prolonged screen exposure often leads to digital eye strain. OCUTAMIN's blend of nutrients is designed to alleviate discomfort and mitigate the effects of eye strain associated with screen use.</p> <p><strong>Protection from Age-related Vision Decline</strong>: The potent antioxidants found in OCUTAMIN, such as lutein and zeaxanthin, serve as a defense against age-related vision decline, fostering long-term eye health.</p> <p><strong>Enhanced Night Vision</strong>: Featuring bilberry extract as a key component, OCUTAMIN draws on traditional uses to enhance night vision, allowing for clearer visibility in low-light conditions.</p> <p><strong>Overall Visual Clarity:</strong> By supplying essential nutrients crucial for optimal eye function, OCUTAMIN may contribute to improved visual clarity and focus. This support helps you navigate the world with increased confidence.</p> <h2 style="text-align: center;"><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>SPECIAL PROMO[Limited Discount]: "Ocutamin USA"Official Website!</strong></a></h2> <h1><strong><a href="https://ocutamin.hashnode.dev/ocutamin-usa-is-legit-2024-updated-report">Ocutamin</a> Ingredients</strong></h1> <p><a href="https://followme.tribe.so/post/ocutamin---usa-is-legit-2024-updated-report-6593bc86f64295489d92b9f1"><strong>Ocutamin</strong></a> is rich in natural ingredients that have undergone extensive research to affirm their effectiveness in enhancing vision. The different ingredients are purportedly in approved dosages and quantities to give users rapid results. The maker boldly claims that you can experience an improvement in eye health within a few days. Below are some of the active ingredients and their role in boosting sight.</p> <p><strong>Quercetin</strong></p> <p><a href="https://medium.com/@ocutaminofficial/ocutamin-usa-is-legit-2024-updated-report-12098509e48f"><strong>Ocutamin</strong></a> argues that most eye problems emanate from high toxin levels. The environment contains various chemicals, including OP, linked to severe vision problems. Scholarly studies show that people exposed to organophosphate have sight defects, including retinal degeneration, optic nerve atrophy, blurred vision, astigmatism, myopia, and optic disc edema.</p> <p>Peer-reviewed studies show that quercetin may improve the strength and functions of neurotransmitters inside the retina. 
Additionally, the nutrient may restore sight, prevent optic atrophy, and enhance overall cellular health.</p> <p><strong>Bilberry Fruit</strong></p> <p>Various scientific studies suggest that bilberry can improve vision. Historical reports show that British Royal Air Force pilots consumed the inky blue fruit to enhance their night vision and combat their enemies.</p> <p>Bilberry is rich in anti-inflammatory and antioxidant components. It can eliminate pollutants that reduce vision health. It can nourish every ocular cell, thus boosting its functions. Bilberry fruit can relax the blood capillaries in the eyes, thus enhancing nutrient intake and waste removal.</p> <p><strong>Lutein</strong></p> <p><a href="https://bitbucket.org/ocutamin/ocutamin/issues/1/ocutamin-work-to-promote-restores-eyesight"><strong>Ocutamin</strong></a> contains lutein from Marigold flowers. The nutrient is a natural anti-inflammatory that can combat optic atrophy problems. Studies show it can aid in the removal of toxins. Similarly, it can protect the eyes from UV rays and harmful blue wavelength light. Lutein can strengthen the muscles in the optic nerve, thus boosting its function. It can also enhance communication between the eyes and brain, enhancing vision.</p> <h2><strong><a href="https://followme.tribe.so/post/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-rea--6593bd0602d8d6065bff9e12">Ocutamin</a> Dosage and Side Effects</strong></h2> <p><a href="https://medium.com/@ocutaminofficial/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work-54c725b1b601"><strong>Ocutamin</strong></a> recommends using one capsule daily. Customers can use the supplement at any time of the day. However, users should stay within the suggested dosages.</p> <p>Side Effects &ndash; <a href="https://bitbucket.org/ocutamin/ocutamin/issues/2/ocutamin-reviews-updated-2024-do-not-buy"><strong>Ocutamin</strong></a> is natural and manufactured using pure ingredients. The formulator claims it cannot give users any side effects. Still, the manufacturer recommends seeking medical authorization before using the supplement. Consumers who experience adverse side effects should seek medical help and stop the dosage.</p> <p>Place your order today before stock runs out!</p> <h2><strong>Pros</strong></h2> <p><strong>Clear vision:</strong> As the distortion, blurriness, flashes, and floaters gradually lessen, the clarity of vision is no longer an issue.</p> <p><strong>No surgery:</strong> If the damage can be repaired naturally, there is no need for surgery, which can save time and money.</p> <p><strong>No glasses or lenses:</strong> After taking Ocutamin for a while, the need for vision aids decreases.</p> <p><strong>Protection from the sun:</strong> Ocutamin components also help lessen light sensitivity and sun damage.</p> <p><strong>Better vision and focus:</strong> The eyes can see clearly and with complete focus.</p> <h2><strong>Cons</strong></h2> <p><strong>Limited accessibility:</strong> This product may only be purchased online and is not offered by nearby vendors, pharmacies, or shops.</p> <p><strong>Variable results:</strong> Depending on how the body responds, results may vary across users and take many months.</p> <p><strong>Not a medication:</strong> Ocutamin is a dietary supplement that promotes eye health but is not a medication. 
It does not treat anything and cannot be used in place of medicine.</p> <h2 style="text-align: center;"><strong><a href="https://www.globalfitnessmart.com/get-ocutamin">SPECIAL PROMO: Get Ocutamin at the Lowest Discounted Price Online</a></strong></h2> <h2><strong>FAQs about Ocutamin Supplement</strong></h2> <p><strong>Q: What causes poor sight?</strong></p> <p>A: According to Ocutamin, too much screen time, low water intake, poor diet, sleep deficiency, and unhealthy lifestyle habits are the leading causes of eye problems.</p> <p><strong>Q: Can I inherit eye problems?</strong></p> <p>A: Some eye issues like hyperopia and myopia are genetically linked. However, experts claim you can prevent the development of these eye problems by maintaining a healthy diet and good eye hygiene.</p> <p><strong>Q: Can Ocutamin improve eyesight?</strong></p> <p>A: Ocutamin is not a quick fix for better vision. The manufacturer recommends using it consistently for extended periods to nourish the eyes and improve sight.</p> <p><strong>Q: Does Ocutamin interact with other medications?</strong></p> <p>A: The maker recommends seeking medical guidance before using the supplement.</p> <p><strong>Q: Who can use the Ocutamin supplement?</strong></p> <p>A: Ocutamin is marketed for anyone experiencing vision problems, including blurry eyes and poor sight.</p> <p><strong>Q: Can children use Ocutamin?</strong></p> <p>A: No, Ocutamin is only for adult men and women.</p> <p><strong>Q: What ingredients are inside Ocutamin?</strong></p> <p>A: Ocutamin has eight ingredients, including bilberry fruit extract, lutein, and quercetin.</p> <p><strong>Q: How long should I use the Ocutamin supplement?</strong></p> <p>A: The manufacturer suggests using it for over three months.</p> <p><strong>Q: Is Ocutamin addictive?</strong></p> <p>A: Ocutamin is supposedly free from stimulants and thus unlikely to cause addiction even with prolonged usage. However, the maker recommends taking a two-week break after every three months.</p> <p><strong>Q: What if Ocutamin fails to work?</strong></p> <p>A: Ocutamin comes with a 60-day money-back guarantee. Customers can request a refund if they experience no improvement in their vision within the stipulated period.</p> <h1><strong>Pricing</strong></h1> <p>Ocutamin is only available through the official website. The manufacturer warns against buying from third parties. Customers can buy a one-month to six-month package depending on their budget. However, multi-bottle purchases come with free shipping and price reductions.</p> <p>Ocutamin is currently being sold at a discount. The pricing of Ocutamin is as follows:</p> <ul> <li><strong>Order one bottle of Ocutamin and pay $69.00 and a small shipping fee. You save $30 off the regular retail price of $99.</strong></li> <li><strong>Order a three-bottle bundle and pay $59.00 per bottle (order total $177). You save $120 off the regular retail price of $297. There&rsquo;s free US shipping included with your order.</strong></li> <li><strong>Order a six-bottle bundle and pay $49.00 per bottle (order total $294). You save $300 off the regular retail price of $594. There&rsquo;s free US shipping included with your order.</strong></li> </ul> <h2><strong>Conclusion</strong></h2> <p>Ocutamin is a dietary supplement that promotes the health of the macula, retina, and optic nerve. Ocutamin's makers also assert that it can enhance vision and lower the risk of age-related eye conditions. However, these statements are not backed by any scientific data. 
Ocutamin's long-term safety is also unknown because peer evaluations have not endorsed it. This supplement should not be taken by women who are pregnant or nursing, by anyone under 18, or by anyone with a significant medical condition.</p> <h2 style="text-align: center;"><strong><a href="https://www.globalfitnessmart.com/get-ocutamin">Exclusive Details: *Ocutamin* Read More Details on Official Website USA!</a></strong></h2> <h2># READ MORE</h2> <p><a href="https://ocutamin-review.company.site/">https://ocutamin-review.company.site/</a></p> <p><a href="https://groups.google.com/g/ocutamin/c/Ryoz9Suf-RY">https://groups.google.com/g/ocutamin/c/Ryoz9Suf-RY</a></p> <p><a href="https://sites.google.com/view/ocutamin-review-usa/home">https://sites.google.com/view/ocutamin-review-usa/home</a></p> <p><a href="https://colab.research.google.com/drive/1jB-UNGMX6zTmxUioGB_s0BLe2NYQXlAB">https://colab.research.google.com/drive/1jB-UNGMX6zTmxUioGB_s0BLe2NYQXlAB</a></p> <p><a href="https://lookerstudio.google.com/u/0/reporting/56d93833-e5a4-45dc-a27e-cbd11c011e07/page/KkTmD">https://lookerstudio.google.com/u/0/reporting/56d93833-e5a4-45dc-a27e-cbd11c011e07/page/KkTmD</a></p> <p><a href="https://www.scoop.it/topic/ocutamin-by-ocutamin-official">https://www.scoop.it/topic/ocutamin-by-ocutamin-official</a></p> <p><a href="https://ocutamin-official.clubeo.com/">https://ocutamin-official.clubeo.com/</a></p> <p><a href="https://ocutamin-official.clubeo.com/page/ocutamin-reviews-updated-2024-do-not-buy-till-you-read-this-updated-2024-do-not-buy-till-you-read-this.html">https://ocutamin-official.clubeo.com/page/ocutamin-reviews-updated-2024-do-not-buy-till-you-read-this-updated-2024-do-not-buy-till-you-read-this.html</a></p> <p><a href="https://ocutamin-official.clubeo.com/page/ocutamin-pressura-work-to-promote-1vision-support-formula-reviews-scientifically-formulated-supplement.html">https://ocutamin-official.clubeo.com/page/ocutamin-pressura-work-to-promote-1vision-support-formula-reviews-scientifically-formulated-supplement.html</a></p> <p><a href="https://ocutamin-official.clubeo.com/calendar/2024/01/04/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work">https://ocutamin-official.clubeo.com/calendar/2024/01/04/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work</a></p> <p><a href="https://forum.mmm.ucar.edu/threads/ocutamin-pressura-work-to-promote-ocutamin-1vision-support-formula-united-states-canada-does-it-really-work.15058/">https://forum.mmm.ucar.edu/threads/ocutamin-pressura-work-to-promote-ocutamin-1vision-support-formula-united-states-canada-does-it-really-work.15058/</a></p> <p><a href="https://gamma.app/docs/Ocutamin-Pressura-Work-To-Promote-1Vision-Support-Formula-Reviews-i0d33n9jfq7fwyq?mode=doc">https://gamma.app/docs/Ocutamin-Pressura-Work-To-Promote-1Vision-Support-Formula-Reviews-i0d33n9jfq7fwyq?mode=doc</a></p> <p><a href="https://soundcloud.com/ocutaminofficial/ocutamin-usa-is-legit-2024-updated-report">https://soundcloud.com/ocutaminofficial/ocutamin-usa-is-legit-2024-updated-report</a></p> <p><a href="https://ocutamin.bandcamp.com/album/ocutamin-reviews-updated-2024-do-not-buy-till-you-read-this">https://ocutamin.bandcamp.com/album/ocutamin-reviews-updated-2024-do-not-buy-till-you-read-this</a></p> <p><a href="https://ocutamin-1.jimdosite.com/">https://ocutamin-1.jimdosite.com/</a></p> <p><a 
href="https://bitbucket.org/ocutamin/ocutamin/issues/2/ocutamin-reviews-updated-2024-do-not-buy">https://bitbucket.org/ocutamin/ocutamin/issues/2/ocutamin-reviews-updated-2024-do-not-buy</a></p> <p><a href="https://medium.com/@ocutaminofficial/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work-54c725b1b601">https://medium.com/@ocutaminofficial/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work-54c725b1b601</a></p> <p><a href="https://followme.tribe.so/post/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-rea--6593bd0602d8d6065bff9e12">https://followme.tribe.so/post/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-rea--6593bd0602d8d6065bff9e12</a></p> <p><a href="https://leetcode.com/discuss/interview-question/4491198/Ocutamin-USA-*IS-Legit*-2024-Updated-Report!">https://leetcode.com/discuss/interview-question/4491198/Ocutamin-USA-*IS-Legit*-2024-Updated-Report!</a></p> <p><a href="https://bookshop.org/wishlists/7b030215c10d2bce3555aaa3b68625bc343bab23">https://bookshop.org/wishlists/7b030215c10d2bce3555aaa3b68625bc343bab23</a></p> <p><a href="https://wandering.flarum.cloud/d/35304-ocutamin-usa-is-legit-2024-updated-report">https://wandering.flarum.cloud/d/35304-ocutamin-usa-is-legit-2024-updated-report</a></p> <p><a href="https://community.thebatraanumerology.com/post/ocutamin---usa-is-legit-2024-updated-report-6593c1648f5b2c0a5837c75d">https://community.thebatraanumerology.com/post/ocutamin---usa-is-legit-2024-updated-report-6593c1648f5b2c0a5837c75d</a></p> </div>
ocutaminofficial/ocutamin
[ "region:us" ]
2024-01-02T09:19:06+00:00
{}
2024-01-02T09:19:23+00:00
[]
[]
TAGS #region-us
<div class="card-layout-item" data-background="{}" data-pm-slice="2 2 [&quot;document&quot;,{&quot;docId&quot;:&quot;utelmcg25ai30qz&quot;,&quot;background&quot;:{&quot;type&quot;:&quot;none&quot;},&quot;docFlags&quot;:{&quot;cardLayoutsEnabled&quot;:true},&quot;format&quot;:null,&quot;customCode&quot;:{},&quot;settings&quot;:{},&quot;generateStatus&quot;:null,&quot;generateInfo&quot;:{}},&quot;card&quot;,{&quot;id&quot;:&quot;fjs5psn5copsyz4&quot;,&quot;previewContent&quot;:null,&quot;background&quot;:{&quot;type&quot;:&quot;none&quot;},&quot;container&quot;:{},&quot;cardSize&quot;:&quot;default&quot;,&quot;layout&quot;:&quot;blank&quot;,&quot;layoutTemplateColumns&quot;:null}]"> <p><a href="URL </a>asserts itself as the inaugural all-natural solution designed to enhance vision without the need for medications or risky surgical procedures. It addresses the underlying factors contributing to poor eyesight, aiming to rectify issues solely through the use of natural ingredients.</p> <h2><a href="URL Official Website -- Order Now}</strong></a></h2> <h2><strong>️● For Order Official Website - <a href="URL/URL /><strong>️● Item Name: &mdash; {<a href="URL /><strong>️● Ingredients: &mdash; All Natural</strong><br /><strong>️● Incidental Effects: &mdash; NA</strong><br /><strong>️● Accessibility: &mdash; <a href="URL <h2><a href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a><br /><a href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a><br /><a href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a></h2> <h2><strong>What is <a href="URL Dietary Supplement?</strong></h2> <p><a href="URL is a daily supplement claiming to improve and fortify eye health. The formula is doctor-formulated and contains various nutrients to address the root of poor sight. The supplement is easy to swallow and can provide quality results in days.</p> <p>According to the <a href="URL website, the supplement has eight science-approved ingredients to manage eye issues. It is purportedly a safe, affordable, and effective solution to worsening eye health. It can prevent users from undergoing expensive Laser Eye Surgery (LASIK) or using contact lenses for the rest of their lives.</p> <p>A former eye specialist Dr. Dean Avant is the formulator of <a href="URL He experienced failing sight despite his knowledge and expertise. With another researcher, he discovered certain nutrients, including lutein and quercetin, that nurture the eyes and restore sight quickly.Today, thousands have tried the <a href="URL supplement, supposedly restoring their vision. The supplement is ideal for adults of all ages.</p> <h2 style="text-align: center;"><a href="URL OFFER)Click Here : "Ocutamin USA"Official Website!</strong></a></h2> <h2><strong>How Does <a href="URL Work?</strong></h2> <p><a href="URL creator points out that modern problems like excessive use of computers, laptops, mobile phones, and TV is the primary cause of eye problems. In addition, environmental toxins, UV rays, foods, and water can damage the eyes.</p> <p><a href="URL formulator reasons that ancestors enjoyed laser-sharp sight despite their age. They needed unfailing sight to gather food and protect themselves from animals. How did they maintain quality sight? Below is how <a href="URL can support and restore sight</p> <p><strong>Nourish the Eyes</strong> &ndash; Due to poor dietary patterns; most Americans cannot get sufficient vision-improving nutrients. Many homes eat junk and processed foods that increase inflammation and toxins in the eyes. 
<a href="URL has eight active ingredients that nourish the different eye cells, improving their function. The supplement can fight eye malnourishment.</p> <p><strong>Clear Toxins</strong> &ndash; The environment is full of toxins. Avoiding some of these contaminants is impossible because they are in the air, foods, medicine, and cleaning products. <a href="URL maker lists organophosphate (OP) as the most dangerous toxin that can damage the eye cells. The supplement has nutrients that enhance the cleansing and detoxification process. It can aid the body in eliminating toxins, thus improving sight.</p> <p><strong>Fight Optic Atrophy</strong> &ndash; <a href="URL creator claims that most people do not utilize the eyes as required leading to optic atrophy. Studies show that people using their eyes actively, indoors and outdoors, train the different cells to become powerful. The supplement may strengthen the different eye parts.</p> <p><strong>Refine Blood Circulation &ndash;</strong> Impaired blood flow in the eye restricts nutrient and oxygen intake. <a href="URL can strengthen the eye capillaries and arteries, thus advancing blood circulation. The maker claims it may restore crystal-clear sight and prevent eye cells from dying.</p> <p><strong>Improve Cellular Health</strong> &ndash; Some <a href="URL ingredients are designed to support cellular regeneration and revitalization. It works by repairing different cells and preventing cellular decay. Consequently, it may protect the eyes from macular degeneration, cataracts, and other age-related sight problems.</p> <h2><strong>Benefits Of Using <a href="URL <p>OCUTAMIN's distinctive formulation offers a range of benefits that contribute to improved eye health and enhanced vision. These advantages include:</p> <p><strong>Support Against Digital Eye Strain</strong>: In today's digital age, prolonged screen exposure often leads to digital eye strain. OCUTAMIN's blend of nutrients is designed to alleviate discomfort and mitigate the effects of eye strain associated with screen use.</p> <p><strong>Protection from Age-related Vision Decline</strong>: The potent antioxidants found in OCUTAMIN, such as lutein and zeaxanthin, serve as a defense against age-related vision decline, fostering long-term eye health.</p> <p><strong>Enhanced Night Vision</strong>: Featuring bilberry extract as a key component, OCUTAMIN draws on traditional uses to enhance night vision, allowing for clearer visibility in low-light conditions.</p> <p><strong>Overall Visual Clarity:</strong> By supplying essential nutrients crucial for optimal eye function, OCUTAMIN may contribute to improved visual clarity and focus. This support helps you navigate the world with increased confidence.</p> <h2 style="text-align: center;"><a href="URL PROMO[Limited Discount]: "Ocutamin USA"Official Website!</strong></a></h2> <h1><strong><a href="URL Ingredients</strong></h1> <p><a href="URL is rich in natural ingredients that have undergone extensive research to affirm their effectiveness in enhancing vision. The different ingredients are purportedly in approved dosages and quantities to give users rapid results. The maker boldly claims that you can experience an improvement in eye health within a few days. Below are some of the active ingredients and their role in boosting sight.</p> <p><strong>Quercetin</strong></p> <p><a href="URL argues that most eye problems emanate from high toxin levels. The environment contains various chemicals, including OP, linked to severe vision problems. 
Scholarly studies show that people exposed to organophosphate have sight defects, including retinal degeneration, optic nerve atrophy, blurred vision, astigmatism, myopia, and optic disc edema.</p> <p>Peer-reviewed studies show that quercetin may improve the strength and functions of neurotransmitters inside the retina. Additionally, the nutrient may restore sight, prevent optic atrophy, and enhance overall cellular health.</p> <p><strong>Bilberry Fruit</strong></p> <p>There are various scientific proofs that bilberry can improve vision. Historical reports show that British Royal Air Force pilots consumed the inky blue fruit to enhance their night vision and combat their enemies.</p> <p>Bilberry is rich in anti-inflammatory and antioxidant components. It can eliminate pollutants reducing vision health. It can nourish every ocular cell, thus boosting its functions. Bilberry fruit can relax the blood capillaries in the eyes, thus enhancing nutrient intake and waste removal.</p> <p><strong>Lutein</strong></p> <p><a href="URL contains lutein from Marigold flowers. The nutrient is a natural anti-inflammatory that can combat optic atrophy problems. Studies show it can aid in the removal of toxins. Similarly, it can protect the eyes from UV rays and harmful blue wavelength light.Lutein can strengthen the muscles in the optic nerve, thus boosting its function. It can also enhance communication between the eyes and brain, enhancing vision.</p> <h2><strong><a href="URL Dosage and Side Effects</strong></h2> <p><a href="URL recommends using one capsule daily. Customers can use the supplement at any time of the day. However, users should stay within the suggested dosages.</p> <p>Side Effects &ndash; <a href="URL is natural and manufactured using pure ingredients. The formulator claims it cannot give users any side effects. Still, the manufacturer recommends seeking medical authorization before using the supplement. Consumers who experience adverse side effects should seek medical help and stop the dosage.</p> <p>Place your order today before stock runs out!</p> <h2><strong>Pros</strong></h2> <p><strong>Clear vision:</strong> As the distortion, blurriness, flashes, and floaters gradually lessen, the clarity of vision is no longer an issue.</p> <p><strong>No surgery:</strong> If the damage can be repaired naturally, there is no need for surgery, which can save time and money.</p> <p><strong>No glasses or lenses:</strong> After taking Ocutamin for a while, the need for vision aids decreases.</p> <p><strong>Protection from the sun:</strong> Ocutamin components also assist to lessen light sensitivity and sun damage.</p> <p><strong>Better vision and focus:</strong> The eyes can see clearly and with complete focus.</p> <h2><strong>Cons</strong></h2> <p><strong>Limited accessibility:</strong> this product may only be purchased online and is not offered by nearby vendors, pharmacies, or shops.</p> <p><strong>Variable results:</strong> depending on how the body responds, results may vary across users and take many months.</p> <p><strong>Not a medication:</strong> Ocutamin is a dietary supplement that promotes eye health but is not a medication. 
It does not treat anything and cannot be used in place of medicine.</p> <h2 style="text-align: center;"><strong><a href="URL PROMO: Get Ocutamin at the Lowest Discounted Price Online</a></strong></h2> <h2><strong>FAQs about Ocutamin Supplement</strong></h2> <p><strong>Q: What causes poor sight?</strong></p> <p>A: According to Ocutamin, too much screen time, low water intake, poor diet, sleep deficiency, and unhealthy lifestyle habits are the leading causes of eye problems.</p> <p><strong>Q: Can I inherit eye problems?</strong></p> <p>A: Some eye issues like hyperopia and myopia are genetically linked. However, experts claim you can prevent the development of these eye problems by maintaining a healthy diet and good eye hygiene.</p> <p><strong>Q: Can Ocutamin improve eyesight?</strong></p> <p>A: Ocutamin is not a quick fix to better vision. The manufacturer recommends using it consistently for extended periods to nourish the eyes and improve sight.</p> <p><strong>Q: Does Ocutamin interact with other medications?</strong></p> <p>A: The maker recommends seeking medical guidance before using the supplement.</p> <p><strong>Q: Who can use the Ocutamin supplement?</strong></p> <p>A: Ocutamin is marketed for anyone experiencing vision problems, including blurry eyes and poor sight.</p> <p><strong>Q: Can children use Ocutamin?</strong></p> <p>A: No, Ocutamin is only for adult men and women.</p> <p><strong>Q: What ingredients are inside Ocutamin?</strong></p> <p>A: Ocutamin has eight ingredients, including bilberry fruit extract, lutein, and quercetin.</p> <p><strong>Q: How long should I use the Ocutamin supplement?</strong></p> <p>A: The manufacturer suggests using it for over three months.</p> <p><strong>Q: Is Ocutamin addictive?</strong></p> <p>A: Ocutamin is supposedly free from stimulants and thus unlikely to cause addiction even with prolonged usage. However, the maker recommends taking a two-week break after every three months.</p> <p><strong>Q: What if Ocutamin fails to work?</strong></p> <p>A: Ocutamin comes with a 60-day money-back guarantee. Customers can request a refund if they experience no improvement in their vision within the stipulated period.</p> <h1><strong>Pricing</strong></h1> <p>Ocutamin is only available through the official website. The manufacturer warns against buying from third parties. Customers can buy a one-month- six-month package depending on their budget. However, multiple buys come with free shipping and price reduction.</p> <p>Ocutamin is being sold currently at a discount offer. The pricing of Ocutamin is as follows:</p> <ul> <li><strong>Order one bottle of Ocutamin and pay $69.00 and a small shipping fee. You save $30 off the regular retail price of $99.</strong></li> <li><strong>Three-bottle bundle and pay $59.00 each (order total $177). You save $120 off the regular retail price of $297. There&rsquo;s free US shipping included with your order.</strong></li> <li><strong>A six-bottle bundle is $49.00 each (order total $294). You save $300 off the regular retail price of $594. There&rsquo;s free US shipping included with your order.</strong></li> </ul> <h2><strong>Conclusion</strong></h2> <p>Ocutamin is a dietary supplement that promotes the health of the macular, retina, and optic nerve. Ocutamin's makers also assert that it can enhance vision and lower the risk of age-related eye conditions. However, these statements are not backed by any scientific data. Ocutamin's long-term safety is also unknown because peer evaluations have not endorsed it. 
This supplement should not be taken by women who are pregnant, nursing, under 18, or who have a significant medical condition.</p> <h2 style="text-align: center;"><strong><a href="URL Details: *Ocutamin* Read More Details on Official Website USA!</a></strong></h2> <h2># READ MORE</h2> <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL </div>
[ "# READ MORE</h2>\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n</div>" ]
[ "TAGS\n#region-us \n", "# READ MORE</h2>\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n</div>" ]
[ 6, 231 ]
[ "passage: TAGS\n#region-us \n# READ MORE</h2>\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n</div>" ]
b02c8c5ca57f49521fbda3916a51cd2d2b1b5807
# Dataset Card for Dataset Name

<!-- Provide a quick summary of the dataset. -->

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
ugursa/Yahoo-Finance-News-Sentences
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:mit", "finance", "region:us" ]
2024-01-02T09:36:18+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "tags": ["finance"]}
2024-01-02T11:50:41+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #finance #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #finance #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 41, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #finance #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
214a175f65945249afffd982281aa82e79a943d0
Extracted and reformatted for Llama 2 from [SALT-NLP/FLUE-FiQA](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA) for easier use.
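A minimal loading sketch (the repo id, the three splits, and the single `text` column are taken from this dataset's own metadata; everything else is illustrative):

```python
from datasets import load_dataset

# Llama 2-formatted FiQA: train / test / validation splits
data = load_dataset("sherelyn912/fiqa")

print(data)                      # split sizes and columns
print(data["train"][0]["text"])  # one pre-formatted prompt string
```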
sherelyn912/fiqa
[ "doi:10.57967/hf/1703", "region:us" ]
2024-01-02T09:48:47+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15869474, "num_examples": 14166}, {"name": "test", "num_bytes": 1932368, "num_examples": 1706}, {"name": "validation", "num_bytes": 1432148, "num_examples": 1238}], "download_size": 11000011, "dataset_size": 19233990}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-03T07:33:45+00:00
[]
[]
TAGS #doi-10.57967/hf/1703 #region-us
extracted and reformatted for LLama 2 from SALT-NLP/FLUE-FiQA for easier use
[]
[ "TAGS\n#doi-10.57967/hf/1703 #region-us \n" ]
[ 18 ]
[ "passage: TAGS\n#doi-10.57967/hf/1703 #region-us \n" ]
f3f521c6edc479ab8b20d77591efeffcfd45ef1c
# MS MARCO dummy+test dataset

Used for testing [nixietune](https://github.com/nixiesearch/nixietune): a dummy dataset of 1000 random queries from MS MARCO. The format is the following:

```json
{
    "query": ")what was the immediate impact of the success of the manhattan project?",
    "positive": [
        "The presence of communication amid scientific minds was equally important to the success of the Manhattan Project as scientific intellect was. The only cloud hanging over the impressive achievement of the atomic researchers and engineers is what their success truly meant; hundreds of thousands of innocent lives obliterated."
    ],
    "negative": []
}
```

## Usage

```python
from datasets import load_dataset

data = load_dataset('nixiesearch/ms-marco-dummy')
print(data["train"].features)
```

## License

Apache 2.0
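## Example: building training pairs

A rough sketch of flattening the triplet format above into (query, passage) pairs (the field names come from the JSON example; the flattening step itself is illustrative, not part of nixietune):

```python
from datasets import load_dataset

data = load_dataset("nixiesearch/ms-marco-dummy")

# One (query, passage) pair per positive passage; the "negative" list
# may be empty, as in the example record above
pairs = [
    (row["query"], passage)
    for row in data["train"]
    for passage in row["positive"]
]
print(len(pairs), pairs[0])
```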
nixiesearch/ms-marco-dummy
[ "task_categories:sentence-similarity", "size_categories:100K<n<1M", "source_datasets:MSMARCO", "language:en", "license:apache-2.0", "text", "region:us" ]
2024-01-02T10:00:07+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "source_datasets": ["MSMARCO"], "task_categories": ["sentence-similarity"], "pretty_name": "MS MARCO dummy dataset", "tags": ["text"], "dataset_info": {"config_name": "default", "features": [{"name": "query", "dtype": "string"}, {"name": "positive", "sequence": "string"}, {"name": "negative", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 11535280, "num_examples": 1000}, {"name": "test", "num_bytes": 11668968, "num_examples": 1000}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train/*"}, {"split": "test", "path": "data/test/*"}]}], "train-eval-index": [{"config": "default", "task": "sentence-similarity", "splits": {"train_split": "train", "eval_split": "test"}}]}
2024-01-02T10:04:32+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #size_categories-100K<n<1M #source_datasets-MSMARCO #language-English #license-apache-2.0 #text #region-us
# MS MARCO dummy+test dataset Used for testing nixietune: a dummy dataset of random 1000 queries from MS MARCO. The format is the following: ## Usage ## License Apache 2.0
[ "# MS MARCO dummy+test dataset\n\nUsed for testing nixietune: a dummy dataset of random 1000 queries from MS MARCO. The format is the following:", "## Usage", "## License\n\nApache 2.0" ]
[ "TAGS\n#task_categories-sentence-similarity #size_categories-100K<n<1M #source_datasets-MSMARCO #language-English #license-apache-2.0 #text #region-us \n", "# MS MARCO dummy+test dataset\n\nUsed for testing nixietune: a dummy dataset of random 1000 queries from MS MARCO. The format is the following:", "## Usage", "## License\n\nApache 2.0" ]
[ 55, 39, 3, 5 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #size_categories-100K<n<1M #source_datasets-MSMARCO #language-English #license-apache-2.0 #text #region-us \n# MS MARCO dummy+test dataset\n\nUsed for testing nixietune: a dummy dataset of random 1000 queries from MS MARCO. The format is the following:## Usage## License\n\nApache 2.0" ]
3f6f073a501338a23e9a3e089cf8317ca28f18cc
# Dataset Card for Dataset Name

This dataset is a compilation of diverse sources, carefully curated with the intention of constructing a versatile and comprehensive dataset. We have amalgamated high-quality text from various datasets to form this unified dataset, designed to serve as a valuable and multifaceted resource for diverse purposes.

### Datasets used to create it

- [aditijha/instruct_v1_10k](https://huggingface.co/datasets/aditijha/instruct_v1_10k)
- [mosaicml/instruct-v3](https://huggingface.co/datasets/mosaicml/instruct-v3)
- [jondurbin/airoboros-2.2.1](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1)

## Uses

Can be used to fine-tune models.
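A minimal loading sketch (the repo id comes from this card; the split name and column layout are assumptions — inspect the loaded object for the actual schema of the merged sources):

```python
from datasets import load_dataset

data = load_dataset("CrabfishAI/InstructQA-Highquality-16k")

print(data)              # available splits and columns
print(data["train"][0])  # one merged record; "train" split is assumed
```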
CrabfishAI/InstructQA-Highquality-16k
[ "task_categories:question-answering", "size_categories:10K<n<100K", "language:en", "license:unknown", "region:us" ]
2024-01-02T10:07:41+00:00
{"language": ["en"], "license": "unknown", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"]}
2024-01-13T13:45:29+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-unknown #region-us
# Dataset Card for Dataset Name This dataset is a culmination of diverse sources, carefully curated with the intention of constructing a versatile and comprehensive dataset. We have amalgamated high-quality text from various datasets to form this unified dataset, designed to serve as a valuable and multifaceted resource for diverse purposes." ### Datasets used to create- - aditijha/instruct_v1_10k - mosaicml/instruct-v3 - jondurbin/airoboros-2.2.1 ## Uses Can be used to fine-tune models.
[ "# Dataset Card for Dataset Name\n\nThis dataset is a culmination of diverse sources, carefully curated with the intention of constructing a versatile and comprehensive dataset. We have amalgamated high-quality text from various datasets to form this unified dataset, designed to serve as a valuable and multifaceted resource for diverse purposes.\"", "### Datasets used to create-\n- aditijha/instruct_v1_10k \n- mosaicml/instruct-v3 \n- jondurbin/airoboros-2.2.1", "## Uses\nCan be used to fine-tune models." ]
[ "TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-unknown #region-us \n", "# Dataset Card for Dataset Name\n\nThis dataset is a culmination of diverse sources, carefully curated with the intention of constructing a versatile and comprehensive dataset. We have amalgamated high-quality text from various datasets to form this unified dataset, designed to serve as a valuable and multifaceted resource for diverse purposes.\"", "### Datasets used to create-\n- aditijha/instruct_v1_10k \n- mosaicml/instruct-v3 \n- jondurbin/airoboros-2.2.1", "## Uses\nCan be used to fine-tune models." ]
[ 41, 74, 43, 12 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-unknown #region-us \n# Dataset Card for Dataset Name\n\nThis dataset is a culmination of diverse sources, carefully curated with the intention of constructing a versatile and comprehensive dataset. We have amalgamated high-quality text from various datasets to form this unified dataset, designed to serve as a valuable and multifaceted resource for diverse purposes.\"### Datasets used to create-\n- aditijha/instruct_v1_10k \n- mosaicml/instruct-v3 \n- jondurbin/airoboros-2.2.1## Uses\nCan be used to fine-tune models." ]
a00608b63480624608a85d37cfbe7a4fed909fa7
# Dataset Card for "autotrain-data-fertilizer-pair-classify" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
matthewfarant/autotrain-data-fertilizer-pair-classify
[ "region:us" ]
2024-01-02T10:23:28+00:00
{"dataset_info": {"features": [{"name": "autotrain_text", "dtype": "string"}, {"name": "autotrain_label", "dtype": {"class_label": {"names": {"0": 0, "1": 1}}}}], "splits": [{"name": "train", "num_bytes": 807, "num_examples": 12}, {"name": "validation", "num_bytes": 199, "num_examples": 3}], "download_size": 3898, "dataset_size": 1006}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-02T10:23:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotrain-data-fertilizer-pair-classify" More Information needed
[ "# Dataset Card for \"autotrain-data-fertilizer-pair-classify\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotrain-data-fertilizer-pair-classify\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-fertilizer-pair-classify\"\n\nMore Information needed" ]
348a19a1208bf5be42dc48ac2b3fd063d4f9b00f
# Official YOLOv7 Implementation of paper - [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/yolov7-trainable-bag-of-freebies-sets-new/real-time-object-detection-on-coco)](https://paperswithcode.com/sota/real-time-object-detection-on-coco?p=yolov7-trainable-bag-of-freebies-sets-new) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/yolov7) <a href="https://colab.research.google.com/gist/AlexeyAB/b769f5795e65fdab80086f6cb7940dae/yolov7detection.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> [![arxiv.org](http://img.shields.io/badge/cs.CV-arXiv%3A2207.02696-B31B1B.svg)](https://arxiv.org/abs/2207.02696) <div align="center"> <a href="./"> <img src="./figure/performance.png" width="79%"/> </a> </div> ## Web Demo - Integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces/akhaliq/yolov7) using Gradio. Try out the Web Demo [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/yolov7) ## Performance MS COCO | Model | Test Size | AP<sup>test</sup> | AP<sub>50</sub><sup>test</sup> | AP<sub>75</sub><sup>test</sup> | batch 1 fps | batch 32 average time | | :-- | :-: | :-: | :-: | :-: | :-: | :-: | | [**YOLOv7**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt) | 640 | **51.4%** | **69.7%** | **55.9%** | 161 *fps* | 2.8 *ms* | | [**YOLOv7-X**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x.pt) | 640 | **53.1%** | **71.2%** | **57.8%** | 114 *fps* | 4.3 *ms* | | | | | | | | | | [**YOLOv7-W6**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6.pt) | 1280 | **54.9%** | **72.6%** | **60.1%** | 84 *fps* | 7.6 *ms* | | [**YOLOv7-E6**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6.pt) | 1280 | **56.0%** | **73.5%** | **61.2%** | 56 *fps* | 12.3 *ms* | | [**YOLOv7-D6**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-d6.pt) | 1280 | **56.6%** | **74.0%** | **61.8%** | 44 *fps* | 15.0 *ms* | | [**YOLOv7-E6E**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6e.pt) | 1280 | **56.8%** | **74.4%** | **62.1%** | 36 *fps* | 18.7 *ms* | ## Installation Docker environment (recommended) <details><summary> <b>Expand</b> </summary> ``` shell # create the docker container, you can change the share memory size if you have more. 
nvidia-docker run --name yolov7 -it -v your_coco_path/:/coco/ -v your_code_path/:/yolov7 --shm-size=64g nvcr.io/nvidia/pytorch:21.08-py3 # apt install required packages apt update apt install -y zip htop screen libgl1-mesa-glx # pip install required packages pip install seaborn thop # go to code folder cd /yolov7 ``` </details> ## Testing [`yolov7.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt) [`yolov7x.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x.pt) [`yolov7-w6.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6.pt) [`yolov7-e6.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6.pt) [`yolov7-d6.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-d6.pt) [`yolov7-e6e.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6e.pt) ``` shell python test.py --data data/coco.yaml --img 640 --batch 32 --conf 0.001 --iou 0.65 --device 0 --weights yolov7.pt --name yolov7_640_val ``` You will get the results: ``` Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.51206 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.69730 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.55521 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.35247 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.55937 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.66693 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.38453 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.63765 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.68772 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.53766 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.73549 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.83868 ``` To measure accuracy, download [COCO-annotations for Pycocotools](http://images.cocodataset.org/annotations/annotations_trainval2017.zip) to the `./coco/annotations/instances_val2017.json` ## Training Data preparation ``` shell bash scripts/get_coco.sh ``` * Download MS COCO dataset images ([train](http://images.cocodataset.org/zips/train2017.zip), [val](http://images.cocodataset.org/zips/val2017.zip), [test](http://images.cocodataset.org/zips/test2017.zip)) and [labels](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip). 
If you have previously used a different version of YOLO, we strongly recommend that you delete `train2017.cache` and `val2017.cache` files, and redownload [labels](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip) Single GPU training ``` shell # train p5 models python train.py --workers 8 --device 0 --batch-size 32 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml # train p6 models python train_aux.py --workers 8 --device 0 --batch-size 16 --data data/coco.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6.yaml --weights '' --name yolov7-w6 --hyp data/hyp.scratch.p6.yaml ``` Multiple GPU training ``` shell # train p5 models python -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 train.py --workers 8 --device 0,1,2,3 --sync-bn --batch-size 128 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml # train p6 models python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train_aux.py --workers 8 --device 0,1,2,3,4,5,6,7 --sync-bn --batch-size 128 --data data/coco.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6.yaml --weights '' --name yolov7-w6 --hyp data/hyp.scratch.p6.yaml ``` ## Transfer learning [`yolov7_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7_training.pt) [`yolov7x_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x_training.pt) [`yolov7-w6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6_training.pt) [`yolov7-e6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6_training.pt) [`yolov7-d6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-d6_training.pt) [`yolov7-e6e_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6e_training.pt) Single GPU finetuning for custom dataset ``` shell # finetune p5 models python train.py --workers 8 --device 0 --batch-size 32 --data data/custom.yaml --img 640 640 --cfg cfg/training/yolov7-custom.yaml --weights 'yolov7_training.pt' --name yolov7-custom --hyp data/hyp.scratch.custom.yaml # finetune p6 models python train_aux.py --workers 8 --device 0 --batch-size 16 --data data/custom.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6-custom.yaml --weights 'yolov7-w6_training.pt' --name yolov7-w6-custom --hyp data/hyp.scratch.custom.yaml ``` ## Re-parameterization See [reparameterization.ipynb](tools/reparameterization.ipynb) ## Inference On video: ``` shell python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source yourvideo.mp4 ``` On image: ``` shell python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg ``` <div align="center"> <a href="./"> <img src="./figure/horses_prediction.jpg" width="59%"/> </a> </div> ## Export **Pytorch to CoreML (and inference on MacOS/iOS)** <a href="https://colab.research.google.com/github/WongKinYiu/yolov7/blob/main/tools/YOLOv7CoreML.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> **Pytorch to ONNX with NMS (and inference)** <a href="https://colab.research.google.com/github/WongKinYiu/yolov7/blob/main/tools/YOLOv7onnx.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> ```shell python export.py --weights yolov7-tiny.pt --grid --end2end --simplify \ 
--topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640 ``` **Pytorch to TensorRT with NMS (and inference)** <a href="https://colab.research.google.com/github/WongKinYiu/yolov7/blob/main/tools/YOLOv7trt.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> ```shell wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt python export.py --weights ./yolov7-tiny.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 git clone https://github.com/Linaom1214/tensorrt-python.git python ./tensorrt-python/export.py -o yolov7-tiny.onnx -e yolov7-tiny-nms.trt -p fp16 ``` **Pytorch to TensorRT another way** <a href="https://colab.research.google.com/gist/AlexeyAB/fcb47ae544cf284eb24d8ad8e880d45c/yolov7trtlinaom.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <details><summary> <b>Expand</b> </summary> ```shell wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt python export.py --weights yolov7-tiny.pt --grid --include-nms git clone https://github.com/Linaom1214/tensorrt-python.git python ./tensorrt-python/export.py -o yolov7-tiny.onnx -e yolov7-tiny-nms.trt -p fp16 # Or use trtexec to convert ONNX to TensorRT engine /usr/src/tensorrt/bin/trtexec --onnx=yolov7-tiny.onnx --saveEngine=yolov7-tiny-nms.trt --fp16 ``` </details> Tested with: Python 3.7.13, Pytorch 1.12.0+cu113 ## Pose estimation [`code`](https://github.com/WongKinYiu/yolov7/tree/pose) [`yolov7-w6-pose.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6-pose.pt) See [keypoint.ipynb](https://github.com/WongKinYiu/yolov7/blob/main/tools/keypoint.ipynb). <div align="center"> <a href="./"> <img src="./figure/pose.png" width="39%"/> </a> </div> ## Instance segmentation [`code`](https://github.com/WongKinYiu/yolov7/tree/mask) [`yolov7-mask.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-mask.pt) See [instance.ipynb](https://github.com/WongKinYiu/yolov7/blob/main/tools/instance.ipynb). 
<div align="center"> <a href="./"> <img src="./figure/mask.png" width="59%"/> </a> </div> ## Instance segmentation [`code`](https://github.com/WongKinYiu/yolov7/tree/u7/seg) [`yolov7-seg.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-seg.pt) YOLOv7 for instance segmentation (YOLOR + YOLOv5 + YOLACT) | Model | Test Size | AP<sup>box</sup> | AP<sub>50</sub><sup>box</sup> | AP<sub>75</sub><sup>box</sup> | AP<sup>mask</sup> | AP<sub>50</sub><sup>mask</sup> | AP<sub>75</sub><sup>mask</sup> | | :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | | **YOLOv7-seg** | 640 | **51.4%** | **69.4%** | **55.8%** | **41.5%** | **65.5%** | **43.7%** | ## Anchor free detection head [`code`](https://github.com/WongKinYiu/yolov7/tree/u6) [`yolov7-u6.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-u6.pt) YOLOv7 with decoupled TAL head (YOLOR + YOLOv5 + YOLOv6) | Model | Test Size | AP<sup>val</sup> | AP<sub>50</sub><sup>val</sup> | AP<sub>75</sub><sup>val</sup> | | :-- | :-: | :-: | :-: | :-: | | **YOLOv7-u6** | 640 | **52.6%** | **69.7%** | **57.3%** | ## Citation ``` @article{wang2022yolov7, title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors}, author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark}, journal={arXiv preprint arXiv:2207.02696}, year={2022} } ``` ## Teaser Yolov7-semantic & YOLOv7-panoptic & YOLOv7-caption <div align="center"> <a href="./"> <img src="./figure/tennis.jpg" width="24%"/> </a> <a href="./"> <img src="./figure/tennis_semantic.jpg" width="24%"/> </a> <a href="./"> <img src="./figure/tennis_panoptic.png" width="24%"/> </a> <a href="./"> <img src="./figure/tennis_caption.png" width="24%"/> </a> </div> ## Acknowledgements <details><summary> <b>Expand</b> </summary> * [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet) * [https://github.com/WongKinYiu/yolor](https://github.com/WongKinYiu/yolor) * [https://github.com/WongKinYiu/PyTorch_YOLOv4](https://github.com/WongKinYiu/PyTorch_YOLOv4) * [https://github.com/WongKinYiu/ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) * [https://github.com/Megvii-BaseDetection/YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) * [https://github.com/ultralytics/yolov3](https://github.com/ultralytics/yolov3) * [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5) * [https://github.com/DingXiaoH/RepVGG](https://github.com/DingXiaoH/RepVGG) * [https://github.com/JUGGHM/OREPA_CVPR2022](https://github.com/JUGGHM/OREPA_CVPR2022) * [https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose](https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose) </details>
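## ONNX inference example

Following up on the Export section above: a rough sketch of consuming the exported end-to-end ONNX model with `onnxruntime`. The preprocessing and the output layout shown here are assumptions based on common YOLOv7 export behavior for the `--end2end --max-wh 640` variant (which bakes NMS into the graph), not guarantees from this repository:

``` python
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov7-tiny.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

img = cv2.imread("inference/images/horses.jpg")
# plain resize for brevity; the repo's detect.py letterboxes instead,
# which preserves aspect ratio and gives slightly better boxes
x = cv2.resize(img, (640, 640))[:, :, ::-1].transpose(2, 0, 1)  # BGR->RGB, HWC->CHW
x = np.ascontiguousarray(x, dtype=np.float32)[None] / 255.0

(dets,) = session.run(None, {input_name: x})
# assumed layout for this export variant: one row per kept detection,
# [batch_id, x0, y0, x1, y1, class_id, score]
for batch_id, x0, y0, x1, y1, cls_id, score in dets:
    print(int(cls_id), round(float(score), 3), (x0, y0, x1, y1))
```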
nachiiiket/autofish
[ "arxiv:2207.02696", "region:us" ]
2024-01-02T10:51:09+00:00
{}
2024-01-02T10:52:52+00:00
[ "2207.02696" ]
[]
TAGS #arxiv-2207.02696 #region-us
Official YOLOv7 =============== Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors ![PWC](URL ![Hugging Face Spaces](URL <a href="URL src="URL alt="Open In Colab"> ![URL](URL [![](./figure/URL)](./) Web Demo -------- * Integrated into Huggingface Spaces using Gradio. Try out the Web Demo ![Hugging Face Spaces](URL Performance ----------- MS COCO Installation ------------ Docker environment (recommended) **Expand** Testing ------- 'URL' 'URL' 'URL' 'URL' 'URL' 'URL' You will get the results: To measure accuracy, download COCO-annotations for Pycocotools to the './coco/annotations/instances\_val2017.json' Training -------- Data preparation * Download MS COCO dataset images (train, val, test) and labels. If you have previously used a different version of YOLO, we strongly recommend that you delete 'URL' and 'URL' files, and redownload labels Single GPU training Multiple GPU training Transfer learning ----------------- 'yolov7\_training.pt' 'yolov7x\_training.pt' 'yolov7-w6\_training.pt' 'yolov7-e6\_training.pt' 'yolov7-d6\_training.pt' 'yolov7-e6e\_training.pt' Single GPU finetuning for custom dataset Re-parameterization ------------------- See URL Inference --------- On video: On image: [![](./figure/horses_prediction.jpg)](./) Export ------ Pytorch to CoreML (and inference on MacOS/iOS) <a href="URL src="URL alt="Open In Colab"> Pytorch to ONNX with NMS (and inference) <a href="URL src="URL alt="Open In Colab"> Pytorch to TensorRT with NMS (and inference) <a href="URL src="URL alt="Open In Colab"> Pytorch to TensorRT another way <a href="URL src="URL alt="Open In Colab"> **Expand** Tested with: Python 3.7.13, Pytorch 1.12.0+cu113 Pose estimation --------------- 'code' 'URL' See URL. [![](./figure/URL)](./) Instance segmentation --------------------- 'code' 'URL' See URL. [![](./figure/URL)](./) Instance segmentation --------------------- 'code' 'URL' YOLOv7 for instance segmentation (YOLOR + YOLOv5 + YOLACT) Anchor free detection head -------------------------- 'code' 'URL' YOLOv7 with decoupled TAL head (YOLOR + YOLOv5 + YOLOv6) Teaser ------ Yolov7-semantic & YOLOv7-panoptic & YOLOv7-caption [![](./figure/URL)](./) [![](./figure/tennis_semantic.jpg)](./) [![](./figure/tennis_panoptic.png)](./) [![](./figure/tennis_caption.png)](./) Acknowledgements ---------------- **Expand** * URL * URL * URL * URL * URL * URL * URL * URL * URL * URL
[]
[ "TAGS\n#arxiv-2207.02696 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#arxiv-2207.02696 #region-us \n" ]
68215e40e48810ded509c6334b28d097ed647258
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
ThangaTharun/SecondDataset
[ "region:us" ]
2024-01-02T10:55:08+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 519148.0, "num_examples": 8}], "download_size": 454579, "dataset_size": 519148.0}}
2024-01-02T12:21:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
5bbe260db99068dcd0a5f20f758d887697bd6304
# Dataset Card for itu_annotated_dataset

This dataset has been created with [Argilla](https://docs.argilla.io). As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).

## Dataset Description

- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset contains:

* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.

* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.

* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.

### Load with Argilla

To load with Argilla, you'll just need to install Argilla with `pip install argilla --upgrade` and then use the following code:

```python
import argilla as rg

ds = rg.FeedbackDataset.from_huggingface("vishnuramov/itu_annotated_dataset")
```

### Load with `datasets`

To load this dataset with `datasets`, you'll just need to install `datasets` with `pip install datasets --upgrade` and then use the following code:

```python
from datasets import load_dataset

ds = load_dataset("vishnuramov/itu_annotated_dataset")
```

### Supported Tasks and Leaderboards

This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).

There are no leaderboards associated with this dataset.

### Languages

[More Information Needed]

## Dataset Structure

### Data in Argilla

The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.

The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.

| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| background | Background | text | True | False |
| prompt | Prompt | text | True | False |
| response | Final Response | text | True | False |

The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.

| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| response_correction | Response_correction | text | True | N/A | N/A |

The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process, so those are always linked to the existing questions, and named by appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively.
Accordingly, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".

The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give the annotators additional context, or to capture details about the record itself, such as the author, the date, or the original source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.

| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |

The **guidelines** are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.

### Data Instances

An example of a dataset instance in Argilla looks as follows:

```json
{
    "external_id": null,
    "fields": {
        "background": "Background: ITU has published Use cases for Autonomous Networks. ITU focus group on autonomous networks studies the use cases and ITU-T SG13 has published many use cases on autonomous networks. These use cases are categorised into two main categories depending on whether they are related to application of autonomous networks or related to the core concepts of autonomous networks.",
        "prompt": "\u003chuman\u003e: who publishes use cases for autonomous networks?",
        "response": "\u003cbot\u003e: ITU publishes use cases for autonomous networks based on the work of ITU focus group on autonomous networks and ITU-T SG13."
    },
    "metadata": {},
    "responses": [],
    "suggestions": [],
    "vectors": {}
}
```

While the same record in HuggingFace `datasets` looks as follows:

```json
{
    "background": "Background: ITU has published Use cases for Autonomous Networks. ITU focus group on autonomous networks studies the use cases and ITU-T SG13 has published many use cases on autonomous networks. These use cases are categorised into two main categories depending on whether they are related to application of autonomous networks or related to the core concepts of autonomous networks.",
    "external_id": null,
    "metadata": "{}",
    "prompt": "\u003chuman\u003e: who publishes use cases for autonomous networks?",
    "response": "\u003cbot\u003e: ITU publishes use cases for autonomous networks based on the work of ITU focus group on autonomous networks and ITU-T SG13.",
    "response_correction": [],
    "response_correction-suggestion": null,
    "response_correction-suggestion-metadata": {
        "agent": null,
        "score": null,
        "type": null
    }
}
```

### Data Fields

Among the dataset fields, we differentiate between the following:

* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.

    * **background** is of type `text`.
    * **prompt** is of type `text`.
    * **response** is of type `text`.

* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.

    * **response_correction** is of type `text`.
* **Suggestions:** As of Argilla 1.13.0, suggestions are included to ease or assist the annotators during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.

    * (optional) **response_correction-suggestion** is of type `text`.

Additionally, there are two more optional fields:

* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to give the annotators additional context, or to capture details about the record itself, such as the author, the date, or the original source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.

* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.

### Data Splits

The dataset contains a single split, which is `train`.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation guidelines

Please read the question carefully and try to answer it as accurately as possible.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
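As a quick end-to-end illustration of the structure described above, a minimal sketch of reading one record after loading the dataset (assuming Argilla >= 1.13; the attribute access shown follows the local `FeedbackDataset` API and the field/question names listed in this card):

```python
import argilla as rg

ds = rg.FeedbackDataset.from_huggingface("vishnuramov/itu_annotated_dataset")

record = ds.records[0]
print(record.fields["background"])  # context passage
print(record.fields["prompt"])      # the "<human>: ..." turn
print(record.fields["response"])    # the "<bot>: ..." turn

# Annotator answers to the "response_correction" question, when present
for response in record.responses:
    print(response.values["response_correction"].value)
```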
vishnuramov/itu_annotated_dataset
[ "size_categories:n<1K", "rlfh", "argilla", "human-feedback", "region:us" ]
2024-01-02T11:01:35+00:00
{"size_categories": "n<1K", "tags": ["rlfh", "argilla", "human-feedback"]}
2024-01-15T12:55:19+00:00
[]
[]
TAGS #size_categories-n<1K #rlfh #argilla #human-feedback #region-us
Dataset Card for itu\_annotated\_dataset ======================================== This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'. Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla. * Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'. * The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code: ### Load with 'datasets' To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code: ### Supported Tasks and Leaderboards This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section. There are no leaderboards associated with this dataset. ### Languages Dataset Structure ----------------- ### Data in Argilla The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines. The fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking. The suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata". The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'. The guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section. 
### Data Instances An example of a dataset instance in Argilla looks as follows: While the same record in HuggingFace 'datasets' looks as follows: ### Data Fields Among the dataset fields, we differentiate between the following: * Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. + background is of type 'text'. + prompt is of type 'text'. + response is of type 'text'. * Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'. + response\_correction is of type 'text'. * Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. + (optional) response\_correction-suggestion is of type 'text'. Additionally, we also have two more fields that are optional and are the following: * metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'. * external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is 'train'. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation guidelines Please, read the question carefully and try to answer it as accurately as possible. #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions
[ "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. 
These are the ones that will be used to provide responses to the questions.\n\n\n\t+ background is of type 'text'.\n\t+ prompt is of type 'text'.\n\t+ response is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ response\\_correction is of type 'text'.\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) response\\_correction-suggestion is of type 'text'.\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nPlease, read the question carefully and try to answer it as accurately as possible.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n", "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. 
Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ background is of type 'text'.\n\t+ prompt is of type 'text'.\n\t+ response is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ response\\_correction is of type 'text'.\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) response\\_correction-suggestion is of type 'text'.\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nPlease, read the question carefully and try to answer it as accurately as possible.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 27, 162, 40, 53, 68, 11, 404, 40, 464, 27, 7, 4, 10, 10, 5, 22, 5, 9, 18, 7, 8, 14, 6, 6, 5 ]
[ "passage: TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.### Languages\n\n\nDataset Structure\n-----------------", "passage: ### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. 
Find those in the annotation guidelines section.### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ background is of type 'text'.\n\t+ prompt is of type 'text'.\n\t+ response is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ response\\_correction is of type 'text'.\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) response\\_correction-suggestion is of type 'text'.\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file." ]
d555af1e7ef22310f14dcd536707c9be422fa769
# Dataset Card for "e5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Minglii/e5
[ "region:us" ]
2024-01-02T11:16:57+00:00
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1797829, "num_examples": 2600}], "download_size": 1040195, "dataset_size": 1797829}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T11:19:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "e5" More Information needed
[ "# Dataset Card for \"e5\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"e5\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"e5\"\n\nMore Information needed" ]
23ab581de8142c5bb83a6a6685a86241d73fa4c8
# Dataset Card for "e10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Minglii/e10
[ "region:us" ]
2024-01-02T11:17:16+00:00
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3496846, "num_examples": 5200}], "download_size": 2006397, "dataset_size": 3496846}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T11:20:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "e10" More Information needed
[ "# Dataset Card for \"e10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"e10\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"e10\"\n\nMore Information needed" ]
7f624c4f3cc76a03a81dda32f245f837b39869e5
# Dataset Card for "e15" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Minglii/e15
[ "region:us" ]
2024-01-02T11:17:30+00:00
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5112113, "num_examples": 7800}], "download_size": 2914272, "dataset_size": 5112113}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T11:20:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "e15" More Information needed
[ "# Dataset Card for \"e15\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"e15\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"e15\"\n\nMore Information needed" ]
56fea5a51e2e21c14933d3756869ae70cad6711e
<p><a href="https://ocutamin-review.company.site/"><strong>Ocutamin</strong></a> asserts itself as the inaugural all-natural solution designed to enhance vision without the need for medications or risky surgical procedures. It addresses the underlying factors contributing to poor eyesight, aiming to rectify issues solely through the use of natural ingredients.</p>
<h2><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>{</strong><strong>Ocutamin- Official Website -- Order Now}</strong></a></h2>
<h2><strong>➡️● For Order Official Website - <a href="https://www.globalfitnessmart.com/get-ocutamin">https://www.globalfitnessmart.com/get-ocutamin</a></strong><br /><strong>➡️● Item Name: &mdash; {<a href="https://www.globalfitnessmart.com/get-ocutamin">Ocutamin</a>}</strong><br /><strong>➡️● Ingredients: &mdash; All Natural</strong><br /><strong>➡️● Incidental Effects: &mdash; NA</strong><br /><strong>➡️● Accessibility: &mdash; <a href="https://www.globalfitnessmart.com/get-ocutamin">Online</a></strong></h2>
<h2><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></h2>
<h2><strong>What is <a href="https://groups.google.com/g/ocutamin/c/Ryoz9Suf-RY">Ocutamin</a> Dietary Supplement?</strong></h2>
<p><a href="https://sites.google.com/view/ocutamin-review-usa/home"><strong>Ocutamin</strong></a> is a daily supplement claiming to improve and fortify eye health. The formula was developed by a doctor and contains various nutrients that address the root causes of poor sight. The supplement is easy to swallow and can provide quality results in days.</p>
<p>According to the <a href="https://colab.research.google.com/drive/1jB-UNGMX6zTmxUioGB_s0BLe2NYQXlAB"><strong>Ocutamin</strong></a> website, the supplement has eight science-approved ingredients to manage eye issues. It is purportedly a safe, affordable, and effective solution to worsening eye health. It can prevent users from undergoing expensive Laser Eye Surgery (LASIK) or using contact lenses for the rest of their lives.</p>
<p>A former eye specialist, Dr. Dean Avant, is the formulator of <a href="https://lookerstudio.google.com/u/0/reporting/56d93833-e5a4-45dc-a27e-cbd11c011e07/page/KkTmD"><strong>Ocutamin</strong></a>. He experienced failing sight despite his knowledge and expertise. 
With another researcher, he discovered certain nutrients, including lutein and quercetin, that nurture the eyes and restore sight quickly. Today, thousands have tried the <a href="https://gamma.app/docs/Ocutamin-Pressura-Work-To-Promote-1Vision-Support-Formula-Reviews-i0d33n9jfq7fwyq?mode=doc"><strong>Ocutamin</strong></a> supplement, supposedly restoring their vision. The supplement is ideal for adults of all ages.</p>
<h2 style="text-align: center;"><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>(EXCLUSIVE OFFER) Click Here: "Ocutamin USA" Official Website!</strong></a></h2>
<h2><strong>How Does <a href="https://forum.mmm.ucar.edu/threads/ocutamin-pressura-work-to-promote-ocutamin-1vision-support-formula-united-states-canada-does-it-really-work.15058/">Ocutamin</a> Work?</strong></h2>
<p><a href="https://ocutamin-official.clubeo.com/calendar/2024/01/04/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work"><strong>Ocutamin</strong></a>'s creator points out that modern problems like excessive use of computers, laptops, mobile phones, and TV are the primary cause of eye problems. In addition, environmental toxins, UV rays, foods, and water can damage the eyes.</p>
<p>The <a href="https://ocutamin-official.clubeo.com/page/ocutamin-pressura-work-to-promote-1vision-support-formula-reviews-scientifically-formulated-supplement.html"><strong>Ocutamin</strong></a> formulator reasons that our ancestors enjoyed laser-sharp sight regardless of their age. They needed unfailing sight to gather food and protect themselves from animals. How did they maintain quality sight? Below is how <a href="https://ocutamin-official.clubeo.com/page/ocutamin-reviews-updated-2024-do-not-buy-till-you-read-this-updated-2024-do-not-buy-till-you-read-this.html"><strong>Ocutamin</strong></a> can support and restore sight:</p>
<p><strong>Nourish the Eyes</strong> &ndash; Due to poor dietary patterns, most Americans cannot get sufficient vision-improving nutrients. Many households eat junk and processed foods that increase inflammation and toxins in the eyes. <a href="https://ocutamin-official.clubeo.com/"><strong>Ocutamin</strong></a> has eight active ingredients that nourish the different eye cells, improving their function. The supplement can fight eye malnourishment.</p>
<p><strong>Clear Toxins</strong> &ndash; The environment is full of toxins. Avoiding some of these contaminants is impossible because they are in the air, foods, medicine, and cleaning products. The <a href="https://www.scoop.it/topic/ocutamin-by-ocutamin-official"><strong>Ocutamin</strong></a> maker lists organophosphate (OP) as the most dangerous toxin that can damage the eye cells. The supplement has nutrients that enhance the cleansing and detoxification process. It can aid the body in eliminating toxins, thus improving sight.</p>
<p><strong>Fight Optic Atrophy</strong> &ndash; The <a href="https://www.scoop.it/topic/ocutamin-eye-health-care-new-2024-advanced-formula"><strong>Ocutamin</strong></a> creator claims that most people do not use their eyes as much as required, leading to optic atrophy. Studies show that people using their eyes actively, indoors and outdoors, train the different cells to become powerful. The supplement may strengthen the different eye parts.</p>
<p><strong>Refine Blood Circulation &ndash;</strong> Impaired blood flow in the eye restricts nutrient and oxygen intake. 
<a href="https://ocutamin-1.jimdosite.com/"><strong>Ocutamin</strong></a> can strengthen the eye capillaries and arteries, thus advancing blood circulation. The maker claims it may restore crystal-clear sight and prevent eye cells from dying.</p> <p><strong>Improve Cellular Health</strong> &ndash; Some <a href="https://ocutamin.bandcamp.com/album/ocutamin-reviews-updated-2024-do-not-buy-till-you-read-this"><strong>Ocutamin</strong></a> ingredients are designed to support cellular regeneration and revitalization. It works by repairing different cells and preventing cellular decay. Consequently, it may protect the eyes from macular degeneration, cataracts, and other age-related sight problems.</p> <h2><strong>Benefits Of Using <a href="https://soundcloud.com/ocutaminofficial/ocutamin-usa-is-legit-2024-updated-report">Ocutamin</a>:</strong></h2> <p>OCUTAMIN's distinctive formulation offers a range of benefits that contribute to improved eye health and enhanced vision. These advantages include:</p> <p><strong>Support Against Digital Eye Strain</strong>: In today's digital age, prolonged screen exposure often leads to digital eye strain. OCUTAMIN's blend of nutrients is designed to alleviate discomfort and mitigate the effects of eye strain associated with screen use.</p> <p><strong>Protection from Age-related Vision Decline</strong>: The potent antioxidants found in OCUTAMIN, such as lutein and zeaxanthin, serve as a defense against age-related vision decline, fostering long-term eye health.</p> <p><strong>Enhanced Night Vision</strong>: Featuring bilberry extract as a key component, OCUTAMIN draws on traditional uses to enhance night vision, allowing for clearer visibility in low-light conditions.</p> <p><strong>Overall Visual Clarity:</strong> By supplying essential nutrients crucial for optimal eye function, OCUTAMIN may contribute to improved visual clarity and focus. This support helps you navigate the world with increased confidence.</p> <h2 style="text-align: center;"><a href="https://www.globalfitnessmart.com/get-ocutamin"><strong>SPECIAL PROMO[Limited Discount]: "Ocutamin USA"Official Website!</strong></a></h2> <h1><strong><a href="https://ocutamin.hashnode.dev/ocutamin-usa-is-legit-2024-updated-report">Ocutamin</a> Ingredients</strong></h1> <p><a href="https://followme.tribe.so/post/ocutamin---usa-is-legit-2024-updated-report-6593bc86f64295489d92b9f1"><strong>Ocutamin</strong></a> is rich in natural ingredients that have undergone extensive research to affirm their effectiveness in enhancing vision. The different ingredients are purportedly in approved dosages and quantities to give users rapid results. The maker boldly claims that you can experience an improvement in eye health within a few days. Below are some of the active ingredients and their role in boosting sight.</p> <p><strong>Quercetin</strong></p> <p><a href="https://medium.com/@ocutaminofficial/ocutamin-usa-is-legit-2024-updated-report-12098509e48f"><strong>Ocutamin</strong></a> argues that most eye problems emanate from high toxin levels. The environment contains various chemicals, including OP, linked to severe vision problems. Scholarly studies show that people exposed to organophosphate have sight defects, including retinal degeneration, optic nerve atrophy, blurred vision, astigmatism, myopia, and optic disc edema.</p> <p>Peer-reviewed studies show that quercetin may improve the strength and functions of neurotransmitters inside the retina. 
Additionally, the nutrient may restore sight, prevent optic atrophy, and enhance overall cellular health.</p> <p><strong>Bilberry Fruit</strong></p> <p>There are various scientific proofs that bilberry can improve vision. Historical reports show that British Royal Air Force pilots consumed the inky blue fruit to enhance their night vision and combat their enemies.</p> <p>Bilberry is rich in anti-inflammatory and antioxidant components. It can eliminate pollutants reducing vision health. It can nourish every ocular cell, thus boosting its functions. Bilberry fruit can relax the blood capillaries in the eyes, thus enhancing nutrient intake and waste removal.</p> <p><strong>Lutein</strong></p> <p><a href="https://bitbucket.org/ocutamin/ocutamin/issues/1/ocutamin-work-to-promote-restores-eyesight"><strong>Ocutamin</strong></a> contains lutein from Marigold flowers. The nutrient is a natural anti-inflammatory that can combat optic atrophy problems. Studies show it can aid in the removal of toxins. Similarly, it can protect the eyes from UV rays and harmful blue wavelength light.Lutein can strengthen the muscles in the optic nerve, thus boosting its function. It can also enhance communication between the eyes and brain, enhancing vision.</p> <h2><strong><a href="https://followme.tribe.so/post/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-rea--6593bd0602d8d6065bff9e12">Ocutamin</a> Dosage and Side Effects</strong></h2> <p><a href="https://medium.com/@ocutaminofficial/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work-54c725b1b601"><strong>Ocutamin</strong></a> recommends using one capsule daily. Customers can use the supplement at any time of the day. However, users should stay within the suggested dosages.</p> <p>Side Effects &ndash; <a href="https://bitbucket.org/ocutamin/ocutamin/issues/2/ocutamin-reviews-updated-2024-do-not-buy"><strong>Ocutamin</strong></a> is natural and manufactured using pure ingredients. The formulator claims it cannot give users any side effects. Still, the manufacturer recommends seeking medical authorization before using the supplement. Consumers who experience adverse side effects should seek medical help and stop the dosage.</p> <p>Place your order today before stock runs out!</p> <h2><strong>Pros</strong></h2> <p><strong>Clear vision:</strong> As the distortion, blurriness, flashes, and floaters gradually lessen, the clarity of vision is no longer an issue.</p> <p><strong>No surgery:</strong> If the damage can be repaired naturally, there is no need for surgery, which can save time and money.</p> <p><strong>No glasses or lenses:</strong> After taking Ocutamin for a while, the need for vision aids decreases.</p> <p><strong>Protection from the sun:</strong> Ocutamin components also assist to lessen light sensitivity and sun damage.</p> <p><strong>Better vision and focus:</strong> The eyes can see clearly and with complete focus.</p> <h2><strong>Cons</strong></h2> <p><strong>Limited accessibility:</strong> this product may only be purchased online and is not offered by nearby vendors, pharmacies, or shops.</p> <p><strong>Variable results:</strong> depending on how the body responds, results may vary across users and take many months.</p> <p><strong>Not a medication:</strong> Ocutamin is a dietary supplement that promotes eye health but is not a medication. 
It does not treat anything and cannot be used in place of medicine.</p> <h2 style="text-align: center;"><strong><a href="https://www.globalfitnessmart.com/get-ocutamin">SPECIAL PROMO: Get Ocutamin at the Lowest Discounted Price Online</a></strong></h2> <h2><strong>FAQs about Ocutamin Supplement</strong></h2> <p><strong>Q: What causes poor sight?</strong></p> <p>A: According to Ocutamin, too much screen time, low water intake, poor diet, sleep deficiency, and unhealthy lifestyle habits are the leading causes of eye problems.</p> <p><strong>Q: Can I inherit eye problems?</strong></p> <p>A: Some eye issues like hyperopia and myopia are genetically linked. However, experts claim you can prevent the development of these eye problems by maintaining a healthy diet and good eye hygiene.</p> <p><strong>Q: Can Ocutamin improve eyesight?</strong></p> <p>A: Ocutamin is not a quick fix to better vision. The manufacturer recommends using it consistently for extended periods to nourish the eyes and improve sight.</p> <p><strong>Q: Does Ocutamin interact with other medications?</strong></p> <p>A: The maker recommends seeking medical guidance before using the supplement.</p> <p><strong>Q: Who can use the Ocutamin supplement?</strong></p> <p>A: Ocutamin is marketed for anyone experiencing vision problems, including blurry eyes and poor sight.</p> <p><strong>Q: Can children use Ocutamin?</strong></p> <p>A: No, Ocutamin is only for adult men and women.</p> <p><strong>Q: What ingredients are inside Ocutamin?</strong></p> <p>A: Ocutamin has eight ingredients, including bilberry fruit extract, lutein, and quercetin.</p> <p><strong>Q: How long should I use the Ocutamin supplement?</strong></p> <p>A: The manufacturer suggests using it for over three months.</p> <p><strong>Q: Is Ocutamin addictive?</strong></p> <p>A: Ocutamin is supposedly free from stimulants and thus unlikely to cause addiction even with prolonged usage. However, the maker recommends taking a two-week break after every three months.</p> <p><strong>Q: What if Ocutamin fails to work?</strong></p> <p>A: Ocutamin comes with a 60-day money-back guarantee. Customers can request a refund if they experience no improvement in their vision within the stipulated period.</p> <h1><strong>Pricing</strong></h1> <p>Ocutamin is only available through the official website. The manufacturer warns against buying from third parties. Customers can buy a one-month- six-month package depending on their budget. However, multiple buys come with free shipping and price reduction.</p> <p>Ocutamin is being sold currently at a discount offer. The pricing of Ocutamin is as follows:</p> <ul> <li><strong>Order one bottle of Ocutamin and pay $69.00 and a small shipping fee. You save $30 off the regular retail price of $99.</strong></li> <li><strong>Three-bottle bundle and pay $59.00 each (order total $177). You save $120 off the regular retail price of $297. There&rsquo;s free US shipping included with your order.</strong></li> <li><strong>A six-bottle bundle is $49.00 each (order total $294). You save $300 off the regular retail price of $594. There&rsquo;s free US shipping included with your order.</strong></li> </ul> <h2><strong>Conclusion</strong></h2> <p>Ocutamin is a dietary supplement that promotes the health of the macular, retina, and optic nerve. Ocutamin's makers also assert that it can enhance vision and lower the risk of age-related eye conditions. However, these statements are not backed by any scientific data. 
Ocutamin's long-term safety is also unknown because peer evaluations have not endorsed it. This supplement should not be taken by women who are pregnant, nursing, under 18, or who have a significant medical condition.</p> <h2 style="text-align: center;"><strong><a href="https://www.globalfitnessmart.com/get-ocutamin">Exclusive Details: *Ocutamin* Read More Details on Official Website USA!</a></strong></h2> 
href="https://bitbucket.org/ocutamin/ocutamin/issues/2/ocutamin-reviews-updated-2024-do-not-buy">https://bitbucket.org/ocutamin/ocutamin/issues/2/ocutamin-reviews-updated-2024-do-not-buy</a></p> <p><a href="https://medium.com/@ocutaminofficial/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work-54c725b1b601">https://medium.com/@ocutaminofficial/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-really-work-54c725b1b601</a></p> <p><a href="https://followme.tribe.so/post/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-rea--6593bd0602d8d6065bff9e12">https://followme.tribe.so/post/ocutamin-work-to-promote-restores-eyesight-united-states-canada-does-it-rea--6593bd0602d8d6065bff9e12</a></p> <p><a href="https://leetcode.com/discuss/interview-question/4491198/Ocutamin-USA-*IS-Legit*-2024-Updated-Report!">https://leetcode.com/discuss/interview-question/4491198/Ocutamin-USA-*IS-Legit*-2024-Updated-Report!</a></p> <p><a href="https://bookshop.org/wishlists/7b030215c10d2bce3555aaa3b68625bc343bab23">https://bookshop.org/wishlists/7b030215c10d2bce3555aaa3b68625bc343bab23</a></p> <p><a href="https://wandering.flarum.cloud/d/35304-ocutamin-usa-is-legit-2024-updated-report">https://wandering.flarum.cloud/d/35304-ocutamin-usa-is-legit-2024-updated-report</a></p> <p><a href="https://community.thebatraanumerology.com/post/ocutamin---usa-is-legit-2024-updated-report-6593c1648f5b2c0a5837c75d">https://community.thebatraanumerology.com/post/ocutamin---usa-is-legit-2024-updated-report-6593c1648f5b2c0a5837c75d</a></p> <p><a href="https://sketchfab.com/3d-models/ocutamin-usa-is-legit-2024-updated-report-4c2dd7484e7c405b8cbfb4b2c07b7793">https://sketchfab.com/3d-models/ocutamin-usa-is-legit-2024-updated-report-4c2dd7484e7c405b8cbfb4b2c07b7793</a></p> </div>
ocutaminofficial/Ocutamin-review
[ "region:us" ]
2024-01-02T11:22:53+00:00
{}
2024-01-02T11:23:05+00:00
[]
[]
TAGS #region-us
<div class="card-layout-item" data-background="{}" data-pm-slice="2 2 [&quot;document&quot;,{&quot;docId&quot;:&quot;utelmcg25ai30qz&quot;,&quot;background&quot;:{&quot;type&quot;:&quot;none&quot;},&quot;docFlags&quot;:{&quot;cardLayoutsEnabled&quot;:true},&quot;format&quot;:null,&quot;customCode&quot;:{},&quot;settings&quot;:{},&quot;generateStatus&quot;:null,&quot;generateInfo&quot;:{}},&quot;card&quot;,{&quot;id&quot;:&quot;fjs5psn5copsyz4&quot;,&quot;previewContent&quot;:null,&quot;background&quot;:{&quot;type&quot;:&quot;none&quot;},&quot;container&quot;:{},&quot;cardSize&quot;:&quot;default&quot;,&quot;layout&quot;:&quot;blank&quot;,&quot;layoutTemplateColumns&quot;:null}]"> <p><a href="URL </a>asserts itself as the inaugural all-natural solution designed to enhance vision without the need for medications or risky surgical procedures. It addresses the underlying factors contributing to poor eyesight, aiming to rectify issues solely through the use of natural ingredients.</p> <h2><a href="URL Official Website -- Order Now}</strong></a></h2> <h2><strong>️● For Order Official Website - <a href="URL/URL /><strong>️● Item Name: &mdash; {<a href="URL /><strong>️● Ingredients: &mdash; All Natural</strong><br /><strong>️● Incidental Effects: &mdash; NA</strong><br /><strong>️● Accessibility: &mdash; <a href="URL <h2><a href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a><br /><a href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a><br /><a href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a></h2> <h2><strong>What is <a href="URL Dietary Supplement?</strong></h2> <p><a href="URL is a daily supplement claiming to improve and fortify eye health. The formula is doctor-formulated and contains various nutrients to address the root of poor sight. The supplement is easy to swallow and can provide quality results in days.</p> <p>According to the <a href="URL website, the supplement has eight science-approved ingredients to manage eye issues. It is purportedly a safe, affordable, and effective solution to worsening eye health. It can prevent users from undergoing expensive Laser Eye Surgery (LASIK) or using contact lenses for the rest of their lives.</p> <p>A former eye specialist Dr. Dean Avant is the formulator of <a href="URL He experienced failing sight despite his knowledge and expertise. With another researcher, he discovered certain nutrients, including lutein and quercetin, that nurture the eyes and restore sight quickly.Today, thousands have tried the <a href="URL supplement, supposedly restoring their vision. The supplement is ideal for adults of all ages.</p> <h2 style="text-align: center;"><a href="URL OFFER)Click Here : "Ocutamin USA"Official Website!</strong></a></h2> <h2><strong>How Does <a href="URL Work?</strong></h2> <p><a href="URL creator points out that modern problems like excessive use of computers, laptops, mobile phones, and TV is the primary cause of eye problems. In addition, environmental toxins, UV rays, foods, and water can damage the eyes.</p> <p><a href="URL formulator reasons that ancestors enjoyed laser-sharp sight despite their age. They needed unfailing sight to gather food and protect themselves from animals. How did they maintain quality sight? Below is how <a href="URL can support and restore sight</p> <p><strong>Nourish the Eyes</strong> &ndash; Due to poor dietary patterns; most Americans cannot get sufficient vision-improving nutrients. Many homes eat junk and processed foods that increase inflammation and toxins in the eyes. 
<a href="URL has eight active ingredients that nourish the different eye cells, improving their function. The supplement can fight eye malnourishment.</p> <p><strong>Clear Toxins</strong> &ndash; The environment is full of toxins. Avoiding some of these contaminants is impossible because they are in the air, foods, medicine, and cleaning products. <a href="URL maker lists organophosphate (OP) as the most dangerous toxin that can damage the eye cells. The supplement has nutrients that enhance the cleansing and detoxification process. It can aid the body in eliminating toxins, thus improving sight.</p> <p><strong>Fight Optic Atrophy</strong> &ndash; <a href="URL creator claims that most people do not utilize the eyes as required leading to optic atrophy. Studies show that people using their eyes actively, indoors and outdoors, train the different cells to become powerful. The supplement may strengthen the different eye parts.</p> <p><strong>Refine Blood Circulation &ndash;</strong> Impaired blood flow in the eye restricts nutrient and oxygen intake. <a href="URL can strengthen the eye capillaries and arteries, thus advancing blood circulation. The maker claims it may restore crystal-clear sight and prevent eye cells from dying.</p> <p><strong>Improve Cellular Health</strong> &ndash; Some <a href="URL ingredients are designed to support cellular regeneration and revitalization. It works by repairing different cells and preventing cellular decay. Consequently, it may protect the eyes from macular degeneration, cataracts, and other age-related sight problems.</p> <h2><strong>Benefits Of Using <a href="URL <p>OCUTAMIN's distinctive formulation offers a range of benefits that contribute to improved eye health and enhanced vision. These advantages include:</p> <p><strong>Support Against Digital Eye Strain</strong>: In today's digital age, prolonged screen exposure often leads to digital eye strain. OCUTAMIN's blend of nutrients is designed to alleviate discomfort and mitigate the effects of eye strain associated with screen use.</p> <p><strong>Protection from Age-related Vision Decline</strong>: The potent antioxidants found in OCUTAMIN, such as lutein and zeaxanthin, serve as a defense against age-related vision decline, fostering long-term eye health.</p> <p><strong>Enhanced Night Vision</strong>: Featuring bilberry extract as a key component, OCUTAMIN draws on traditional uses to enhance night vision, allowing for clearer visibility in low-light conditions.</p> <p><strong>Overall Visual Clarity:</strong> By supplying essential nutrients crucial for optimal eye function, OCUTAMIN may contribute to improved visual clarity and focus. This support helps you navigate the world with increased confidence.</p> <h2 style="text-align: center;"><a href="URL PROMO[Limited Discount]: "Ocutamin USA"Official Website!</strong></a></h2> <h1><strong><a href="URL Ingredients</strong></h1> <p><a href="URL is rich in natural ingredients that have undergone extensive research to affirm their effectiveness in enhancing vision. The different ingredients are purportedly in approved dosages and quantities to give users rapid results. The maker boldly claims that you can experience an improvement in eye health within a few days. Below are some of the active ingredients and their role in boosting sight.</p> <p><strong>Quercetin</strong></p> <p><a href="URL argues that most eye problems emanate from high toxin levels. The environment contains various chemicals, including OP, linked to severe vision problems. 
Scholarly studies show that people exposed to organophosphate have sight defects, including retinal degeneration, optic nerve atrophy, blurred vision, astigmatism, myopia, and optic disc edema.</p> <p>Peer-reviewed studies show that quercetin may improve the strength and functions of neurotransmitters inside the retina. Additionally, the nutrient may restore sight, prevent optic atrophy, and enhance overall cellular health.</p> <p><strong>Bilberry Fruit</strong></p> <p>There are various scientific proofs that bilberry can improve vision. Historical reports show that British Royal Air Force pilots consumed the inky blue fruit to enhance their night vision and combat their enemies.</p> <p>Bilberry is rich in anti-inflammatory and antioxidant components. It can eliminate pollutants reducing vision health. It can nourish every ocular cell, thus boosting its functions. Bilberry fruit can relax the blood capillaries in the eyes, thus enhancing nutrient intake and waste removal.</p> <p><strong>Lutein</strong></p> <p><a href="URL contains lutein from Marigold flowers. The nutrient is a natural anti-inflammatory that can combat optic atrophy problems. Studies show it can aid in the removal of toxins. Similarly, it can protect the eyes from UV rays and harmful blue wavelength light.Lutein can strengthen the muscles in the optic nerve, thus boosting its function. It can also enhance communication between the eyes and brain, enhancing vision.</p> <h2><strong><a href="URL Dosage and Side Effects</strong></h2> <p><a href="URL recommends using one capsule daily. Customers can use the supplement at any time of the day. However, users should stay within the suggested dosages.</p> <p>Side Effects &ndash; <a href="URL is natural and manufactured using pure ingredients. The formulator claims it cannot give users any side effects. Still, the manufacturer recommends seeking medical authorization before using the supplement. Consumers who experience adverse side effects should seek medical help and stop the dosage.</p> <p>Place your order today before stock runs out!</p> <h2><strong>Pros</strong></h2> <p><strong>Clear vision:</strong> As the distortion, blurriness, flashes, and floaters gradually lessen, the clarity of vision is no longer an issue.</p> <p><strong>No surgery:</strong> If the damage can be repaired naturally, there is no need for surgery, which can save time and money.</p> <p><strong>No glasses or lenses:</strong> After taking Ocutamin for a while, the need for vision aids decreases.</p> <p><strong>Protection from the sun:</strong> Ocutamin components also assist to lessen light sensitivity and sun damage.</p> <p><strong>Better vision and focus:</strong> The eyes can see clearly and with complete focus.</p> <h2><strong>Cons</strong></h2> <p><strong>Limited accessibility:</strong> this product may only be purchased online and is not offered by nearby vendors, pharmacies, or shops.</p> <p><strong>Variable results:</strong> depending on how the body responds, results may vary across users and take many months.</p> <p><strong>Not a medication:</strong> Ocutamin is a dietary supplement that promotes eye health but is not a medication. 
It does not treat anything and cannot be used in place of medicine.</p> <h2 style="text-align: center;"><strong><a href="URL PROMO: Get Ocutamin at the Lowest Discounted Price Online</a></strong></h2> <h2><strong>FAQs about Ocutamin Supplement</strong></h2> <p><strong>Q: What causes poor sight?</strong></p> <p>A: According to Ocutamin, too much screen time, low water intake, poor diet, sleep deficiency, and unhealthy lifestyle habits are the leading causes of eye problems.</p> <p><strong>Q: Can I inherit eye problems?</strong></p> <p>A: Some eye issues like hyperopia and myopia are genetically linked. However, experts claim you can prevent the development of these eye problems by maintaining a healthy diet and good eye hygiene.</p> <p><strong>Q: Can Ocutamin improve eyesight?</strong></p> <p>A: Ocutamin is not a quick fix to better vision. The manufacturer recommends using it consistently for extended periods to nourish the eyes and improve sight.</p> <p><strong>Q: Does Ocutamin interact with other medications?</strong></p> <p>A: The maker recommends seeking medical guidance before using the supplement.</p> <p><strong>Q: Who can use the Ocutamin supplement?</strong></p> <p>A: Ocutamin is marketed for anyone experiencing vision problems, including blurry eyes and poor sight.</p> <p><strong>Q: Can children use Ocutamin?</strong></p> <p>A: No, Ocutamin is only for adult men and women.</p> <p><strong>Q: What ingredients are inside Ocutamin?</strong></p> <p>A: Ocutamin has eight ingredients, including bilberry fruit extract, lutein, and quercetin.</p> <p><strong>Q: How long should I use the Ocutamin supplement?</strong></p> <p>A: The manufacturer suggests using it for over three months.</p> <p><strong>Q: Is Ocutamin addictive?</strong></p> <p>A: Ocutamin is supposedly free from stimulants and thus unlikely to cause addiction even with prolonged usage. However, the maker recommends taking a two-week break after every three months.</p> <p><strong>Q: What if Ocutamin fails to work?</strong></p> <p>A: Ocutamin comes with a 60-day money-back guarantee. Customers can request a refund if they experience no improvement in their vision within the stipulated period.</p> <h1><strong>Pricing</strong></h1> <p>Ocutamin is only available through the official website. The manufacturer warns against buying from third parties. Customers can buy a one-month- six-month package depending on their budget. However, multiple buys come with free shipping and price reduction.</p> <p>Ocutamin is being sold currently at a discount offer. The pricing of Ocutamin is as follows:</p> <ul> <li><strong>Order one bottle of Ocutamin and pay $69.00 and a small shipping fee. You save $30 off the regular retail price of $99.</strong></li> <li><strong>Three-bottle bundle and pay $59.00 each (order total $177). You save $120 off the regular retail price of $297. There&rsquo;s free US shipping included with your order.</strong></li> <li><strong>A six-bottle bundle is $49.00 each (order total $294). You save $300 off the regular retail price of $594. There&rsquo;s free US shipping included with your order.</strong></li> </ul> <h2><strong>Conclusion</strong></h2> <p>Ocutamin is a dietary supplement that promotes the health of the macular, retina, and optic nerve. Ocutamin's makers also assert that it can enhance vision and lower the risk of age-related eye conditions. However, these statements are not backed by any scientific data. Ocutamin's long-term safety is also unknown because peer evaluations have not endorsed it. 
This supplement should not be taken by women who are pregnant, nursing, under 18, or who have a significant medical condition.</p> <h2 style="text-align: center;"><strong><a href="URL Details: *Ocutamin* Read More Details on Official Website USA!</a></strong></h2> <h2># READ MORE</h2> <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL <p><a href="URL/URL </div>
[ "# READ MORE</h2>\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n</div>" ]
[ "TAGS\n#region-us \n", "# READ MORE</h2>\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n</div>" ]
[ 6, 241 ]
[ "passage: TAGS\n#region-us \n# READ MORE</h2>\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n<p><a href=\"URL/URL\n</div>" ]
2d3a47af7cbfa6db4868f657ebd72edd1a447672
# Dataset Card for "aug_train1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_1
[ "region:us" ]
2024-01-02T13:31:27+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 364677474.1, "num_examples": 2700}, {"name": "test", "num_bytes": 36864625.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T13:31:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train1" More Information needed
[ "# Dataset Card for \"aug_train1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train1\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train1\"\n\nMore Information needed" ]
961888726137e06f43ae0d1dbd33ed187362e09f
# Dataset Card for Spoken-SQuAD ## Dataset Description - **Repository:** [https://github.com/chiahsuan156/Spoken-SQuAD](https://github.com/chiahsuan156/Spoken-SQuAD) - **Paper:** [https://arxiv.org/abs/1804.00320](https://arxiv.org/abs/1804.00320) ## Citation ```bibtex @article{lee2018spoken, title={Spoken SQuAD: A Study of Mitigating the Impact of Speech Recognition Errors on Listening Comprehension}, author={Lee, Chia-Hsuan and Wu, Szu-Lin and Liu, Chi-Liang and Lee, Hung-yi}, journal={Proc. Interspeech 2018}, pages={3459--3463}, year={2018} } ```
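## Usage

A minimal loading sketch, based on the configuration block of this repo, which defines a `default` config (train/validation splits) plus `WER44` and `WER54` test configs; the latter two are presumably the noisier ASR test sets studied in the paper.

```python
from datasets import load_dataset

# Hedged sketch: config names are taken from this repo's configuration block;
# WER44/WER54 are assumed to correspond to the two ASR noise levels in the paper.
clean = load_dataset("alinet/spoken_squad")           # 'train' and 'validation' splits
noisy = load_dataset("alinet/spoken_squad", "WER44")  # single 'test' split
print(clean)
print(noisy["test"][0])
```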
alinet/spoken_squad
[ "task_categories:question-answering", "size_categories:10K<n<100K", "language:en", "license:unknown", "arxiv:1804.00320", "region:us" ]
2024-01-02T13:43:11+00:00
{"language": ["en"], "license": "unknown", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "pretty_name": "Spoken-SQuAD", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.json"}, {"split": "validation", "path": "test.json"}]}, {"config_name": "WER44", "data_files": [{"split": "test", "path": "test_WER44.json"}]}, {"config_name": "WER54", "data_files": [{"split": "test", "path": "test_WER54.json"}]}]}
2024-01-23T16:54:56+00:00
[ "1804.00320" ]
[ "en" ]
TAGS #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-unknown #arxiv-1804.00320 #region-us
# Dataset Card for Spoken-SQuAD ## Dataset Description - Repository: URL - Paper: URL
[ "# Dataset Card for Spoken-SQuAD", "## Dataset Description\n\n- Repository: URL\n- Paper: URL" ]
[ "TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-unknown #arxiv-1804.00320 #region-us \n", "# Dataset Card for Spoken-SQuAD", "## Dataset Description\n\n- Repository: URL\n- Paper: URL" ]
[ 49, 11, 14 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-unknown #arxiv-1804.00320 #region-us \n# Dataset Card for Spoken-SQuAD## Dataset Description\n\n- Repository: URL\n- Paper: URL" ]
67265af29ef78fe2eb84aa650f1738ae545397aa
# Dataset Card for "aug_train2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_2
[ "region:us" ]
2024-01-02T13:43:55+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 363575582.1, "num_examples": 2700}, {"name": "test", "num_bytes": 37966517.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T13:44:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train2" More Information needed
[ "# Dataset Card for \"aug_train2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train2\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train2\"\n\nMore Information needed" ]
073d5608133d715723734a42ab89a8ad7e856f5d
# Malaysian Youtube Audio Instructions Convert from speech to text using Whisper Large V3 after that generate multiturn using Mixtral Instruct. - [mixtral-audio-instruction.jsonl](mixtral-audio-instruction.jsonl), 293752 rows, 2.1 GB. ## Example data ``` {'context': 'anda tahu keuntungan boleh lebih tinggi daripada keuntungan kewangan rumah maka saya tidak akan mencari dalam akaun saya akan mencari ke dalam ethereum atau beberapa crypto punks bergantung pada faktor risiko anda kerana rumah kajang dihantar tidak mengganggu dsr saya sejauh ini jadi sekarang apa posisi saya untuk mendapatkan kewangan ketiga jadi mungkin setelah melihat sekeliling saya menemui seorang penjual yang dapat menutupi perhubungan tetapi bank hanya menerima 70% dari itu saya boleh membayar perbezaan dengan menggunakan wang ini kerana sekali lagi ia menyusahkan saya dan aset tetapi jika anda tidak selesa dengan mencari', 'chat': [{'role': 'user', 'content': 'Why might someone consider investing in cryptocurrencies like Ethereum instead of traditional financial investments such as real estate?', 'content_ms': 'Mengapakah seseorang mungkin mempertimbangkan untuk melabur dalam mata wang kripto seperti Ethereum dan bukannya pelaburan kewangan tradisional seperti hartanah?'}, {'role': 'assistant', 'content': 'There are several reasons why someone might consider investing in cryptocurrencies like Ethereum instead of traditional financial investments such as real estate:\n\n1. Potential for higher returns: Cryptocurrencies have the potential for much higher returns compared to traditional financial investments. While the real estate market can be profitable, the potential for exponential growth in the cryptocurrency market is significantly higher.\n2. Diversification: Investing in cryptocurrencies can provide a diversification benefit to an investment portfolio. By investing in a variety of assets, including cryptocurrencies, investors can spread their risk and potentially increase their overall returns.\n3. Liquidity: Cryptocurrencies are generally more liquid than real estate investments. It is easier to buy and sell cryptocurrencies, and they can be quickly converted into cash.\n4. Accessibility: Investing in cryptocurrencies is generally more accessible than investing in real estate. There are fewer barriers to entry, and it is easier to get started with a smaller amount of money.\n5. Decentralization: Cryptocurrencies are decentralized, meaning they are not controlled by any government or financial institution. This can provide a level of independence and autonomy that is not possible with traditional financial investments.\n\nHowever, it is important to note that investing in cryptocurrencies is also riskier than traditional financial investments. The value of cryptocurrencies can be highly volatile, and there is a risk of losing all or a significant portion of the investment. Therefore, it is important to carefully consider your own risk tolerance and financial situation before investing in cryptocurrencies.', 'content_ms': 'Terdapat beberapa sebab mengapa seseorang mungkin mempertimbangkan untuk melabur dalam mata wang kripto seperti Ethereum dan bukannya pelaburan kewangan tradisional seperti hartanah:\n\n1. Potensi untuk pulangan yang lebih tinggi: Mata wang kripto mempunyai potensi untuk pulangan yang lebih tinggi berbanding dengan pelaburan kewangan tradisional. Walaupun pasaran hartanah boleh menguntungkan, potensi pertumbuhan eksponen dalam pasaran mata wang kripto adalah jauh lebih tinggi.\n2. 
Kepelbagaian: Melabur dalam mata wang kripto boleh memberikan manfaat kepelbagaian kepada portfolio pelaburan. Dengan melabur dalam pelbagai aset, termasuk mata wang kripto, pelabur boleh menyebarkan risiko mereka dan berpotensi meningkatkan pulangan keseluruhan mereka.\n3. Kecairan: Mata wang kripto biasanya lebih cair daripada pelaburan hartanah. Lebih mudah untuk membeli dan menjual mata wang kripto, dan ia boleh ditukar dengan cepat kepada wang tunai.\n4. Kebolehcapaian: Melabur dalam mata wang kripto secara umumnya lebih mudah diakses daripada melabur dalam hartanah. Terdapat lebih sedikit halangan untuk masuk, dan lebih mudah untuk bermula dengan jumlah wang yang lebih kecil.\n5. Desentralisasi: Mata wang kripto adalah terdesentralisasi, bermakna ia tidak dikawal oleh mana-mana kerajaan atau institusi kewangan. Ini boleh memberikan tahap kebebasan dan autonomi yang tidak mungkin dilakukan dengan pelaburan kewangan tradisional.\n\nWalau bagaimanapun, adalah penting untuk ambil perhatian bahawa melabur dalam mata wang kripto juga lebih berisiko daripada pelaburan kewangan tradisional. Nilai mata wang kripto boleh menjadi sangat tidak menentu, dan terdapat risiko kehilangan semua atau sebahagian besar pelaburan. Oleh itu, adalah penting untuk mempertimbangkan dengan teliti toleransi risiko dan keadaan kewangan anda sendiri sebelum melabur dalam mata wang kripto.'}, {'role': 'user', 'content': 'mengapa orang menggunakan blockchain untuk mengurus siasia digital', 'content_ms': 'mengapa orang menggunakan blockchain untuk menguruskan sisa digital'}, {'role': 'assistant', 'content': 'There are several reasons why people use blockchain technology to manage digital assets:\n\n1. Security: Blockchain technology provides a high level of security for managing digital assets. The decentralized nature of blockchain makes it difficult for hackers to manipulate or steal digital assets.\n2. Transparency: Blockchain provides a transparent record of all transactions related to a digital asset. This makes it easier to track ownership and verify the authenticity of the asset.\n3. Accessibility: Blockchain technology allows for the easy transfer and management of digital assets, making them accessible to anyone with an internet connection.\n4. Immutability: Once data is recorded on a blockchain, it cannot be altered or deleted. This ensures that the record of ownership and transactions related to a digital asset is permanent and cannot be tampered with.\n5. Programmability: Blockchain technology allows for the creation of smart contracts, which are self-executing contracts with the terms of the agreement directly written into code. This enables automatic and secure transfer of digital assets based on predefined conditions.\n6. Interoperability: Blockchain technology enables the interoperability of different digital assets and systems, allowing for seamless transfer and exchange of value.\n\nOverall, the use of blockchain technology for managing digital assets provides a secure, transparent, and decentralized system that is accessible to anyone with an internet connection. This makes it an attractive solution for managing various types of digital assets, including cryptocurrencies, digital art, and other digital assets.', 'content_ms': 'Terdapat beberapa sebab mengapa orang menggunakan teknologi blockchain untuk mengurus aset digital:\n\n1. Keselamatan: Teknologi Blockchain menyediakan tahap keselamatan yang tinggi untuk mengurus aset digital. 
Sifat blockchain yang tidak berpusat menyukarkan penggodam untuk memanipulasi atau mencuri aset digital.\n2. Ketelusan: Blockchain menyediakan rekod yang telus bagi semua transaksi yang berkaitan dengan aset digital. Ini memudahkan untuk menjejak pemilikan dan mengesahkan ketulenan aset.\n3. Kebolehcapaian: Teknologi Blockchain membolehkan pemindahan dan pengurusan aset digital dengan mudah, menjadikannya boleh diakses oleh sesiapa sahaja yang mempunyai sambungan internet.\n4. Ketidakbolehubahan: Setelah data direkodkan pada rantaian blok, ia tidak boleh diubah atau dipadamkan. Ini memastikan rekod pemilikan dan transaksi yang berkaitan dengan aset digital adalah kekal dan tidak boleh diusik.\n5. Kebolehprograman: Teknologi Blockchain membolehkan penciptaan kontrak pintar, yang merupakan kontrak yang dilaksanakan sendiri dengan syarat perjanjian yang ditulis terus ke dalam kod. Ini membolehkan pemindahan aset digital secara automatik dan selamat berdasarkan syarat yang telah ditetapkan.\n6. Saling kendali: Teknologi Blockchain membolehkan kesalingoperasian aset dan sistem digital yang berbeza, membolehkan pemindahan dan pertukaran nilai yang lancar.\n\nSecara keseluruhannya, penggunaan teknologi blockchain untuk mengurus aset digital menyediakan sistem yang selamat, telus dan terpencar yang boleh diakses oleh sesiapa sahaja yang mempunyai sambungan internet. Ini menjadikannya penyelesaian yang menarik untuk mengurus pelbagai jenis aset digital, termasuk mata wang kripto, seni digital dan aset digital lain.'}], 'filename': 'output-audio/1-0-0.mp3'} ``` Notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/chatbot/malaysian-youtube-audio-instructions
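To make the format above concrete, here is a minimal reading sketch (not from the original notebooks); it assumes only the `context`, `chat`, and `filename` keys shown in the example record.

```python
import json

# Stream the file row by row rather than loading all 2.1 GB at once.
# Field names ('context', 'chat', 'filename') match the example record above.
with open("mixtral-audio-instruction.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        print(row["filename"])        # source audio clip
        print(row["context"][:100])   # Whisper Large V3 transcript
        for turn in row["chat"]:      # Mixtral-generated multiturn conversation
            print(turn["role"], ":", turn["content"][:80])
        break  # inspect only the first record
```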
mesolitica/malaysian-youtube-audio-instructions
[ "language:ms", "license:mit", "region:us" ]
2024-01-02T13:48:43+00:00
{"language": ["ms"], "license": "mit"}
2024-02-02T08:58:25+00:00
[]
[ "ms" ]
TAGS #language-Malay (macrolanguage) #license-mit #region-us
# Malaysian Youtube Audio Instructions Convert from speech to text using Whisper Large V3 after that generate multiturn using Mixtral Instruct. - URL, 293752 rows, 2.1 GB. ## Example data Notebooks at URL
[ "# Malaysian Youtube Audio Instructions\n\nConvert from speech to text using Whisper Large V3 after that generate multiturn using Mixtral Instruct.\n\n- URL, 293752 rows, 2.1 GB.", "## Example data\n\n\n\nNotebooks at URL" ]
[ "TAGS\n#language-Malay (macrolanguage) #license-mit #region-us \n", "# Malaysian Youtube Audio Instructions\n\nConvert from speech to text using Whisper Large V3 after that generate multiturn using Mixtral Instruct.\n\n- URL, 293752 rows, 2.1 GB.", "## Example data\n\n\n\nNotebooks at URL" ]
[ 21, 46, 8 ]
[ "passage: TAGS\n#language-Malay (macrolanguage) #license-mit #region-us \n# Malaysian Youtube Audio Instructions\n\nConvert from speech to text using Whisper Large V3 after that generate multiturn using Mixtral Instruct.\n\n- URL, 293752 rows, 2.1 GB.## Example data\n\n\n\nNotebooks at URL" ]
611875550ddc9f8922588d206919ddbb4c790658
# Dataset Card for "aug_train3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_3
[ "region:us" ]
2024-01-02T13:56:29+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 360855280.1, "num_examples": 2700}, {"name": "test", "num_bytes": 40686819.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T13:56:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train3" More Information needed
[ "# Dataset Card for \"aug_train3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train3\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train3\"\n\nMore Information needed" ]
a991496f6a092a90ac37fe8c1453317fb130d1d5
# Dataset Card for "aug_train4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_4
[ "region:us" ]
2024-01-02T14:09:11+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 357988377.1, "num_examples": 2700}, {"name": "test", "num_bytes": 43553722.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T14:09:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train4" More Information needed
[ "# Dataset Card for \"aug_train4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train4\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train4\"\n\nMore Information needed" ]
8fb31baa59cc30e25f22feafef86b605b2ea9b97
---
license: mit
task_categories:
- text-generation
language:
- en
---

Alpaca-format compatible UltraFeedback for SFT.
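A hedged usage sketch, assuming only the `instruction` and `output` columns listed in the repo metadata, rendered with the standard no-input Alpaca template:

```python
from datasets import load_dataset

# Assumes only the 'instruction' and 'output' columns from the repo metadata.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{output}"
)

ds = load_dataset("adi-kmt/ultrafeedback_allenai_cleaned_alpaca", split="train")
print(ALPACA_TEMPLATE.format(**ds[0]))  # one formatted SFT example
```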
adi-kmt/ultrafeedback_allenai_cleaned_alpaca
[ "region:us" ]
2024-01-02T14:10:51+00:00
{"dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120293986, "num_examples": 60829}], "download_size": 69901098, "dataset_size": 120293986}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-03T16:50:30+00:00
[]
[]
TAGS #region-us
alpaca format compatible ultrafeedback for sft --- license: mit task_categories: - text-generation language: - en ---
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
b81ad671d7387af035f818433a446160c03fc427
# Dataset Card for "sft_data_phase2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
haisonle001/sft_data_phase2
[ "region:us" ]
2024-01-02T14:11:02+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train_sft", "path": "data/train_sft-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_sft", "num_bytes": 572411392.0, "num_examples": 194937}], "download_size": 259484116, "dataset_size": 572411392.0}}
2024-01-02T14:11:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sft_data_phase2" More Information needed
[ "# Dataset Card for \"sft_data_phase2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sft_data_phase2\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sft_data_phase2\"\n\nMore Information needed" ]
a4a48d9090cc495e25d03697613719778fb5178d
# KoLLaVA-v1.5 Visual Instruct 581K Dataset Card
This dataset was built by filtering the required data from the instruction-following data of [LLaVA-v1.5](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) and translating it into Korean (via DeepL). Please refer to the [KoLLaVA](https://github.com/tabtoyou/KoLLaVA) repo for usage instructions. Work in progress.
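A minimal loading sketch, assuming only the Hub repo id given on this card; the authoritative usage instructions live in the KoLLaVA repo.

```python
from datasets import load_dataset

# Hedged sketch: assumes the dataset loads directly through the Hub id on this
# card; see the KoLLaVA GitHub repo for the official usage.
ds = load_dataset("tabtoyou/KoLLaVA-v1.5-Instruct-581k")
print(ds)                           # inspect available splits and columns
print(next(iter(ds.values()))[0])   # first Korean instruction-following sample
```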
tabtoyou/KoLLaVA-v1.5-Instruct-581k
[ "task_categories:visual-question-answering", "size_categories:100K<n<1M", "language:ko", "license:cc-by-nc-4.0", "region:us" ]
2024-01-02T14:18:55+00:00
{"language": ["ko"], "license": "cc-by-nc-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["visual-question-answering"]}
2024-01-02T15:07:53+00:00
[]
[ "ko" ]
TAGS #task_categories-visual-question-answering #size_categories-100K<n<1M #language-Korean #license-cc-by-nc-4.0 #region-us
# KoLLaVA-v1.5 Visual Instruct 581K Dataset Card LLaVA-v1.5의 Instruction-following Data에서 필요한 데이터를 필터링하고, 한국어로 번역한 데이터셋입니다. (feat. DeepL) 사용 방법은 KoLLaVA repo를 참고해주세요. 작성중..
[ "# KoLLaVA-v1.5 Visual Instruct 581K Dataset Card\nLLaVA-v1.5의 Instruction-following Data에서 필요한 데이터를 필터링하고, 한국어로 번역한 데이터셋입니다. (feat. DeepL)\n\n사용 방법은 KoLLaVA repo를 참고해주세요.\n\n작성중.." ]
[ "TAGS\n#task_categories-visual-question-answering #size_categories-100K<n<1M #language-Korean #license-cc-by-nc-4.0 #region-us \n", "# KoLLaVA-v1.5 Visual Instruct 581K Dataset Card\nLLaVA-v1.5의 Instruction-following Data에서 필요한 데이터를 필터링하고, 한국어로 번역한 데이터셋입니다. (feat. DeepL)\n\n사용 방법은 KoLLaVA repo를 참고해주세요.\n\n작성중.." ]
[ 49, 68 ]
[ "passage: TAGS\n#task_categories-visual-question-answering #size_categories-100K<n<1M #language-Korean #license-cc-by-nc-4.0 #region-us \n# KoLLaVA-v1.5 Visual Instruct 581K Dataset Card\nLLaVA-v1.5의 Instruction-following Data에서 필요한 데이터를 필터링하고, 한국어로 번역한 데이터셋입니다. (feat. DeepL)\n\n사용 방법은 KoLLaVA repo를 참고해주세요.\n\n작성중.." ]
8837137dfc54e3871b060e19f16b587ce1601d27
# Dataset information

Dataset concatenating all NER datasets, available in French and open-source, for 4 entities (LOC, PER, ORG, MISC).
There are a total of **384,773** rows, of which 328,757 are for training, 24,131 for validation and 31,885 for testing.
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/NER_en/) or [French](https://blog.vaniila.ai/NER/).

# Usage

```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/frenchNER_4entities")
```

# Dataset

## Details of rows

| Dataset Original | Splits | Note |
| ----------- | ----------- | ----------- |
| [Multiconer](https://huggingface.co/datasets/aashsach/multiconer2)| 16,548 train / 857 validation / 0 test | In practice, we use the original validation set as test set<br> and create a new validation set from 5% of the train set, i.e.<br> 15,721 train / 827 validation / 857 test|
| [Multinerd](https://huggingface.co/datasets/Babelscape/multinerd)| 140,880 train / 17,610 val / 17,695 test | |
| [Pii-masking-200k](https://huggingface.co/datasets/ai4privacy/pii-masking-200k)| 61,958 train / 0 validation / 0 test | Only dataset without duplicate data or leaks |
| [Wikiner](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr)| 120,682 train / 0 validation / 13,410 test | In practice, a 5% validation set is created from the train set, i.e.<br> 113,296 train / 5,994 validation / 13,393 test |

## Removing duplicate data and leaks

The sum of the values of the datasets listed here gives the following result:

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 331855
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 24431
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 31945
    })
})
```

However, a data item in the training split of dataset A may not be in A's test split, yet may appear in the test split of dataset B, creating a leak when we build the concatenated A+B dataset. The same logic applies to duplicate data. So we need to make sure we remove them.
After our clean-up, we finally have the following numbers:

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 328757
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 24131
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 31885
    })
})
```

Note: in practice, the test split contains one line that we failed to deduplicate, i.e. 0.003%. 
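To make the leak-removal step concrete, here is a hedged sketch (not the exact script used for this dataset); it assumes each source corpus is a `DatasetDict` with train/validation/test splits and runs single-process, since the filter closure keeps state.

```python
from datasets import concatenate_datasets

# Hedged sketch of cross-corpus leak and duplicate removal. Assumes every
# source corpus has train/validation/test splits; run with a single process,
# because the closure mutates the shared `seen` set.
def remove_leaks_and_duplicates(corpora):
    seen = set()
    for dd in corpora:                 # every test row blocks train/val rows
        for row in dd["test"]:
            seen.add(tuple(row["tokens"]))

    def keep(row):
        key = tuple(row["tokens"])
        if key in seen:
            return False               # duplicate or train/test leak
        seen.add(key)                  # also deduplicates within the split
        return True

    train = concatenate_datasets([dd["train"] for dd in corpora]).filter(keep)
    validation = concatenate_datasets([dd["validation"] for dd in corpora]).filter(keep)
    return train, validation
```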
### Details of entities (after cleaning) <table> <thead> <tr> <th><br>Datasets</th> <th><br>Splits</th> <th><br>O</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> <th><br>MISC</th> </tr> </thead> <tbody> <tr> <td rowspan="3"><br>Multiconer</td> <td><br>train</td> <td><br>184,060</td> <td><br>18,060</td> <td><br>7,165</td> <td><br>6,967</td> <td><br>16,033</td> </tr> <tr> <td><br>validation</td> <td><br>10,064</td> <td><br>1,069</td> <td><br>389</td> <td><br>328</td> <td><br>836</td> </tr> <tr> <td><br>test</td> <td><br>10,413</td> <td><br>979</td> <td><br>387</td> <td><br>381</td> <td><br>874</td> </tr> <tr> <td rowspan="3"><br>Multinerd</td> <td><br>train</td> <td><br>2,947,995</td> <td><br>149,159</td> <td><br>105,586</td> <td><br>68,821</td> <td><br>94,510</td> </tr> <tr> <td><br>validation</td> <td><br>397,409</td> <td><br>17,484</td> <td><br>13,992</td> <td><br>3,478</td> <td><br>13,557</td> </tr> <tr> <td><br>test</td> <td><br>405,176</td> <td><br>18,567</td> <td><br>14,083</td> <td><br>3,636</td> <td><br>12,710</td> </tr> <tr> <td rowspan="1"><br>Pii-masking-200k</td> <td><br>train</td> <td><br>1,785,505</td> <td><br>29,838</td> <td><br>42,154</td> <td><br>12,310</td> <td><br>619,710</td> </tr> <tr> <td rowspan="3"><br>Wikiner</td> <td><br>train</td> <td><br>2,622,132</td> <td><br>110,087</td> <td><br>131,841</td> <td><br>38,991</td> <td><br>69,241</td> </tr> <tr> <td><br>validation</td> <td><br>137,107</td> <td><br>5,481</td> <td><br>7,204</td> <td><br>2,121</td> <td><br>3,828</td> </tr> <tr> <td><br>test</td> <td><br>305,034</td> <td><br>13,324</td> <td><br>15,213</td> <td><br>3,894</td> <td><br>8,176</td> </tr> <tr> <td rowspan="3"><br>Total</td> <td><br>train</td> <td><br><b>7,539,692</b></td> <td><br><b>307,144</b></td> <td><br><b>286,746</b></td> <td><br><b>127,089</b></td> <td><br><b>799,494</b></td> </tr> <tr> <td><br>validation</td> <td><br><b>544,580</b></td> <td><br><b>24,034</b></td> <td><br><b>21,585</b></td> <td><br><b>5,927</b></td> <td><br><b>18,221</b></td> </tr> <tr> <td><br>test</td> <td><br><b>720,623</b></td> <td><br><b>32,870</b></td> <td><br><b>29,683</b></td> <td><br><b>7,911</b></td> <td><br><b>21,760</b></td> </tr> </tbody> </table> ## Columns ``` dataset_train = dataset['train'].to_pandas() dataset_train.head() tokens ner_tags dataset 0 [On, a, souvent, voulu, faire, de, La, Bruyère... [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, ... wikiner 1 [Les, améliorations, apportées, par, rapport, ... [0, 0, 0, 0, 0, 0, 4, 4, 0, 0, 0, 0, 0, 2, 2, ... wikiner 2 [Cette, assemblée, de, notables, ,, réunie, en... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, ... wikiner 3 [Wittgenstein, projetait, en, effet, d', élabo... [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... wikiner 4 [Le, premier, écrivain, à, écrire, des, fictio... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, ... 
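The per-entity totals in the table above can be checked with a simple tag count; a sketch, using the label mapping documented in the Columns section below:

```python
from collections import Counter
from datasets import load_dataset

# Sketch reproducing the 'Total' rows of the table above; the id-to-label
# mapping follows the Columns section (0=O, 1=PER, 2=ORG, 3=LOC, 4=MISC).
names = {0: "O", 1: "PER", 2: "ORG", 3: "LOC", 4: "MISC"}
ds = load_dataset("CATIE-AQ/frenchNER_4entities")
for split in ("train", "validation", "test"):
    counts = Counter(tag for tags in ds[split]["ner_tags"] for tag in tags)
    print(split, {names[t]: n for t, n in sorted(counts.items())})
```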
wikiner ``` - the `tokens` column contains the tokens - the `ner_tags` column contains the NER tags (IOB format with 0="O", 1="PER", 2="ORG", 3="LOC" and 4="MISC"; a short decoding sketch is given after the license section below) - the `dataset` column identifies the row's original dataset (if you wish to apply filters to it) ## Split - `train` corresponds to the concatenation of `multiconer` + `multinerd` + `pii-masking-200k` + `wikiner` - `validation` corresponds to the concatenation of `multiconer` + `multinerd` + `wikiner` - `test` corresponds to the concatenation of `multiconer` + `multinerd` + `wikiner` # Citations ### multiconer ``` @inproceedings{multiconer2-report, title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}}, author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin}, booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)}, year={2023}, publisher={Association for Computational Linguistics}} @article{multiconer2-data, title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}}, author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin}, year={2023}} ``` ### multinerd ``` @inproceedings{tedeschi-navigli-2022-multinerd, title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)", author = "Tedeschi, Simone and Navigli, Roberto", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-naacl.60", doi = "10.18653/v1/2022.findings-naacl.60", pages = "801--812"} ``` ### pii-masking-200k ``` @misc {ai4privacy_2023, author = { {ai4Privacy} }, title = { pii-masking-200k (Revision 1d4c0a1) }, year = 2023, url = { https://huggingface.co/datasets/ai4privacy/pii-masking-200k }, doi = { 10.57967/hf/1532 }, publisher = { Hugging Face }} ``` ### wikiner ``` @article{NOTHMAN2013151, title = {Learning multilingual named entity recognition from Wikipedia}, journal = {Artificial Intelligence}, volume = {194}, pages = {151-175}, year = {2013}, note = {Artificial Intelligence, Wikipedia and Semi-Structured Resources}, issn = {0004-3702}, doi = {https://doi.org/10.1016/j.artint.2012.03.006}, url = {https://www.sciencedirect.com/science/article/pii/S0004370212000276}, author = {Joel Nothman and Nicky Ringland and Will Radford and Tara Murphy and James R. Curran}} ``` ### frenchNER_4entities ``` @misc {frenchNER2024, author = { {BOURDOIS, Loïck} }, organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, title = { frenchNER_4entities (Revision f1e8fef) }, year = 2024, url = { https://huggingface.co/datasets/CATIE-AQ/frenchNER_4entities }, doi = { 10.57967/hf/1751 }, publisher = { Hugging Face } } ``` # License [cc-by-4.0](https://creativecommons.org/licenses/by/4.0/deed.en)
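As referenced in the Columns section above, here is a short decoding sketch; the `id2label` mapping simply restates the documented tag scheme:

```python
# Minimal sketch: convert the integer `ner_tags` back to string labels.
from datasets import load_dataset

id2label = {0: "O", 1: "PER", 2: "ORG", 3: "LOC", 4: "MISC"}

dataset = load_dataset("CATIE-AQ/frenchNER_4entities")
example = dataset["train"][0]
print(list(zip(example["tokens"], (id2label[t] for t in example["ner_tags"]))))
```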
CATIE-AQ/frenchNER_4entities
[ "task_categories:token-classification", "size_categories:100K<n<1M", "language:fr", "license:cc-by-4.0", "doi:10.57967/hf/1751", "region:us" ]
2024-01-02T14:19:25+00:00
{"language": ["fr"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["token-classification"], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "int64"}, {"name": "dataset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 166027517.81620362, "num_examples": 328757}, {"name": "validation", "num_bytes": 10651145.0, "num_examples": 24131}, {"name": "test", "num_bytes": 14093255.0, "num_examples": 31885}], "download_size": 41512813, "dataset_size": 190771917.81620362}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-02-08T17:08:48+00:00
[]
[ "fr" ]
TAGS #task_categories-token-classification #size_categories-100K<n<1M #language-French #license-cc-by-4.0 #doi-10.57967/hf/1751 #region-us
Dataset information =================== Dataset concatenating all NER datasets, available in French and open-source, for 4 entities (LOC, PER, ORG, MISC). There are a total of 384,773 rows, of which 328,757 are for training, 24,131 for validation and 31,885 for testing. Our methodology is described in a blog post available in English or French. Usage ===== Dataset ======= Details of rows --------------- Dataset Original: Multiconer, Splits: 16,548 train / 857 validation / 0 test, Note: In practice, we use the original validation set as the test set and create a new validation set from 5% of the train set, i.e. 15,721 train / 827 validation / 857 test Dataset Original: Multinerd, Splits: 140,880 train / 17,610 val / 17,695 test, Note: Dataset Original: Pii-masking-200k, Splits: 61,958 train / 0 validation / 0 test, Note: Only dataset without duplicate data or leaks Dataset Original: Wikiner, Splits: 120,682 train / 0 validation / 13,410 test, Note: In practice, a validation set is created from 5% of the train set, i.e. 113,296 train / 5,994 validation / 13,393 test Removing duplicate data and leaks --------------------------------- The sum of the values of the datasets listed here gives the following result: However, a data item from the training split of dataset A, while absent from A's test split, may be present in B's test split, creating a leak when we concatenate A and B. Note: in practice, the test split contains 1 line which we failed to deduplicate, i.e. 0.003%. ### Details of entities (after cleaning) Columns ------- * the 'tokens' column contains the tokens * the 'ner\_tags' column contains the NER tags (IOB format with 0="O", 1="PER", 2="ORG", 3="LOC" and 4="MISC") * the 'dataset' column identifies the row's original dataset (if you wish to apply filters to it) Split ----- * 'train' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'pii-masking-200k' + 'wikiner' * 'validation' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiner' * 'test' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiner' s ### multiconer ### multinerd ### pii-masking-200k ### wikiner ### frenchNER\_4entities License ======= cc-by-4.0
[ "### Details of entities (after cleaning)\n\n\n\nColumns\n-------\n\n\n* the 'tokens' column contains the tokens\n* the 'ner\\_tags' column contains the NER tags (IOB format with 0=\"O\", 1=\"PER\", 2=\"ORG\", 3=\"LOC\" and 4=\"MISC\")\n* the 'dataset' column identifies the row's original dataset (if you wish to apply filters to it)\n\n\nSplit\n-----\n\n\n* 'train' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'pii-masking-200k' + 'wikiner'\n* 'validation' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiner'\n* 'test' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiner'\n\n\ns", "### multiconer", "### multinerd", "### pii-masking-200k", "### wikiner", "### frenchNER\\_4entities\n\n\nLicense\n=======\n\n\ncc-by-4.0" ]
[ "TAGS\n#task_categories-token-classification #size_categories-100K<n<1M #language-French #license-cc-by-4.0 #doi-10.57967/hf/1751 #region-us \n", "### Details of entities (after cleaning)\n\n\n\nColumns\n-------\n\n\n* the 'tokens' column contains the tokens\n* the 'ner\\_tags' column contains the NER tags (IOB format with 0=\"O\", 1=\"PER\", 2=\"ORG\", 3=\"LOC\" and 4=\"MISC\")\n* the 'dataset' column identifies the row's original dataset (if you wish to apply filters to it)\n\n\nSplit\n-----\n\n\n* 'train' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'pii-masking-200k' + 'wikiner'\n* 'validation' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiner'\n* 'test' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiner'\n\n\ns", "### multiconer", "### multinerd", "### pii-masking-200k", "### wikiner", "### frenchNER\\_4entities\n\n\nLicense\n=======\n\n\ncc-by-4.0" ]
[ 57, 208, 5, 5, 9, 4, 20 ]
[ "passage: TAGS\n#task_categories-token-classification #size_categories-100K<n<1M #language-French #license-cc-by-4.0 #doi-10.57967/hf/1751 #region-us \n### Details of entities (after cleaning)\n\n\n\nColumns\n-------\n\n\n* the 'tokens' column contains the tokens\n* the 'ner\\_tags' column contains the NER tags (IOB format with 0=\"O\", 1=\"PER\", 2=\"ORG\", 3=\"LOC\" and 4=\"MISC\")\n* the 'dataset' column identifies the row's original dataset (if you wish to apply filters to it)\n\n\nSplit\n-----\n\n\n* 'train' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'pii-masking-200k' + 'wikiner'\n* 'validation' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiner'\n* 'test' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiner'\n\n\ns### multiconer### multinerd### pii-masking-200k### wikiner### frenchNER\\_4entities\n\n\nLicense\n=======\n\n\ncc-by-4.0" ]
9a80b52c48ece6570c96e25ba69d73045ce8ba15
# Dataset Card for "aug_train5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_5
[ "region:us" ]
2024-01-02T14:21:41+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 362336851.1, "num_examples": 2700}, {"name": "test", "num_bytes": 39205248.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T14:21:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train5" More Information needed
[ "# Dataset Card for \"aug_train5\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train5\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train5\"\n\nMore Information needed" ]
09ed81eb3d562b6198d4beaf1776777221d3bf69
# Dataset Card for "aug_train6" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_6
[ "region:us" ]
2024-01-02T14:34:15+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 358788849.1, "num_examples": 2700}, {"name": "test", "num_bytes": 42753250.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T14:34:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train6" More Information needed
[ "# Dataset Card for \"aug_train6\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train6\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train6\"\n\nMore Information needed" ]
b916466f01f21d9e96de5e2bd378f2cdda334698
# Dataset Card for "aug_train7" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_7
[ "region:us" ]
2024-01-02T14:47:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 361493291.1, "num_examples": 2700}, {"name": "test", "num_bytes": 40048808.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T14:47:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train7" More Information needed
[ "# Dataset Card for \"aug_train7\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train7\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train7\"\n\nMore Information needed" ]
c3320604c0b5955dc429808494896fbddd4ed062
# Dataset Card for "conoscenza" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mii-llm/conoscenza
[ "region:us" ]
2024-01-02T14:58:37+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1865183, "num_examples": 1712}], "download_size": 1117936, "dataset_size": 1865183}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T16:04:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "conoscenza" More Information needed
[ "# Dataset Card for \"conoscenza\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"conoscenza\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"conoscenza\"\n\nMore Information needed" ]
77dcaab78bb8f149adc5f04f34175e7f2412c24d
# Dataset Card for "aug_train8" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_8
[ "region:us" ]
2024-01-02T14:59:46+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 361450011.1, "num_examples": 2700}, {"name": "test", "num_bytes": 40092088.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T15:00:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train8" More Information needed
[ "# Dataset Card for \"aug_train8\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train8\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train8\"\n\nMore Information needed" ]
d49edd89363475fc691fff3436def1fdb45b9177
# Dataset Card for "aug_train9" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_9
[ "region:us" ]
2024-01-02T15:12:27+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 360637582.1, "num_examples": 2700}, {"name": "test", "num_bytes": 40904517.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}}
2024-01-02T15:12:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train9" More Information needed
[ "# Dataset Card for \"aug_train9\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train9\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train9\"\n\nMore Information needed" ]
6573d9f2d4cd511fa08271c2ca936fed9d69045e
# Dataset Card for "CUBISM-ART" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
iamkaikai/CUBISM-ART
[ "region:us" ]
2024-01-02T15:29:22+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11173050.0, "num_examples": 218}], "download_size": 11161603, "dataset_size": 11173050.0}}
2024-01-02T16:56:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "CUBISM-ART" More Information needed
[ "# Dataset Card for \"CUBISM-ART\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"CUBISM-ART\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"CUBISM-ART\"\n\nMore Information needed" ]
b693c6692afebd8a6305c432a3f4d44b6e59ab38
# Dataset Card for "quiz" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mii-llm/quiz
[ "region:us" ]
2024-01-02T15:58:55+00:00
{"dataset_info": {"features": [{"name": "difficulty", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1192185, "num_examples": 2060}], "download_size": 737177, "dataset_size": 1192185}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T15:58:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "quiz" More Information needed
[ "# Dataset Card for \"quiz\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"quiz\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"quiz\"\n\nMore Information needed" ]
eb14aa679136518046871de756277362577e254f
# Dataset Card for "csfd_reviews-mock-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CZLC/csfd_reviews-mock-dataset
[ "region:us" ]
2024-01-02T16:07:45+00:00
{"dataset_info": {"config_name": "sk", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9929503, "num_examples": 25000}, {"name": "val", "num_bytes": 9929503, "num_examples": 25000}], "download_size": 7537429, "dataset_size": 19859006}, "configs": [{"config_name": "sk", "data_files": [{"split": "train", "path": "sk/train-*"}, {"split": "val", "path": "sk/val-*"}]}]}
2024-01-04T12:47:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "csfd_reviews-mock-dataset" More Information needed
[ "# Dataset Card for \"csfd_reviews-mock-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"csfd_reviews-mock-dataset\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"csfd_reviews-mock-dataset\"\n\nMore Information needed" ]
414364f02ce2d20c93825900ed3ec9e6326275e2
# Dataset Card for "istruzioni-merge" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mii-llm/istruzioni-merge
[ "region:us" ]
2024-01-02T16:24:54+00:00
{"dataset_info": {"features": [{"name": "type", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 184861486, "num_examples": 106744}], "download_size": 100976033, "dataset_size": 184861486}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T16:25:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "istruzioni-merge" More Information needed
[ "# Dataset Card for \"istruzioni-merge\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"istruzioni-merge\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"istruzioni-merge\"\n\nMore Information needed" ]
7895ba5704947cd2e951382d723d02ddc80ccfc1
# Chat Fine-tuning Dataset - OpenAssistant DeepSeek Coder This dataset allows for fine-tuning chat models using: ``` B_INST = '\n### Instruction:\n' E_INST = '\n### Response:\n' BOS = '<|begin▁of▁sentence|>' EOS = '\n<|EOT|>\n' ``` Sample Preparation: 1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which itself is a subset of the Open Assistant dataset, which you can find [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples. 1. The dataset was then filtered to: - replace instances of '### Human:' with 'B_INST' - replace instances of '### Assistant:' with 'E_INST' - end assistant responses with the correct EOS. Details of the root dataset follow, copied from that repo: # OpenAssistant Conversations Dataset (OASST1) ## Dataset Description - **Homepage:** https://www.open-assistant.io/ - **Repository:** https://github.com/LAION-AI/Open-Assistant - **Paper:** https://arxiv.org/abs/2304.07327 ### Dataset Summary In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations (OASST1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers. Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details. ### Dataset Structure This dataset contains message trees. Each message tree has an initial prompt message as the root node, which can have multiple child messages as replies, and these child messages can have multiple replies. All messages have a role property: this can either be "assistant" or "prompter". The roles in conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant". This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023. ### JSON Example: Message For readability, the following JSON examples are shown formatted with indentation on multiple lines. Objects are stored without indentation (on single lines) in the actual jsonl files. ```json { "message_id": "218440fd-5317-4355-91dc-d001416df62b", "parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4", "user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4", "text": "It was the winter of 2035, and artificial intelligence (..)", "role": "assistant", "lang": "en", "review_count": 3, "review_result": true, "deleted": false, "rank": 0, "synthetic": true, "model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)", "labels": { "spam": { "value": 0.0, "count": 3 }, "lang_mismatch": { "value": 0.0, "count": 3 }, "pii": { "value": 0.0, "count": 3 }, "not_appropriate": { "value": 0.0, "count": 3 }, "hate_speech": { "value": 0.0, "count": 3 }, "sexual_content": { "value": 0.0, "count": 3 }, "quality": { "value": 0.416, "count": 3 }, "toxicity": { "value": 0.16, "count": 3 }, "humor": { "value": 0.0, "count": 3 }, "creativity": { "value": 0.33, "count": 3 }, "violence": { "value": 0.16, "count": 3 } } } ``` ### JSON Example: Conversation Tree For readability, only a subset of the message properties is shown here. 
```json { "message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793", "tree_state": "ready_for_export", "prompt": { "message_id": "14fbb664-a620-45ce-bee4-7c519b16a793", "text": "Why can't we divide by 0? (..)", "role": "prompter", "lang": "en", "replies": [ { "message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8", "text": "The reason we cannot divide by zero is because (..)", "role": "assistant", "lang": "en", "replies": [ // ... ] }, { "message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d", "text": "The reason that the result of a division by zero is (..)", "role": "assistant", "lang": "en", "replies": [ { "message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa", "text": "Math is confusing. Like those weird Irrational (..)", "role": "prompter", "lang": "en", "replies": [ { "message_id": "f46207ca-3149-46e9-a466-9163d4ce499c", "text": "Irrational numbers are simply numbers (..)", "role": "assistant", "lang": "en", "replies": [] }, // ... ] } ] } ] } } ``` Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for details about the data structure and Python code to read and write jsonl files containing oasst data objects. If you would like to explore the dataset yourself you can find a [`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb) notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) github repository. ## Main Dataset Files Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`) or as a flat list (table) of messages (extension `.messages.jsonl.gz`). ### Ready For Export Trees ``` 2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages 2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages ``` Trees in `ready_for_export` state without spam and deleted messages including message labels. The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training. ### All Trees ``` 2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages 2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages ``` All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt), `aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`. ### Supplemental Exports: Spam & Prompts ``` 2023-04-12_oasst_spam.messages.jsonl.gz ``` These are messages which were deleted or have a negative review result (`"review_result": false`). Besides low quality, a frequent reason for message deletion is a wrong language tag. ``` 2023-04-12_oasst_prompts.messages.jsonl.gz ``` These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state. ### Using the Huggingface Datasets While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees. Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits. These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/). 
To load the oasst1 train & validation splits use: ```python from datasets import load_dataset ds = load_dataset("OpenAssistant/oasst1") train = ds['train'] # len(train)=84437 (95%) val = ds['validation'] # len(val)=4401 (5%) ``` The messages appear in depth-first order of the message trees. Full conversation trees can be reconstructed from the flat messages table by using the `parent_id` and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id` and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state. ### Languages OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows: **Languages with over 1000 messages** - English: 71956 - Spanish: 43061 - Russian: 9089 - German: 5279 - Chinese: 4962 - French: 4251 - Thai: 3042 - Portuguese (Brazil): 2969 - Catalan: 2260 - Korean: 1553 - Ukrainian: 1352 - Italian: 1320 - Japanese: 1018 <details> <summary><b>Languages with under 1000 messages</b></summary> <ul> <li>Vietnamese: 952</li> <li>Basque: 947</li> <li>Polish: 886</li> <li>Hungarian: 811</li> <li>Arabic: 666</li> <li>Dutch: 628</li> <li>Swedish: 512</li> <li>Turkish: 454</li> <li>Finnish: 386</li> <li>Czech: 372</li> <li>Danish: 358</li> <li>Galician: 339</li> <li>Hebrew: 255</li> <li>Romanian: 200</li> <li>Norwegian Bokmål: 133</li> <li>Indonesian: 115</li> <li>Bulgarian: 95</li> <li>Bengali: 82</li> <li>Persian: 72</li> <li>Greek: 66</li> <li>Esperanto: 59</li> <li>Slovak: 19</li> </ul> </details> ## Contact - Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord) - GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) - E-Mail: [[email protected]](mailto:[email protected])
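Referring back to the note above on rebuilding conversation trees from the flat messages table, here is a minimal sketch; only `message_id`, `parent_id` and the prompt-as-root convention come from the documented schema, while the `children` key and the variable names are our own:

```python
# Minimal sketch: rebuild conversation trees from the flat messages table
# using the parent_id / message_id properties described above.
from datasets import load_dataset

messages = load_dataset("OpenAssistant/oasst1", split="train")

nodes = {m["message_id"]: dict(m, children=[]) for m in messages}
roots = []  # initial prompts are roots: their parent_id is None
for node in nodes.values():
    parent_id = node["parent_id"]
    if parent_id is None:
        roots.append(node)
    elif parent_id in nodes:  # a parent may sit in the other split
        nodes[parent_id]["children"].append(node)

print(f"reconstructed {len(roots)} trees from {len(nodes)} messages")
```

Each reconstructed tree can then be traversed depth-first to recover the alternating prompter/assistant threads.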
Trelis/openassistant-deepseek-coder
[ "size_categories:1K<n<10k", "language:en", "language:es", "language:ru", "language:de", "language:pl", "language:th", "language:vi", "language:sv", "language:bn", "language:da", "language:he", "language:it", "language:fa", "language:sk", "language:id", "language:nb", "language:el", "language:nl", "language:hu", "language:eu", "language:zh", "language:eo", "language:ja", "language:ca", "language:cs", "language:bg", "language:fi", "language:pt", "language:tr", "language:ro", "language:ar", "language:uk", "language:gl", "language:fr", "language:ko", "license:apache-2.0", "human-feedback", "deepseek coder", "arxiv:2304.07327", "region:us" ]
2024-01-02T16:37:32+00:00
{"language": ["en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk", "gl", "fr", "ko"], "license": "apache-2.0", "size_categories": ["1K<n<10k"], "pretty_name": "Filtered OpenAssistant Conversations", "tags": ["human-feedback", "deepseek coder"]}
2024-01-03T00:16:11+00:00
[ "2304.07327" ]
[ "en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk", "gl", "fr", "ko" ]
TAGS #size_categories-1K<n<10k #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #deepseek coder #arxiv-2304.07327 #region-us
# Chat Fine-tuning Dataset - OpenAssistant DeepSeek Coder This dataset allows for fine-tuning chat models using: Sample Preparation: 1. The dataset is cloned from TimDettmers, which itself is a subset of the Open Assistant dataset, which you can find here. This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples. 1. The dataset was then filtered to: - replace instances of '### Human:' with 'B_INST' - replace instances of '### Assistant:' with 'E_INST' - end assistant responses with the correct EOS. Details of the root dataset follow, copied from that repo: # OpenAssistant Conversations Dataset (OASST1) ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL ### Dataset Summary In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations (OASST1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers. Please refer to our paper for further details. ### Dataset Structure This dataset contains message trees. Each message tree has an initial prompt message as the root node, which can have multiple child messages as replies, and these child messages can have multiple replies. All messages have a role property: this can either be "assistant" or "prompter". The roles in conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant". This version of the dataset contains data collected on the URL website until April 12 2023. ### JSON Example: Message For readability, the following JSON examples are shown formatted with indentation on multiple lines. Objects are stored without indentation (on single lines) in the actual jsonl files. ### JSON Example: Conversation Tree For readability, only a subset of the message properties is shown here. Please refer to oasst-data for details about the data structure and Python code to read and write jsonl files containing oasst data objects. If you would like to explore the dataset yourself you can find a 'getting-started' notebook in the 'notebooks/openassistant-oasst1' folder of the LAION-AI/Open-Assistant github repository. ## Main Dataset Files Conversation data is provided either as nested messages in trees (extension '.URL') or as a flat list (table) of messages (extension '.URL'). ### Ready For Export Trees Trees in 'ready_for_export' state without spam and deleted messages including message labels. The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training. ### All Trees All trees, including those in states 'prompt_lottery_waiting' (trees that consist of only one message, namely the initial prompt), 'aborted_low_grade' (trees that stopped growing because the messages had low quality), and 'halted_by_moderator'. ### Supplemental Exports: Spam & Prompts These are messages which were deleted or have a negative review result ('"review_result": false'). Besides low quality, a frequent reason for message deletion is a wrong language tag. These are all the kept initial prompt messages with positive review result (no spam) of trees in 'ready_for_export' or 'prompt_lottery_waiting' state. 
### Using the Huggingface Datasets While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees. Nevertheless, we make all messages which can also be found in the file '2023-04-12_oasst_ready.URL' available in parquet as train/validation splits. These are directly loadable by Huggingface Datasets. To load the oasst1 train & validation splits use: The messages appear in depth-first order of the message trees. Full conversation trees can be reconstructed from the flat messages table by using the 'parent_id' and 'message_id' properties to identify the parent-child relationship of messages. The 'message_tree_id' and 'tree_state' properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state. ### Languages OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows: Languages with over 1000 messages - English: 71956 - Spanish: 43061 - Russian: 9089 - German: 5279 - Chinese: 4962 - French: 4251 - Thai: 3042 - Portuguese (Brazil): 2969 - Catalan: 2260 - Korean: 1553 - Ukrainian: 1352 - Italian: 1320 - Japanese: 1018 <details> <summary><b>Languages with under 1000 messages</b></summary> <ul> <li>Vietnamese: 952</li> <li>Basque: 947</li> <li>Polish: 886</li> <li>Hungarian: 811</li> <li>Arabic: 666</li> <li>Dutch: 628</li> <li>Swedish: 512</li> <li>Turkish: 454</li> <li>Finnish: 386</li> <li>Czech: 372</li> <li>Danish: 358</li> <li>Galician: 339</li> <li>Hebrew: 255</li> <li>Romanian: 200</li> <li>Norwegian Bokmål: 133</li> <li>Indonesian: 115</li> <li>Bulgarian: 95</li> <li>Bengali: 82</li> <li>Persian: 72</li> <li>Greek: 66</li> <li>Esperanto: 59</li> <li>Slovak: 19</li> </ul> </details> ## Contact - Discord Open Assistant Discord Server - GitHub: LAION-AI/Open-Assistant - E-Mail: open-assistant@URL
[ "# Chat Fine-tuning Dataset - OpenAssistant DeepSeek Coder\nThis dataset allows for fine-tuning chat models using:\n\n\nSample Preparation:\n\n1. The dataset is cloned from TimDettmers, which itself is a subset of the Open Assistant dataset, which you can find here. This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.\n1. The dataset was then filtered to:\n - replace instances of '### Human:' with 'B_INST'\n - replace instances of '### Assistant:' with 'E_INST'\n - end assistant responses with the correct EOS.\n\nDetails of the root dataset follow, copied from that repo:", "# OpenAssistant Conversations Dataset (OASST1)", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary\n\nIn an effort to democratize research on large-scale alignment, we release OpenAssistant \nConversations (OASST1), a human-generated, human-annotated assistant-style conversation \ncorpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 \nquality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus \nis a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nPlease refer to our paper for further details.", "### Dataset Structure\n\nThis dataset contains message trees. Each message tree has an initial prompt message as the root node, \nwhich can have multiple child messages as replies, and these child messages can have multiple replies. \n\nAll messages have a role property: this can either be \"assistant\" or \"prompter\". The roles in \nconversation threads from prompt to leaf node strictly alternate between \"prompter\" and \"assistant\".\n\nThis version of the dataset contains data collected on the URL website until April 12 2023.", "### JSON Example: Message\n\nFor readability, the following JSON examples are shown formatted with indentation on multiple lines.\nObjects are stored without indentation (on single lines) in the actual jsonl files.", "### JSON Example: Conversation Tree\n\nFor readability, only a subset of the message properties is shown here.\n\n\n\nPlease refer to oasst-data for\ndetails about the data structure and Python code to read and write jsonl files containing oasst data objects.\n\nIf you would like to explore the dataset yourself you can find a \n'getting-started' \nnotebook in the 'notebooks/openassistant-oasst1' folder of the LAION-AI/Open-Assistant\ngithub repository.", "## Main Dataset Files\n\nConversation data is provided either as nested messages in trees (extension '.URL') \nor as a flat list (table) of messages (extension '.URL').", "### Ready For Export Trees\n\n\nTrees in 'ready_for_export' state without spam and deleted messages including message labels.\nThe oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.", "### All Trees\n\nAll trees, including those in states 'prompt_lottery_waiting' (trees that consist of only one message, namely the initial prompt),\n'aborted_low_grade' (trees that stopped growing because the messages had low quality), and 'halted_by_moderator'.", "### Supplemental Exports: Spam & Prompts\n\nThese are messages which were deleted or have a negative review result ('\"review_result\": false').\nBesides low quality, a frequent reason for message deletion is a wrong language tag.\n\n\nThese are all the kept initial prompt messages with positive review result (no spam) of trees in 'ready_for_export' or 
'prompt_lottery_waiting' state.", "### Using the Huggingface Datasets\n\nWhile HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.\nNevertheless, we make all messages which can also be found in the file '2023-04-12_oasst_ready.URL' available in parquet as train/validation splits. \nThese are directly loadable by Huggingface Datasets.\n\nTo load the oasst1 train & validation splits use:\n\n\n\nThe messages appear in depth-first order of the message trees.\n\nFull conversation trees can be reconstructed from the flat messages table by using the 'parent_id' \nand 'message_id' properties to identify the parent-child relationship of messages. The 'message_tree_id' \nand 'tree_state' properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.", "### Languages\n\nOpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:\n\nLanguages with over 1000 messages\n- English: 71956\n- Spanish: 43061\n- Russian: 9089\n- German: 5279\n- Chinese: 4962\n- French: 4251\n- Thai: 3042\n- Portuguese (Brazil): 2969\n- Catalan: 2260\n- Korean: 1553\n- Ukrainian: 1352\n- Italian: 1320\n- Japanese: 1018\n\n<details>\n <summary><b>Languages with under 1000 messages</b></summary>\n <ul>\n <li>Vietnamese: 952</li>\n <li>Basque: 947</li>\n <li>Polish: 886</li>\n <li>Hungarian: 811</li>\n <li>Arabic: 666</li>\n <li>Dutch: 628</li>\n <li>Swedish: 512</li>\n <li>Turkish: 454</li>\n <li>Finnish: 386</li>\n <li>Czech: 372</li>\n <li>Danish: 358</li>\n <li>Galician: 339</li>\n <li>Hebrew: 255</li>\n <li>Romanian: 200</li>\n <li>Norwegian Bokmål: 133</li>\n <li>Indonesian: 115</li>\n <li>Bulgarian: 95</li>\n <li>Bengali: 82</li>\n <li>Persian: 72</li>\n <li>Greek: 66</li>\n <li>Esperanto: 59</li>\n <li>Slovak: 19</li>\n </ul>\n</details>", "## Contact\n\n- Discord Open Assistant Discord Server\n- GitHub: LAION-AI/Open-Assistant\n- E-Mail: open-assistant@URL" ]
[ "TAGS\n#size_categories-1K<n<10k #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #deepseek coder #arxiv-2304.07327 #region-us \n", "# Chat Fine-tuning Dataset - OpenAssistant DeepSeek Coder\nThis dataset allows for fine-tuning chat models using:\n\n\nSample Preparation:\n\n1. The dataset is cloned from TimDettmers, which itself is a subset of the Open Assistant dataset, which you can find here. This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.\n1. The dataset was then filtered to:\n - replace instances of '### Human:' with 'B_INST'\n - replace instances of '### Assistant:' with 'E_INST'\n - end assistant responses with the correct EOS.\n\nDetails of the root dataset follow, copied from that repo:", "# OpenAssistant Conversations Dataset (OASST1)", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary\n\nIn an effort to democratize research on large-scale alignment, we release OpenAssistant \nConversations (OASST1), a human-generated, human-annotated assistant-style conversation \ncorpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 \nquality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus \nis a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nPlease refer to our paper for further details.", "### Dataset Structure\n\nThis dataset contains message trees. Each message tree has an initial prompt message as the root node, \nwhich can have multiple child messages as replies, and these child messages can have multiple replies. \n\nAll messages have a role property: this can either be \"assistant\" or \"prompter\". 
The roles in \nconversation threads from prompt to leaf node strictly alternate between \"prompter\" and \"assistant\".\n\nThis version of the dataset contains data collected on the URL website until April 12 2023.", "### JSON Example: Message\n\nFor readability, the following JSON examples are shown formatted with indentation on multiple lines.\nObjects are stored without indentation (on single lines) in the actual jsonl files.", "### JSON Example: Conversation Tree\n\nFor readability, only a subset of the message properties is shown here.\n\n\n\nPlease refer to oasst-data for\ndetails about the data structure and Python code to read and write jsonl files containing oasst data objects.\n\nIf you would like to explore the dataset yourself you can find a \n'getting-started' \nnotebook in the 'notebooks/openassistant-oasst1' folder of the LAION-AI/Open-Assistant\ngithub repository.", "## Main Dataset Files\n\nConversation data is provided either as nested messages in trees (extension '.URL') \nor as a flat list (table) of messages (extension '.URL').", "### Ready For Export Trees\n\n\nTrees in 'ready_for_export' state without spam and deleted messages including message labels.\nThe oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.", "### All Trees\n\nAll trees, including those in states 'prompt_lottery_waiting' (trees that consist of only one message, namely the initial prompt),\n'aborted_low_grade' (trees that stopped growing because the messages had low quality), and 'halted_by_moderator'.", "### Supplemental Exports: Spam & Prompts\n\nThese are messages which were deleted or have a negative review result ('\"review_result\": false').\nBesides low quality, a frequent reason for message deletion is a wrong language tag.\n\n\nThese are all the kept initial prompt messages with positive review result (no spam) of trees in 'ready_for_export' or 'prompt_lottery_waiting' state.", "### Using the Huggingface Datasets\n\nWhile HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.\nNevertheless, we make all messages which can also be found in the file '2023-04-12_oasst_ready.URL' available in parquet as train/validation splits. \nThese are directly loadable by Huggingface Datasets.\n\nTo load the oasst1 train & validation splits use:\n\n\n\nThe messages appear in depth-first order of the message trees.\n\nFull conversation trees can be reconstructed from the flat messages table by using the 'parent_id' \nand 'message_id' properties to identify the parent-child relationship of messages. 
The 'message_tree_id' \nand 'tree_state' properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.", "### Languages\n\nOpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:\n\nLanguages with over 1000 messages\n- English: 71956\n- Spanish: 43061\n- Russian: 9089\n- German: 5279\n- Chinese: 4962\n- French: 4251\n- Thai: 3042\n- Portuguese (Brazil): 2969\n- Catalan: 2260\n- Korean: 1553\n- Ukrainian: 1352\n- Italian: 1320\n- Japanese: 1018\n\n<details>\n <summary><b>Languages with under 1000 messages</b></summary>\n <ul>\n <li>Vietnamese: 952</li>\n <li>Basque: 947</li>\n <li>Polish: 886</li>\n <li>Hungarian: 811</li>\n <li>Arabic: 666</li>\n <li>Dutch: 628</li>\n <li>Swedish: 512</li>\n <li>Turkish: 454</li>\n <li>Finnish: 386</li>\n <li>Czech: 372</li>\n <li>Danish: 358</li>\n <li>Galician: 339</li>\n <li>Hebrew: 255</li>\n <li>Romanian: 200</li>\n <li>Norwegian Bokmål: 133</li>\n <li>Indonesian: 115</li>\n <li>Bulgarian: 95</li>\n <li>Bengali: 82</li>\n <li>Persian: 72</li>\n <li>Greek: 66</li>\n <li>Esperanto: 59</li>\n <li>Slovak: 19</li>\n </ul>\n</details>", "## Contact\n\n- Discord Open Assistant Discord Server\n- GitHub: LAION-AI/Open-Assistant\n- E-Mail: open-assistant@URL" ]
[ 241, 167, 15, 18, 120, 120, 51, 117, 46, 66, 74, 99, 221, 381, 36 ]
[ "passage: TAGS\n#size_categories-1K<n<10k #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #deepseek coder #arxiv-2304.07327 #region-us \n# Chat Fine-tuning Dataset - OpenAssistant DeepSeek Coder\nThis dataset allows for fine-tuning chat models using:\n\n\nSample Preparation:\n\n1. The dataset is cloned from TimDettmers, which itself is a subset of the Open Assistant dataset, which you can find here. This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.\n1. The dataset was then filtered to:\n - replace instances of '### Human:' with 'B_INST'\n - replace instances of '### Assistant:' with 'E_INST'\n - end assistant responses with the correct EOS.\n\nDetails of the root dataset follow, copied from that repo:# OpenAssistant Conversations Dataset (OASST1)## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "passage: ### Dataset Summary\n\nIn an effort to democratize research on large-scale alignment, we release OpenAssistant \nConversations (OASST1), a human-generated, human-annotated assistant-style conversation \ncorpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 \nquality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus \nis a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nPlease refer to our paper for further details.### Dataset Structure\n\nThis dataset contains message trees. Each message tree has an initial prompt message as the root node, \nwhich can have multiple child messages as replies, and these child messages can have multiple replies. \n\nAll messages have a role property: this can either be \"assistant\" or \"prompter\". 
The roles in \nconversation threads from prompt to leaf node strictly alternate between \"prompter\" and \"assistant\".\n\nThis version of the dataset contains data collected on the URL website until April 12 2023.### JSON Example: Message\n\nFor readability, the following JSON examples are shown formatted with indentation on multiple lines.\nObjects are stored without indentation (on single lines) in the actual jsonl files.### JSON Example: Conversation Tree\n\nFor readability, only a subset of the message properties is shown here.\n\n\n\nPlease refer to oasst-data for\ndetails about the data structure and Python code to read and write jsonl files containing oasst data objects.\n\nIf you would like to explore the dataset yourself you can find a \n'getting-started' \nnotebook in the 'notebooks/openassistant-oasst1' folder of the LAION-AI/Open-Assistant\ngithub repository.## Main Dataset Files\n\nConversation data is provided either as nested messages in trees (extension '.URL') \nor as a flat list (table) of messages (extension '.URL').### Ready For Export Trees\n\n\nTrees in 'ready_for_export' state without spam and deleted messages including message labels.\nThe oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.### All Trees\n\nAll trees, including those in states 'prompt_lottery_waiting' (trees that consist of only one message, namely the initial prompt),\n'aborted_low_grade' (trees that stopped growing because the messages had low quality), and 'halted_by_moderator'.", "passage: ### Supplemental Exports: Spam & Prompts\n\nThese are messages which were deleted or have a negative review result ('\"review_result\": false').\nBesides low quality, a frequent reason for message deletion is a wrong language tag.\n\n\nThese are all the kept initial prompt messages with positive review result (no spam) of trees in 'ready_for_export' or 'prompt_lottery_waiting' state.### Using the Huggingface Datasets\n\nWhile HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.\nNevertheless, we make all messages which can also be found in the file '2023-04-12_oasst_ready.URL' available in parquet as train/validation splits. \nThese are directly loadable by Huggingface Datasets.\n\nTo load the oasst1 train & validation splits use:\n\n\n\nThe messages appear in depth-first order of the message trees.\n\nFull conversation trees can be reconstructed from the flat messages table by using the 'parent_id' \nand 'message_id' properties to identify the parent-child relationship of messages. The 'message_tree_id' \nand 'tree_state' properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state." ]
fa9f5fca5a03ae65b244ca88cdf7b563be73121c
# Dataset Card for "0-10000-ultrafeedback-ita" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
giux78/0-10000-ultrafeedback-ita
[ "region:us" ]
2024-01-02T16:53:43+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test_gen", "path": "data/test_gen-*"}, {"split": "test_sft", "path": "data/test_sft-*"}, {"split": "train_gen", "path": "data/train_gen-*"}, {"split": "train_sft", "path": "data/train_sft-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "test_gen", "num_bytes": 148276089, "num_examples": 28304}, {"name": "test_sft", "num_bytes": 154695659, "num_examples": 23110}, {"name": "train_gen", "num_bytes": 1347396812, "num_examples": 256032}, {"name": "train_sft", "num_bytes": 73545780, "num_examples": 10000}], "download_size": 930927327, "dataset_size": 1723914340}}
2024-01-02T16:54:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "0-10000-ultrafeedback-ita" More Information needed
[ "# Dataset Card for \"0-10000-ultrafeedback-ita\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"0-10000-ultrafeedback-ita\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"0-10000-ultrafeedback-ita\"\n\nMore Information needed" ]
f733e098c26575ac12f0d9a366c99a11e51218a7
# Dataset Card for "Rewrite-2k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lmg-anon/Rewrite-2k
[ "region:us" ]
2024-01-02T17:02:33+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5011502, "num_examples": 842}], "download_size": 2798517, "dataset_size": 5011502}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-06T03:03:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Rewrite-2k" More Information needed
[ "# Dataset Card for \"Rewrite-2k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Rewrite-2k\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Rewrite-2k\"\n\nMore Information needed" ]
73ae840825fcfe45b8e79355a58a8f9b1ad0b6f8
# Dataset Card for "studio" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mii-llm/studio
[ "region:us" ]
2024-01-02T17:19:50+00:00
{"dataset_info": {"features": [{"name": "topic", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4620775, "num_examples": 543}], "download_size": 143033, "dataset_size": 4620775}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T17:19:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "studio" More Information needed
[ "# Dataset Card for \"studio\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"studio\"\n\nMore Information needed" ]
[ 6, 11 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"studio\"\n\nMore Information needed" ]
24e9b881ca01310cf41e23802cfc47e7b7215a4f
The **Parallel Tunisian Constitution Corpus (PTCC)** is a parallel corpus of 149 articles written in Modern Standard Arabic and Tunisian Arabic. Tesseract was used to convert the constitution's PDF files into text files. Afterward, the parallel articles were aligned with a simple Python script (see the sketch below). More details can be found at: https://amr-keleg.github.io/projects/digitalizing_dialectal_arabic/

### Sources:
* [Tunisian Arabic translation of the 2014 Tunisian Constitution](https://www.babnet.net/rttdetail-84167.asp)
* [2014 Tunisian Constitution in MSA](https://upload.wikimedia.org/wikipedia/commons/7/78/Constitution_Tunisienne_2014.pdf)
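### Alignment sketch

A minimal sketch of what such an alignment script could look like, assuming each OCR'd text file marks every article with a numbered header line. The file names, the header pattern, and the whole script below are illustrative assumptions, not the original code:

```python
import re

# Hypothetical input files produced by Tesseract (names are assumptions).
MSA_FILE = "constitution_msa.txt"
TUN_FILE = "constitution_tunisian.txt"

# Assumed header format: each article starts with a line such as
# "Article 12" or an Arabic marker like "الفصل 12".
ARTICLE_HEADER = re.compile(r"^(?:Article|الفصل)\s+(\d+)", re.MULTILINE)

def split_articles(path: str) -> dict[int, str]:
    """Split an OCR'd constitution text file into {article_number: article_text}."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    parts = ARTICLE_HEADER.split(text)
    # With one capture group, split() alternates: [preamble, num, body, num, body, ...]
    return {int(num): body.strip() for num, body in zip(parts[1::2], parts[2::2])}

msa = split_articles(MSA_FILE)
tun = split_articles(TUN_FILE)

# Align on shared article numbers to form the parallel pairs.
pairs = [(n, msa[n], tun[n]) for n in sorted(msa.keys() & tun.keys())]
print(f"Aligned {len(pairs)} article pairs")
```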
AMR-KELEG/PTCC
[ "task_categories:text-generation", "language:ar", "license:mit", "legal", "region:us" ]
2024-01-02T18:03:47+00:00
{"language": ["ar"], "license": "mit", "task_categories": ["text-generation"], "tags": ["legal"]}
2024-01-02T20:36:50+00:00
[]
[ "ar" ]
TAGS #task_categories-text-generation #language-Arabic #license-mit #legal #region-us
The Parallel Tunisian Constitution Corpus (PTCC) is a parallel corpus of 149 articles written in Modern Standard Arabic and Tunisian Arabic. Tesseract was used to convert the constitution's PDF files into text files. Afterward, the parallel articles were aligned with a simple Python script. More details can be found at: URL

### Sources:
* Tunisian Arabic translation of the 2014 Tunisian Constitution
* 2014 Tunisian Constitution in MSA
[ "### Sources:\n* Tunisian Arabic translation of the 2014 Tunisian Constitution\n* 2014 Tunisian Constitution in MSA" ]
[ "TAGS\n#task_categories-text-generation #language-Arabic #license-mit #legal #region-us \n", "### Sources:\n* Tunisian Arabic translation of the 2014 Tunisian Constitution\n* 2014 Tunisian Constitution in MSA" ]
[ 29, 24 ]
[ "passage: TAGS\n#task_categories-text-generation #language-Arabic #license-mit #legal #region-us \n### Sources:\n* Tunisian Arabic translation of the 2014 Tunisian Constitution\n* 2014 Tunisian Constitution in MSA" ]
697aedae5da666eb6fc4e88b76aa5303d97ab6e7
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
cmeraki/ultrachat_hindi_seamless
[ "size_categories:100K<n<1M", "language:hi", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2024-01-02T18:04:13+00:00
{"language": ["hi", "en"], "license": "cc-by-nc-4.0", "size_categories": ["100K<n<1M"], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_sft", "num_bytes": 2761401316, "num_examples": 185542}, {"name": "test_sft", "num_bytes": 147845678, "num_examples": 10000}], "download_size": 952634359, "dataset_size": 2909246994}, "configs": [{"config_name": "default", "data_files": [{"split": "train_sft", "path": "data/train_sft-*"}, {"split": "test_sft", "path": "data/test_sft-*"}]}]}
2024-01-04T04:48:47+00:00
[]
[ "hi", "en" ]
TAGS #size_categories-100K<n<1M #language-Hindi #language-English #license-cc-by-nc-4.0 #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#size_categories-100K<n<1M #language-Hindi #language-English #license-cc-by-nc-4.0 #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 37, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-Hindi #language-English #license-cc-by-nc-4.0 #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
cad71ffca3da3bb3d1b0b99baccf6931fee8d15a
## Overview

This dataset is a continuation of the [airoboros-3.1](https://hf.co/datasets/jondurbin/airoboros-3.1) dataset with the following changes:

* MathJSON has been removed for the time being, because it seems to confuse the models at times, causing more problems than it's worth. The mathjson dataset can be found [here](https://huggingface.co/datasets/jondurbin/mathjson-alpha)
* The de-censorship data has been re-added, to ensure a non-DPO SFT model using this dataset is relatively uncensored.
* ~11k instructions from [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca) were extended to have an additional, follow-up turn to enhance multi-turn capabilities.

## Format

The data is now in ShareGPT format, to better accommodate the open-source ecosystem's fine-tuning tooling (a minimal example is sketched below).

## Usage restriction

To use this data, you must acknowledge/agree to the following:

- a small sampling of the data contained within is "toxic"/"harmful", and contains profanity and other types of sensitive content
- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs without a great amount of validation
- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws
- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities

Also note that the data was generated primarily with gpt-4, and therefore may have some strings attached to the OpenAI terms of service.
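## ShareGPT example

For reference, a ShareGPT-style record stores each conversation as a list of alternating turns. The sketch below assumes the common `conversations`/`from`/`value` key names; the exact field names used in this dataset's files are not verified here:

```python
import json

# A minimal ShareGPT-style conversation, as commonly consumed by
# open-source fine-tuning tooling. Key names follow the usual ShareGPT
# convention and are an assumption, not a verified schema of this dataset.
example = {
    "conversations": [
        {"from": "human", "value": "What's a good way to learn chess openings?"},
        {"from": "gpt", "value": "Start with one or two openings and study the ideas behind each move."},
        {"from": "human", "value": "Which opening would you suggest first?"},
        {"from": "gpt", "value": "The Italian Game is a common, principled starting point."},
    ]
}

# Sanity check: turns should alternate between "human" and "gpt".
roles = [turn["from"] for turn in example["conversations"]]
assert roles == ["human", "gpt"] * (len(roles) // 2), "turns do not alternate"

print(json.dumps(example, indent=2))
```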
jondurbin/airoboros-3.2
[ "license:cc-by-4.0", "not-for-all-audiences", "region:us" ]
2024-01-02T18:32:43+00:00
{"license": "cc-by-4.0", "tags": ["not-for-all-audiences"]}
2024-01-02T18:53:05+00:00
[]
[]
TAGS #license-cc-by-4.0 #not-for-all-audiences #region-us
## Overview

This dataset is a continuation of the airoboros-3.1 dataset with the following changes:
* MathJSON has been removed for the time being, because it seems to confuse the models at times, causing more problems than it's worth. The mathjson dataset can be found here
* The de-censorship data has been re-added, to ensure a non-DPO SFT model using this dataset is relatively uncensored.
* ~11k instructions from slimorca were extended to have an additional, follow-up turn to enhance multi-turn capabilities.

## Format

The data is now in ShareGPT format, to better accommodate the open-source ecosystem's fine-tuning tooling.

## Usage restriction

To use this data, you must acknowledge/agree to the following:
- a small sampling of the data contained within is "toxic"/"harmful", and contains profanity and other types of sensitive content
- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs without a great amount of validation
- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws
- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities

Also note that the data was generated primarily with gpt-4, and therefore may have some strings attached to the OpenAI terms of service.
[ "## Overview\n\nThis dataset is a continuation of the airoboros-3.1 with the following changes:\n* MathJSON has been removed for the time-being, because it seems to confuse the models at times, causing more problems than it's worth. The mathjson dataset can be found here\n* The de-censorship data has been re-added, to ensure a non-DPO SFT model using this dataset is relatively uncensored.\n* ~11k instructions from slimorca where extended to have an additional, follow-up turn to enhance multi-turn capabilities.", "## Format\n\nThe format is now in ShareGPT format, to better accomodate the OS ecosystem fine-tuning tooling.", "## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- a small sampling of the data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs without a great amount of validation\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nAlso note that the data was generated primarily with gpt-4, and therefore may have some strings attached to the OpenAI terms of service." ]
[ "TAGS\n#license-cc-by-4.0 #not-for-all-audiences #region-us \n", "## Overview\n\nThis dataset is a continuation of the airoboros-3.1 with the following changes:\n* MathJSON has been removed for the time-being, because it seems to confuse the models at times, causing more problems than it's worth. The mathjson dataset can be found here\n* The de-censorship data has been re-added, to ensure a non-DPO SFT model using this dataset is relatively uncensored.\n* ~11k instructions from slimorca where extended to have an additional, follow-up turn to enhance multi-turn capabilities.", "## Format\n\nThe format is now in ShareGPT format, to better accomodate the OS ecosystem fine-tuning tooling.", "## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- a small sampling of the data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs without a great amount of validation\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nAlso note that the data was generated primarily with gpt-4, and therefore may have some strings attached to the OpenAI terms of service." ]
[ 24, 131, 28, 183 ]
[ "passage: TAGS\n#license-cc-by-4.0 #not-for-all-audiences #region-us \n## Overview\n\nThis dataset is a continuation of the airoboros-3.1 with the following changes:\n* MathJSON has been removed for the time-being, because it seems to confuse the models at times, causing more problems than it's worth. The mathjson dataset can be found here\n* The de-censorship data has been re-added, to ensure a non-DPO SFT model using this dataset is relatively uncensored.\n* ~11k instructions from slimorca where extended to have an additional, follow-up turn to enhance multi-turn capabilities.## Format\n\nThe format is now in ShareGPT format, to better accomodate the OS ecosystem fine-tuning tooling.## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- a small sampling of the data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs without a great amount of validation\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nAlso note that the data was generated primarily with gpt-4, and therefore may have some strings attached to the OpenAI terms of service." ]
e2c0fb71fff4e7f46d1a243ec2640f7ece76cb9d
## Preparing the dataset

### NOTICE:

All code is owned by Hugging Face and uses the Apache 2.0 License. While I clean and strip the dataset for processing, do note that this dataset is under the same scrutiny as the original Apache 2.0 License.

## Clone Repo

The data source used is the [accelerate](https://github.com/huggingface/accelerate) repository. I'm using the latest version, v0.25.0.

```bash
git clone https://github.com/huggingface/accelerate
cd accelerate
git checkout v0.25.0
cd ..
mkdir docs src
mv accelerate/src/accelerate/* src
mv accelerate/docs/* docs
cd src
rm __init__.py commands/__init__.py test_utils/__init__.py utils/__init__.py
```

### Cleaning the dataset

Using regex search-and-replace in VS Code, apply the following replacements:

```regex
# Copyright(.*\n)+# limitations under the license.
```

```regex
<!--Copyright(.*\n)+-->
```

In the source:

```regex
"""
```

To:

```regex
"""
```

Then remove all import statements (as we only care about the content).

Strip all blank spaces/whitespace:

```regex
^(?:[\t ]*(?:\r?\n|\r))+
```

**WARNING**: It is known that this will separate out the `_inner()` functions in the source code, treating each as a standalone function and losing its context. Proceeding with this known issue for now. A scripted approximation of these cleanup steps is sketched below.
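### Scripted cleanup (sketch)

The same cleanup can be scripted rather than run interactively in VS Code. Below is a rough Python approximation of the steps above, assuming the `src`/`docs` layout created by the clone commands; the import-stripping regex and the `.md`/`.mdx` glob are heuristics of mine, not part of the original instructions:

```python
import re
from pathlib import Path

# Mirrors the VS Code replacements above (made non-greedy so each match
# stops at the first license footer / comment close).
LICENSE_HEADER = re.compile(r"# Copyright(?:.*\n)+?# limitations under the [Ll]icense\.\n?")
DOC_COPYRIGHT = re.compile(r"<!--Copyright(?:.*\n)+?-->\n?")
# Heuristic for "remove all import statements" (assumed, not from the original).
IMPORT_LINE = re.compile(r"^(?:from\s+\S+\s+)?import\s+.*\n", re.MULTILINE)
# Same blank-line regex as above.
BLANK_RUNS = re.compile(r"^(?:[\t ]*(?:\r?\n|\r))+", re.MULTILINE)

def clean_file(path: Path) -> None:
    text = path.read_text(encoding="utf-8")
    text = LICENSE_HEADER.sub("", text)
    text = DOC_COPYRIGHT.sub("", text)
    text = IMPORT_LINE.sub("", text)  # we only care about the content
    text = BLANK_RUNS.sub("", text)   # strip runs of blank lines
    path.write_text(text, encoding="utf-8")

for path in [*Path("src").rglob("*.py"), *Path("docs").rglob("*.md*")]:
    if path.is_file():
        clean_file(path)
```

Note this inherits the same `_inner()` caveat as the manual replacements.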
muellerzr/RAG-accelerate
[ "language:en", "license:apache-2.0", "region:us" ]
2024-01-02T18:45:44+00:00
{"language": ["en"], "license": "apache-2.0"}
2024-01-04T21:29:59+00:00
[]
[ "en" ]
TAGS #language-English #license-apache-2.0 #region-us
## Preparing the dataset

### NOTICE:

All code is owned by Hugging Face and uses the Apache 2.0 License. While I clean and strip the dataset for processing, do note that this dataset is under the same scrutiny as the original Apache 2.0 License.

## Clone Repo

The data source used is the accelerate repository. I'm using the latest version, v0.25.0.

### Cleaning the dataset

Using regex search-and-replace in VS Code, apply the following replacements:

In the source:

To:

Then remove all import statements (as we only care about the content).

Strip all blank spaces/whitespace:

WARNING: It is known that this will separate out the '_inner()' functions in the source code, treating each as a standalone function and losing its context. Proceeding with this known issue for now.
[ "## Preparing the dataset", "### NOTICE:\n\nAll code is owned by Hugging Face and uses the Apache 2.0 Licence. While I clean and strip the dataset for processing, do note that this dataset is under the same scruteny as the original Apache 2.0 License.", "## Clone Repo\n\nData souce used is the accelerate repository. I'm using the latest version, v0.25.0", "### Cleaning the dataset\n\nUsing 'regex' in VSCODE, use the following replacement:\n\n\n\n\n\nIn the source:\n\nTo:\n\nThen remove all import statements (as we only care about the content).\n\nStrip all blank spaces/whitespace:\n\n\nWARNING: It is known that this will seperate out the '_inner()' in the source code and use it as a seperate function losing the context. Trying out with this issue for now." ]
[ "TAGS\n#language-English #license-apache-2.0 #region-us \n", "## Preparing the dataset", "### NOTICE:\n\nAll code is owned by Hugging Face and uses the Apache 2.0 Licence. While I clean and strip the dataset for processing, do note that this dataset is under the same scruteny as the original Apache 2.0 License.", "## Clone Repo\n\nData souce used is the accelerate repository. I'm using the latest version, v0.25.0", "### Cleaning the dataset\n\nUsing 'regex' in VSCODE, use the following replacement:\n\n\n\n\n\nIn the source:\n\nTo:\n\nThen remove all import statements (as we only care about the content).\n\nStrip all blank spaces/whitespace:\n\n\nWARNING: It is known that this will seperate out the '_inner()' in the source code and use it as a seperate function losing the context. Trying out with this issue for now." ]
[ 18, 7, 58, 28, 103 ]
[ "passage: TAGS\n#language-English #license-apache-2.0 #region-us \n## Preparing the dataset### NOTICE:\n\nAll code is owned by Hugging Face and uses the Apache 2.0 Licence. While I clean and strip the dataset for processing, do note that this dataset is under the same scruteny as the original Apache 2.0 License.## Clone Repo\n\nData souce used is the accelerate repository. I'm using the latest version, v0.25.0### Cleaning the dataset\n\nUsing 'regex' in VSCODE, use the following replacement:\n\n\n\n\n\nIn the source:\n\nTo:\n\nThen remove all import statements (as we only care about the content).\n\nStrip all blank spaces/whitespace:\n\n\nWARNING: It is known that this will seperate out the '_inner()' in the source code and use it as a seperate function losing the context. Trying out with this issue for now." ]