| sha | text | id | tags | created_at | metadata | last_modified |
|---|---|---|---|---|---|---|
adeab69db72d52c045035039b6c64367ced4e007 | # Dataset Card for "bookcorpus_compact_1024_shard2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_1024_shard2_of_10 | [
"region:us"
]
| 2023-01-06T09:25:20+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 759243184, "num_examples": 61605}], "download_size": 382569803, "dataset_size": 759243184}} | 2023-01-06T09:25:48+00:00 |
de29ae7533b3715ab0d1e3cb191316d01c8c3664 | # Dataset Card for "flurocells"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zlgao/flurocells | [
"region:us"
]
| 2023-01-06T09:29:48+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "mcf7", "1": "mda231"}}}}], "splits": [{"name": "train", "num_bytes": 165402692.0, "num_examples": 203}], "download_size": 165410090, "dataset_size": 165402692.0}} | 2023-01-06T09:30:31+00:00 |
4f4c80e11df5a394a0ac8b6d78725ac4464acad1 | rayjhon/holland | [
"license:apache-2.0",
"region:us"
]
| 2023-01-06T09:51:00+00:00 | {"license": "apache-2.0"} | 2023-01-06T09:51:00+00:00 |
|
bf375d06223db27a1d2481ff985cdb1163e696b1 | # Dataset Card for "xquad_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zaid/xquad_en | [
"region:us"
]
| 2023-01-06T10:05:48+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 903196.0815126051, "num_examples": 963}, {"name": "validation", "num_bytes": 111609.9, "num_examples": 119}, {"name": "test", "num_bytes": 101293.01848739496, "num_examples": 108}], "download_size": 323403, "dataset_size": 1116099.0}} | 2023-01-06T10:06:02+00:00 |
ea10a33abb1dc44fff19d29c078181ca7ffa94df | # Dataset Card for "xquad_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zaid/xquad_ru | [
"region:us"
]
| 2023-01-06T10:06:47+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 1729326.2672268907, "num_examples": 963}, {"name": "validation", "num_bytes": 213696.6, "num_examples": 119}, {"name": "test", "num_bytes": 193943.13277310925, "num_examples": 108}], "download_size": 498595, "dataset_size": 2136966.0}} | 2023-01-06T10:07:02+00:00 |
474545964b7f14653e5de4d58cd465c5ec05e89d | # AutoTrain Dataset for project: real-vs-fake-news
## Dataset Description
This dataset has been automatically processed by AutoTrain for project real-vs-fake-news.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_title": "FBI Russia probe helped by Australian diplomat tip-off: NYT",
"text": "WASHINGTON (Reuters) - Trump campaign adviser George Papadopoulos told an Australian diplomat in May 2016 that Russia had political dirt on Democratic presidential candidate Hillary Clinton, the New York Times reported on Saturday. The conversation between Papadopoulos and the diplomat, Alexander Downer, in London was a driving factor behind the FBI\u2019s decision to open a counter-intelligence investigation of Moscow\u2019s contacts with the Trump campaign, the Times reported. Two months after the meeting, Australian officials passed the information that came from Papadopoulos to their American counterparts when leaked Democratic emails began appearing online, according to the newspaper, which cited four current and former U.S. and foreign officials. Besides the information from the Australians, the probe by the Federal Bureau of Investigation was also propelled by intelligence from other friendly governments, including the British and Dutch, the Times said. Papadopoulos, a Chicago-based international energy lawyer, pleaded guilty on Oct. 30 to lying to FBI agents about contacts with people who claimed to have ties to top Russian officials. It was the first criminal charge alleging links between the Trump campaign and Russia. The White House has played down the former aide\u2019s campaign role, saying it was \u201cextremely limited\u201d and that any actions he took would have been on his own. The New York Times, however, reported that Papadopoulos helped set up a meeting between then-candidate Donald Trump and Egyptian President Abdel Fattah al-Sisi and edited the outline of Trump\u2019s first major foreign policy speech in April 2016. The federal investigation, which is now being led by Special Counsel Robert Mueller, has hung over Trump\u2019s White House since he took office almost a year ago. Some Trump allies have recently accused Mueller\u2019s team of being biased against the Republican president. 
Lawyers for Papadopoulos did not immediately respond to requests by Reuters for comment. Mueller\u2019s office declined to comment. Trump\u2019s White House attorney, Ty Cobb, declined to comment on the New York Times report. \u201cOut of respect for the special counsel and his process, we are not commenting on matters such as this,\u201d he said in a statement. Mueller has charged four Trump associates, including Papadopoulos, in his investigation. Russia has denied interfering in the U.S. election and Trump has said there was no collusion between his campaign and Moscow. ",
"feat_subject": "politicsNews",
"feat_date": "December 30, 2017 ",
"target": 1
},
{
"feat_title": "Democrats ride grassroots wave to major statehouse gains",
"text": "(Reuters) - Democrats claimed historic gains in Virginia\u2019s statehouse and booted Republicans from state and local office across the United States on Tuesday, in the party\u2019s first big wave of victories since Republican Donald Trump\u2019s won the White House a year ago. Democrats must figure out how to turn that momentum to their advantage in November 2018 elections, when control of the U.S. Congress and scores of statehouses will be at stake. From coast to coast, Democratic victories showed grassroots resistance to Trump rallying the party\u2019s base, while independent and conservative voters appeared frustrated with the unpopular Republican leadership in Washington. Democrats won this year\u2019s races for governor in Virginia and New Jersey, but successes in legislative and local races nationwide may have revealed more about where the party stands a year into Trump\u2019s administration. Unexpectedly massive Democratic gains in Virginia\u2019s statehouse surprised even the most optimistic party loyalists in a state that has trended Democratic in recent years but remains a top target for both parties in national elections. \u201cThis is beyond our wildest expectations, to be honest,\u201d said Catherine Vaughan, co-founder of Flippable, one of several new startup progressive groups rebuilding the party at the grassroots level. With several races still too close to call, Democrats were close to flipping, or splitting, control of the Virginia House of Delegates, erasing overnight a two-to-one Republican majority. Democratic Lieutenant Governor Ralph Northam also defeated Republican Ed Gillespie by nearly nine percentage points in what had seemed a closer contest for Virginia\u2019s governor\u2019s mansion, a year after Democrat Hillary Clinton carried the state by five points in the presidential election. 
The losing candidate had employed Trump-style campaign tactics that highlighted divisive issues such as immigration, although the president did not join him on the campaign trail. In New Jersey, a Democratic presidential stronghold, voters replaced a two-term Republican governor with a Democrat and increased the party\u2019s majorities in the state legislature. Democrats notched additional wins in a Washington state Senate race that gave the party full control of the state government and in Republican-controlled Georgia, where Democrats picked up three seats in special state legislative elections. \u201cThis was the first chance that the voters got to send a message to Donald Trump and they took advantage of it,\u201d John Feehery, a Republican strategist in Washington, said by phone. The gains suggested to some election analysts that Democrats could retake the U.S. House of Representatives next year. Republicans control both the House and Senate along with the White House. Dave Wasserman, who analyzes U.S. House and statehouse races for the nonpartisan Cook Political Report, called the Virginia results a \u201ctidal wave.\u201d Even after Tuesday\u2019s gains, however, Democrats are completely locked out of power in 26 state governments. Republicans control two-thirds of U.S. legislative chambers. Desperate to rebuild, national Democrats this year showed newfound interest in legislative contests and races even farther down the ballot. The Democratic National Committee successfully invested in mayoral races from St. Petersburg, Florida, to Manchester, New Hampshire. \u201cIf there is a lesson to be taken from yesterday, it is that we need to make sure that we are competing everywhere, because Democrats can win,\u201d DNC Chairman Tom Perez said on a media call. Democratic Legislative Campaign Committee executive director Jessica Post said national party leaders must remain focused on local races, even in a congressional year. 
\u201cWe don\u2019t focus enough on the state level, and that is why we are in the place we are,\u201d she said. \u201cBut when we do, we win.\u201d ",
"feat_subject": "politicsNews",
"feat_date": "November 8, 2017 ",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_title": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_subject": "Value(dtype='string', id=None)",
"feat_date": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['Fake', 'True'], id=None)"
}
```
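The `target` field above is an integer `ClassLabel`. A minimal sketch of decoding it back to its class name, assuming the label order `['Fake', 'True']` shown in the fields listing (the helper name is illustrative, not part of the dataset):

```python
# Class names in the order given by the ClassLabel feature above.
TARGET_NAMES = ["Fake", "True"]

def decode_target(target: int) -> str:
    """Map an integer `target` value to its class name."""
    return TARGET_NAMES[target]

# Both samples shown above carry target=1, i.e. "True" (real news).
print(decode_target(1))
```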
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1598 |
| valid | 400 |
| Eip/autotrain-data-real-vs-fake-news | [
"task_categories:text-classification",
"region:us"
]
| 2023-01-06T10:10:38+00:00 | {"task_categories": ["text-classification"]} | 2023-01-06T12:20:57+00:00 |
abad918fd61f5712ff030733993a4023ace37193 | # Dataset Card for "nature128_1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mertcobanov/nature128_1k | [
"region:us"
]
| 2023-01-06T10:35:28+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "07968_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hardenbergia_violacea", "1": "07969_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hedysarum_alpinum", "2": "07970_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hedysarum_boreale", "3": "07971_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hoffmannseggia_glauca", "4": "07972_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hoffmannseggia_microphylla", "5": "07973_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hosackia_gracilis", "6": "07974_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hylodesmum_glutinosum", "7": "07975_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hylodesmum_nudiflorum", "8": "07976_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Indigofera_miniata", "9": "07977_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Kennedia_prostrata", "10": "07978_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Laburnum_anagyroides", "11": "07979_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_hirsutus", "12": "07980_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_japonicus", "13": "07986_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_tuberosus", "14": "07987_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_vernus", "15": "07988_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_vestitus", "16": "07989_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_capitata", "17": "07990_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_cuneata", "18": "07991_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_virginica", "19": "07992_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lessertia_frutescens", "20": "08013_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_texensis", "21": 
"08014_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_truncatus", "22": "08015_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_atropurpureum", "23": "08016_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_gibbosifolium", "24": "08017_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_lathyroides", "25": "08018_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_arabica", "26": "08019_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_falcata", "27": "08020_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_lupulina", "28": "08021_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_minima", "29": "08022_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_polymorpha", "30": "08023_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_sativa", "31": "08024_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_albus", "32": "08025_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_indicus", "33": "08026_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_officinalis", "34": "08049_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_laevigata", "35": "08050_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_pubescens", "36": "08051_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_velutina", "37": "08052_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_emoryi", "38": "08053_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_schottii", "39": "08054_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_spinosus", "40": "08055_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Pueraria_montana", "41": "08056_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Robinia_neomexicana", "42": "08057_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Robinia_pseudoacacia", "43": "08058_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Rupertia_physodes", "44": 
"08059_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Securigera_varia", "45": "08060_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senegalia_greggii", "46": "08061_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_alata", "47": "08062_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_armata", "48": "08063_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_covesii", "49": "09930_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Hypolepis_ambigua", "50": "09931_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Paesia_scaberula", "51": "09932_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_aquilinum", "52": "09933_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_esculentum", "53": "09934_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_pinetorum", "54": "09935_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Diplaziopsidaceae_Homalosorus_pycnocarpos", "55": "09936_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Cyrtomium_falcatum", "56": "09937_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_arguta", "57": "09938_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_carthusiana", "58": "09939_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_cristata", "59": "09940_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_expansa", "60": "09941_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_filix-mas", "61": "09942_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_fragrans", "62": "09943_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_intermedia", "63": "09944_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_marginalis", "64": 
"09945_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_acrostichoides", "65": "09946_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_lonchitis", "66": "09947_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_munitum", "67": "09948_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_neozelandicum", "68": "09949_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_vestitum", "69": "09950_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Rumohra_adiantiformis", "70": "09951_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Nephrolepidaceae_Nephrolepis_cordifolia", "71": "09952_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Onocleaceae_Matteuccia_struthiopteris", "72": "09953_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Onocleaceae_Onoclea_sensibilis", "73": "09954_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Microsorum_pustulatum", "74": "09955_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Microsorum_scandens", "75": "09956_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Notogrammitis_heterophylla", "76": "09957_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Phlebodium_aureum", "77": "09958_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Pleopeltis_michauxiana", "78": "09959_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_californicum", "79": "09960_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_glycyrrhiza", "80": "09961_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_scouleri", "81": "09962_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_virginianum", "82": "09963_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_vulgare", "83": 
"09964_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Pyrrosia_eleagnifolia", "84": "09965_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Acrostichum_danaeifolium", "85": "09966_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_aleuticum", "86": "09967_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_capillus-veneris", "87": "09968_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_cunninghamii", "88": "09969_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_hispidulum", "89": "09970_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_jordanii", "90": "09971_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_pedatum", "91": "09972_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Aspidotis_densa", "92": "09973_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Astrolepis_sinuata", "93": "09974_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Cryptogramma_acrostichoides", "94": "09975_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_alabamensis", "95": "09976_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_aurea", "96": "09977_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_parryi", "97": "09978_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_andromedifolia", "98": "09979_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_atropurpurea", "99": "09980_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_glabella", "100": "09981_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_mucronata", "101": "09982_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_rotundifolia", "102": "09983_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pentagramma_triangularis", "103": 
"09984_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_cretica", "104": "09985_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_macilenta", "105": "09986_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_tremula", "106": "09987_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_vittata", "107": "09988_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Parathelypteris_noveboracensis", "108": "09989_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Phegopteris_connectilis", "109": "09990_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Phegopteris_hexagonoptera", "110": "09991_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Pneumatopteris_pennigera", "111": "09992_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Thelypteris_palustris", "112": "09993_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Woodsiaceae_Woodsia_ilvensis", "113": "09994_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Woodsiaceae_Woodsia_obtusa", "114": "09995_Plantae_Tracheophyta_Polypodiopsida_Psilotales_Psilotaceae_Psilotum_nudum", "115": "09996_Plantae_Tracheophyta_Polypodiopsida_Psilotales_Psilotaceae_Tmesipteris_elongata", "116": "09997_Plantae_Tracheophyta_Polypodiopsida_Salviniales_Salviniaceae_Azolla_filiculoides", "117": "09998_Plantae_Tracheophyta_Polypodiopsida_Salviniales_Salviniaceae_Salvinia_minima", "118": "09999_Plantae_Tracheophyta_Polypodiopsida_Schizaeales_Lygodiaceae_Lygodium_japonicum"}}}}], "splits": [{"name": "train", "num_bytes": 130554746.56, "num_examples": 1190}], "download_size": 132054218, "dataset_size": 130554746.56}} | 2023-01-06T10:37:33+00:00 |
3ffa1d960d70aa5538559b98ae57731bda5c067e | LuffyTheFox/GenshinPortraits | [
"license:creativeml-openrail-m",
"region:us"
]
| 2023-01-06T11:10:28+00:00 | {"license": "creativeml-openrail-m"} | 2023-02-02T17:14:33+00:00 |
|
658b0a48276f029ac6907647ee9e1b76e896d1fc | # Dataset Card for "temp_repo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pyakymenko/temp_repo | [
"region:us"
]
| 2023-01-06T11:41:52+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 226855.0, "num_examples": 4}], "download_size": 0, "dataset_size": 226855.0}} | 2023-01-06T12:03:02+00:00 |
7276a5670ff72438e60ac95c54e5ed25672bae30 | # Dataset Card for "gids"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [RE-DS-Word-Attention-Models](https://github.com/SharmisthaJat/RE-DS-Word-Attention-Models/tree/master/Data/GIDS)
- **Paper:** [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB
### Dataset Summary
The Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction.
GIDS is seeded from the human-judged Google relation extraction corpus.
See the paper for full details: [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
Note:
- There is a formatted version that you can load with `datasets.load_dataset('gids', name='gids_formatted')`. This version is tokenized with spaCy, removes the underscores in the entities and provides entity offsets.
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### gids
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 8.5 MB
An example of 'train' looks as follows:
```json
{
"sentence": "War as appropriate. Private Alfred James_Smurthwaite Sample. 26614. 2nd Battalion Yorkshire Regiment. Son of Edward James Sample, of North_Ormesby , Yorks. Died 2 April 1917. Aged 29. Born Ormesby, Enlisted Middlesbrough. Buried BUCQUOY ROAD CEMETERY, FICHEUX. Not listed on the Middlesbrough War Memorial Private Frederick Scott. 46449. 4th Battalion Yorkshire Regiment. Son of William and Maria Scott, of 25, Aspinall St., Heywood, Lancs. Born at West Hartlepool. Died 27 May 1918. Aged 24.",
"subj_id": "/m/02qt0sv",
"obj_id": "/m/0fnhl9",
"subj_text": "James_Smurthwaite",
"obj_text": "North_Ormesby",
"relation": 4
}
```
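The integer `relation` field can be decoded with the id-to-name mapping listed under "Data Fields". A minimal sketch (plain dict lookup; not an official API of the dataset):

```python
# Relation id -> name mapping, as listed under "Data Fields".
RELATION_NAMES = {
    0: "NA",
    1: "/people/person/education./education/education/institution",
    2: "/people/person/education./education/education/degree",
    3: "/people/person/place_of_birth",
    4: "/people/deceased_person/place_of_death",
}

# The 'train' example above has relation=4.
example = {"subj_text": "James_Smurthwaite",
           "obj_text": "North_Ormesby",
           "relation": 4}
print(RELATION_NAMES[example["relation"]])  # /people/deceased_person/place_of_death
```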
#### gids_formatted
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB
An example of 'train' looks as follows:
```json
{
"token": ["announced", "he", "had", "closed", "shop", ".", "Mary", "D.", "Crisp", "Coyle", "opened", "in", "1951", ".", "Stoffey", ",", "a", "Maricopa", "County", "/", "Phoenix", "city", "resident", "and", "longtime", "customer", ",", "bought", "the", "business", "in", "2011", ",", "when", "then", "owners", "were", "facing", "closure", ".", "He", "renovated", "the", "diner", "is", "interior", ",", "increased", "training", "for", "staff", "and", "expanded", "the", "menu", "."],
"subj_start": 6,
"subj_end": 9,
"obj_start": 17,
"obj_end": 22,
"relation": 4
}
```
### Data Fields
The data fields are the same among all splits.
#### gids
- `sentence`: the sentence, a `string` feature.
- `subj_id`: the id of the relation subject mention, a `string` feature.
- `obj_id`: the id of the relation object mention, a `string` feature.
- `subj_text`: the text of the relation subject mention, a `string` feature.
- `obj_text`: the text of the relation object mention, a `string` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4}
```
#### gids_formatted
- `token`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4}
```
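Since the offsets are 0-based and end-exclusive, a mention can be recovered with an ordinary Python slice. A minimal sketch using a shortened copy of the `gids_formatted` example above:

```python
# Reconstruct the subject mention from a gids_formatted instance using the
# 0-based, end-exclusive token offsets described above.
example = {
    "token": ["announced", "he", "had", "closed", "shop", ".", "Mary", "D.",
              "Crisp", "Coyle", "opened", "in", "1951", "."],
    "subj_start": 6,
    "subj_end": 9,
}

# token[6:9] selects tokens 6, 7 and 8 (index 9 is excluded).
subj = " ".join(example["token"][example["subj_start"]:example["subj_end"]])
print(subj)  # Mary D. Crisp
```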
### Data Splits
| | Train | Dev | Test |
|------|-------|------|------|
| GIDS | 11297 | 1864 | 5663 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/abs-1804-06987,
author = {Sharmistha Jat and
Siddhesh Khandelwal and
Partha P. Talukdar},
title = {Improving Distantly Supervised Relation Extraction using Word and
Entity Based Attention},
journal = {CoRR},
volume = {abs/1804.06987},
year = {2018},
url = {http://arxiv.org/abs/1804.06987},
eprinttype = {arXiv},
eprint = {1804.06987},
timestamp = {Fri, 15 Nov 2019 17:16:02 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1804-06987.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | DFKI-SLT/gids | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended|other",
"language:en",
"license:other",
"relation extraction",
"arxiv:1804.06987",
"region:us"
]
| 2023-01-06T12:24:59+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "source_datasets": ["extended|other"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "Google-IISc Distant Supervision (GIDS) dataset for distantly-supervised relation extraction", "tags": ["relation extraction"], "dataset_info": [{"config_name": "gids", "features": [{"name": "sentence", "dtype": "string"}, {"name": "subj_id", "dtype": "string"}, {"name": "obj_id", "dtype": "string"}, {"name": "subj_text", "dtype": "string"}, {"name": "obj_text", "dtype": "string"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "NA", "1": "/people/person/education./education/education/institution", "2": "/people/person/education./education/education/degree", "3": "/people/person/place_of_birth", "4": "/people/deceased_person/place_of_death"}}}}], "splits": [{"name": "train", "num_bytes": 5088421, "num_examples": 11297}, {"name": "validation", "num_bytes": 844784, "num_examples": 1864}, {"name": "test", "num_bytes": 2568673, "num_examples": 5663}], "download_size": 8941490, "dataset_size": 8501878}, {"config_name": "gids_formatted", "features": [{"name": "token", "sequence": "string"}, {"name": "subj_start", "dtype": "int32"}, {"name": "subj_end", "dtype": "int32"}, {"name": "obj_start", "dtype": "int32"}, {"name": "obj_end", "dtype": "int32"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "NA", "1": "/people/person/education./education/education/institution", "2": "/people/person/education./education/education/degree", "3": "/people/person/place_of_birth", "4": "/people/deceased_person/place_of_death"}}}}], "splits": [{"name": "train", "num_bytes": 7075362, "num_examples": 11297}, {"name": "validation", "num_bytes": 1173957, "num_examples": 1864}, {"name": "test", "num_bytes": 3573706, 
"num_examples": 5663}], "download_size": 8941490, "dataset_size": 11823025}]} | 2023-01-11T10:06:07+00:00 |
617b89fed951bf7702c2e688c8dadc6a1cd64787 | # Dataset Card for "kbp37"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [kbp37](https://github.com/zhangdongxu/kbp37)
- **Paper:** [Relation Classification via Recurrent Neural Network](https://arxiv.org/abs/1508.01006)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 6.58 MB
### Dataset Summary
KBP37 is a revision of the MIML-RE annotation dataset provided by Gabor Angeli et al. (2014). They use both the 2010 and
2013 KBP official document collections, as well as a July 2013 dump of Wikipedia, as the text corpus for annotation.
A total of 33,811 sentences were annotated. Zhang and Wang made several refinements:
1. They add direction to the relation names, e.g. '`per:employee_of`' is split into '`per:employee_of(e1,e2)`'
and '`per:employee_of(e2,e1)`'. They also replace '`org:parents`' with '`org:subsidiaries`' and replace
'`org:member_of`' with '`org:members`' (by their reverse directions).
2. They discard low-frequency relations such that both directions of each remaining relation occur more than 100 times in the
dataset.
KBP37 contains 18 relations, each annotated in both directions, plus an additional '`no_relation`' class, resulting in 37 relation classes.
Note:
- There is a formatted version that you can load with `datasets.load_dataset('kbp37', name='kbp37_formatted')`. This version is tokenized with `str.split()` and
provides entities as token offsets instead of enclosing them in XML-style tags. However, it discards some examples that are invalid in the original dataset and lead
to entity offset errors, e.g. example train/1276.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in KBP37 is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### kbp37
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 4.7 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"sentence": "<e1> Thom Yorke </e1> of <e2> Radiohead </e2> has included the + for many of his signature distortion sounds using a variety of guitars to achieve various tonal options .",
"relation": 27
}
```
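The entity arguments in the raw `kbp37` config are enclosed in `<e1>`/`<e2>` tags. A minimal sketch of pulling them out with the standard library (this helper is illustrative, not part of the dataset loader):

```python
import re

def extract_entities(sentence):
    """Pull the <e1>/<e2> argument spans out of a raw kbp37 sentence."""
    e1 = re.search(r"<e1>\s*(.*?)\s*</e1>", sentence).group(1)
    e2 = re.search(r"<e2>\s*(.*?)\s*</e2>", sentence).group(1)
    return e1, e2

sentence = ("<e1> Thom Yorke </e1> of <e2> Radiohead </e2> has included the + "
            "for many of his signature distortion sounds using a variety of "
            "guitars to achieve various tonal options .")
print(extract_entities(sentence))  # ('Thom Yorke', 'Radiohead')
```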
#### kbp37_formatted
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 6.58 MB
An example of 'train' looks as follows:
```json
{
"id": "1",
"token": ["Leland", "High", "School", "is", "a", "public", "high", "school", "located", "in", "the", "Almaden", "Valley", "in", "San", "Jose", "California", "USA", "in", "the", "San", "Jose", "Unified", "School", "District", "."],
"e1_start": 0,
"e1_end": 3,
"e2_start": 14,
"e2_end": 16,
"relation": 3
}
```
### Data Fields
#### kbp37
- `id`: the instance id of this sentence, a `string` feature.
- `sentence`: the sentence, a `string` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"no_relation": 0, "org:alternate_names(e1,e2)": 1, "org:alternate_names(e2,e1)": 2, "org:city_of_headquarters(e1,e2)": 3, "org:city_of_headquarters(e2,e1)": 4, "org:country_of_headquarters(e1,e2)": 5, "org:country_of_headquarters(e2,e1)": 6, "org:founded(e1,e2)": 7, "org:founded(e2,e1)": 8, "org:founded_by(e1,e2)": 9, "org:founded_by(e2,e1)": 10, "org:members(e1,e2)": 11, "org:members(e2,e1)": 12, "org:stateorprovince_of_headquarters(e1,e2)": 13, "org:stateorprovince_of_headquarters(e2,e1)": 14, "org:subsidiaries(e1,e2)": 15, "org:subsidiaries(e2,e1)": 16, "org:top_members/employees(e1,e2)": 17, "org:top_members/employees(e2,e1)": 18, "per:alternate_names(e1,e2)": 19, "per:alternate_names(e2,e1)": 20, "per:cities_of_residence(e1,e2)": 21, "per:cities_of_residence(e2,e1)": 22, "per:countries_of_residence(e1,e2)": 23, "per:countries_of_residence(e2,e1)": 24, "per:country_of_birth(e1,e2)": 25, "per:country_of_birth(e2,e1)": 26, "per:employee_of(e1,e2)": 27, "per:employee_of(e2,e1)": 28, "per:origin(e1,e2)": 29, "per:origin(e2,e1)": 30, "per:spouse(e1,e2)": 31, "per:spouse(e2,e1)": 32, "per:stateorprovinces_of_residence(e1,e2)": 33, "per:stateorprovinces_of_residence(e2,e1)": 34, "per:title(e1,e2)": 35, "per:title(e2,e1)": 36}
```
#### kbp37_formatted
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, using `str.split()`, a `list` of `string` features.
- `e1_start`: the 0-based index of the start token of the first argument, an `int` feature.
- `e1_end`: the 0-based index of the end token of the first argument, exclusive, an `int` feature.
- `e2_start`: the 0-based index of the start token of the second argument, an `int` feature.
- `e2_end`: the 0-based index of the end token of the second argument, exclusive, an `int` feature.
- `relation`: the relation label of this instance, an `int` classification label (same as `kbp37`).
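Because the offsets are 0-based and end-exclusive, the argument surface forms can be recovered with plain list slicing. A small illustration using the `kbp37_formatted` train example shown earlier (the offsets below are copied from that instance, not computed by any dataset utility):

```python
example = {
    "token": ["Leland", "High", "School", "is", "a", "public", "high", "school",
              "located", "in", "the", "Almaden", "Valley", "in", "San", "Jose",
              "California", "USA", "in", "the", "San", "Jose", "Unified",
              "School", "District", "."],
    "e1_start": 0, "e1_end": 3,
    "e2_start": 14, "e2_end": 16,
}

# End-exclusive slices: token[start:end] yields exactly the argument tokens.
e1 = " ".join(example["token"][example["e1_start"]:example["e1_end"]])
e2 = " ".join(example["token"][example["e2_start"]:example["e2_end"]])
print(e1, "|", e2)  # Leland High School | San Jose
```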
### Data Splits
| | Train | Dev | Test |
|-------|-------|------|------|
| kbp37 | 15917 | 1724 | 3405 |
| kbp37_formatted | 15807 | 1714 | 3379 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/ZhangW15a,
author = {Dongxu Zhang and
Dong Wang},
title = {Relation Classification via Recurrent Neural Network},
journal = {CoRR},
volume = {abs/1508.01006},
year = {2015},
url = {http://arxiv.org/abs/1508.01006},
eprinttype = {arXiv},
eprint = {1508.01006},
timestamp = {Fri, 04 Nov 2022 18:37:50 +0100},
biburl = {https://dblp.org/rec/journals/corr/ZhangW15a.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | DFKI-SLT/kbp37 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:other",
"relation extraction",
"arxiv:1508.01006",
"region:us"
]
| 2023-01-06T12:26:09+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "KBP37 is an English Relation Classification dataset", "tags": ["relation extraction"], "dataset_info": [{"config_name": "kbp37", "features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "no_relation", "1": "org:alternate_names(e1,e2)", "2": "org:alternate_names(e2,e1)", "3": "org:city_of_headquarters(e1,e2)", "4": "org:city_of_headquarters(e2,e1)", "5": "org:country_of_headquarters(e1,e2)", "6": "org:country_of_headquarters(e2,e1)", "7": "org:founded(e1,e2)", "8": "org:founded(e2,e1)", "9": "org:founded_by(e1,e2)", "10": "org:founded_by(e2,e1)", "11": "org:members(e1,e2)", "12": "org:members(e2,e1)", "13": "org:stateorprovince_of_headquarters(e1,e2)", "14": "org:stateorprovince_of_headquarters(e2,e1)", "15": "org:subsidiaries(e1,e2)", "16": "org:subsidiaries(e2,e1)", "17": "org:top_members/employees(e1,e2)", "18": "org:top_members/employees(e2,e1)", "19": "per:alternate_names(e1,e2)", "20": "per:alternate_names(e2,e1)", "21": "per:cities_of_residence(e1,e2)", "22": "per:cities_of_residence(e2,e1)", "23": "per:countries_of_residence(e1,e2)", "24": "per:countries_of_residence(e2,e1)", "25": "per:country_of_birth(e1,e2)", "26": "per:country_of_birth(e2,e1)", "27": "per:employee_of(e1,e2)", "28": "per:employee_of(e2,e1)", "29": "per:origin(e1,e2)", "30": "per:origin(e2,e1)", "31": "per:spouse(e1,e2)", "32": "per:spouse(e2,e1)", "33": "per:stateorprovinces_of_residence(e1,e2)", "34": "per:stateorprovinces_of_residence(e2,e1)", "35": "per:title(e1,e2)", "36": "per:title(e2,e1)"}}}}], "splits": [{"name": "train", "num_bytes": 
3570626, "num_examples": 15917}, {"name": "validation", "num_bytes": 388935, "num_examples": 1724}, {"name": "test", "num_bytes": 762806, "num_examples": 3405}], "download_size": 5106673, "dataset_size": 4722367}, {"config_name": "kbp37_formatted", "features": [{"name": "id", "dtype": "string"}, {"name": "token", "sequence": "string"}, {"name": "e1_start", "dtype": "int32"}, {"name": "e1_end", "dtype": "int32"}, {"name": "e2_start", "dtype": "int32"}, {"name": "e2_end", "dtype": "int32"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "no_relation", "1": "org:alternate_names(e1,e2)", "2": "org:alternate_names(e2,e1)", "3": "org:city_of_headquarters(e1,e2)", "4": "org:city_of_headquarters(e2,e1)", "5": "org:country_of_headquarters(e1,e2)", "6": "org:country_of_headquarters(e2,e1)", "7": "org:founded(e1,e2)", "8": "org:founded(e2,e1)", "9": "org:founded_by(e1,e2)", "10": "org:founded_by(e2,e1)", "11": "org:members(e1,e2)", "12": "org:members(e2,e1)", "13": "org:stateorprovince_of_headquarters(e1,e2)", "14": "org:stateorprovince_of_headquarters(e2,e1)", "15": "org:subsidiaries(e1,e2)", "16": "org:subsidiaries(e2,e1)", "17": "org:top_members/employees(e1,e2)", "18": "org:top_members/employees(e2,e1)", "19": "per:alternate_names(e1,e2)", "20": "per:alternate_names(e2,e1)", "21": "per:cities_of_residence(e1,e2)", "22": "per:cities_of_residence(e2,e1)", "23": "per:countries_of_residence(e1,e2)", "24": "per:countries_of_residence(e2,e1)", "25": "per:country_of_birth(e1,e2)", "26": "per:country_of_birth(e2,e1)", "27": "per:employee_of(e1,e2)", "28": "per:employee_of(e2,e1)", "29": "per:origin(e1,e2)", "30": "per:origin(e2,e1)", "31": "per:spouse(e1,e2)", "32": "per:spouse(e2,e1)", "33": "per:stateorprovinces_of_residence(e1,e2)", "34": "per:stateorprovinces_of_residence(e2,e1)", "35": "per:title(e1,e2)", "36": "per:title(e2,e1)"}}}}], "splits": [{"name": "train", "num_bytes": 4943394, "num_examples": 15807}, {"name": "validation", "num_bytes": 539197, 
"num_examples": 1714}, {"name": "test", "num_bytes": 1055918, "num_examples": 3379}], "download_size": 5106673, "dataset_size": 6581345}]} | 2023-04-27T12:04:14+00:00 |
93a61f1639ee7e810abc309dc6ac345c0b8affa9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/tglobal-large-booksum-WIP4-r1
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-9d5680-2758781772 | [
"autotrain",
"evaluation",
"region:us"
]
| 2023-01-06T12:59:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/tglobal-large-booksum-WIP4-r1", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2023-01-06T14:35:56+00:00 |
93858a3e4e331c5ac6da0d49fbc77268fab96f69 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/tglobal-large-booksum-WIP4-r1
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-08013b-2758881773 | [
"autotrain",
"evaluation",
"region:us"
]
| 2023-01-06T12:59:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/tglobal-large-booksum-WIP4-r1", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2023-01-06T13:08:50+00:00 |
9bd3af8db8e3304f622c9f6f3bcf5cafe008e5d1 | arnepeine/icu_medications | [
"task_categories:automatic-speech-recognition",
"language:de",
"license:other",
"region:us"
]
| 2023-01-06T13:10:36+00:00 | {"language": ["de"], "license": "other", "task_categories": ["automatic-speech-recognition"], "pretty_name": "ICU Medication Dataset"} | 2023-01-06T13:12:08+00:00 |
|
d050610418f468b8774c9f1f6ca812515e170d20 | This repository holds embeddings for Stable Diffusion 2 768 | Zabin/SD2_768_Embedding | [
"region:us"
]
| 2023-01-06T13:14:55+00:00 | {} | 2023-01-21T06:08:15+00:00 |
272e365c209e906bac69e0686fbdc8f55796cf51 | metaeval/utilitarianism | [
"license:apache-2.0",
"region:us"
]
| 2023-01-06T13:23:13+00:00 | {"license": "apache-2.0"} | 2023-01-06T13:41:50+00:00 |
|
b6a4440982231c4bf33321bae7d26784504afc04 | # Dataset Card for "test_repo_111"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arnepeine/test_repo_111 | [
"region:us"
]
| 2023-01-06T13:28:55+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39116602.0, "num_examples": 502}], "download_size": 38127697, "dataset_size": 39116602.0}} | 2023-01-07T09:39:25+00:00 |
345d7e47b6c3dc03c26436466b077e5452d74fbe | Geawher/Entityrecongnitionjobs | [
"task_categories:token-classification",
"region:us"
]
| 2023-01-06T13:29:45+00:00 | {"task_categories": ["token-classification"]} | 2023-01-06T22:22:56+00:00 |
|
a677a23997beb9f0339567b4a7d1e567a9609765 | # Dataset Card for "owczpodh-dog-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | misza222/owczpodh-dog-results | [
"region:us"
]
| 2023-01-06T13:44:44+00:00 | {"dataset_info": {"features": [{"name": "images", "dtype": "image"}, {"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3813312.0, "num_examples": 8}], "download_size": 3814513, "dataset_size": 3813312.0}} | 2023-01-06T13:49:00+00:00 |
ac561acc3a27ad78d0f159393f048140d6308dab | # Dataset Card for "OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | misza222/OwczarekPodhalanski-dog-lr1e-06-max_train_steps800-results | [
"region:us"
]
| 2023-01-06T14:26:49+00:00 | {"dataset_info": {"features": [{"name": "images", "dtype": "image"}, {"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5281596.0, "num_examples": 12}], "download_size": 5282716, "dataset_size": 5281596.0}} | 2023-01-06T14:27:01+00:00 |
c3741a66c486b1a23beefdf6c75b06dba288d4f9 |
__ODEX__ is an Open-Domain EXecution-based benchmark for natural-language-to-code generation.
It contains 945 samples with a total of 1,707 human-written test cases, covering intents in four natural languages: 439 in English, 90 in Spanish, 164 in Japanese, and 252 in Russian.
You can load the dataset by specifying a subset from *en, es, ja, ru* (by default, the English subset *en* is loaded):
```python
from datasets import load_dataset
ds = load_dataset("neulab/odex", "ja", split="test")
```
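ODEX scores generated code by executing it against the human-written test cases rather than by string matching. The authors' actual harness lives in their repository; the toy sketch below only illustrates the execution-based idea, and the candidate snippet and test are made up for this example:

```python
def passes_tests(candidate_code, test_code):
    """Run a generated snippet, then its unit test, in a shared namespace."""
    namespace = {}
    try:
        exec(candidate_code, namespace)  # define the candidate solution
        exec(test_code, namespace)       # raises AssertionError on failure
        return True
    except Exception:
        return False

candidate = "def add(a, b):\n    return a + b\n"
test = "assert add(2, 3) == 5\n"
print(passes_tests(candidate, test))  # True
```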
If you find our dataset useful, please cite the paper
```
@article{wang2022execution,
title={Execution-Based Evaluation for Open-Domain Code Generation},
  author={Zhiruo Wang and Shuyan Zhou and Daniel Fried and Graham Neubig},
journal={arXiv preprint arXiv:2212.10481},
year={2022}
}
``` | neulab/odex | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"language:es",
"language:ja",
"language:ru",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-01-06T14:30:00+00:00 | {"language": ["en", "es", "ja", "ru"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["text2text-generation", "text-generation"]} | 2023-02-10T18:01:34+00:00 |
649356656e0639acacea52ee9986c421c6196a6e | # Dataset Card for "OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | misza222/OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results | [
"region:us"
]
| 2023-01-06T14:37:09+00:00 | {"dataset_info": {"features": [{"name": "images", "dtype": "image"}, {"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2753767.0, "num_examples": 6}], "download_size": 2755049, "dataset_size": 2753767.0}} | 2023-01-06T16:09:48+00:00 |
a81c149b02cbb87a7d5f3fa37ff1edf01bebda76 | # Dataset Card for "dreambooth-hackathon-Daphnia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | misza222/dreambooth-hackathon-Daphnia | [
"region:us"
]
| 2023-01-06T15:02:34+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2288884.0, "num_examples": 9}], "download_size": 2242120, "dataset_size": 2288884.0}} | 2023-01-06T15:02:40+00:00 |
f6b53caa62bc535e9e71ab39541de447a25e055a | # Dataset Card for "SBC_segmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NathanRoll/SBC_segmented | [
"region:us"
]
| 2023-01-06T15:17:33+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4103960228.735, "num_examples": 8573}, {"name": "test", "num_bytes": 318277804.0, "num_examples": 728}], "download_size": 3703460386, "dataset_size": 4422238032.735001}} | 2023-01-12T21:03:12+00:00 |
30a01d83ee8f222d39c37f261cc75ce5a89188b6 | # Dataset Card for "dreambooth-hackathon-RobertMazurek"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | misza222/dreambooth-hackathon-RobertMazurek | [
"region:us"
]
| 2023-01-06T15:50:20+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1320903.0, "num_examples": 12}], "download_size": 1321819, "dataset_size": 1320903.0}} | 2023-01-06T15:50:26+00:00 |
c5c38c2398d4bde5fbb2a30f7036a9a2a9c1a829 | ## Testing | 0x0x0/autotrain-data-giri | [
"region:us"
]
| 2023-01-06T15:56:23+00:00 | {} | 2023-02-05T21:01:05+00:00 |
3a92822cb07f4d7d054896232fd8869a13d15d81 |
# Dataset Card for Multilingual Grammar Error Correction
## Dataset Description
- **Homepage:** https://juancavallotti.com
- **Paper:** https://blog.juancavallotti.com/2023/01/06/training-a-multi-language-grammar-error-correction-system/
- **Point of Contact:** Juan Alberto López Cavallotti
### Dataset Summary
This dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German.
This dataset was developed as a component for the [Squidigies](https://squidgies.app/) platform.
### Supported Tasks and Leaderboards
* **Grammar Error Correction:** by appending the prefix *fix grammar:* to the prompt.
* **Language Detection:** by appending the prefix *language:* to the prompt.
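A sketch of how prompts for a T5-style model would be built from these prefixes (only the prefix strings come from this card; the model itself is not loaded here, and the example sentences are made up):

```python
def make_prompt(task, sentence):
    """Prepend the task prefix used by this dataset to an input sentence."""
    prefixes = {"gec": "fix grammar: ", "langid": "language: "}
    return prefixes[task] + sentence

print(make_prompt("gec", "She don't likes apples."))
# fix grammar: She don't likes apples.
print(make_prompt("langid", "¿Dónde está la biblioteca?"))
# language: ¿Dónde está la biblioteca?
```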
### Languages
* English
* Spanish
* French
* German
## Dataset Structure
### Data Instances
The dataset contains the following instances for each language:
* German: 32,282 sentences.
* English: 51,393 sentences.
* Spanish: 67,672 sentences.
* French: 67,157 sentences.
### Data Fields
* `lang`: The language of the sentence.
* `sentence`: The original sentence.
* `modified`: The corrupted sentence.
* `transformation`: The primary transformation used by the synthetic data generator.
* `sec_transformation`: The secondary transformation (if any) used by the synthetic data generator.
### Data Splits
* `train`: There isn't a specific split defined. I recommend evaluating on 1k sentences sampled randomly from each language, using the SacreBLEU metric.
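The per-language hold-out suggested above can be sketched as follows. This runs on toy records for illustration; with the real dataset the rows would come from `datasets.load_dataset` and `n` would be 1000:

```python
import random
from collections import defaultdict

def sample_eval_split(rows, n, seed=42):
    """Randomly hold out up to n rows per language for evaluation."""
    by_lang = defaultdict(list)
    for row in rows:
        by_lang[row["lang"]].append(row)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    held_out = []
    for lang, group in by_lang.items():
        rng.shuffle(group)
        held_out.extend(group[:n])
    return held_out

rows = [{"lang": l, "sentence": f"s{i}"} for l in ("en", "es") for i in range(5)]
eval_rows = sample_eval_split(rows, n=2)
print(len(eval_rows))  # 4
```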
## Dataset Creation
### Curation Rationale
This dataset was generated synthetically through code, with the help of information about common grammar errors harvested from the internet.
### Source Data
#### Initial Data Collection and Normalization
The source grammatical sentences come from various open-source datasets, such as Tatoeba.
#### Who are the source language producers?
* Juan Alberto López Cavallotti
### Annotations
#### Annotation process
The annotation is automatic and produced by the generation script.
#### Who are the annotators?
* Data generation script by Juan Alberto López Cavallotti
### Other Known Limitations
The dataset doesn't cover all possible grammar errors, but it serves as a starting point that produces fair results.
## Additional Information
### Dataset Curators
* Juan Alberto López Cavallotti
### Licensing Information
This dataset is distributed under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0)
### Citation Information
Please mention this original dataset and the author **Juan Alberto López Cavallotti**
### Contributions
* Juan Alberto López Cavallotti | juancavallotti/multilingual-gec | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:es",
"language:fr",
"language:de",
"license:apache-2.0",
"grammar",
"gec",
"multi language",
"language detection",
"region:us"
]
| 2023-01-06T16:07:20+00:00 | {"language": ["en", "es", "fr", "de"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["translation"], "pretty_name": "Multi Lingual Grammar Error Correction Dataset", "author": "Juan Alberto L\u00f3pez Cavallotti", "date": "Jan 6, 2023", "tags": ["grammar", "gec", "multi language", "language detection"]} | 2023-01-06T18:59:59+00:00 |
647ce0c45bd62ba02f11b7104124cf5fdb330bc1 | FatmaZahraZ/JobDecriptionsEntityRecognition | [
"region:us"
]
| 2023-01-06T16:07:59+00:00 | {} | 2023-01-06T21:40:48+00:00 |
|
66c85606ecdd55bcf2c7d44145e966a3fdba0b28 | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | Achitha/tamildata | [
"task_categories:automatic-speech-recognition",
"language:ta",
"region:us"
]
| 2023-01-06T17:10:31+00:00 | {"language": ["ta"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "tamildata"} | 2023-01-08T15:35:38+00:00 |
5c3969941fae613b5c7646fd1112bfb1495b9d8c | MeetMeAt92/arcane-cyberpunk-random | [
"license:afl-3.0",
"region:us"
]
| 2023-01-06T18:57:05+00:00 | {"license": "afl-3.0"} | 2023-01-06T18:57:05+00:00 |
|
aeaf41084ab8c9611db53489805b0bd294985812 | arshiaHP76x/sennzan.py | [
"region:us"
]
| 2023-01-06T22:26:03+00:00 | {} | 2023-01-07T06:25:05+00:00 |
|
9f62b44bacade997a5b23ec05fb37874013e4010 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/tglobal-large-booksum-WIP3-K-r4
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-ee4836-2761681799 | [
"autotrain",
"evaluation",
"region:us"
]
| 2023-01-06T23:08:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/tglobal-large-booksum-WIP3-K-r4", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2023-01-07T00:06:34+00:00 |
4fcb5b9a0332dda9b7a80d7a4ebc15fb337b9e0b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/tglobal-large-booksum-WIP3-K-r4
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-b53b11-2761781800 | [
"autotrain",
"evaluation",
"region:us"
]
| 2023-01-06T23:08:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/tglobal-large-booksum-WIP3-K-r4", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2023-01-06T23:15:44+00:00 |
e96e7541e24931c5a2b7d0018865b666ad5dca0f | # Dataset Card for "pubtator-central-bigbio-kb-2022-12-18"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gabrielaltay/pubtator-central-bigbio-kb-2022-12-18 | [
"region:us"
]
| 2023-01-07T05:19:49+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document_id", "dtype": "string"}, {"name": "passages", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "text", "sequence": "string"}, {"name": "offsets", "sequence": {"list": "int32"}}]}, {"name": "entities", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "text", "sequence": "string"}, {"name": "offsets", "sequence": {"list": "int32"}}, {"name": "normalized", "list": [{"name": "db_name", "dtype": "string"}, {"name": "db_id", "dtype": "string"}]}]}, {"name": "events", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "trigger", "struct": [{"name": "text", "sequence": "string"}, {"name": "offsets", "sequence": {"list": "int32"}}]}, {"name": "arguments", "list": [{"name": "role", "dtype": "string"}, {"name": "ref_id", "dtype": "string"}]}]}, {"name": "coreferences", "list": [{"name": "id", "dtype": "string"}, {"name": "entity_ids", "sequence": "string"}]}, {"name": "relations", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "arg1_id", "dtype": "string"}, {"name": "arg2_id", "dtype": "string"}, {"name": "normalized", "list": [{"name": "db_name", "dtype": "string"}, {"name": "db_id", "dtype": "string"}]}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101493304127, "num_examples": 33653973}, {"name": "validation", "num_bytes": 2115702473, "num_examples": 701124}, {"name": "test", "num_bytes": 2117460487, "num_examples": 701125}], "download_size": 49786905438, "dataset_size": 105726467087}} | 2023-01-07T05:51:13+00:00 |
f9b9d7da64666366196be6a96fb1404002709761 | adhikasp/hackernews | [
"license:unknown",
"region:us"
]
| 2023-01-07T07:19:01+00:00 | {"license": "unknown"} | 2023-01-07T07:19:01+00:00 |
|
6fc10e1dafa2047633e1376b0324ae11c61ad30b |
# Style Embedding - illl_liil

## Usage
To use an embedding, download the .pt file and place it in "\stable-diffusion-webui\embeddings".
In your prompt, write ```"illl_liil_style-15000"```.
## Original Artist
https://twitter.com/llii_ilil
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | kxly/illl_liil_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2023-01-07T07:39:31+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "pretty_name": "illl_liil Style", "thumbnail": "https://huggingface.co/datasets/kxly/illl_liil_style/blob/main/illl_liil_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2023-01-07T07:47:55+00:00 |
4ba4d6bbe054c63542a9d455489f3e6372240167 | # Dataset Card for "mel_spectogram_bird_audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rachit8562/mel_spectogram_bird_audio | [
"region:us"
]
| 2023-01-07T08:02:49+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Chlorischloris", "1": "Columbapalumbus", "2": "Corvusfrugilegus", "3": "Delichonurbicum", "4": "Dendrocoposmajor", "5": "Passermontanus", "6": "Phoenicurusochruros", "7": "Sittaeuropaea", "8": "Turdusmerula", "9": "Turduspilaris"}}}}], "splits": [{"name": "train", "num_bytes": 1732741674.28153, "num_examples": 61376}, {"name": "test", "num_bytes": 311839995.5024702, "num_examples": 10832}], "download_size": 1955670248, "dataset_size": 2044581669.7840002}} | 2023-01-07T08:18:21+00:00 |
f2708e1df214319cb925fe73d9b229dd9a236b15 | # Dataset Card for "phone-recognition-generated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nithiwat/phone-recognition-generated | [
"region:us"
]
| 2023-01-07T09:22:39+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "ipa", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1355048763.96, "num_examples": 6860}], "download_size": 966944673, "dataset_size": 1355048763.96}} | 2023-01-07T09:49:37+00:00 |
75bafc6a17b4bdbf5ae1ea5ef04e3b5e5fd5a01f | # Dataset Card for "new_test_repo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pyakymenko/new_test_repo | [
"region:us"
]
| 2023-01-07T09:26:48+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39116602.0, "num_examples": 502}], "download_size": 38127697, "dataset_size": 39116602.0}} | 2023-01-07T09:30:15+00:00 |
34376cdc7f76d2c03beeef6d9f4ca864f4c7f665 | nc33/boolques | [
"license:mit",
"region:us"
]
| 2023-01-07T09:57:11+00:00 | {"license": "mit"} | 2023-01-10T06:33:30+00:00 |
|
6b4ca73bab3334f60fbf6a81825ab6d8ccfb2ba4 | DavidVivancos/MindBigData2022_VisMNIST_Cap64_Morlet | [
"license:odbl",
"region:us"
]
| 2023-01-07T10:06:12+00:00 | {"license": "odbl"} | 2023-01-07T10:10:47+00:00 |
|
17a9b72bc74139cc23e543ca03e41e19d82008e4 |
This dataset contains 10 images of the Asterix and Obelix cartoon characters taken from the internet
| nsanghi/axterix-obelix | [
"task_categories:image-to-image",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"asterix",
"diffusion",
"dreambooth",
"region:us"
]
| 2023-01-07T10:53:50+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["image-to-image"], "tags": ["asterix", "diffusion", "dreambooth"]} | 2023-01-07T11:00:21+00:00 |
9b36b13820de339d287a94242dbcfe69a002bd11 | hyper, LoRA | Toraong/Hypernetwork | [
"license:unknown",
"region:us"
]
| 2023-01-07T11:09:42+00:00 | {"license": "unknown"} | 2023-03-04T03:18:22+00:00 |
ebac76a1859f28ce4c387f2a3fa84c3138baa9e8 |
# Dataset Card for jacob-soni
## Dataset Description
The dataset consists of images of my pet - Jacob, currently 7 years old.
### Dataset Curators
The data has been originally collected by Ashish Soni and his family.
### Licensing Information
The jacob-soni dataset version 1.0.0 is released under the Apache-2.0 License. | Ashish08/jacob-soni | [
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"images ",
"pet",
"dog",
"german-shepherd",
"dreambooth-hackathon",
"region:us"
]
| 2023-01-07T11:25:50+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "source_datasets": ["original"], "pretty_name": "My Dog - Jacob Soni", "tags": ["images ", "pet", "dog", "german-shepherd", "dreambooth-hackathon"]} | 2023-01-07T15:05:28+00:00 |
3915caacef63079345383d2ce5ad96b842ee4bfb | # Dataset Card for "eclassTrainST"
This NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard. | gart-labor/eclassTrainST | [
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"region:us"
]
| 2023-01-07T12:18:12+00:00 | {"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["sentence-similarity"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entailment", "dtype": "string"}, {"name": "contradiction", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 327174992, "num_examples": 698880}, {"name": "eval", "num_bytes": 219201779, "num_examples": 450912}], "download_size": 46751846, "dataset_size": 546376771}} | 2023-01-07T12:19:59+00:00 |
8672b08faab10c88ab014d48ff3f9d93023c2850 | nandovallec/df_ps_train_extra | [
"license:apache-2.0",
"region:us"
]
| 2023-01-07T12:19:21+00:00 | {"license": "apache-2.0"} | 2023-04-05T13:27:51+00:00 |
|
df52ccb0b886cb268f49efd6b6cac135472a9f55 | nandovallec/giantMatrix_extra | [
"license:apache-2.0",
"region:us"
]
| 2023-01-07T12:21:29+00:00 | {"license": "apache-2.0"} | 2023-04-05T13:27:57+00:00 |
|
56030305503dec3b96cba39bc8f9844b5535be41 | # Dataset Card for "eclassCorpus"
This dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to the ECLASS-standard pump properties based on their semantics. | gart-labor/eclassCorpus | [
"task_categories:sentence-similarity",
"size_categories:n<1K",
"language:en",
"doi:10.57967/hf/0410",
"region:us"
]
| 2023-01-07T12:38:01+00:00 | {"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["sentence-similarity"], "dataset_info": {"features": [{"name": "did", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "datatype", "dtype": "string"}, {"name": "unit", "dtype": "string"}, {"name": "IRDI", "dtype": "string"}, {"name": "metalabel", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 137123, "num_examples": 672}], "download_size": 0, "dataset_size": 137123}} | 2023-01-07T12:42:19+00:00 |
4483d3730d71ab1f7700b2e80b97d31e75997d3b | # Dataset Card for "eclassQuery"
This dataset consists of paraphrases of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching these paraphrases to the actual ECLASS-standard pump properties based on their semantics. | gart-labor/eclassQuery | [
"task_categories:sentence-similarity",
"size_categories:1K<n<10K",
"language:en",
"doi:10.57967/hf/0409",
"region:us"
]
| 2023-01-07T12:38:27+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["sentence-similarity"], "dataset_info": {"features": [{"name": "did", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "duplicate_id", "dtype": "int64"}, {"name": "metalabel", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 147176, "num_examples": 1040}, {"name": "eval", "num_bytes": 100846, "num_examples": 671}], "download_size": 113268, "dataset_size": 248022}} | 2023-01-07T12:42:40+00:00 |
d15b89deb2186aaaae788ae9efe261daebd28839 |
# Dataset Card for vada-sambhar
## Dataset Description
The dataset consists of images of my favorite South Indian dish - Vada Sambhar.
### Dataset Curators
The data has been downloaded from Google Images.
### Licensing Information
The vada-sambhar dataset version 1.0.0 is released under the Apache-2.0 License. | Ashish08/vada-sambhar | [
"size_categories:n<1K",
"source_datasets:google",
"language:en",
"license:apache-2.0",
"images ",
"food",
"vada sambhar",
"dreambooth-hackathon",
"region:us"
]
| 2023-01-07T12:51:40+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "source_datasets": ["google"], "pretty_name": "vada sambhar", "tags": ["images ", "food", "vada sambhar", "dreambooth-hackathon"]} | 2023-01-07T12:59:47+00:00 |
37ded182545688883979ac19aa4175cf71f9be85 | <b>Dataset Description</b>:-
MIS Farm Pond Change Detection Dataset consists of a total of 694 images of size 1024 x 768 pixels at zoom level 18 with a very high resolution up to 1 meter) were collected from Google Earth images. The region of Indian state of Maharashtra was chosen for the dataset. The villages collected have timestamps in months of Jan-April and the minimum year difference is 2 years and the maximum year difference is 9 years, oldest being 2007 and latest being 2021. The types of farm ponds being covered in the dataset are Wet Farm Pond - Lined, Wet Farm Pond - Unlined, Dry Farm Pond - Lined, Dry Farm Pond - Unlined. The change classes are mainly - Farm Pond Constructed, Farm Pond Demolished, Farm Pond Dried and Farm Pond Wetted. Most of the changes are from the farm pond constructed class showing that there is an increase in farm pond construction across villages in Maharashtra in past 8-9 years.
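The indexed multi-class masks described in this card encode the change classes as pixel values 0-4. A minimal NumPy sketch (an illustration, not code from the dataset authors) of tallying change pixels per class; the small array below is synthetic and stands in for a real 1024 x 768 mask:

```python
import numpy as np

# Pixel-value encoding of the indexed multi-class masks (per this card)
CHANGE_CLASSES = {0: "Background", 1: "Farm Pond Constructed",
                  2: "Farm Pond Demolished", 3: "Farm Pond Dried",
                  4: "Farm Pond Wetted"}

def class_pixel_counts(mask: np.ndarray) -> dict:
    """Count pixels per change class in an indexed mask (values 0-4)."""
    values, counts = np.unique(mask, return_counts=True)
    return {CHANGE_CLASSES[int(v)]: int(c) for v, c in zip(values, counts)}

# Synthetic 4x4 mask standing in for one of the dataset's 1024x768 masks
mask = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [3, 3, 0, 0],
                 [3, 3, 0, 0]])
print(class_pixel_counts(mask))
# {'Background': 8, 'Farm Pond Constructed': 4, 'Farm Pond Dried': 4}
```

The same tally over a real mask gives the changed area in pixels, which can be converted to an approximate area on the ground using the stated up-to-1-meter resolution.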
<b>T0.zip</b> : Consists of images of time T0 i.e. initial image <br>
<b>T1.zip</b> : Consists of images of time T1 i.e. changed image <br>
<b>task_1_masks.zip</b> : Consists of binary masks of task_1 i.e. Farm Pond Constructed and Farm Pond Demolished <br>
<b>task_2_masks.zip</b> : Consists of binary masks of task_2 i.e. Farm Pond Dried and Farm Pond Wetted <br>
<b>task_3_masks.zip</b> : Consists of binary masks of task_3 i.e. All 4 classes combined: Farm Pond Constructed, Farm Pond Demolished, Farm Pond Dried and Farm Pond Wetted <br>
<b>multi_class_masks.zip(new)</b>: Consists of indexed masks for multi class change detection. Each mask consists of pixels with values as an integer in the range 0-4,
0 - Background, 1 - Farm Pond Constructed, 2 - Farm Pond Demolished, 3 - Farm Pond Dried and 4 - Farm Pond Wetted <br>
<b>cd_dataset_train.txt</b> : Contains file_names of train set to be taken from T0, T1 and masks of one of the tasks(task_1, task_2, task_3) <br>
<b>cd_dataset_test.txt</b> : Contains file_names of test set to be taken from T0, T1 and masks of one of the tasks(task_1, task_2, task_3) <br>
<b>object_annotations_train_coco.json</b> : Contains positive images (having annotations) taken from both T0 and T1 in coco format to be used for training - Total 499 <br>
<b>object_annotations_test_coco.json</b> : Contains positive images (having annotations) taken from both T0 and T1 in coco format to be used for testing - Total 92 <br> | ctundia/FPCD | [
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-01-07T13:16:22+00:00 | {"license": "cc-by-sa-4.0"} | 2023-06-20T15:55:24+00:00 |
50c1109fe617f75a7c0b67e696a99cb1599ea91e |
### FrenchHateSpeechSuperset
This dataset is a superset of multiple datasets containing hate speech, harassment, sexist, racist, and similar messages from various platforms.
Included datasets :
- MLMA dataset
- CAA dataset
- FTR dataset
- "An Annotated Corpus for Sexism Detection in French Tweets" dataset
- UC-Berkeley-Measuring-Hate-Speech dataset (translated from English*)
#### References
```
@inproceedings{chiril2020annotated,
title={An Annotated Corpus for Sexism Detection in French Tweets},
author={Chiril, Patricia and Moriceau, V{\'e}ronique and Benamara, Farah and Mari, Alda and Origgi, Gloria and Coulomb-Gully, Marl{\`e}ne},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={1397--1403},
year={2020}
}
```
```
@inproceedings{ousidhoum-etal-multilingual-hate-speech-2019,
title = "Multilingual and Multi-Aspect Hate Speech Analysis",
author = "Ousidhoum, Nedjma
and Lin, Zizheng
and Zhang, Hongming
and Song, Yangqiu
and Yeung, Dit-Yan",
booktitle = "Proceedings of EMNLP",
year = "2019",
publisher = "Association for Computational Linguistics",
}
```
```
Vanetik, N.; Mimoun, E. Detection of Racist Language in French Tweets. Information 2022, 13, 318. https://doi.org/10.3390/info13070318
```
```
@article{kennedy2020constructing,
title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application},
author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia},
journal={arXiv preprint arXiv:2009.10277},
year={2020}
}
```
```
Anaïs Ollagnier, Elena Cabrio, Serena Villata, Catherine Blaya. CyberAgressionAdo-v1: a Dataset of Annotated Online Aggressions in French Collected through a Role-playing Game. Language Resources and Evaluation Conference, Jun 2022, Marseille, France. ⟨hal-03765860⟩
```
### Translation
French datasets for hate speech are quite rare. To augment the current dataset, messages from other languages (English only for now) have been integrated.
To integrate other languages' datasets, MT models were used, manually selected for each dataset.
- UC-Berkeley-Measuring-Hate-Speech dataset : Abelll/marian-finetuned-kde4-en-to-fr
### Language verification
Since MT models are not perfect, some messages are not entirely translated or not translated at all.
To check for obvious errors in the pipeline, a general language detection model is used to prune non-French texts.
Language detection model : papluca/xlm-roberta-base-language-detection
### Annotation
Since the "hate speech" dimension is highly subjective, and the datasets come with different annotation types, a common labeling strategy is required.
Each sample is annotated with "0" if it is a negative sample and "1" if it is a positive sample.
### Filtering rules :
- FTR dataset : [wip]
- MLMA dataset : [wip]
- CAA dataset : [wip]
- "Annotated Corpus" dataset : [wip]
- UC-Berkeley Measuring Hate Speech dataset : average hate_speech_score > 0 -> 1
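As an illustration of such a filtering rule, the UC-Berkeley binarization above can be sketched in plain Python (the row structure and column names here are assumptions for illustration; the real dataset schema may differ):

```python
from statistics import mean
from collections import defaultdict

def binarize_berkeley(rows):
    """Average hate_speech_score per message, then label 1 if > 0 else 0."""
    scores = defaultdict(list)
    for row in rows:  # each row is one annotator's rating of one message
        scores[row["text"]].append(row["hate_speech_score"])
    return {text: int(mean(vals) > 0) for text, vals in scores.items()}

# Hypothetical annotator rows covering two messages
rows = [
    {"text": "message A", "hate_speech_score": 1.2},
    {"text": "message A", "hate_speech_score": -0.4},
    {"text": "message B", "hate_speech_score": -2.0},
]
print(binarize_berkeley(rows))  # {'message A': 1, 'message B': 0}
```

The other datasets' rules (marked [wip] above) would plug into the same 0/1 convention once finalized.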
| Poulpidot/FrenchHateSpeechSuperset | [
"license:unknown",
"doi:10.57967/hf/0284",
"region:us"
]
| 2023-01-07T13:19:59+00:00 | {"license": "unknown"} | 2023-02-04T21:17:04+00:00 |
114709884276379a01e0722d71cd590c8ad3a05d |
# Dataset Card for "ArASL_Database_Grayscale"
## Dataset Description
- **Homepage:** https://data.mendeley.com/datasets/y7pckrw6z2/1
- **Paper:** [ArASL: Arabic Alphabets Sign Language Dataset](https://www.sciencedirect.com/science/article/pii/S2352340919301283)
### Dataset Summary
The dataset consists of 54,049 images of ArSL alphabets performed by more than 40 people for 32 standard Arabic signs and alphabets.
The number of images per class differs from one class to another. Sample image of all Arabic Language Signs is also attached. The CSV file contains the Label of each corresponding Arabic Sign Language Image based on the image file name.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 32 classes.
### Languages
Arabic
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x201FA6EE748>,
'label': 0
}
```
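The integer `label` in the sample above maps to one of the 32 class names listed in this card's metadata. A minimal sketch of building the id-to-label mappings often needed for a classification head:

```python
# Class names as listed in this dataset card's metadata (ids 0-31)
ARASL_CLASSES = [
    "ain", "al", "aleff", "bb", "dal", "dha", "dhad", "fa",
    "gaaf", "ghain", "ha", "haa", "jeem", "kaaf", "khaa", "la",
    "laam", "meem", "nun", "ra", "saad", "seen", "sheen", "ta",
    "taa", "thaa", "thal", "toot", "waw", "ya", "yaa", "zay",
]

id2label = dict(enumerate(ARASL_CLASSES))
label2id = {name: idx for idx, name in id2label.items()}

print(len(id2label))    # 32
print(id2label[0])      # 'ain' (the label of the sample instance above)
print(label2id["zay"])  # 31
```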
### Citation Information
```
@article{LATIF2019103777,
title = {ArASL: Arabic Alphabets Sign Language Dataset},
journal = {Data in Brief},
volume = {23},
pages = {103777},
year = {2019},
issn = {2352-3409},
doi = {https://doi.org/10.1016/j.dib.2019.103777},
url = {https://www.sciencedirect.com/science/article/pii/S2352340919301283},
author = {Ghazanfar Latif and Nazeeruddin Mohammad and Jaafar Alghazo and Roaa AlKhalaf and Rawan AlKhalaf},
abstract = {A fully-labelled dataset of Arabic Sign Language (ArSL) images is developed for research related to sign language recognition. The dataset will provide researcher the opportunity to investigate and develop automated systems for the deaf and hard of hearing people using machine learning, computer vision and deep learning algorithms. The contribution is a large fully-labelled dataset for Arabic Sign Language (ArSL) which is made publically available and free for all researchers. The dataset which is named ArSL2018 consists of 54,049 images for the 32 Arabic sign language sign and alphabets collected from 40 participants in different age groups. Different dimensions and different variations were present in images which can be cleared using pre-processing techniques to remove noise, center the image, etc. The dataset is made available publicly at https://data.mendeley.com/datasets/y7pckrw6z2/1.}
}
```
### Contributions
Thanks to [MOHAMMAD ALBARHAM](https://github.com/PAIN-BARHAM) for adding this dataset to huggingface hub. | pain/ArASL_Database_Grayscale | [
"task_categories:image-classification",
"language:ar",
"license:cc-by-4.0",
"image_classification",
"Arabic_Sign_Language",
"region:us"
]
| 2023-01-07T14:04:43+00:00 | {"language": ["ar"], "license": "cc-by-4.0", "task_categories": ["image-classification"], "splits": [{"name": "train", "num_bytes": 41355564.009, "num_examples": 54049}], "download_size": 30479019, "dataset_size": 41355564.009, "tags": ["image_classification", "Arabic_Sign_Language"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "ain", "1": "al", "2": "aleff", "3": "bb", "4": "dal", "5": "dha", "6": "dhad", "7": "fa", "8": "gaaf", "9": "ghain", "10": "ha", "11": "haa", "12": "jeem", "13": "kaaf", "14": "khaa", "15": "la", "16": "laam", "17": "meem", "18": "nun", "19": "ra", "20": "saad", "21": "seen", "22": "sheen", "23": "ta", "24": "taa", "25": "thaa", "26": "thal", "27": "toot", "28": "waw", "29": "ya", "30": "yaa", "31": "zay"}}}}]}} | 2023-01-07T14:44:35+00:00 |
8ae59b9e5c44c1e611b8b9d38fe2de948ca3f473 | Plachta/Umamusume-voice-text-pairs | [
"license:mit",
"region:us"
]
| 2023-01-07T14:49:21+00:00 | {"license": "mit"} | 2023-01-10T03:46:39+00:00 |
|
c04ae93c910764acae01047490b4e5aa88e7fb03 |
# Dataset Card for old-trafford
## Dataset Description
The dataset contains images of Old Trafford - a football stadium that belongs to Manchester United Football Club.
### Dataset Curators
The data has been downloaded from Google Images.
### Licensing Information
The old-trafford dataset version 1.0.0 is released under the creativeml-openrail-m License. | Ashish08/old-trafford | [
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:creativeml-openrail-m",
"images",
"football stadium",
"Manchester United",
"Old Trafford",
"dreambooth-hackathon",
"region:us"
]
| 2023-01-07T14:49:23+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "size_categories": ["n<1K"], "source_datasets": ["original"], "pretty_name": "Old Trafford", "tags": ["images", "football stadium", "Manchester United", "Old Trafford", "dreambooth-hackathon"]} | 2023-01-07T14:53:27+00:00 |
c3fc7802286283f93f73463c84fb0b607bd4c0f4 | ANANDHU-SCT/TOPIC_CLASSIFICATION | [
"license:apache-2.0",
"region:us"
]
| 2023-01-07T15:22:19+00:00 | {"license": "apache-2.0"} | 2023-01-13T13:12:49+00:00 |
|
d5e9a5dc3431bb30fbd5f1bdc441617c2f7059ab | nandovallec/df_ps_train_new | [
"license:apache-2.0",
"region:us"
]
| 2023-01-07T16:26:02+00:00 | {"license": "apache-2.0"} | 2023-01-07T16:26:02+00:00 |
|
0e316216fcc0569cf69fdb72271e363f72276576 | nandovallec/giantMatrix_new | [
"license:apache-2.0",
"region:us"
]
| 2023-01-07T16:27:15+00:00 | {"license": "apache-2.0"} | 2023-01-07T16:27:45+00:00 |
|
1e1710f63e9e953813429f8887922f976740b679 | nandovallec/recommender_dicts | [
"license:apache-2.0",
"region:us"
]
| 2023-01-07T16:28:29+00:00 | {"license": "apache-2.0"} | 2023-01-07T16:28:51+00:00 |
|
ace64008454c9d7f8ece0dee812d3dccd2b8e732 | mariem1994/nlp_project | [
"task_categories:token-classification",
"language:fr",
"license:afl-3.0",
"region:us"
]
| 2023-01-07T18:54:47+00:00 | {"language": ["fr"], "license": "afl-3.0", "task_categories": ["token-classification"]} | 2023-01-08T09:36:30+00:00 |
|
62f135dfafc22a508fedb88a70c7b1fee54bc9f5 | dhurley/medicare | [
"license:mit",
"region:us"
]
| 2023-01-07T19:13:51+00:00 | {"license": "mit"} | 2023-01-07T21:26:23+00:00 |
|
569c0268b7bf4bea85c83f1718569ca035928682 | # Dataset Card for "untitled_goose_game"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Arch4ngel/untitled_goose_game | [
"region:us"
]
| 2023-01-07T19:53:41+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1487961.0, "num_examples": 15}], "download_size": 1461841, "dataset_size": 1487961.0}} | 2023-01-07T20:00:06+00:00 |
fe779e44d3b8b228b23945e787788d48a22e2414 |
A dataset which can be loaded using this script:
https://github.com/mathyouf/ranked-aesthetic-scorer/blob/main/data/processURS.py | MathYouF/reddit-urs-sfw-nature | [
"license:openrail",
"region:us"
]
| 2023-01-07T20:55:13+00:00 | {"license": "openrail"} | 2023-01-07T21:43:42+00:00 |
f34ba45a0ab87e02eaecc3047dd50864fbe75006 | JotDe/mscoco_1k | [
"license:openrail",
"region:us"
]
| 2023-01-07T21:14:37+00:00 | {"license": "openrail"} | 2023-01-12T00:15:31+00:00 |
|
d0adbb57d80bc283a34cc527bad190ff10fceb5c | # Dataset Card for "alphafold_issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tux/alphafold_issues | [
"region:us"
]
| 2023-01-07T21:28:07+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "labels", "list": [{"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "dtype": "float64"}, {"name": "assignees", "sequence": "null"}, {"name": "milestone", "dtype": "float64"}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "updated_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "closed_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": 
"author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "float64"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "total_count", "dtype": "int64"}, {"name": "url", "dtype": "string"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "float64"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "float64"}, {"name": "pull_request", "struct": [{"name": "diff_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "merged_at", "dtype": "null"}, {"name": "patch_url", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 838906, "num_examples": 200}], "download_size": 195220, "dataset_size": 838906}} | 2023-01-07T21:28:19+00:00 |
ef30f6a046230c843d79822b928267efd9453d5b | # Dataset Card for IMDb Movie Reviews
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Total amount of disk used:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This is a custom train/test/validation split of the IMDb Large Movie Review Dataset available from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
#### IMDb_movie_reviews
An example of 'train':
```
{
"text": "Beautifully photographed and ably acted, generally, but the writing is very slipshod. There are scenes of such unbelievability that there is no joy in the watching. The fact that the young lover has a twin brother, for instance, is so contrived that I groaned out loud. And the "emotion-light bulb connection" seems gimmicky, too.<br /><br />I don\'t know, though. If you have a few glasses of wine and feel like relaxing with something pretty to look at with a few flaccid comedic scenes, this is a pretty good movie. No major effort on the part of the viewer required. But Italian film, especially Italian comedy, is usually much, much better than this."
"label": 0,
}
```
### Data Fields
The data fields are the same among all splits.
#### IMDb_movie_reviews
- `text`: a `string` feature.
- `label`: a classification label, with values `neg` (0), `pos` (1).
### Data Splits
| name | train | validation | test |
|------------------|------:|-----------:|------:|
|IMDb_movie_reviews| 36000 | 4000 | 10000 |
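The 36,000/4,000/10,000 split corresponds to 72%/8%/20% of the 50,000 reviews in the original corpus. A minimal sketch (an assumption for illustration, not necessarily the curators' actual procedure) of producing such a split in pure Python, with a fixed seed for reproducibility:

```python
import random

def split_dataset(examples, n_train=36000, n_val=4000, seed=42):
    """Shuffle with a fixed seed, then slice into train/validation/test."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(50000)))
print(len(train), len(val), len(test))  # 36000 4000 10000
```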
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
[More Information Needed] | jahjinx/IMDb_movie_reviews | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
]
| 2023-01-07T22:36:33+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "IMDb"} | 2023-01-08T15:47:19+00:00 |
365eef5afdceed49f0d25171c9ccdcc8b123bc51 | IceChes/fantasydiffusiondataset | [
"license:unlicense",
"region:us"
]
| 2023-01-07T23:41:03+00:00 | {"license": "unlicense"} | 2023-01-07T23:41:03+00:00 |
|
a2f7f35c36a4d551625a0607c7759ae7916fc6be | # Dataset Card for Superheroes
## Dataset Description
1,400+ superhero history and powers descriptions for text mining and NLP. [Original source](https://www.kaggle.com/datasets/jonathanbesomi/superheroes-nlp-dataset/code?resource=download)
## Context
The aim of this dataset is to make text analytics and NLP even more fun. All of us have dreamed of being a superhero and saving the world, yet here we are on Kaggle, figuring out how Python works. So why not improve our NLP skills by analyzing superheroes' history and powers?
The particularity of this dataset is that it contains categorical and numerical features such as overall_score, intelligence_score, creator, alignment, gender, eye_color but also text features history_text and powers_text. By combining the two, a lot of interesting insights can be gathered!
## Content
We collected all data from superherodb and cooked it into a nice, clean tabular format for you.
The dataset contains 1447 different Superheroes. Each superhero row has:
* overall_score - derived by superherodb from the power stats features. Can you find the relationship?
* history_text - History of the Superhero (text features)
* powers_text - Description of the Superheroes' powers (text features)
* intelligence_score, strength_score, speed_score, durability_score, power_score and combat_score. (power stats features)
* "Origin" (full_name, alter_egos, …)
* "Connections" (occupation, base, teams, …)
* "Appareance" (gender, type_race, height, weight, eye_color, …)
## Acknowledgements
The following [Github repository](https://github.com/jbesomi/texthero/tree/master/dataset/Superheroes%20NLP%20Dataset) contains the code used to scrape this Dataset.
| jrtec/Superheroes | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"superheroes",
"heroes",
"anime",
"manga",
"marvel",
"region:us"
]
| 2023-01-08T01:38:39+00:00 | {"language": ["en"], "license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["summarization"], "tags": ["superheroes", "heroes", "anime", "manga", "marvel"]} | 2023-01-08T06:18:48+00:00 |
1a6e9bdf6f54e6d3df5480eb66d11aafe4b354e3 | [Needs More Information]
# Dataset Card for virus_dna_dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A collection of full virus genome DNA sequences; the dataset was built from NCBI data.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
DNA
## Dataset Structure
### Data Instances
{ 'Description' : 'NC_030848.1 Haloarcula californiae icosahedral...', 'dna_sequence' : 'TCATCTC TCTCTCT CTCTCTT GTTCCCG CGCCCGC CCGCCC...',
'sequence_length':'35787', 'organism_id':' AB063393.2'}
### Data Fields
{ 'Description' : 'this contains the description about the DNA sequence contained in the NCBI dataset', 'dna_sequence' : 'this contains the dna sequence grouped by 7 nucleotides',
'sequence_length':'this contains the length of the dna sequence'}
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
The goal of this dataset was to make it easier to train an LLM on virus DNA
### Source Data
#### Initial Data Collection and Normalization
DNA sequences were grouped by 7 nucleotides to make it easier to tokenize. Only full genomes were selected
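The grouping step described above can be sketched as a simple non-overlapping windowing over the sequence (a minimal illustration — the exact grouping used to build the dataset is not specified beyond "by 7 nucleotides"):

```python
def group_by_kmers(sequence: str, k: int = 7) -> str:
    """Split a DNA sequence into space-separated, non-overlapping k-mers."""
    sequence = sequence.replace(" ", "")
    return " ".join(sequence[i:i + k] for i in range(0, len(sequence), k))

# Reproduces the start of the dna_sequence shown in the data instance above.
print(group_by_kmers("TCATCTCTCTCTCTCTCTCTT"))  # -> TCATCTC TCTCTCT CTCTCTT
```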
#### Who are the source language producers?
Viruses :)
### Annotations
#### Annotation process
NCBI
#### Who are the annotators?
NCBI
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
Make it easier to train LLMs on virus DNA
### Discussion of Biases
Only virus data that has been sequenced and uploaded to NCBI is contained here
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Hassan Ahmed
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | Hack90/virus_dna_dataset | [
"region:us"
]
| 2023-01-08T02:21:44+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "features", "dtype": "int64"}, {"name": "seq_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6621468623, "num_examples": 2602437}], "download_size": 2319826398, "dataset_size": 6621468623}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-08-26T12:07:54+00:00 |
38a258997cb5e6dd9b973534d3f860e76a6936a5 | # Dataset Card for "ui_refexp_saved_Jan2023"
This is a saved snapshot of the dynamically generated [UI Bert](https://huggingface.co/datasets/ivelin/ui_refexp) dataset.
Much faster to download than the dynamic version, which pulls and filters large data files from remote sources. | ivelin/ui_refexp_saved | [
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
]
| 2023-01-08T03:10:23+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-to-text"], "pretty_name": "UIBert Referring Expressions Dataset", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "string"}, {"name": "image_file_path", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "target_bounding_box", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1910805137.216, "num_examples": 15624}, {"name": "validation", "num_bytes": 60403386, "num_examples": 471}, {"name": "test", "num_bytes": 69078983, "num_examples": 565}], "download_size": 1246541216, "dataset_size": 2040287506.216}} | 2023-01-08T03:35:06+00:00 |
a6a7e98320d20544b1d92ac27028496a5e7047cf | # Dataset Card for "bookcorpus_compact_1024_shard5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_1024_shard5_of_10 | [
"region:us"
]
| 2023-01-08T03:28:36+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 739992156, "num_examples": 61605}], "download_size": 372896291, "dataset_size": 739992156}} | 2023-01-08T03:29:05+00:00 |
6630a41f4d1d5b9055a78bc9b1ea785b8f43d4b0 | Umal-exvc/chocolate-ds | [
"license:unknown",
"region:us"
]
| 2023-01-08T04:40:31+00:00 | {"license": "unknown"} | 2023-01-08T04:40:31+00:00 |
|
7447513fe17ef531e005ea8ccc8f3b60f60324ed | # Dataset Card for "chocolate-captioned-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Umal-exvc/chocolate-captioned-dataset | [
"region:us"
]
| 2023-01-08T04:58:42+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78434533.0, "num_examples": 500}], "download_size": 76921151, "dataset_size": 78434533.0}} | 2023-01-08T04:58:46+00:00 |
06cb26fb5a3eea7b4a7304f02141c489acf6246d | MParadis/new01 | [
"license:unknown",
"region:us"
]
| 2023-01-08T05:57:41+00:00 | {"license": "unknown"} | 2023-01-26T01:56:28+00:00 |
|
7fbf579be37189f452803ff19cc15f2b4e4ef0cf | # Dataset Card for "embedding_dataset_distilbert_base_uncased_ad_subwords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/embedding_dataset_distilbert_base_uncased_ad_subwords | [
"region:us"
]
| 2023-01-08T07:54:44+00:00 | {"dataset_info": {"features": [{"name": "ad_id", "dtype": "int64"}, {"name": "shop_id", "dtype": "int64"}, {"name": "account_id", "dtype": "int64"}, {"name": "mean_embedding", "sequence": "float32"}, {"name": "cls_embedding", "sequence": "float32"}], "splits": [{"name": "test", "num_bytes": 5725152, "num_examples": 927}, {"name": "train", "num_bytes": 43769312, "num_examples": 7087}, {"name": "val", "num_bytes": 7726176, "num_examples": 1251}], "download_size": 69324552, "dataset_size": 57220640}} | 2023-01-16T11:12:24+00:00 |
15104b7bbcaf1e2a8ca0613d2b3e73957a0eb8cc | # Dataset Card for "RedditProject"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nillo36/RedditProject | [
"region:us"
]
| 2023-01-08T10:56:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "movies", "1": "news", "2": "nfl", "3": "pcmasterrace", "4": "relationship_advice"}}}}], "splits": [{"name": "train", "num_bytes": 567429, "num_examples": 800}, {"name": "validation", "num_bytes": 67565, "num_examples": 100}, {"name": "test", "num_bytes": 89805, "num_examples": 100}], "download_size": 443894, "dataset_size": 724799}} | 2023-01-09T17:06:29+00:00 |
c926e6ce93cbd5a6eaf0895abd48776cc5bae638 | AresEkb/prof_standards_sbert_large_mt_nlu_ru | [
"size_categories:100K<n<1M",
"language:ru",
"region:us"
]
| 2023-01-08T12:04:10+00:00 | {"language": ["ru"], "size_categories": ["100K<n<1M"], "pretty_name": "Professional Standards", "dataset_info": [{"config_name": "domains", "features": [{"name": "reg_number", "dtype": "string"}, {"name": "standard_name", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "purpose", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 7293978, "num_examples": 1510}], "download_size": 7789662, "dataset_size": 7293978}, {"config_name": "generalized_functions", "features": [{"name": "generalized_function_id", "dtype": "string"}, {"name": "reg_number", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 24536711, "num_examples": 5520}], "download_size": 26728782, "dataset_size": 24536711}, {"config_name": "jobs", "features": [{"name": "generalized_function_id", "dtype": "string"}, {"name": "reg_number", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 64746734, "num_examples": 14991}], "download_size": 68906153, "dataset_size": 64746734}, {"config_name": "particular_functions", "features": [{"name": "generalized_function_id", "dtype": "string"}, {"name": "particular_function_id", "dtype": "string"}, {"name": "reg_number", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 83618997, "num_examples": 18730}], "download_size": 89697328, "dataset_size": 83618997}, {"config_name": "actions", "features": [{"name": "generalized_function_id", "dtype": "string"}, {"name": "particular_function_id", "dtype": "string"}, {"name": "reg_number", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 642320840, 
"num_examples": 143024}], "download_size": 680158888, "dataset_size": 642320840}, {"config_name": "skills", "features": [{"name": "generalized_function_id", "dtype": "string"}, {"name": "particular_function_id", "dtype": "string"}, {"name": "reg_number", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 724280125, "num_examples": 161473}], "download_size": 747889457, "dataset_size": 724280125}, {"config_name": "knowledges", "features": [{"name": "generalized_function_id", "dtype": "string"}, {"name": "particular_function_id", "dtype": "string"}, {"name": "reg_number", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1041374369, "num_examples": 234283}], "download_size": 1022695670, "dataset_size": 1041374369}]} | 2023-01-11T12:47:13+00:00 |
|
ea21a4eae61c32a6b796eb904a8034e275e93b96 | Benmrclhc/REKKIFAQ | [
"size_categories:n<1K",
"language:en",
"region:us"
]
| 2023-01-08T12:39:01+00:00 | {"language": ["en"], "size_categories": ["n<1K"]} | 2023-01-08T12:53:08+00:00 |
|
2494a8c69e77c0c8284fe604456054c5975c6490 | # Dataset Card for "tagesschau"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nillo36/tagesschau | [
"region:us"
]
| 2023-01-08T12:51:21+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "amerika", "1": "asien", "2": "finanzen", "3": "innenpolitik", "4": "sportschau", "5": "unternehmen", "6": "verbraucher"}}}}], "splits": [{"name": "train", "num_bytes": 4400114, "num_examples": 1200}, {"name": "validation", "num_bytes": 555716, "num_examples": 150}, {"name": "test", "num_bytes": 555716, "num_examples": 150}], "download_size": 3412290, "dataset_size": 5511546}} | 2023-01-08T12:51:41+00:00 |
e740abc42ba747786d5405286edb784cff205c77 | garNER/custom-MultiCoNER-II | [
"license:apache-2.0",
"region:us"
]
| 2023-01-08T13:12:27+00:00 | {"license": "apache-2.0"} | 2023-01-27T12:54:59+00:00 |
|
dfd4949be36ebfa7b9b9ec469046c64a2da9a7c9 |
# What is this dataset?
This dataset is a collection of Pull Requests **that contain comments** from the [Accelerate](https://github.com/huggingface/accelerate) repository.
It contains the full contextual comments as well as the code suggestions that exist inside a code review. | muellerzr/github-pr-history | [
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
]
| 2023-01-08T13:34:38+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "pretty_name": "Github Pull Request History"} | 2023-01-08T15:29:01+00:00 |
2195b28c860447cbb3f61e21422654ae37216e41 | Zappandy/recipe_nlg | [
"license:apache-2.0",
"region:us"
]
| 2023-01-08T13:41:47+00:00 | {"license": "apache-2.0"} | 2023-01-09T14:26:39+00:00 |
|
ff5d55576bf4af2579a889ab3cc7af8c5728d4b6 | xianbao/my-dreambooth | [
"license:other",
"region:us"
]
| 2023-01-08T13:48:19+00:00 | {"license": "other"} | 2023-01-08T13:50:51+00:00 |
|
14895424dcff809578207f41d367a92158b1c941 | Team8/dataset | [
"region:us"
]
| 2023-01-08T14:02:32+00:00 | {"public_key": "-----BEGIN PUBLIC KEY-----\\n MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnOjzZicSMFoD8sVcHq5F\\n k9HeryaMdlC8ZivIpo0NCd+85rtlWN/LA0h7AQoQJbN/Ri8l2ZfqXmfGKINgpjUs\\n FIgVvMOOIT1fiXXANQvNsTaWbJY0uDO4Z1WbWXjIZ6SbZ7FuID4hsHPpG0+uwUcx\\n /L3rPya2JRYbOKag5UED5sRHKAdNc9aInzZzOmomyaaA6Btnj9lSX+w65ps/Gi5o\\n a18j9aBda/On8WxTNcfBPjxqkyCvqW82te2+XGB8xUllllw2luqERLro9PrkLXV8\\n ZhXWqiF909HCw+U6z9MhIoFAmuROEy/pS7Pl9T2h/UUac9SoeNA3EN1qxXpmg/bn PwIDAQAB\\n -----END PUBLIC KEY-----"} | 2023-01-08T15:29:03+00:00 |
|
db101520da67e35a276c55402a6b8f543c700d39 |
# Dataset Card for BrWaC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [BrWaC homepage](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Repository:** [BrWaC repository](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Paper:** [The brWaC Corpus: A New Open Resource for Brazilian Portuguese](https://www.aclweb.org/anthology/L18-1686/)
- **Point of Contact:** [Jorge A. Wagner Filho](mailto:[email protected])
### Dataset Summary
The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework,
which was made public for research purposes. The current corpus version, released in January 2017, is composed of
3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available
solely for academic research purposes, and you agree not to use it for any commercial applications. There is no need to manually download external sources.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Portuguese
## Dataset Structure
### Data Instances
An example from the BrWaC dataset looks as follows:
```
{
"doc_id": "netg-1afc73",
"text": {
"paragraphs": [
[
"Conteúdo recente"
],
[
"ESPUMA MARROM CHAMADA \"NINGUÉM MERECE\""
],
[
"31 de Agosto de 2015, 7:07 , por paulo soavinski - | No one following this article yet."
],
[
"Visualizado 202 vezes"
],
[
"JORNAL ELETRÔNICO DA ILHA DO MEL"
],
[
"Uma espuma marrom escuro tem aparecido com frequência na Praia de Fora.",
"Na faixa de areia ela aparece disseminada e não chama muito a atenção.",
"No Buraco do Aipo, com muitas pedras, ela aparece concentrada.",
"É fácil saber que esta espuma estranha está lá, quando venta.",
"Pequenos algodões de espuma começam a flutuar no espaço, pertinho da Praia do Saquinho.",
"Quem pode ajudar na coleta deste material, envio a laboratório renomado e pagamento de análises, favor entrar em contato com o site."
]
]
},
"title": "ESPUMA MARROM CHAMADA ‟NINGUÃÂM MERECE‟ - paulo soavinski",
"uri": "http://blogoosfero.cc/ilhadomel/pousadasilhadomel.com.br/espuma-marrom-chamada-ninguem-merece"
}
```
### Data Fields
- `doc_id`: The document ID
- `title`: The document title
- `uri`: URI where the document was extracted from
- `text`: A list of document paragraphs (with a list of sentences in it as a list of strings)
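Given the nested `text` structure described above, a document can be flattened into plain text as sketched below (the sample mirrors a shortened version of the data instance shown earlier):

```python
def flatten_document(text_field: dict) -> str:
    """Join the nested paragraphs (lists of sentences) of a `text` field into plain text."""
    return "\n".join(" ".join(sentences) for sentences in text_field["paragraphs"])

# A shortened version of the data instance shown above.
doc = {
    "paragraphs": [
        ["Conteúdo recente"],
        ["Uma espuma marrom escuro tem aparecido com frequência na Praia de Fora.",
         "Na faixa de areia ela aparece disseminada e não chama muito a atenção."],
    ]
}
print(flatten_document(doc))
```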
### Data Splits
The data has a single train split of 3,530,796 samples.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{wagner2018brwac,
title={The brwac corpus: A new open resource for brazilian portuguese},
author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
``` | dominguesm/brwac | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:pt",
"license:unknown",
"region:us"
]
| 2023-01-08T14:08:57+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "brwac", "pretty_name": "BrWaC", "dataset_info": {"features": [{"name": "doc_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "uri", "dtype": "string"}, {"name": "text", "sequence": [{"name": "paragraphs", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 18828412956, "num_examples": 3530796}], "download_size": 11616550261, "dataset_size": 18828412956}} | 2023-01-08T14:28:10+00:00 |
3e0f2ef9a58e6ec56db849005e818d11eac67d27 | group2test/tutorial-images | [
"license:apache-2.0",
"region:us"
]
| 2023-01-08T14:18:08+00:00 | {"license": "apache-2.0"} | 2023-01-08T15:21:41+00:00 |
|
e4781339d325dcce5080239ebfd43d1aa02484d5 | # Dataset Card for Dataset Name
titulos_noticias_rcn_clasificadas
## Dataset Description
News items were taken from the RCN website and their titles were classified into ['salud' 'tecnologia' 'colombia' 'economia' 'deportes']:
salud = 1,805 rows,
tecnologia = 1,805 rows,
colombia = 1,805 rows,
economia = 1,805 rows,
deportes = 1,805 rows,
for a total of 9,030 rows.
Site: https://www.noticiasrcn.com/
- **Homepage:**
- **Repository:**
- **Point of Contact:**
### Languages
Spanish
## Dataset Structure
text, label, url | Nicky0007/titulos_noticias_rcn_clasificadas | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:es",
"region:us"
]
| 2023-01-08T14:29:50+00:00 | {"language": ["es"], "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"]} | 2023-01-08T21:38:51+00:00 |
7b46494282984bdf024ceffad646d6f0689e0c7a | dawood/elon-tweets | [
"license:afl-3.0",
"region:us"
]
| 2023-01-08T14:31:45+00:00 | {"license": "afl-3.0"} | 2023-01-08T15:14:28+00:00 |
|
b60f7964a8e708f9215d5c0f9a409397301cba20 |
A dataset for translation. | Jour/Translation | [
"task_categories:translation",
"size_categories:100K<n<1M",
"region:us"
]
| 2023-01-08T15:06:05+00:00 | {"size_categories": ["100K<n<1M"], "task_categories": ["translation"]} | 2023-01-08T15:32:13+00:00 |
d4837a0c0fdba1f3a4ad3783234f6c7f961eb9b5 | # Dataset Card for "open-cm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | michaelb1225/open-cm | [
"region:us"
]
| 2023-01-08T15:09:26+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 6129427551.0, "num_examples": 671}], "download_size": 6071742068, "dataset_size": 6129427551.0}} | 2023-01-08T15:23:21+00:00 |
2875a61d509ab3012817736f5c7ba8898e9e6689 |
# About the Speech Corpus
The `luxembourgish-asr-rtl-lu` dataset is a speech corpus for the under-resourced Luxembourgish language. The audio-transcription pairs were collected from [RTL.lu](http://www.rtl.lu/).
We used forced alignment to segment the audio files. The transcriptions were validated with the help of language experts at the [Center for the Luxembourgish Language](https://portal.education.lu/zls).
# Citation
```
@misc{lb-wav2vec2,
author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt.},
keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language},
title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS},
year = {2022},
copyright = {2023 IEEE}
}
```
# Copyright notice
Copyright © 2022 RTL.lu. All rights reserved. | Lemswasabi/luxembourgish-asr-rtl-lu | [
"language:lb",
"license:cc-by-nc-nd-4.0",
"region:us"
]
| 2023-01-08T15:29:50+00:00 | {"language": ["lb"], "license": "cc-by-nc-nd-4.0"} | 2023-01-08T15:44:54+00:00 |
c5d87bf3d79aae10a929022ed51c5fc2af692826 | vienduong88/Neyun | [
"license:openrail",
"region:us"
]
| 2023-01-08T16:47:18+00:00 | {"license": "openrail"} | 2023-01-08T16:54:10+00:00 |
|
ed1bc1c8606d73aebe2b8d5de0847c4520da97e2 | kaliansh/sdaia | [
"license:unknown",
"region:us"
]
| 2023-01-08T18:48:45+00:00 | {"license": "unknown"} | 2023-01-18T01:23:48+00:00 |
|
00b113dfb107b34a3ee80c92bf07274c5679ecb7 | hypernought/watercools | [
"license:artistic-2.0",
"region:us"
]
| 2023-01-08T19:08:53+00:00 | {"license": "artistic-2.0"} | 2023-01-08T19:10:16+00:00 |
|
ddd87ec3d20e617f872512b55c2744a6291455d4 | sajjadrauf/tolokaVQA | [
"license:other",
"region:us"
]
| 2023-01-08T19:18:38+00:00 | {"license": "other"} | 2023-01-08T20:16:39+00:00 |
|
14eb3a0f95145852082058962bd968c62d754a34 | # Dataset Card for "bookcorpus_compact_1024_shard4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_1024_shard4_of_10 | [
"region:us"
]
| 2023-01-08T19:31:58+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 767978582, "num_examples": 61605}], "download_size": 389198129, "dataset_size": 767978582}} | 2023-01-08T19:32:27+00:00 |
e6507d491fd697c11d11f95ef443eab5ecdfe5c6 |
# Note
Captcha images are presented as base64 strings.
All CSV files use a tab ("\t") separator.
# The dataset consists of several files
## fssp_*.csv
I am publishing an updated version of the archive of 40,310 pictures, which I have divided into 4 categories:
- 4 characters in the picture - 6,747 pcs.
- 5 characters - 18,403 pcs.
- 6 characters - 7,038 pcs.
- 7 characters - 7,589 pcs.
Characters used in the captchas:
'б','в','г','д','ж','к','л','м','н','п','р','с','т','2','4','5','6','7','8','9'
## fms.csv
About 15 thousand captcha images, each consisting of 6 digits.
## rosreestr.csv
About 10 thousand captchas, each consisting of 5 characters drawn from English letters and digits.
## vk.csv
About 19 thousand captchas, each 5 to 6 characters long, consisting of Russian letters and digits. The images come from the social network vk.com.
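Reading one of the TSV files and decoding a base64 image could look like the sketch below (the column names `label` and `image_base64` are assumptions — check the actual header of each file; the one-row TSV here is built in memory for illustration):

```python
import base64
import csv
import io

# Hypothetical one-row TSV: the real files use a tab separator, and the image
# column holds a base64 string (column names here are assumptions).
tsv_data = "label\timage_base64\nб2в45\t" + base64.b64encode(b"\x89PNG fake bytes").decode() + "\n"

reader = csv.DictReader(io.StringIO(tsv_data), delimiter="\t")
row = next(reader)
image_bytes = base64.b64decode(row["image_base64"])
print(row["label"], len(image_bytes), "bytes")
```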
# Kaggle
This dataset is an updated version of the one I previously published on [Kaggle](https://www.kaggle.com/datasets/mrdaniilak/russian-captcha-images-base64)
### Citation
```
@misc{ russian_captcha_dataset,
title = { Russian Captcha Dataset },
type = { Open Source Dataset },
author = { Daniil Agniashvili },
url = { https://huggingface.co/datasets/daniilak/russian_captcha_images/ },
note = { visited on 2023-02-24 },
}
```
### License
Public Domain | daniilak/russian_captcha_images | [
"language:ru",
"license:cc",
"image",
"captcha",
"region:us"
]
| 2023-01-08T19:37:34+00:00 | {"language": ["ru"], "license": "cc", "tags": ["image", "captcha"]} | 2023-02-24T15:20:17+00:00 |
52ae1c1fb3c3195ae7d69dc2bd1fad58c8131add | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | sajjadrauf/VQA | [
"task_categories:image-segmentation",
"task_categories:image-classification",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:am",
"license:afl-3.0",
"region:us"
]
| 2023-01-08T20:17:26+00:00 | {"language": ["am"], "license": "afl-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-segmentation", "image-classification", "question-answering"]} | 2023-01-08T20:19:36+00:00 |
0e31a633d3bc7cf6fb1516efef7a2a123a524b5c | Acumen/Test1 | [
"license:unknown",
"region:us"
]
| 2023-01-08T20:17:28+00:00 | {"license": "unknown"} | 2023-01-08T20:31:41+00:00 |
|
be9a03fde01d9f05107b14941af1ad99897691cf |
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain
## Table of Contents
- [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-for-medical-domain)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://deft2023.univ-avignon.fr/
- **Repository:** https://deft2023.univ-avignon.fr/
- **Paper:** [FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain](https://hal.science/hal-03824241/document)
- **Leaderboard:** Coming soon
- **Point of Contact:** [Yanis LABRAK](mailto:[email protected])
### Dataset Summary
FrenchMedMCQA is the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for the medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers.
Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s).
We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.
### Supported Tasks and Leaderboards
Multiple-Choice Question Answering (MCQA)
### Languages
The questions and answers are available in French.
## Dataset Structure
### Data Instances
```json
{
"id": "1863462668476003678",
"question": "Parmi les propositions suivantes, laquelle (lesquelles) est (sont) exacte(s) ? Les chylomicrons plasmatiques :",
"answers": {
"a": "Sont plus riches en cholestérol estérifié qu'en triglycérides",
"b": "Sont synthétisés par le foie",
"c": "Contiennent de l'apolipoprotéine B48",
"d": "Contiennent de l'apolipoprotéine E",
"e": "Sont transformés par action de la lipoprotéine lipase"
},
"correct_answers": [
"c",
"d",
"e"
],
"subject_name": "pharmacie",
"type": "multiple"
}
```
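Since a question can have anywhere from one to five correct options, the `correct_answers` list pairs naturally with a multi-label target over the five options. As an illustration (this is a sketch, not part of the official FrenchMedMCQA tooling), the snippet below turns an instance like the one above into a 5-dimensional binary label vector and derives the `type` field from the number of correct answers:

```python
# Illustrative helpers (not part of the official FrenchMedMCQA tooling):
# encode the correct option letters of an instance as a binary vector
# over the five options a-e, and derive the question type from it.

OPTIONS = ["a", "b", "c", "d", "e"]

def encode_labels(correct_answers):
    """Map e.g. ["c", "d", "e"] to [0, 0, 1, 1, 1]."""
    return [1 if letter in correct_answers else 0 for letter in OPTIONS]

def question_type(correct_answers):
    """A question is "single" with exactly one correct option, "multiple" otherwise."""
    return "single" if len(correct_answers) == 1 else "multiple"

instance = {
    "id": "1863462668476003678",
    "correct_answers": ["c", "d", "e"],
}

print(encode_labels(instance["correct_answers"]))  # [0, 0, 1, 1, 1]
print(question_type(instance["correct_answers"]))  # multiple
```

Framing the task this way (five independent binary decisions) is one common way to handle the mixed single/multiple answer setting; the card's baselines may use a different formulation.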
### Data Fields
- `id` : a string identifier for each question
- `question` : the question text (a string)
- `answers` : a dictionary mapping each of the five option letters (`a` to `e`) to its answer text
- `correct_answers` : the list of correct option letters, e.g. `["c", "d", "e"]`
- `subject_name` : the subject the question belongs to (here, `pharmacie`)
- `type` ({"single", "multiple"}): question choice type.
  - "single": single-answer question, with exactly one correct option.
  - "multiple": multi-answer question, with a combination of several correct options.
### Data Splits
| # Answers | Training | Validation | Test | Total |
|:---------:|:--------:|:----------:|:----:|:-----:|
| 1 | 595 | 164 | 321 | 1,080 |
| 2 | 528 | 45 | 97 | 670 |
| 3 | 718 | 71 | 141 | 930 |
| 4 | 296 | 30 | 56 | 382 |
| 5 | 34 | 2 | 7 | 43 |
| Total | 2,171 | 312 | 622 | 3,105 |
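The row and column totals of the split table above can be sanity-checked programmatically. This small sketch (with the table's numbers hard-coded) verifies that the per-split sums match the Total row:

```python
# Sanity check of the data-split table: per-answer-count rows
# (number of correct answers -> counts per split) summed per split.

splits = {  # number of answers: (train, validation, test)
    1: (595, 164, 321),
    2: (528, 45, 97),
    3: (718, 71, 141),
    4: (296, 30, 56),
    5: (34, 2, 7),
}

train = sum(row[0] for row in splits.values())
validation = sum(row[1] for row in splits.values())
test = sum(row[2] for row in splits.values())

print(train, validation, test, train + validation + test)
# 2171 312 622 3105 -- consistent with the Total row of the table
```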
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and the correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is 13k words, of which 3.8k are estimated to be medical domain-specific (i.e. words related to the medical field). Each question contains on average 2.49 medical domain-specific words (17% of its words) and each answer 2 (36% of its words). On average, a medical domain-specific word appears in 2 questions and in 8 answers.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
The dataset was created by Labrak Yanis and Bazoge Adrien and Dufour Richard and Daille Béatrice and Gourraud Pierre-Antoine and Morin Emmanuel and Rouvier Mickael.
### Licensing Information
Apache 2.0
### Citation Information
If you find this dataset useful in your research, please consider citing the dataset paper:
```latex
@inproceedings{labrak-etal-2022-frenchmedmcqa,
title = "{F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain",
author = "Labrak, Yanis and
Bazoge, Adrien and
Dufour, Richard and
Daille, Beatrice and
Gourraud, Pierre-Antoine and
Morin, Emmanuel and
Rouvier, Mickael",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.5",
pages = "41--46",
abstract = "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.",
}
```
### Contact
Please contact [Yanis LABRAK](https://github.com/qanastek) for more information about this dataset.
| qanastek/frenchmedmcqa | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1k<n<10k",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"region:us"
]
| 2023-01-08T20:22:47+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["fr"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1k<n<10k"], "source_datasets": ["original"], "task_categories": ["question-answering", "multiple-choice"], "task_ids": ["multiple-choice-qa", "open-domain-qa"], "paperswithcode_id": "frenchmedmcqa", "pretty_name": "FrenchMedMCQA"} | 2023-06-08T11:39:22+00:00 |