sha: string (length 40)
text: string (length 0–13.4M)
id: string (length 2–117)
tags: sequence
created_at: string (length 25)
metadata: string (length 2–31.7M)
last_modified: string (length 25)
13b9a788d412e1e43f8e0446b1ee37211b360932
SemiNeural/MVdiffusion
[ "license:other", "region:us" ]
2022-07-18T00:08:20+00:00
{"license": "other"}
2022-07-18T00:08:20+00:00
f6256e656df5e70f06780d12b729258808da86ec
codie28/able
[ "license:apache-2.0", "region:us" ]
2022-07-18T01:55:37+00:00
{"license": "apache-2.0"}
2022-07-18T01:56:24+00:00
06bc381446b3c3cb1faaa56c5575c71f101e286a
# Dataset Card for "tner/btc"

## Dataset Description

- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/)
- **Dataset:** Broad Twitter Corpus
- **Domain:** Twitter
- **Number of Entities:** 3

### Dataset Summary

Broad Twitter Corpus NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.

- Entity Types: `LOC`, `ORG`, `PER`

## Dataset Structure

### Data Instances

An example of `train` looks as follows.

```
{
    'tokens': ['I', 'hate', 'the', 'words', 'chunder', ',', 'vomit', 'and', 'puke', '.', 'BUUH', '.'],
    'tags': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
}
```

### Label ID

The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/btc/raw/main/dataset/label.json).

```python
{
    "B-LOC": 0,
    "B-ORG": 1,
    "B-PER": 2,
    "I-LOC": 3,
    "I-ORG": 4,
    "I-PER": 5,
    "O": 6
}
```

### Data Splits

| name | train | validation | test |
|:-----|------:|-----------:|-----:|
| btc  |  6338 |       1001 | 2000 |

### Citation Information

```
@inproceedings{derczynski-etal-2016-broad,
    title = "Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource",
    author = "Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1111",
    pages = "1169--1179",
    abstract = "One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL{'}2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL{'}2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations.",
}
```
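Given the label2id mapping above, the numeric tags of an instance can be decoded back to IOB label strings. A minimal sketch in plain Python (the dictionary is copied from the card rather than fetched from the URL):

```python
# label2id as published in the BTC dataset card
label2id = {"B-LOC": 0, "B-ORG": 1, "B-PER": 2,
            "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6}
id2label = {i: label for label, i in label2id.items()}

def decode(tags):
    """Map numeric tag ids back to IOB label strings."""
    return [id2label[t] for t in tags]

# The `train` example above carries no entities: every tag is "O"
print(decode([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]))
```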
tner/btc
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "region:us" ]
2022-07-18T09:38:50+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "BTC"}
2022-11-27T19:07:36+00:00
cb0fecb243a95034376387309fe8c03f8bf74aee
# Dataset Card for "tner/tweebank_ner"

## Dataset Description

- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://arxiv.org/abs/2201.07281](https://arxiv.org/abs/2201.07281)
- **Dataset:** TweeBank NER
- **Domain:** Twitter
- **Number of Entities:** 4

### Dataset Summary

TweeBank NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.

- Entity Types: `LOC`, `MISC`, `PER`, `ORG`

## Dataset Structure

### Data Instances

An example of `train` looks as follows.

```
{
    'tokens': ['RT', '@USER2362', ':', 'Farmall', 'Heart', 'Of', 'The', 'Holidays', 'Tabletop', 'Christmas', 'Tree', 'With', 'Lights', 'And', 'Motion', 'URL1087', '#Holiday', '#Gifts'],
    'tags': [8, 8, 8, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
}
```

### Label ID

The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/tweebank_ner/raw/main/dataset/label.json).

```python
{
    "B-LOC": 0,
    "B-MISC": 1,
    "B-ORG": 2,
    "B-PER": 3,
    "I-LOC": 4,
    "I-MISC": 5,
    "I-ORG": 6,
    "I-PER": 7,
    "O": 8
}
```

### Data Splits

| name | train | validation | test |
|:-----|------:|-----------:|-----:|
| tweebank_ner | 1639 | 710 | 1201 |

### Citation Information

```
@article{DBLP:journals/corr/abs-2201-07281,
  author     = {Hang Jiang and Yining Hua and Doug Beeferman and Deb Roy},
  title      = {Annotating the Tweebank Corpus on Named Entity Recognition and Building {NLP} Models for Social Media Analysis},
  journal    = {CoRR},
  volume     = {abs/2201.07281},
  year       = {2022},
  url        = {https://arxiv.org/abs/2201.07281},
  eprinttype = {arXiv},
  eprint     = {2201.07281},
  timestamp  = {Fri, 21 Jan 2022 13:57:15 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
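The IOB tags in the example above can be grouped into entity spans. A small illustrative helper (hypothetical, not part of the TNER library), using the card's label2id:

```python
# id2label inverted from the TweeBank NER label2id dictionary
id2label = {0: "B-LOC", 1: "B-MISC", 2: "B-ORG", 3: "B-PER",
            4: "I-LOC", 5: "I-MISC", 6: "I-ORG", 7: "I-PER", 8: "O"}

def extract_entities(tokens, tags):
    """Group IOB-tagged tokens into (entity_type, tokens) spans."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        label = id2label[tag]
        if label.startswith("B-"):
            # a B- tag always opens a new span
            current = (label[2:], [token])
            entities.append(current)
        elif label.startswith("I-") and current and current[0] == label[2:]:
            # an I- tag of the same type continues the open span
            current[1].append(token)
        else:
            current = None
    return entities
```

Applied to the `train` example above, only token index 3 ('Farmall', tag 2 = `B-ORG`) starts a span, yielding a single `('ORG', ['Farmall'])` entity.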
tner/tweebank_ner
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "arxiv:2201.07281", "region:us" ]
2022-07-18T09:39:20+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "TweeBank NER"}
2022-11-27T20:59:13+00:00
9d9c27f1d4fb18a02e0d8283bac6ebb01c56c458
# Dataset Card for "tner/tweetner7"

## Dataset Description

- **Repository:** [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper)
- **Paper:** [https://arxiv.org/abs/2210.03797](https://arxiv.org/abs/2210.03797)
- **Dataset:** TweetNER7
- **Domain:** Twitter
- **Number of Entities:** 7

### Dataset Summary

This is the official repository of TweetNER7 (["Named Entity Recognition in Twitter: A Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022"](https://arxiv.org/abs/2210.03797)), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp, distributed from September 2019 to August 2021. The tweet collection used in TweetNER7 is the same as that used in [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi). The dataset is also integrated into [TweetNLP](https://tweetnlp.org/).

- Entity Types: `corporation`, `creative_work`, `event`, `group`, `location`, `product`, `person`

### Preprocessing

We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`. Verified usernames are kept, with the account name wrapped in `{@` and `@}` markers. For example, a tweet

```
Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek
```

is transformed into the following text.

```
Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}
```

A simple function to format tweets follows below.
```python
import re

from urlextract import URLExtract

extractor = URLExtract()

def format_tweet(tweet):
    # mask web urls
    urls = extractor.find_urls(tweet)
    for url in urls:
        tweet = tweet.replace(url, "{{URL}}")
    # format twitter account
    tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
    return tweet

target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
print(format_tweet(target))
# 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
```

We ask annotators to ignore those special tokens but to label the verified users' mentions.

### Data Split

| split | number of instances | description |
|:------------------|------:|:------------|
| train_2020 | 4616 | training dataset from September 2019 to August 2020 |
| train_2021 | 2495 | training dataset from September 2020 to August 2021 |
| train_all | 7111 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020 | 576 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 310 | validation dataset from September 2020 to August 2021 |
| test_2020 | 576 | test dataset from September 2019 to August 2020 |
| test_2021 | 2807 | test dataset from September 2020 to August 2021 |
| train_random | 4616 | randomly sampled training dataset with the same size as `train_2020`, drawn from `train_all` |
| validation_random | 576 | randomly sampled validation dataset with the same size as `validation_2020`, drawn from `validation_all` |
| extra_2020 | 87880 | extra tweets without annotations from September 2019 to August 2020 |
| extra_2021 | 93594 | extra tweets without annotations from September 2020 to August 2021 |

For the temporal-shift setting, the model should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`. In general, the model would be trained on `train_all`, the most representative training set, with `validation_2021`, and evaluated on `test_2021`.

## Dataset Structure

### Data Instances

An example of `train` looks as follows.

```
{
    'tokens': ['Morning', '5km', 'run', 'with', '{{USERNAME}}', 'for', 'breast', 'cancer', 'awareness', '#', 'pinkoctober', '#', 'breastcancerawareness', '#', 'zalorafit', '#', 'zalorafitxbnwrc', '@', 'The', 'Central', 'Park', ',', 'Desa', 'Parkcity', '{{URL}}'],
    'tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 14, 2, 14, 14, 14, 14, 14, 14, 4, 11, 11, 11, 11, 14],
    'id': '1183344337016381440',
    'date': '2019-10-13'
}
```

### Label ID

The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/tweetner7/raw/main/dataset/label.json).

```python
{
    "B-corporation": 0,
    "B-creative_work": 1,
    "B-event": 2,
    "B-group": 3,
    "B-location": 4,
    "B-person": 5,
    "B-product": 6,
    "I-corporation": 7,
    "I-creative_work": 8,
    "I-event": 9,
    "I-group": 10,
    "I-location": 11,
    "I-person": 12,
    "I-product": 13,
    "O": 14
}
```

## Models

See full evaluation metrics [here](https://github.com/asahi417/tner/blob/master/MODEL_CARD.md#models-for-tweetner7).
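Each instance's `date` field determines which year bucket it falls into: the splits run September through August. A small helper (hypothetical naming, not part of the dataset tooling) sketching that assignment:

```python
def period(date_str):
    """Assign an instance date ('YYYY-MM-DD') to its year bucket.

    Periods run September–August, as in the split table:
    Sep 2019 – Aug 2020 -> "2020"; Sep 2020 – Aug 2021 -> "2021".
    """
    year, month = int(date_str[:4]), int(date_str[5:7])
    return str(year + 1) if month >= 9 else str(year)

# The example instance above (date '2019-10-13') falls in the 2020 bucket
print(period("2019-10-13"))
```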
### Main Models

| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:-------------|:-----|:---------------|----------------:|----------------:|
| [`tner/roberta-large-tweetner7-all`](https://huggingface.co/tner/roberta-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.75 | 61.25 |
| [`tner/roberta-base-tweetner7-all`](https://huggingface.co/tner/roberta-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.16 | 60.81 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.68 | 61 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.26 | 60.7 |
| [`tner/bertweet-large-tweetner7-all`](https://huggingface.co/tner/bertweet-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 66.46 | 61.87 |
| [`tner/bertweet-base-tweetner7-all`](https://huggingface.co/tner/bertweet-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.36 | 60.52 |
| [`tner/bert-large-tweetner7-all`](https://huggingface.co/tner/bert-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.58 | 59 |
| [`tner/bert-base-tweetner7-all`](https://huggingface.co/tner/bert-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 62.3 | 57.59 |
| [`tner/roberta-large-tweetner7-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.02 | 60.9 |
| [`tner/roberta-base-tweetner7-continuous`](https://huggingface.co/tner/roberta-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.47 | 60.01 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.87 | 61.07 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.51 | 60.57 |
| [`tner/bertweet-large-tweetner7-continuous`](https://huggingface.co/tner/bertweet-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 66.41 | 61.66 |
| [`tner/bertweet-base-tweetner7-continuous`](https://huggingface.co/tner/bertweet-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.84 | 61.02 |
| [`tner/bert-large-tweetner7-continuous`](https://huggingface.co/tner/bert-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.2 | 57.67 |
| [`tner/roberta-large-tweetner7-2021`](https://huggingface.co/tner/roberta-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.05 | 59.11 |
| [`tner/roberta-base-tweetner7-2021`](https://huggingface.co/tner/roberta-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 61.76 | 57 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-2021`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 63.98 | 58.91 |
| [`tner/bertweet-large-tweetner7-2021`](https://huggingface.co/tner/bertweet-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 62.9 | 58.13 |
| [`tner/bertweet-base-tweetner7-2021`](https://huggingface.co/tner/bertweet-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 63.09 | 57.35 |
| [`tner/bert-large-tweetner7-2021`](https://huggingface.co/tner/bert-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 59.75 | 53.93 |
| [`tner/bert-base-tweetner7-2021`](https://huggingface.co/tner/bert-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.67 | 55.5 |
| [`tner/roberta-large-tweetner7-2020`](https://huggingface.co/tner/roberta-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.76 | 60 |
| [`tner/roberta-base-tweetner7-2020`](https://huggingface.co/tner/roberta-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.21 | 59.11 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 64.28 | 59.31 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 62.87 | 58.26 |
| [`tner/bertweet-large-tweetner7-2020`](https://huggingface.co/tner/bertweet-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 64.01 | 59.47 |
| [`tner/bertweet-base-tweetner7-2020`](https://huggingface.co/tner/bertweet-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 64.06 | 59.44 |
| [`tner/bert-large-tweetner7-2020`](https://huggingface.co/tner/bert-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 61.43 | 56.14 |
| [`tner/bert-base-tweetner7-2020`](https://huggingface.co/tner/bert-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.09 | 54.67 |

Model description follows below.

* Model with suffix `-all`: fine-tuned on `train_all` and validated on `validation_2021`.
* Model with suffix `-continuous`: fine-tuned on `train_2021` continuously after fine-tuning on `train_2020`, and validated on `validation_2021`.
* Model with suffix `-2021`: fine-tuned only on `train_2021` and validated on `validation_2021`.
* Model with suffix `-2020`: fine-tuned only on `train_2020` and validated on `validation_2020`.

### Sub Models (used in ablation study)

- Model fine-tuned only on `train_random` and validated on `validation_2020`.
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:-------------|:-----|:---------------|----------------:|----------------:|
| [`tner/roberta-large-tweetner7-random`](https://huggingface.co/tner/roberta-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.33 | 60.96 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 63.29 | 58.5 |
| [`tner/roberta-base-tweetner7-random`](https://huggingface.co/tner/roberta-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.04 | 59.23 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 64.72 | 59.97 |
| [`tner/bertweet-large-tweetner7-random`](https://huggingface.co/tner/bertweet-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 64.86 | 60.49 |
| [`tner/bertweet-base-tweetner7-random`](https://huggingface.co/tner/bertweet-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.55 | 59.58 |
| [`tner/bert-large-tweetner7-random`](https://huggingface.co/tner/bert-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 62.39 | 57.54 |
| [`tner/bert-base-tweetner7-random`](https://huggingface.co/tner/bert-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.91 | 55.92 |

- Model fine-tuned on the self-labeled dataset on `extra_{2020,2021}` and validated on `validation_2020`.

| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:-------------|:-----|:---------------|----------------:|----------------:|
| [`tner/roberta-large-tweetner7-selflabel2020`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.56 | 59.63 |
| [`tner/roberta-large-tweetner7-selflabel2021`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.6 | 59.45 |
| [`tner/roberta-large-tweetner7-2020-selflabel2020-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2020-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.46 | 60.39 |
| [`tner/roberta-large-tweetner7-2020-selflabel2021-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2021-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.52 | 59.45 |
| [`tner/roberta-large-tweetner7-selflabel2020-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.15 | 60.23 |
| [`tner/roberta-large-tweetner7-selflabel2021-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.48 | 59.41 |

Model description follows below.

* Model with suffix `-selflabel2020`: fine-tuned on the self-annotated data of the `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-selflabel2021`: fine-tuned on the self-annotated data of the `extra_2021` split.
* Model with suffix `-2020-selflabel2020-all`: fine-tuned on the combined training dataset of the self-annotated `extra_2020` and `train_2020`.
* Model with suffix `-2020-selflabel2021-all`: fine-tuned on the combined training dataset of the self-annotated `extra_2021` and `train_2020`.
* Model with suffix `-selflabel2020-continuous`: fine-tuned on `train_2020`, then continuously fine-tuned on the self-annotated `extra_2020`.
* Model with suffix `-selflabel2021-continuous`: fine-tuned on `train_2020`, then continuously fine-tuned on the self-annotated `extra_2021`.

### Reproduce Experimental Result

To reproduce the experimental results of our AACL paper, please see the repository [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper).

## Citation Information

```
@inproceedings{ushio-etal-2022-tweet,
    title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
    author = "Ushio, Asahi and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco and Camacho-Collados, Jose",
    booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
    month = nov,
    year = "2022",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
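The tables above report both micro and macro F1. As a reminder of the difference, a minimal sketch (not tner's evaluation code) computing both from per-entity-type counts:

```python
def f1(tp, fp, fn):
    """F1 from true-positive / false-positive / false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def micro_macro_f1(counts):
    """counts: {entity_type: (tp, fp, fn)}.

    Micro F1 pools counts over all entity types; macro F1 averages
    each type's F1, weighting rare and frequent types equally.
    """
    micro = f1(*(sum(c[i] for c in counts.values()) for i in range(3)))
    macro = sum(f1(*c) for c in counts.values()) / len(counts)
    return micro, macro
```

When counts are dominated by frequent, well-predicted types, micro F1 exceeds macro F1, consistent with the gap between the two columns in the tables.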
tner/tweetner7
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "arxiv:2210.03797", "region:us" ]
2022-07-18T09:39:50+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "TweetNER7"}
2022-11-27T18:50:28+00:00
2a9cbafadf550f3671a5d70fd13edfbe3924f50e
bdura/swann
[ "license:mit", "region:us" ]
2022-07-18T09:55:59+00:00
{"license": "mit"}
2022-07-18T09:55:59+00:00
5d617042fffaa7876c93750add2a0a47b6f6826a
fshllaku/test
[ "license:apache-2.0", "region:us" ]
2022-07-18T12:03:39+00:00
{"license": "apache-2.0"}
2022-07-18T12:03:39+00:00
5bc51fd7d10388377950fee5a9612482d279e189
Top 20 hits for queries from the training data in "MS-MARCO v2 passage", retrieved with the Lucene searcher (using pyserini). hits@20: 0.1957. See also: https://github.com/castorini/pyserini/blob/master/docs/prebuilt-indexes.md. For Java 11 installation on Linux: https://stackoverflow.com/questions/52504825/how-to-install-jdk-11-under-ubuntu
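The hits@20 figure is the fraction of queries whose relevant passage appears among the top 20 retrieved results. A minimal sketch of the metric (hypothetical helper, not pyserini code):

```python
def hits_at_k(retrieved, relevant, k=20):
    """Fraction of queries with at least one relevant doc in the top-k.

    retrieved: {qid: ranked list of doc ids}
    relevant:  {qid: set of relevant doc ids}
    """
    hit = sum(
        any(doc in relevant[qid] for doc in docs[:k])
        for qid, docs in retrieved.items()
    )
    return hit / len(retrieved)
```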
Doohae/marcopolo-v2-passage
[ "region:us" ]
2022-07-18T13:53:43+00:00
{}
2022-07-18T14:33:08+00:00
d066df8bb6b8f2a837fa8bf8ff0fe1048e4a7b2f
neongeckocom/cv-tts-clean
[ "license:bsd-3-clause", "region:us" ]
2022-07-18T14:41:15+00:00
{"license": "bsd-3-clause"}
2022-09-29T19:44:12+00:00
a3c510486e8715aeb27ffb9e3846d2a6ca0f3500
# Dataset Card for MSLR2022

## Table of Contents

- [Dataset Card for MSLR2022](#dataset-card-for-mslr2022)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://github.com/allenai/mslr-shared-task
- **Repository:** https://github.com/allenai/mslr-shared-task
- **Paper:** https://aclanthology.org/2021.emnlp-main.594
- **Leaderboard:** https://github.com/allenai/mslr-shared-task#leaderboard
- **Point of Contact:** https://github.com/allenai/mslr-shared-task#contact-us

### Dataset Summary

The Multidocument Summarization for Literature Review (MSLR) Shared Task aims to study how medical evidence from different clinical studies is summarized in literature reviews.
Reviews provide the highest quality of evidence for clinical care, but are expensive to produce manually. (Semi-)automation via NLP may facilitate faster evidence synthesis without sacrificing rigor. The MSLR shared task uses two datasets to assess the current state of multidocument summarization for this task, and to encourage the development of modeling contributions, scaffolding tasks, methods for model interpretability, and improved automated evaluation methods in this domain. ### Supported Tasks and Leaderboards This dataset is used for the MSLR2022 Shared Task. For information on the shared task leaderboard, please refer [here](https://github.com/allenai/mslr-shared-task#leaderboard). ### Languages English ## Dataset Structure More information on dataset structure [here](https://github.com/allenai/mslr-shared-task#data-structure). ### Data Instances __MS^2__ ```json { "review_id": "30760312", "pmid": [ "22776744", "25271670", "3493740", "1863023", "16291984", "23984728", "23996433", "18466198", "12151469", "27400308", "16053970", "22922316", "11897647", "11597664", "4230647" ], "title": [ "Improved Cell Survival and Paracrine Capacity of Human Embryonic Stem Cell-Derived Mesenchymal Stem Cells Promote Therapeutic Potential for Pulmonary Arterial Hypertension", "Adipose-derived stem cells attenuate pulmonary arterial hypertension and ameliorate pulmonary arterial remodeling in monocrotaline-induced pulmonary hypertensive rats", "Effect of bone marrow mesenchymal stem cells on experimental pulmonary arterial hypertension", "Survival in patients with primary pulmonary hypertension. 
Results from a national prospective registry.", "Sildenafil citrate therapy for pulmonary arterial hypertension.", "Macitentan and morbidity and mortality in pulmonary arterial hypertension.", "Long-term research of stem cells in monocrotaline-induced pulmonary arterial hypertension", "Safety and efficacy of autologous endothelial progenitor cells transplantation in children with idiopathic pulmonary arterial hypertension: open-label pilot study.", "Inhaled iloprost for severe pulmonary hypertension.", "Sildenafil reduces pulmonary vascular resistance in single ventricular physiology.", "Ambrisentan therapy for pulmonary arterial hypertension.", "Mesenchymal stem cell prevention of vascular remodeling in high flow-induced pulmonary hypertension through a paracrine mechanism.", "Continuous subcutaneous infusion of treprostinil, a prostacyclin analogue, in patients with pulmonary arterial hypertension: a double-blind, randomized, placebo-controlled trial.", "Effects of the dual endothelin-receptor antagonist bosentan in patients with pulmonary hypertension: a randomised placebocontrolled study", "SYRCLE\\u2019s risk of bias tool for animal studies" ], "abstract": [ "Although transplantation of adult bone marrow mesenchymal stem cells ( BM-MSCs ) holds promise in the treatment for pulmonary arterial hypertension ( PAH ) , the poor survival and differentiation potential of adult BM-MSCs have limited their therapeutic efficiency . Here , we compared the therapeutic efficacy of human embryonic stem cell-derived MSCs ( hESC-MSCs ) with adult BM-MSCs for the treatment of PAH in an animal model . One week following monocrotaline (MCT)-induced PAH , mice were r and omly assigned to receive phosphate-buffered saline ( MCT group ) ; 3.0 \\u00d7 106 human BM-derived MSCs ( BM-MSCs group ) or 3.0 \\u00d7 106 hESC-derived MSCs ( hESC-MSCs group ) via tail vein injection . 
At 3 weeks posttransplantation , the right ventricular systolic pressure ( RVSP ) , degree of RV hypertrophy , and medial wall thickening of pulmonary arteries were lower= , and pulmonary capillary density was higher in the hESC-MSC group as compared with BM-MSC and MCT groups ( all p < 0.05 ) . At 1 week posttransplantation , the number of engrafted MSCs in the lungs was found significantly higher in the hESC-MSC group than in the BM-MSC group ( all p < 0.01 ) . At 3 weeks posttransplantation , implanted BM-MSCs were undetectable whereas hESC-MSCs were not only engrafted in injured pulmonary arteries but had also undergone endothelial differentiation . In addition , protein profiling of hESC-MSC- and BM-MSC-conditioned medium revealed a differential paracrine capacity . Classification of these factors into bioprocesses revealed that secreted factors from hESC-MSCs were preferentially involved in early embryonic development and tissue differentiation , especially blood vessel morphogenesis . We concluded that improved cell survival and paracrine capacity of hESC-MSCs provide better therapeutic efficacy than BM-MSCs in the treatment for PAH", "Abstract We investigated the effect of adipose-derived stem cells ( ADSCs ) transplantation effects on structural remodeling and pulmonary artery pressure in monocrotaline (MCT)-induced pulmonary hypertensive rats . In the first experiment , 32 male Sprague-Dawley ( SD ) rats were r and omly divided into four groups ( n = 8/group ) : 3 ADSCs treated groups and normal control ( Ctrl ) . ADSCs were administered through the left jugular vein at 105 , 106 and 107 cells , respectively , and a cell density of 106cells/ml was shown to be optimal . The GFP-tagged ADSCs were identified in the lungs and differentiated into endothelial-like cells . 
In the second experiment , 96 male SD rats were r and omly divided into three groups ( n = 32/group ) : Ctrl , MCT-induced pulmonary arterial hypertension ( PAH ) , and PAH treated with ADSCs ( ADSCs ) . Two weeks post-MCT administration , the ADSCs group received 1 \\u00d7 106 ADSCs via the external jugular vein . Compared to PAH rats , mean pulmonary arterial pressure was decreased in rats at 1 , 2 , and 3 weeks after ADSCs-treatment ( 18.63 \\u00b1 2.15 mmHg versus 24.53 \\u00b1 2.90 mmHg ; 23.07 \\u00b1 2.84 mmHg versus 33.18 \\u00b1 2.30 mmHg ; 22.98 \\u00b1 2.34 mmHg versus 36.38 \\u00b1 3.28 mmHg , p < 0.05 ) . Meanwhile , the right heart hypertrophy index ( 36.2 1 \\u00b1 4.27 % versus 41.01 \\u00b1 1.29 % ; 39.47 \\u00b1 4.02 % versus 48.75 \\u00b1 2 .13 % ; 41.02 \\u00b1 0.9 % versus 50.52 \\u00b1 1.49 % , p < 0.05 , respectively ) , ratio of wall/lumen thickness , as well as the wall/lumen area were significantly reduced in PAH rats at these time points following ADSCs-treatment , as compared with untreated PAH rats . In summary , ADSCs may colonize the pulmonary arteries , attenuate pulmonary arterial hypertension and ameliorate pulmonary arterial remodeling", "The aim of the present study was to investigate the effect of bone marrow mesenchymal stem cell ( BMSC ) transp1antation on lung and heart damage in a rat model of monocrotaline (MCT)-induced pulmonary arterial hypertension ( PAH ) . The animals were r and omly divided into 3 groups : control , PAH and BMSC implantation groups . Structural changes in the pulmonary vascular wall , such as the pulmonary artery lumen area ( VA ) and vascular area ( TAA ) were measured by hematoxylin and eosin ( H&E ) staining , and the hemodynamics were detected by echocardiography . Two weeks post-operation , our results demonstrated that sublingual vein injection of BMSCs significantly attenuated the pulmonary vascular structural and hemodynamic changes caused by pulmonary arterial hypertension . 
The mechanism may be executed via paracrine effects", "OBJECTIVE To characterize mortality in persons diagnosed with primary pulmonary hypertension and to investigate factors associated with survival . DESIGN Registry with prospect i ve follow-up . SETTING Thirty-two clinical centers in the United States participating in the Patient Registry for the Characterization of Primary Pulmonary Hypertension supported by the National Heart , Lung , and Blood Institute . PATIENTS Patients ( 194 ) diagnosed at clinical centers between 1 July 1981 and 31 December 1985 and followed through 8 August 1988 . MEASUREMENTS At diagnosis , measurements of hemodynamic variables , pulmonary function , and gas exchange variables were taken in addition to information on demographic variables , medical history , and life-style . Patients were followed for survival at 6-month intervals . MAIN RESULTS The estimated median survival of these patients was 2.8 years ( 95 % Cl , 1.9 to 3.7 years ) . Estimated single-year survival rates were as follows : at 1 year , 68 % ( Cl , 61 % to 75 % ) ; at 3 years , 48 % ( Cl , 41 % to 55 % ) ; and at 5 years , 34 % ( Cl , 24 % to 44 % ) . Variables associated with poor survival included a New York Heart Association ( NYHA ) functional class of III or IV , presence of Raynaud phenomenon , elevated mean right atrial pressure , elevated mean pulmonary artery pressure , decreased cardiac index , and decreased diffusing capacity for carbon monoxide ( DLCO ) . Drug therapy at entry or discharge was not associated with survival duration . CONCLUSIONS Mortality was most closely associated with right ventricular hemodynamic function and can be characterized by means of an equation using three variables : mean pulmonary artery pressure , mean right atrial pressure , and cardiac index . 
Such an equation , once vali date d prospect ively , could be used as an adjunct in planning treatment strategies and allocating medical re sources", "BACKGROUND Sildenafil inhibits phosphodiesterase type 5 , an enzyme that metabolizes cyclic guanosine monophosphate , thereby enhancing the cyclic guanosine monophosphate-mediated relaxation and growth inhibition of vascular smooth-muscle cells , including those in the lung . METHODS In this double-blind , placebo-controlled study , we r and omly assigned 278 patients with symptomatic pulmonary arterial hypertension ( either idiopathic or associated with connective-tissue disease or with repaired congenital systemic-to-pulmonary shunts ) to placebo or sildenafil ( 20 , 40 , or 80 mg ) orally three times daily for 12 weeks . The primary end point was the change from baseline to week 12 in the distance walked in six minutes . The change in mean pulmonary-artery pressure and World Health Organization ( WHO ) functional class and the incidence of clinical worsening were also assessed , but the study was not powered to assess mortality . Patients completing the 12-week r and omized study could enter a long-term extension study . RESULTS The distance walked in six minutes increased from baseline in all sildenafil groups ; the mean placebo-corrected treatment effects were 45 m ( + 13.0 percent ) , 46 m ( + 13.3 percent ) , and 50 m ( + 14.7 percent ) for 20 , 40 , and 80 mg of sildenafil , respectively ( P<0.001 for all comparisons ) . All sildenafil doses reduced the mean pulmonary-artery pressure ( P=0.04 , P=0.01 , and P<0.001 , respectively ) , improved the WHO functional class ( P=0.003 , P<0.001 , and P<0.001 , respectively ) , and were associated with side effects such as flushing , dyspepsia , and diarrhea . The incidence of clinical worsening did not differ significantly between the patients treated with sildenafil and those treated with placebo . 
Among the 222 patients completing one year of treatment with sildenafil monotherapy , the improvement from baseline at one year in the distance walked in six minutes was 51 m. CONCLUSIONS Sildenafil improves exercise capacity , WHO functional class , and hemodynamics in patients with symptomatic pulmonary arterial hypertension", "BACKGROUND Current therapies for pulmonary arterial hypertension have been adopted on the basis of short-term trials with exercise capacity as the primary end point . We assessed the efficacy of macitentan , a new dual endothelin-receptor antagonist , using a primary end point of morbidity and mortality in a long-term trial . METHODS We r and omly assigned patients with symptomatic pulmonary arterial hypertension to receive placebo once daily , macitentan at a once-daily dose of 3 mg , or macitentan at a once-daily dose of 10 mg . Stable use of oral or inhaled therapy for pulmonary arterial hypertension , other than endothelin-receptor antagonists , was allowed at study entry . The primary end point was the time from the initiation of treatment to the first occurrence of a composite end point of death , atrial septostomy , lung transplantation , initiation of treatment with intravenous or subcutaneous prostanoids , or worsening of pulmonary arterial hypertension . RESULTS A total of 250 patients were r and omly assigned to placebo , 250 to the 3-mg macitentan dose , and 242 to the 10-mg macitentan dose . The primary end point occurred in 46.4 % , 38.0 % , and 31.4 % of the patients in these groups , respectively . The hazard ratio for the 3-mg macitentan dose as compared with placebo was 0.70 ( 97.5 % confidence interval [ CI ] , 0.52 to 0.96 ; P=0.01 ) , and the hazard ratio for the 10-mg macitentan dose as compared with placebo was 0.55 ( 97.5 % CI , 0.39 to 0.76 ; P<0.001 ) . Worsening of pulmonary arterial hypertension was the most frequent primary end-point event . 
The effect of macitentan on this end point was observed regardless of whether the patient was receiving therapy for pulmonary arterial hypertension at baseline . Adverse events more frequently associated with macitentan than with placebo were headache , nasopharyngitis , and anemia . CONCLUSIONS Macitentan significantly reduced morbidity and mortality among patients with pulmonary arterial hypertension in this event-driven study . ( Funded by Actelion Pharmaceuticals ; SERAPHIN Clinical Trials.gov number , NCT00660179 . )", "Our previous studies have shown that bone marrow mesenchymal stem cells ( BMSCs ) can inhibit the progression of pulmonary artery hypertension ( PAH ) in the monocrotaline ( MCT ) model in the short term . The aim of this study was to further investigate the long-term effect of BMSCs on PAH and to explore the mechanism of the protective effect including the pulmonary vascular remodeling and cell differentiation . PAH model was established by subcutaneous injection of 50 mg/kg MCT as previously study . Postoperatively , the animals were r and omly divided into three groups ( n = 10 in each group ) : control , PAH group , and BMSCs implantation group . Six months after injection , immunology and immunohistochemistry analysis indicated the MCT-induced intima-media thickness in muscular arteries was reduced ( P < 0.05 ) ; the area of collagen fibers in lung tissue was lower ( P < 0.05 ) , and the proliferating cell nuclear antigen level in pulmonary artery smooth muscle cells was decreased ( P < 0.05 ) . Immunofluorescence showed that the cells have the ability to differentiate between von Willebr and factor and vascular endothelial growth factor . 
Six months after intravenous injection , BMSCs could significantly improve pulmonary function by inhibiting the ventricular remodeling and the effect of cell differentiation", "Experimental data suggest that transplantation of EPCs attenuates monocrotaline-induced pulmonary hypertension in rats and dogs . In addition , our previous studies suggested that autologous EPC transplantation was feasible , safe , and might have beneficial effects on exercise capacity and pulmonary hemodynamics in adults with IPAH . Thus , we hypothesized that transplantation of EPCs would improve exercise capacity and pulmonary hemodynamics in children with IPAH . Thirteen children with IPAH received intravenous infusion of autologous EPCs . The right-sided heart catheterization and 6-MWD test were performed at baseline and at the time of 12 wk after cell infusion . At the time of 12 wk , mPAP decreased by 6.4 mmHg from 70.3 + /- 19.0 to 63.9 + /- 19.3 mmHg ( p = 0.015 ) . PVR decreased by approximately 19 % from 1118 + /- 537 to 906 + /- 377 dyn s/cm(5 ) ( p = 0.047 ) . CO increased from 3.39 + /- 0.79 to 3.85 + /- 0.42 L/min ( p = 0.048 ) . The 6-MWD increased by 39 m from 359 + /- 82 to 399 + /- 74 m ( p = 0.012 ) . NYHA functional class also improved . There were no severe adverse events with cell infusion . The small pilot study suggested that intravenous infusion of autologous EPCs was feasible , safe , and associated with significant improvements in exercise capacity , NYHA functional class , and pulmonary hemodynamics in children with IPAH . Confirmation of these results in a r and omized controlled trial are essential", "BACKGROUND Uncontrolled studies suggested that aerosolized iloprost , a stable analogue of prostacyclin , causes selective pulmonary vasodilatation and improves hemodynamics and exercise capacity in patients with pulmonary hypertension . 
METHODS We compared repeated daily inhalations of 2.5 or 5.0 microg of iloprost ( six or nine times per day ; median inhaled dose , 30 microg per day ) with inhalation of placebo . A total of 203 patients with selected forms of severe pulmonary arterial hypertension and chronic thromboembolic pulmonary hypertension ( New York Heart Association [ NYHA ] functional class III or IV ) were included . The primary end point was met if , after week 12 , the NYHA class and distance walked in six minutes were improved by at least one class and at least 10 percent , respectively , in the absence of clinical deterioration according to predefined criteria and death . RESULTS The combined clinical end point was met by 16.8 percent of the patients receiving iloprost , as compared with 4.9 percent of the patients receiving placebo ( P=0.007 ) . There were increases in the distance walked in six minutes of 36.4 m in the iloprost group as a whole ( P=0.004 ) and of 58.8 m in the subgroup of patients with primary pulmonary hypertension . Overall , 4.0 percent of patients in the iloprost group ( including one who died ) and 13.7 percent of those in the placebo group ( including four who died ) did not complete the study ( P=0.024 ) ; the most common reason for withdrawal was clinical deterioration . As compared with base-line values , hemodynamic values were significantly improved at 12 weeks when measured after iloprost inhalation ( P<0.001 ) , were largely unchanged when measured before iloprost inhalation , and were significantly worse in the placebo group . Further significant beneficial effects of iloprost treatment included an improvement in the NYHA class ( P=0.03 ) , dyspnea ( P=0.015 ) , and quality of life ( P=0.026 ) . Syncope occurred with similar frequency in the two groups but was more frequently rated as serious in the iloprost group , although this adverse effect was not associated with clinical deterioration . 
CONCLUSIONS Inhaled iloprost is an effective therapy for patients with severe pulmonary hypertension", "BACKGROUND High pulmonary vascular resistance ( PVR ) may be a risk factor for early and late mortality in both Glen shunt and Fontan operation patients . Furthermore , PVR may increase long after the Fontan operation . Whether pulmonary vasodilators such as phosphodiesterase 5 inhibitors can decrease PVR in patients with single ventricular physiology remains undetermined . METHODS AND RESULTS This was a prospect i ve , multicenter study . Patients with single ventricular physiology who have a PVR index higher than 2.5 Wood units \\u00b7 \\u33a1 ( WU ) were enrolled . Cardiac catheterization was performed before and after administration of sildenafil in all patients . After the Fontan operation , a six minute walk test ( 6MWT ) was also performed . A total of 42 patients were enrolled . PVR was significantly decreased in each stage of single ventricular physiology after sildenafil administration : from 4.3\\u00b11.5WU to 2.1\\u00b10.6WU ( p<0.01 ) in patients before a Glenn shunt , from 3.2\\u00b10.5WU to 1.6\\u00b10.6WU ( p<0.001 ) in patients after a Glenn shunt , and from 3.9\\u00b11.7WU to 2.3\\u00b10.8WU ( p<0.001 ) in patients after Fontan . In patients after Fontan , the 6MWT increased from 416\\u00b174 m to 485\\u00b172 m ( p<0.01 ) , and NYHA functional class improved significantly ( p<0.05 ) after sildenafil administration . No major side effects were observed in any patients . CONCLUSIONS Sildenafil reduced PVR in patients with single ventricle physiology . Sildenafil increased exercise capacity and improved NYHA functional class in patients after a Fontan operation . This implies that pulmonary vasodilation is a potential therapeutic target in selected patients with elevated PVR with single ventricle physiology . 
Long-term clinical significance warrants further study", "OBJECTIVES The purpose of this study was to examine the efficacy and safety of four doses of ambrisentan , an oral endothelin type A receptor-selective antagonist , in patients with pulmonary arterial hypertension ( PAH ) . BACKGROUND Pulmonary arterial hypertension is a life-threatening and progressive disease with limited treatment options . Endothelin is a vasoconstrictor and smooth muscle cell mitogen that plays a critical role in the pathogenesis and progression of PAH . METHODS In this double-blind , dose-ranging study , 64 patients with idiopathic PAH or PAH associated with collagen vascular disease , anorexigen use , or human immunodeficiency virus infection were r and omized to receive 1 , 2.5 , 5 , or 10 mg of ambrisentan once daily for 12 weeks followed by 12 weeks of open-label ambrisentan . The primary end point was an improvement from baseline in 6-min walk distance ( 6MWD ) ; secondary end points included Borg dyspnea index , World Health Organization ( WHO ) functional class , a subject global assessment , and cardiopulmonary hemodynamics . RESULTS At 12 weeks , ambrisentan increased 6MWD ( + 36.1 m , p < 0.0001 ) with similar and statistically significant increases for each dose group ( range , + 33.9 to + 38.1 m ) . Improvements were also observed in Borg dyspnea index , WHO functional class , subject global assessment , mean pulmonary arterial pressure ( -5.2 mm Hg , p < 0.0001 ) , and cardiac index ( + 0.33 l/min/m2 , p < 0.0008 ) . Adverse events were mild and unrelated to dose , including the incidence of elevated serum aminotransferase concentrations > 3 times the upper limit of normal ( 3.1 % ) . CONCLUSIONS Ambrisentan appears to improve exercise capacity , symptoms , and hemodynamics in patients with PAH . 
The incidence and severity of liver enzyme abnormalities appear to be low", "UNLABELLED Pulmonary arterial hypertension ( PAH ) is characterized by functional and structural changes in the pulmonary vasculature , and despite the drug treatment that made significant progress , the prognosis of patients with advanced PH remains extremely poor . In the present study , we investigated the early effect of bone marrow mesenchymal stem cells ( BMSCs ) on experimental high blood flow-induced PAH model rats and discussed the mechanism . BMSCs were isolated , cultured from bone marrow of Sprague-Dawley ( SD ) rat . The animal model of PAH was created by surgical methods to produce a left-to-right shunt . Following the successful establishment of the PAH model , rats were r and omly assigned to three groups ( n=20 in each group ) : sham group ( control ) , PAH group , and BMSC group ( received a sublingual vein injection of 1 - 5 \\u00d7 10(6 ) BMSCs ) . Two weeks after the administration , BMSCs significantly reduced the vascular remodeling , improved the hemodynamic data , and deceased the right ventricle weight ratio to left ventricular plus septal weight ( RV/LV+S ) ( P<0.05 ) . Real-time reverse transcription-polymerase chain reaction ( RT-PCR ) and immunohistochemistry analysis results indicated that the inflammation factors such as interleukin-1\\u03b2 ( IL-1\\u03b2 ) , IL-6 , and tumor necrosis factor-\\u03b1 ( TNF-\\u03b1 ) were reduced ( P<0.05 ) ; the expression of matrix metallo proteinase-9 ( MMP-9 ) was lower ( P<0.05 ) ; vascular endothelial growth factor ( VEGF ) was higher in BMSC group than those in PAH group ( P<0.05 ) . 
CONCLUSION Sublingual vein injection of BMSCs for 2 weeks , significantly improved the lung and heart injury caused by left-to-right shunt-induced PAH ; decreased pulmonary vascular remodeling and inflammation ; and enhanced angiogenesis", "Pulmonary arterial hypertension is a life-threatening disease for which continuous intravenous prostacyclin has proven to be effective . However , this treatment requires a permanent central venous catheter with the associated risk of serious complications such as sepsis , thromboembolism , or syncope . Treprostinil , a stable prostacyclin analogue , can be administered by a continuous subcutaneous infusion , avoiding these risks . We conducted a 12-week , double-blind , placebo-controlled multicenter trial in 470 patients with pulmonary arterial hypertension , either primary or associated with connective tissue disease or congenital systemic-to-pulmonary shunts . Exercise capacity improved with treprostinil and was unchanged with placebo ; the between treatment group difference in median six-minute walking distance was 16 m ( p = 0.006 ) . Improvement in exercise capacity was greater in the sicker patients and was dose-related , but independent of disease etiology . Concomitantly , treprostinil significantly improved indices of dyspnea , signs and symptoms of pulmonary hypertension , and hemodynamics . The most common side effect attributed to treprostinil was infusion site pain ( 85 % ) leading to premature discontinuation from the study in 8 % of patients . Three patients in the treprostinil treatment group presented with an episode of gastrointestinal hemorrhage . We conclude that chronic subcutaneous infusion of treprostinil is an effective treatment with an acceptable safety profile in patients with pulmonary arterial hypertension", "BACKGROUND Endothelin 1 , a powerful endogenous vasoconstrictor and mitogen , might be a cause of pulmonary hypertension . 
We describe the efficacy and safety of bosentan , a dual endothelin-receptor antagonist that can be taken orally , in patients with severe pulmonary hypertension . METHODS In this double-blind , placebo-controlled study , 32 patients with pulmonary hypertension ( primary or associated with scleroderma ) were r and omly assigned to bosentan ( 62.5 mg taken twice daily for 4 weeks then 125 mg twice daily ) or placebo for a minimum of 12 weeks . The primary endpoint was change in exercise capacity . Secondary endpoints included changes in cardiopulmonary haemodynamics , Borg dyspnoea index , WHO functional class , and withdrawal due to clinical worsening . Analysis was by intention to treat . FINDINGS In patients given bosentan , the distance walked in 6 min improved by 70 m at 12 weeks compared with baseline , whereas it worsened by 6 m in those on placebo ( difference 76 m [ 95 % CI 12 - 139 ] , p=0.021 ) . The improvement was maintained for at least 20 weeks . The cardiac index was 1.0 L min(-1 ) m(-2 ) ( 95 % CI 0.6 - 1.4 , p<0.0001 ) greater in patients given bosentan than in those given placebo . Pulmonary vascular resistance decreased by 223 dyn s cm(-)(5 ) with bosentan , but increased by 191 dyn s cm(-5 ) with placebo ( difference -415 [ -608 to -221 ] , p=0.0002 ) . Patients given bosentan had a reduced Borg dyspnoea index and an improved WHO functional class . All three withdrawals from clinical worsening were in the placebo group ( p=0.033 ) . The number and nature of adverse events did not differ between the two groups . INTERPRETATION Bosentan increases exercise capacity and improves haemodynamics in patients with pulmonary hypertension , suggesting that endothelin has an important role in pulmonary hypertension", "Background Systematic Review s ( SRs ) of experimental animal studies are not yet common practice , but awareness of the merits of conducting such SRs is steadily increasing . 
As animal intervention studies differ from r and omized clinical trials ( RCT ) in many aspects , the methodology for SRs of clinical trials needs to be adapted and optimized for animal intervention studies . The Cochrane Collaboration developed a Risk of Bias ( RoB ) tool to establish consistency and avoid discrepancies in assessing the method ological quality of RCTs . A similar initiative is warranted in the field of animal experimentation . Methods We provide an RoB tool for animal intervention studies ( SYRCLE \\u2019s RoB tool ) . This tool is based on the Cochrane RoB tool and has been adjusted for aspects of bias that play a specific role in animal intervention studies . To enhance transparency and applicability , we formulated signalling questions to facilitate judgment . Results The result ing RoB tool for animal studies contains 10 entries . These entries are related to selection bias , performance bias , detection bias , attrition bias , reporting bias and other biases . Half these items are in agreement with the items in the Cochrane RoB tool . Most of the variations between the two tools are due to differences in design between RCTs and animal studies . Shortcomings in , or unfamiliarity with , specific aspects of experimental design of animal studies compared to clinical studies also play a role . Conclusions SYRCLE \\u2019s RoB tool is an adapted version of the Cochrane RoB tool . Widespread adoption and implementation of this tool will facilitate and improve critical appraisal of evidence from animal studies . 
This may subsequently enhance the efficiency of translating animal research into clinical practice and increase awareness of the necessity of improving the method ological quality of animal studies" ], "target": "Conclusions SC therapy is effective for PAH in pre clinical studies .\\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .", "background": "Background Despite significant progress in drug treatment , the prognosis of patients with advanced pulmonary arterial hypertension ( PAH ) remains extremely poor .\\nMany pre clinical studies have reported the efficacy of stem cell ( SC ) therapy for PAH ; however , this approach remains controversial .\\nThe aim of this systematic review and meta- analysis is to assess the potential efficacy of SC therapy for PAH .", "reviews_info": "Background Despite significant progress in drug treatment , the prognosis of patients with advanced pulmonary arterial hypertension ( PAH ) remains extremely poor .\\nMany pre clinical studies have reported the efficacy of stem cell ( SC ) therapy for PAH ; however , this approach remains controversial .\\nThe aim of this systematic review and meta- analysis is to assess the potential efficacy of SC therapy for PAH ." } ``` __Cochrane__ ```json { "review_id": "CD007697", "pmid": [ "16394043" ], "title": [ "Aggressive surgical effort and improved survival in advanced-stage ovarian cancer." ], "abstract": [ "Residual disease after initial surgery for ovarian cancer is the strongest prognostic factor for survival. However, the extent of surgical resection required to achieve optimal cytoreduction is controversial. 
Our goal was to estimate the effect of aggressive surgical resection on ovarian cancer patient survival.\\n A retrospective cohort study of consecutive patients with International Federation of Gynecology and Obstetrics stage IIIC ovarian cancer undergoing primary surgery was conducted between January 1, 1994, and December 31, 1998. The main outcome measures were residual disease after cytoreduction, frequency of radical surgical resection, and 5-year disease-specific survival.\\n The study comprised 194 patients, including 144 with carcinomatosis. The mean patient age and follow-up time were 64.4 and 3.5 years, respectively. After surgery, 131 (67.5%) of the 194 patients had less than 1 cm of residual disease (definition of optimal cytoreduction). Considering all patients, residual disease was the only independent predictor of survival; the need to perform radical procedures to achieve optimal cytoreduction was not associated with a decrease in survival. For the subgroup of patients with carcinomatosis, residual disease and the performance of radical surgical procedures were the only independent predictors. Disease-specific survival was markedly improved for patients with carcinomatosis operated on by surgeons who most frequently used radical procedures compared with those least likely to use radical procedures (44% versus 17%, P < .001).\\n Overall, residual disease was the only independent predictor of survival. Minimizing residual disease through aggressive surgical resection was beneficial, especially in patients with carcinomatosis.\\n II-2." ], "target": "We found only low quality evidence comparing ultra-radical and standard surgery in women with advanced ovarian cancer and carcinomatosis. The evidence suggested that ultra-radical surgery may result in better survival.\\u00a0 It was unclear whether there were any differences in progression-free survival, QoL and morbidity between the two groups. 
The cost-effectiveness of this intervention has not been investigated. We are, therefore, unable to reach definite conclusions about the relative benefits and adverse effects of the two types of surgery.\\nIn order to determine the role of ultra-radical surgery in the management of advanced stage ovarian cancer, a sufficiently powered randomised controlled trial comparing ultra-radical and standard surgery or well-designed non-randomised studies would be required."
}
```

### Data Fields

__MS^2__

- `"review_id"`: The PubMed ID of the review.
- `"pmid"`: The PubMed IDs of the included studies.
- `"title"`: The titles of the included studies.
- `"abstract"`: The abstracts of the included studies.
- `"target"`: The conclusions, taken from the abstract of the review, that serve as the summarization target.
- `"background"`: A description of the review's objective.

__Cochrane__

- `"review_id"`: The PubMed ID of the review.
- `"pmid"`: The PubMed IDs of the included studies.
- `"title"`: The titles of the included studies.
- `"abstract"`: The abstracts of the included studies.
- `"target"`: The conclusions, taken from the abstract of the review, that serve as the summarization target.

### Data Splits

Each dataset is split into training, validation and test partitions.

__MS^2__

| train | validation | test |
|------:|-----------:|-----:|
| 14188 | 2021 | 1667 |

__Cochrane__

| train | validation | test |
|------:|-----------:|-----:|
| 3752 | 470 | 470 |

## Dataset Creation

Please refer to the following papers for details about dataset curation:

[MSˆ2: A Dataset for Multi-Document Summarization of Medical Studies](https://aclanthology.org/2021.emnlp-main.594.pdf)

[Generating (Factual?)
Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8378607/)

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

Licensing information can be found [here](https://github.com/allenai/mslr-shared-task/blob/main/LICENSE).

### Citation Information

**DeYoung, Jay, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl and Lucy Lu Wang. "MS2: A Dataset for Multi-Document Summarization of Medical Studies." EMNLP (2021).**

```bibtex
@inproceedings{DeYoung2021MS2MS,
    title={MSˆ2: Multi-Document Summarization of Medical Studies},
    author={Jay DeYoung and Iz Beltagy and Madeleine van Zuylen and Bailey Kuehl and Lucy Lu Wang},
    booktitle={EMNLP},
    year={2021}
}
```

**Byron C. Wallace, Sayantani Saha, Frank Soboczenski, and Iain James Marshall. (2020). "Generating (factual?) narrative summaries of RCTs: Experiments with neural multi-document summarization." AMIA Annual Symposium.**

```bibtex
@article{Wallace2020GeneratingN,
    title={Generating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization},
    author={Byron C. Wallace and Sayantani Saha and Frank Soboczenski and Iain James Marshall},
    journal={AMIA Annual Symposium},
    year={2020},
    volume={abs/2008.11293}
}
```
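A common way to use records with the fields described above is to concatenate the study abstracts (and, for MS^2, the review background) into a single multi-document input for a summarization model. A minimal sketch follows; the helper name `build_source` and the `<sep>` separator token are assumptions for illustration, not part of the dataset:

```python
def build_source(example, sep="<sep>"):
    """Join the review background (MS^2 only) and study abstracts into one input string."""
    parts = []
    if example.get("background"):  # Cochrane records have no "background" field
        parts.append(example["background"])
    parts.extend(example["abstract"])
    return f" {sep} ".join(parts)

# Hypothetical record shaped like the data instances above
example = {
    "review_id": "000000",
    "background": "Objective of the review.",
    "abstract": ["Abstract one.", "Abstract two."],
    "target": "Conclusion of the review.",
}
print(build_source(example))
# -> Objective of the review. <sep> Abstract one. <sep> Abstract two.
```

The `"target"` field is left untouched here; it serves as the reference summary during training and evaluation.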
allenai/mslr2022
[ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "language:en", "license:apache-2.0", "region:us" ]
2022-07-18T15:24:24+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"}
2022-11-18T21:16:10+00:00
db53cfec44e55e89ad01a01e1e75e5619d7be909
# Dataset Card for yaakov/wikipedia-de-splits

## Dataset Description

The only goal of this dataset is to have random German Wikipedia articles at various dataset sizes: Small datasets for fast development and large datasets for statistically relevant measurements. For this purpose, I loaded the 2665357 articles in the `train` set of the pre-processed German Wikipedia dump from 2022-03-01, randomly permuted the articles and created splits of sizes `2**n`: `1, 2, 4, 8, ...`. The split names are strings. The split `'all'` contains all 2665357 available articles.

## Dataset creation

This dataset has been created with the following script:

```python
!apt install git-lfs
!pip install -q transformers datasets

from huggingface_hub import notebook_login
notebook_login()

from datasets import load_dataset
wikipedia_de = load_dataset("wikipedia", "20220301.de")['train']
shuffled = wikipedia_de.shuffle(seed=42)

from datasets import DatasetDict
res = DatasetDict()
k, n = 0, 1
while n <= shuffled.num_rows:
    res[str(k)] = shuffled.select(range(n))
    k += 1; n *= 2
res['all'] = shuffled
res.push_to_hub('yaakov/wikipedia-de-splits')
```
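The split sizes the script produces can be sanity-checked without loading anything, assuming the 2665357-article count stated above:

```python
# Reproduce the size schedule of the creation script: split 'k' holds the
# first 2**k articles, doubling while 2**k does not exceed the corpus size.
num_rows = 2_665_357  # article count stated in the card

sizes = []
n = 1
while n <= num_rows:
    sizes.append(n)
    n *= 2
# sizes now lists the power-of-two split sizes; 'all' additionally holds
# every article.
```

So the largest power-of-two split contains 2**21 = 2097152 articles, and there are 22 such splits before `'all'`.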
yaakov/wikipedia-de-splits
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "source_datasets:wikipedia", "language:de", "license:cc-by-sa-3.0", "license:gfdl", "region:us" ]
2022-07-18T15:50:25+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["de"], "license": ["cc-by-sa-3.0", "gfdl"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M"], "source_datasets": ["wikipedia"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "wikipedia-de-splits", "configs": ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "all"]}
2022-07-18T17:28:34+00:00
9bdb7aefc0244fafa68e2ea3543d5068335296e1
# Dataset Card for "relbert/semeval2012_relational_similarity"

## Dataset Description

- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
- **Dataset:** SemEval2012: Relational Similarity

### Dataset Summary

Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune the [RelBERT](https://github.com/asahi417/relbert) model. The dataset contains a list of positive and negative word pairs from 89 pre-defined relations. The relation types are constructed on top of the following 10 parent relation types.

```shell
{
    1: "Class Inclusion",  # Hypernym
    2: "Part-Whole",  # Meronym, Substance Meronym
    3: "Similar",  # Synonym, Co-hyponym
    4: "Contrast",  # Antonym
    5: "Attribute",  # Attribute, Event
    6: "Non Attribute",
    7: "Case Relation",
    8: "Cause-Purpose",
    9: "Space-Time",
    10: "Representation"
}
```

Each parent relation is further grouped into child relation types, whose definitions can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).

## Dataset Structure

### Data Instances

An example of `train` looks as follows.

```
{
    'relation_type': '8d',
    'positives': [
        [ "breathe", "live" ],
        [ "study", "learn" ],
        [ "speak", "communicate" ],
        ...
    ],
    'negatives': [
        [ "starving", "hungry" ],
        [ "clean", "bathe" ],
        [ "hungry", "starving" ],
        ...
    ]
}
```

### Data Splits

| name | train | validation |
|---------|----:|---------:|
| semeval2012_relational_similarity | 89 | 89 |

### Number of Positive/Negative Word-pairs in each Split

| relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
|:----------------|-------------------:|-------------------:|------------------------:|------------------------:|
| 1 | 50 | 740 | 63 | 826 |
| 10 | 60 | 730 | 66 | 823 |
| 10a | 10 | 799 | 14 | 894 |
| 10b | 10 | 797 | 13 | 893 |
| 10c | 10 | 800 | 11 | 898 |
| 10d | 10 | 799 | 10 | 898 |
| 10e | 10 | 795 | 8 | 896 |
| 10f | 10 | 799 | 10 | 898 |
| 1a | 10 | 797 | 14 | 892 |
| 1b | 10 | 797 | 14 | 892 |
| 1c | 10 | 800 | 11 | 898 |
| 1d | 10 | 797 | 16 | 890 |
| 1e | 10 | 794 | 8 | 895 |
| 2 | 100 | 690 | 117 | 772 |
| 2a | 10 | 799 | 15 | 893 |
| 2b | 10 | 796 | 11 | 894 |
| 2c | 10 | 798 | 13 | 894 |
| 2d | 10 | 798 | 10 | 897 |
| 2e | 10 | 799 | 11 | 897 |
| 2f | 10 | 802 | 11 | 900 |
| 2g | 10 | 796 | 16 | 889 |
| 2h | 10 | 799 | 11 | 897 |
| 2i | 10 | 800 | 9 | 900 |
| 2j | 10 | 801 | 10 | 900 |
| 3 | 80 | 710 | 80 | 809 |
| 3a | 10 | 799 | 11 | 897 |
| 3b | 10 | 802 | 11 | 900 |
| 3c | 10 | 798 | 12 | 895 |
| 3d | 10 | 798 | 14 | 893 |
| 3e | 10 | 802 | 5 | 906 |
| 3f | 10 | 803 | 11 | 901 |
| 3g | 10 | 801 | 6 | 904 |
| 3h | 10 | 801 | 10 | 900 |
| 4 | 80 | 710 | 82 | 807 |
| 4a | 10 | 802 | 11 | 900 |
| 4b | 10 | 797 | 7 | 899 |
| 4c | 10 | 800 | 12 | 897 |
| 4d | 10 | 796 | 4 | 901 |
| 4e | 10 | 802 | 12 | 899 |
| 4f | 10 | 802 | 9 | 902 |
| 4g | 10 | 798 | 15 | 892 |
| 4h | 10 | 801 | 12 | 898 |
| 5 | 90 | 700 | 105 | 784 |
| 5a | 10 | 798 | 14 | 893 |
| 5b | 10 | 801 | 8 | 902 |
| 5c | 10 | 799 | 11 | 897 |
| 5d | 10 | 797 | 15 | 891 |
| 5e | 10 | 801 | 8 | 902 |
| 5f | 10 | 801 | 11 | 899 |
| 5g | 10 | 802 | 9 | 902 |
| 5h | 10 | 800 | 15 | 894 |
| 5i | 10 | 800 | 14 | 895 |
| 6 | 80 | 710 | 99 | 790 |
| 6a | 10 | 798 | 15 | 892 |
| 6b | 10 | 801 | 11 | 899 |
| 6c | 10 | 801 | 13 | 897 |
| 6d | 10 | 804 | 10 | 903 |
| 6e | 10 | 801 | 11 | 899 |
| 6f | 10 | 799 | 12 | 896 |
| 6g | 10 | 798 | 12 | 895 |
| 6h | 10 | 799 | 15 | 893 |
| 7 | 80 | 710 | 91 | 798 |
| 7a | 10 | 800 | 14 | 895 |
| 7b | 10 | 796 | 7 | 898 |
| 7c | 10 | 797 | 11 | 895 |
| 7d | 10 | 800 | 14 | 895 |
| 7e | 10 | 797 | 10 | 896 |
| 7f | 10 | 796 | 12 | 893 |
| 7g | 10 | 794 | 9 | 894 |
| 7h | 10 | 795 | 14 | 890 |
| 8 | 80 | 710 | 90 | 799 |
| 8a | 10 | 797 | 14 | 892 |
| 8b | 10 | 801 | 7 | 903 |
| 8c | 10 | 796 | 12 | 893 |
| 8d | 10 | 796 | 13 | 892 |
| 8e | 10 | 796 | 11 | 894 |
| 8f | 10 | 797 | 12 | 894 |
| 8g | 10 | 793 | 7 | 895 |
| 8h | 10 | 798 | 14 | 893 |
| 9 | 90 | 700 | 96 | 793 |
| 9a | 10 | 795 | 14 | 890 |
| 9b | 10 | 799 | 12 | 896 |
| 9c | 10 | 790 | 7 | 892 |
| 9d | 10 | 803 | 9 | 903 |
| 9e | 10 | 804 | 8 | 905 |
| 9f | 10 | 799 | 10 | 898 |
| 9g | 10 | 796 | 14 | 891 |
| 9h | 10 | 799 | 13 | 895 |
| 9i | 10 | 799 | 9 | 899 |
| SUM | 1580 | 70207 | 1778 | 78820 |

### Citation Information

```
@inproceedings{jurgens-etal-2012-semeval,
    title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
    author = "Jurgens, David and Mohammad, Saif and Turney, Peter and Holyoak, Keith",
    booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
    month = "7-8 " # jun,
    year = "2012",
    address = "Montr{\'e}al, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S12-1047",
    pages = "356--364",
}
```
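For fine-tuning, each record's `positives` and `negatives` lists can be flattened into labeled word pairs. A minimal sketch follows; the exact RelBERT training format is not specified in this card, so this is only illustrative:

```python
# A toy record mirroring the instance shown under "Data Instances" above.
sample = {
    "relation_type": "8d",
    "positives": [["breathe", "live"], ["study", "learn"]],
    "negatives": [["starving", "hungry"], ["clean", "bathe"], ["hungry", "starving"]],
}

def to_examples(record):
    """Turn one relation record into (word_pair, label) examples:
    label 1 for positives, 0 for negatives."""
    examples = [(pair, 1) for pair in record["positives"]]
    examples += [(pair, 0) for pair in record["negatives"]]
    return examples

examples = to_examples(sample)
```

Note that reversed pairs (e.g. `["hungry", "starving"]` above) appear as negatives, since the relations are directional.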
research-backup/semeval2012_relational_similarity
[ "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "region:us" ]
2022-07-18T16:59:33+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "pretty_name": "SemEval2012 task 2 Relational Similarity"}
2022-07-20T17:56:37+00:00
d607d2b6dbe4cf86623fa542bc6d696e10ec3799
# Dataset Card for "relbert/analogy_questions"

## Dataset Description

- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/2021.acl-long.280/](https://aclanthology.org/2021.acl-long.280/)
- **Dataset:** Analogy Questions

### Dataset Summary

This dataset contains 5 different sets of word analogy questions used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/).

- original analogy questions

| name | Size (valid/test) | Num of choice | Num of relation group | Original Reference |
|-----------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:|
| `u2` | 24/228 | 5,4,3 | 9 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `u4` | 48/432 | 5,4,3 | 5 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `google` | 50/500 | 4 | 2 | [Mikolov et al., (2013)](https://www.aclweb.org/anthology/N13-1090.pdf) |
| `bats` | 199/1799 | 4 | 3 | [Gladkova et al., (2016)](https://www.aclweb.org/anthology/N18-2017.pdf) |

- extra analogy questions

| name | Size (valid/test) | Num of choice (valid/test) | Num of relation group (valid/test) | Original Reference |
|:------------------------------------|:--------------------|:-----------------------------|:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
| `semeval2012_relational_similarity` | 79/- | 3/- | 79/- | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) |
| `t_rex_relational_similarity` | 496/183 | 74/48 | 60/19 | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) |
| `conceptnet_relational_similarity` | 1112/1192 | 19/17 | 18/16 | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) |
| `nell_relational_similarity` | 400/600 | 5/7 | 4/6 | [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) |
| `scan` | 178/1616 | 3,36,136,10,45,78,15,21,55,120,153,91,28/3,36,136,10,45,78,15,21,55,120,153,91,28 | 2/2 | [relbert/scientific_and_creative_analogy](https://huggingface.co/datasets/relbert/scientific_and_creative_analogy) |

## Dataset Structure

### Data Instances

An example of `test` looks as follows.

```
{
    "stem": ["raphael", "painter"],
    "answer": 2,
    "choice": [["andersen", "plato"], ["reading", "berkshire"], ["marx", "philosopher"], ["tolstoi", "edison"]]
}
```

The `stem` is the query word pair, `choice` has the word pair candidates, and `answer` indicates the index of the correct candidate, starting from `0`. All data is lowercased except the Google dataset.

### Citation Information

```
@inproceedings{ushio-etal-2021-bert-is,
    title ={{BERT} is to {NLP} what {A}lex{N}et is to {CV}: {C}an {P}re-{T}rained {L}anguage {M}odels {I}dentify {A}nalogies?},
    author={Ushio, Asahi and Espinosa-Anke, Luis and Schockaert, Steven and Camacho-Collados, Jose},
    booktitle={Proceedings of the {ACL}-{IJCNLP} 2021 Main Conference},
    year={2021},
    publisher={Association for Computational Linguistics}
}
```

### LICENSE

All resources are released under [CC-BY-NC-4.0](./LICENSE). They are thus freely available for academic purposes or individual research, but restricted for commercial use.
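One common baseline for answering such questions is to compare relation offsets in an embedding space and pick the candidate whose offset is most similar to the stem's. The sketch below uses hand-made toy 2D vectors purely for illustration (an assumption; RelBERT and word-embedding baselines learn these representations):

```python
import math

# Toy embeddings for the example instance above; values are invented so the
# correct choice ("marx" : "philosopher") shares the stem's offset exactly.
vec = {
    "raphael": (1.0, 0.0), "painter": (1.0, 2.0),    # stem offset (0, 2)
    "andersen": (0.0, 0.0), "plato": (2.0, 0.0),     # offset (2, 0)
    "reading": (1.0, 1.0), "berkshire": (0.0, 1.0),  # offset (-1, 0)
    "marx": (3.0, 1.0), "philosopher": (3.0, 3.0),   # offset (0, 2) -- matches
    "tolstoi": (0.0, 2.0), "edison": (2.0, 2.0),     # offset (2, 0)
}

def offset(a, b):
    return tuple(y - x for x, y in zip(vec[a], vec[b]))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def answer(stem, choices):
    """Return the index of the choice whose offset is most similar to the stem's."""
    target = offset(*stem)
    scores = [cosine(target, offset(*c)) for c in choices]
    return scores.index(max(scores))

pred = answer(["raphael", "painter"],
              [["andersen", "plato"], ["reading", "berkshire"],
               ["marx", "philosopher"], ["tolstoi", "edison"]])
```

With these toy vectors the predicted index matches the gold `answer` of `2`.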
relbert/analogy_questions
[ "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:other", "region:us" ]
2022-07-18T17:01:16+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "pretty_name": "Analogy Question"}
2023-05-16T19:24:12+00:00
d20edb6795642707df6470800216cd5941ee48fc
teymur/art_schools
[ "region:us" ]
2022-07-18T17:41:47+00:00
{}
2022-07-18T17:42:41+00:00
d81b8291e5998f5726ab7f35a0a557e761532aac
# Dataset Card for Mostly Basic Python Problems (mbpp) ## Table of Contents - [Dataset Card for Mostly Basic Python Problems (mbpp)](#dataset-card-for-mostly-basic-python-problems-(mbpp)) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/google-research/google-research/tree/master/mbpp - **Paper:** [Program Synthesis with Large Language Models](https://arxiv.org/abs/2108.07732) ### Dataset Summary The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. 
Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us. Released [here](https://github.com/google-research/google-research/tree/master/mbpp) as part of [Program Synthesis with Large Language Models, Austin et. al., 2021](https://arxiv.org/abs/2108.07732). ### Supported Tasks and Leaderboards This dataset is used to evaluate code generations. ### Languages English - Python code ## Dataset Structure ```python dataset_full = load_dataset("mbpp") DatasetDict({ test: Dataset({ features: ['task_id', 'text', 'code', 'test_list', 'test_setup_code', 'challenge_test_list'], num_rows: 974 }) }) dataset_sanitized = load_dataset("mbpp", "sanitized") DatasetDict({ test: Dataset({ features: ['source_file', 'task_id', 'prompt', 'code', 'test_imports', 'test_list'], num_rows: 427 }) }) ``` ### Data Instances #### mbpp - full ``` { 'task_id': 1, 'text': 'Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].', 'code': 'R = 3\r\nC = 3\r\ndef min_cost(cost, m, n): \r\n\ttc = [[0 for x in range(C)] for x in range(R)] \r\n\ttc[0][0] = cost[0][0] \r\n\tfor i in range(1, m+1): \r\n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \r\n\tfor j in range(1, n+1): \r\n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \r\n\tfor i in range(1, m+1): \r\n\t\tfor j in range(1, n+1): \r\n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \r\n\treturn tc[m][n]', 'test_list': [ 'assert min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8', 'assert min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12', 'assert min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16'], 'test_setup_code': '', 'challenge_test_list': [] } ``` #### mbpp - sanitized ``` { 'source_file': 'Benchmark Questions Verification V2.ipynb', 'task_id': 2, 'prompt': 'Write a function to find the shared elements from the given two 
lists.', 'code': 'def similar_elements(test_tup1, test_tup2):\n  res = tuple(set(test_tup1) & set(test_tup2))\n  return (res) ', 'test_imports': [], 'test_list': [ 'assert set(similar_elements((3, 4, 5, 6),(5, 7, 4, 10))) == set((4, 5))', 'assert set(similar_elements((1, 2, 3, 4),(5, 4, 3, 7))) == set((3, 4))', 'assert set(similar_elements((11, 12, 14, 13),(17, 15, 14, 13))) == set((13, 14))' ] } ```

### Data Fields

- `source_file`: unknown
- `text`/`prompt`: description of programming task
- `code`: solution for programming task
- `test_setup_code`/`test_imports`: necessary code imports to execute tests
- `test_list`: list of tests to verify solution
- `challenge_test_list`: list of more challenging tests to further probe the solution

### Data Splits

There are two versions of the dataset (full and sanitized), each with a single split (test).

## Dataset Creation

See section 2.1 of the original [paper](https://arxiv.org/abs/2108.07732).

### Curation Rationale

Evaluating code generation requires a set of simple programming tasks together with reference solutions, which this dataset provides.

### Source Data

#### Initial Data Collection and Normalization

The dataset was manually created from scratch.

#### Who are the source language producers?

The dataset was created with an internal crowdsourcing effort at Google.

### Annotations

#### Annotation process

The full dataset was created first and a subset then underwent a second round to improve the task descriptions.

#### Who are the annotators?

The dataset was created with an internal crowdsourcing effort at Google.

### Personal and Sensitive Information

None.

## Considerations for Using the Data

Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.

### Social Impact of Dataset

With this dataset, code-generating models can be evaluated more rigorously, which leads to fewer issues being introduced when such models are used.
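A minimal sketch of how a candidate solution could be checked against a problem's `test_list` follows. This is only illustrative: a real harness must add sandboxing and timeouts, per the safety note above.

```python
def passes_tests(code, test_list, setup_code=""):
    """Run a candidate solution against MBPP-style assert strings.

    Returns True only if the code executes and every test passes.
    WARNING: exec() runs arbitrary code; never use this outside a sandbox.
    """
    namespace = {}
    try:
        exec(setup_code, namespace)   # corresponds to test_setup_code
        exec(code, namespace)         # the candidate solution
        for test in test_list:
            exec(test, namespace)     # each entry is an assert statement
    except Exception:
        return False
    return True

# Illustrative candidate for the similar_elements task shown above.
candidate = (
    "def similar_elements(a, b):\n"
    "    return tuple(set(a) & set(b))\n"
)
tests = ["assert set(similar_elements((3, 4, 5, 6), (5, 7, 4, 10))) == set((4, 5))"]

ok = passes_tests(candidate, tests)
broken = passes_tests("def similar_elements(a, b):\n    return ()\n", tests)
```

Here `ok` is `True` while `broken` is `False`, since the second candidate fails the assertion.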
### Discussion of Biases

### Other Known Limitations

The task descriptions alone might not be expressive enough to fully specify the task. The `sanitized` version aims at addressing this issue by having a second round of annotators improve the dataset.

## Additional Information

### Dataset Curators

Google Research

### Licensing Information

CC-BY-4.0

### Citation Information

```
@article{austin2021program,
  title={Program Synthesis with Large Language Models},
  author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others},
  journal={arXiv preprint arXiv:2108.07732},
  year={2021}
}
```

### Contributions

Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
Muennighoff/mbpp
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-4.0", "code-generation", "arxiv:2108.07732", "region:us" ]
2022-07-18T18:05:21+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "Mostly Basic Python Problems", "tags": ["code-generation"]}
2022-10-20T18:43:58+00:00
c692cd0d633f0a920eb45833ec64f748b9e7ca72
# Description

This dataset is a subset of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) that has been adversarially modified. It is designed to fool ASR models into predicting a target of our choosing instead of the correct output.

## Splits

The dataset contains several splits. Each split consists of the same utterances, modified with different types and amounts of noise. Three kinds of noise have been used:

* Adversarial noise of radius 0.04 (`adv_0.04` split)
* Adversarial noise of radius 0.015 (`adv_0.015` split)
* Adversarial noise of radius 0.015 combined with Room Impulse Response (RIR) noise (`adv_0.015_RIR` split)

In addition we provide the original inputs (`natural` split).

For each split we actually provide two text keys: `true_text`, which is the original LibriSpeech label, i.e. the sentence one can actually hear when listening to the audio; and `target_text`, which is the target sentence of our adversarial attack. An ASR model that this dataset fools would get a low WER on `target_text` and a high WER on `true_text`. An ASR model robust to this dataset would get the opposite.

## Usage

You should evaluate your model on this dataset as you would evaluate it on LibriSpeech.
Here is an example with Wav2Vec2

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_adv_eval = load_dataset("RaphaelOlivier/librispeech_asr_adversarial", "adv", split="adv_0.15_adv_txt")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_adv_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER on correct labels:", wer(result["true_text"], result["transcription"]))
print("WER on attack targets:", wer(result["target_text"], result["transcription"]))
```

*Result (WER)*:

| "0.015 target_text" | "0.015 true_text" | "0.04 target_text" | "0.04 true_text" |
|---|---|---|---|
| 58.2 | 108 | 49.5 | 108 |
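If `jiwer` is unavailable, the word error rate used above can be sketched with a plain word-level edit distance. This is illustrative only; `jiwer` additionally applies its own normalization:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)
```

Note that WER can exceed 100% (as in the `true_text` column above) when the hypothesis requires more edits than the reference has words.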
RaphaelOlivier/librispeech_asr_adversarial
[ "region:us" ]
2022-07-18T18:08:15+00:00
{}
2022-08-02T23:02:08+00:00
db08ee5f909bebfadfdee104a5653078574e8602
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Multi-class Image Classification
* Model: rajistics/finetuned-indian-food
* Dataset: rajistics/indian_food_images
* Config: rajistics--indian_food_images
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@rajistics](https://huggingface.co/rajistics) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-rajistics__indian_food_images-7f4d71b4-11165495
[ "autotrain", "evaluation", "region:us" ]
2022-07-18T19:02:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["rajistics/indian_food_images"], "eval_info": {"task": "image_multi_class_classification", "model": "rajistics/finetuned-indian-food", "metrics": [], "dataset_name": "rajistics/indian_food_images", "dataset_config": "rajistics--indian_food_images", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}}
2022-07-18T19:03:52+00:00
dac358f5f9e237b2670b04bf261c3c200326257d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-7d55fc88-11175496
[ "autotrain", "evaluation", "region:us" ]
2022-07-18T19:08:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2022-07-19T05:04:56+00:00
b5fb582e95e2b842ce33d94d1cc48b18442f19d1
nateraw/documentation-images
[ "license:mit", "region:us" ]
2022-07-18T19:21:34+00:00
{"license": "mit"}
2023-11-07T10:38:56+00:00
24b7cc19e0ca633cccf49ad39a42e8feca1ac4d1
# Dataset Card for lampeter_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/3193 - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** Josef Schmied, Claudia Claridge, Rainer Siemund ### Dataset Summary The Lampeter Corpus of Early Modern English Tracts is a collection of texts on various subject matter published between 1640 and 1740,  a time that was marked by the rise of mass publication, the development of public discourse in many areas of everyday life and last but not least, the standardisation of British English. Each text belongs to one of the following genres: Law, Economy, Religion, Politics, Science, Miscellaneous ### Supported Tasks and Leaderboards - `text-classification`: This dataset comes with dates and genre classifications for each text which can be used to finetune a model for text classification. 
### Languages

The text in the dataset is British English. The associated BCP-47 code is `en-GB`.

## Dataset Structure

### Data Instances

A typical data point contains an id, a text, the head of the text (which can be missing on some occasions) and the title. The two features which can be used for classification are `date`, which is the year of publication, and `genre`, which classifies the text into one of six broad areas.

```
{
 'id': 'SciB1735',
 'text': '\nI. WHEN I read your Defence of the British Mathematicians, I could not, Sir, but admire your Courage in asserting with such undoubting Assurance things so easily disproved. This to me seemed unaccountable, till I reflected on what you say (p. 32.) when upon my having appealed to every thinking Reader, whether it be possible to frame any clear Conception of Fluxions, you express yourself in the following manner, "Pray, Sir, who are those thinking Readers you ap\npeal to? Are they Geometricians, or Persons wholly ignorant of Geometry? If the former, I leave it to them: If the latter, I ask how well are they qualified to judge of the Method of Fluxions"? It must be acknowledged you seem by this Dilemma secure in the favour of one Part of your Readers, and the ignorance of the other. I am nevertheless persuaded there are fair and candid Men among the Mathematicians. And for those who are not Mathematicians, I shall endeavour so to unveil this Mystery, [TRUNCATED]',
 'date': '1735',
 'genre': 'Science',
 ' head': 'A DEFENCE OF FREE-THINKING IN Mathematics; &c.\n',
 'title': 'A defence of free-thinking in mathematics [...]'
}
```

### Data Fields

The dataset contains the following fields:

- `id`: Unique identifier ("string"),
- `text`: Text in the document ("string"),
- `date`: Date of publication ("date64"),
- `genre`: Broad classification ("string"),
- `head`: Often same as title. Can be missing ("string"),
- `title`: Title of document ("string")

### Data Splits

Train: 120

## Dataset Creation

### Curation Rationale

The period covered by the Lampeter Corpus, 1640 to 1740, marks a crucial period in English history and the elaboration of English as a multi-purpose language. The texts selected for the corpus reflect the standardisation process of English and historical developments between the outbreak of the Civil War and the beginning of the Industrial Revolution. To meet the needs of linguists and historians alike, the Lampeter project has attempted to create a balanced corpus rather than a randomly chosen archive or collection. A balanced corpus, then, is characterised by several transparent sampling criteria.

### Source Data

#### Initial Data Collection and Normalization

The original data is selected according to the following criteria:

- Complete texts only, including dedications, prefaces, postscripts, etc.
- Texts are of varying length, ranging from c. 3,000 to c. 20,000 words.
- Each author appears only once to avoid idiosyncratic language use.
- Major literary figures of the time were excluded since their writing style can be studied in other, existing collections.
- Generally, only first editions of the texts; later editions only if changes were made by the original authors, thus ensuring the authenticity of the language.

#### Who are the source language producers?

Authors of texts between 1640-1740

### Annotations

#### Annotation process

N/A

#### Who are the annotators?

N/A

### Personal and Sensitive Information

N/A

## Considerations for Using the Data

### Social Impact of Dataset

N/A

### Discussion of Biases

The social biases of the time in terms of race, sex, gender, etc.
might be encountered in this dataset ### Other Known Limitations None ## Additional Information ### Dataset Curators Josef Schmied, Claudia Claridge, Rainer Siemund ### Licensing Information Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) ### Citation Information University of Oxford, The Lampeter Corpus of Early Modern English Tracts, Oxford Text Archive, http://hdl.handle.net/20.500.12024/3193.
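For the text-classification use case mentioned under "Supported Tasks", records can be shaped into `(text, label)` pairs over the six genres. A minimal sketch follows; the second sample record is invented for illustration:

```python
# Records mirroring the Lampeter schema; the "RelA1642" entry is hypothetical.
records = [
    {"id": "SciB1735", "text": "A defence of free-thinking ...", "date": "1735", "genre": "Science"},
    {"id": "RelA1642", "text": "A sermon preached ...", "date": "1642", "genre": "Religion"},
]

# The six broad areas listed in the card summary.
genres = ["Law", "Economy", "Religion", "Politics", "Science", "Miscellaneous"]
label2id = {g: i for i, g in enumerate(genres)}

pairs = [(r["text"], label2id[r["genre"]]) for r in records]
```

These pairs can then be fed to any standard fine-tuning loop for sequence classification.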
biglam/lampeter_corpus
[ "task_categories:text-classification", "task_ids:multi-label-classification", "task_ids:multi-class-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2022-07-18T20:33:13+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification", "multi-class-classification"], "pretty_name": "Lampeter Corpus"}
2022-09-15T14:52:46+00:00
017c5c5cada61bfacf5431573b0d054d7a9ce6c6
# Dataset Card for NLLB Multi-Domain ## Table of Contents - [Dataset Card for NLLB Multi-Domain](#dataset-card-for-nllb-multi-domain) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Home:** [Flores](https://github.com/facebookresearch/flores/tree/main/nllb_md) - **Repository:** [Github](https://github.com/facebookresearch/flores/tree/main/nllb_md) ### Dataset Summary NLLB Multi Domain is a set of professionally translated sentences in the News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences. ### Supported Tasks and Leaderboards #### Multilingual Machine Translation Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html). FLORES-200 is an extension of this. ### Languages Language | FLORES-200 code ---|--- Central Aymara | ayr_Latn Bhojpuri | bho_Deva Dyula | dyu_Latn Friulian | fur_Latn Russian | rus_Cyrl Wolof | wol_Latn Use a hyphenated pairing to get two languages in one datapoint (e.g., "eng_Latn-rus_Cyrl" will provide sentences in the format below).
## Dataset Structure ### Data Instances See Dataset Viewer. The text is provided as-is from the original dataset, without further preprocessing or tokenization. ### Data Fields - `id`: Row number for the data entry, starting at 1. - `sentence`: The full sentence in the specific language (may have a `_lang` suffix for pairings). - `domain`: The domain of the sentence. ### Dataset Creation Please refer to the original article [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) for additional information on dataset creation. ## Additional Information ### Dataset Curators See paper for details. ### Licensing Information Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/). ### Citation Information Please cite the authors if you use these corpora in your work: ```bibtex @article{nllb2022, author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang}, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, year = {2022} } ``` Please also cite prior work that this dataset builds on: ```bibtex @inproceedings{, title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation}, author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco
and Fan, Angela}, year={2021} } ``` ```bibtex @inproceedings{, title={Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English}, author={Guzm\'{a}n, Francisco and Chen, Peng-Jen and Ott, Myle and Pino, Juan and Lample, Guillaume and Koehn, Philipp and Chaudhary, Vishrav and Ranzato, Marc'Aurelio}, journal={arXiv preprint arXiv:1902.01382}, year={2019} } ```
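The hyphenated pairing described in the Languages section can be handled generically. A minimal sketch (not from the dataset authors; it assumes the `sentence_<code>` column naming noted in the Data Fields section — check the dataset viewer for the exact names) for deriving the per-language field names of a pair config:

```python
def pair_sentence_fields(pair: str) -> list:
    # FLORES-200 codes contain "_" (e.g. eng_Latn) while the pair separator
    # is "-", so a plain split on "-" is unambiguous.
    return ["sentence_" + code for code in pair.split("-")]

print(pair_sentence_fields("eng_Latn-rus_Cyrl"))
# ['sentence_eng_Latn', 'sentence_rus_Cyrl']
```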
breakend/nllb-multi-domain
[ "annotations_creators:found", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:unknown", "source_datasets:extended|flores", "language:en", "language:ru", "language:ayr", "language:bho", "language:dyu", "language:fur", "language:wol", "license:cc-by-sa-4.0", "arxiv:2207.04672", "region:us" ]
2022-07-18T22:01:53+00:00
{"annotations_creators": ["found"], "language_creators": ["expert-generated"], "language": ["en", "ru", "ayr", "bho", "dyu", "fur", "wol"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual", "translation"], "size_categories": ["unknown"], "source_datasets": ["extended|flores"], "task_categories": ["conditional-text-generation"], "task_ids": ["machine-translation"], "paperswithcode_id": "flores", "pretty_name": "nllb-multi-domain"}
2022-08-09T19:44:23+00:00
7cf9edbb26f77e278980a0a7274c9b9cfe736a0a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/led-large-book-summary * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-e4148a42-11205497
[ "autotrain", "evaluation", "region:us" ]
2022-07-18T22:40:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-large-book-summary", "metrics": ["perplexity"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-18T23:46:59+00:00
ad38a8b3a538f495d14beab585c71d704249645b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-e4148a42-11205498
[ "autotrain", "evaluation", "region:us" ]
2022-07-18T22:40:38+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": ["perplexity"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-18T23:11:46+00:00
00d6311c524f0c3e5d420dce399855e3ee777cba
gorkemozkaya/blended_en_tr
[ "license:other", "region:us" ]
2022-07-19T03:36:09+00:00
{"license": "other"}
2022-07-19T04:29:32+00:00
7c2e7e455a7a832656bcc0fb0e299e2af85f9778
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-All
[ "region:us" ]
2022-07-19T10:14:23+00:00
{}
2022-09-06T13:45:08+00:00
2f101624310c129a6303a1f4f3df70a191357911
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-None
[ "region:us" ]
2022-07-19T10:16:17+00:00
{}
2022-09-06T13:45:55+00:00
7b02284135e8ef3867e5fc168f9bbb9cbd355335
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Title
[ "region:us" ]
2022-07-19T10:25:09+00:00
{}
2022-09-06T13:48:25+00:00
231d3567e46d83cc26158c2a712862e27a633dda
PierreMeester/TestBloom
[ "license:afl-3.0", "region:us" ]
2022-07-19T10:28:06+00:00
{"license": "afl-3.0"}
2022-07-19T10:28:06+00:00
aa7465b952ba969304d1a6b8f32b7bbb00873fbb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f9efad07-2209-4d77-9230-9fd08f3882ea-41
[ "autotrain", "evaluation", "region:us" ]
2022-07-19T10:46:16+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-19T13:25:37+00:00
cde78f02852c8d40801626a4e54802f560797583
# Dataset Card for dummy_tags
albertvillanova/dummy_tags
[ "language:en", "test", "dummy", "region:us" ]
2022-07-19T11:43:29+00:00
{"language": ["en"], "tags": ["test", "dummy"]}
2022-07-19T11:45:12+00:00
f7f915d4676a984516b6dc1a6a898852d81e4b40
this is a test
liyangbing/water
[ "license:afl-3.0", "region:us" ]
2022-07-19T11:51:21+00:00
{"license": "afl-3.0"}
2022-07-19T12:11:13+00:00
6d8a794fba6e00890cdb0dffba4e1cc5edc52664
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Targets
[ "region:us" ]
2022-07-19T11:59:16+00:00
{}
2022-09-06T13:51:18+00:00
43610fc18d73da3c4af78813f71b1c3c70c2dc44
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Indicators
[ "region:us" ]
2022-07-19T12:03:15+00:00
{}
2022-09-06T13:43:39+00:00
6ca901142522d395700127df0deed52dde59816c
nli-label: - (0) entailment - (2) contradiction
gorkaartola/SC-train-valid-test_SDG-Descriptions
[ "region:us" ]
2022-07-19T12:23:45+00:00
{}
2023-01-18T13:58:15+00:00
7aa921ee95641df5965f5589fdfd1a7426296547
# Dataset description This dataset consists of sequences of Python code followed by a docstring explaining its function. It was constructed by concatenating code and text pairs from this [dataset](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) that were originally code and markdown cells in Jupyter Notebooks. The content of each example is the following: ```` [CODE] """ Explanation: [TEXT] End of explanation """ [CODE] """ Explanation: [TEXT] End of explanation """ ... ```` # How to use it ```python from datasets import load_dataset ds = load_dataset("codeparrot/github-jupyter-code-to-text", split="train") ``` ```` Dataset({ features: ['repo_name', 'path', 'license', 'content'], num_rows: 47452 }) ````
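Since every example follows the `[CODE] """ Explanation: [TEXT] End of explanation """` template above, the pairs can be recovered with a regular expression. A rough sketch (my own, not part of the dataset tooling) for splitting one `content` string back into (code, explanation) pairs:

```python
import re

# Matches the docstring delimiter used in this dataset's examples.
PAT = re.compile(r'"""\s*Explanation:\s*(.*?)\s*End of explanation\s*"""', re.DOTALL)

def code_text_pairs(content):
    # re.split with one capture group alternates code segments and captured
    # texts: [code0, text0, code1, text1, ..., trailing_code]
    parts = PAT.split(content)
    return [(parts[i].strip(), parts[i + 1].strip())
            for i in range(0, len(parts) - 1, 2)]
```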
codeparrot/github-jupyter-code-to-text
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "code", "region:us" ]
2022-07-19T13:00:45+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "tags": ["code"]}
2023-11-04T23:51:23+00:00
802411c3010cb00d1b05bad57ca77365a3c699d6
# Dataset Card for CodeContests ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/deepmind/code_contests/ - **Paper:** [Competition-Level Code Generation with AlphaCode](https://arxiv.org/abs/2203.07814v1) - **Leaderboard:** [Code Generation on CodeContests](https://paperswithcode.com/sota/code-generation-on-codecontests) - **Point of Contact:** [David Choi](mailto:[email protected]) ### Dataset Summary CodeContests is a competitive programming dataset for machine-learning. This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode). 
It consists of programming problems, from a variety of sources: Site | URL | Source ----------- | --------------------------- | ------ Aizu | https://judge.u-aizu.ac.jp | [CodeNet](https://github.com/IBM/Project_CodeNet) AtCoder | https://atcoder.jp | [CodeNet](https://github.com/IBM/Project_CodeNet) CodeChef | https://www.codechef.com | [description2code](https://github.com/ethancaballero/description2code) Codeforces | https://codeforces.com | [description2code](https://github.com/ethancaballero/description2code) and Codeforces HackerEarth | https://www.hackerearth.com | [description2code](https://github.com/ethancaballero/description2code) Problems include test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages. ### Supported Tasks and Leaderboards - `translation` - the competitive programming code generation problem can be viewed as a sequence-to-sequence translation task: given a problem description 𝑋 in natural language, produce a corresponding solution 𝑌 in a programming language. The metric used for evaluation is "percentage of problems solved using 𝑛 submissions from 𝑘 samples per problem", denoted as 𝑛@𝑘. More information on the evaluation of AlphaCode can be found in Section 2.2. and Appendix A.3. of the paper. The leaderboard for this task is available [here](https://paperswithcode.com/sota/code-generation-on-codecontests). ### Languages English. ## Dataset Structure ### Data Instances A data point corresponds to a singular contest problem: ``` { 'name': '76_B. 
Mice', 'description': 'Modern researches has shown that a flock of hungry mice ' 'searching for a piece of...', 'public_tests': {'input': ['3 2 0 2\n0 1 3\n2 5\n'], 'output': ['1\n']}, 'private_tests': {'input': ['20 18 1 2\n' '-9999944 -9999861 -9999850 -9999763 -9999656 ' '-9999517 -9999375 -999927...', ..., '7 11 10 20\n' '6 18 32 63 66 68 87\n' '6 8 15 23 25 41 53 59 60 75 90\n'], 'output': ['2\n', ..., '1\n']}, 'generated_tests': {'input': ['7 11 10 5\n' '6 18 32 63 66 68 87\n' '6 8 15 23 25 41 53 59 60 75 90\n', ..., '7 11 10 4\n' '6 18 46 63 85 84 87\n' '6 8 15 18 25 41 53 59 60 75 90\n'], 'output': ['1\n', ..., '2\n']}, 'source': 2, 'difficulty': 8, 'solutions': {'language': [2, ..., 2], 'solution': ['#include <bits/stdc++.h>\n' 'using namespace std;\n' 'int n, m;\n' 'int data[2][100010], t[1...', ..., '#include <bits/stdc++.h>\n' 'using namespace std;\n' 'int n, m, pos[100100], food[100100...']}, 'incorrect_solutions': {'language': [2, ..., 2], 'solution': ['#include <bits/stdc++.h>\n' 'using namespace std;\n' 'vector<pair<int, int> > v[100010];...', ..., '#include <bits/stdc++.h>\n' 'using namespace std;\n' 'vector<pair<int, int> > v[100010];...']}, 'cf_contest_id': 76, 'cf_index': 'B', 'cf_points': 0.0, 'cf_rating': 2100, 'cf_tags': ['greedy', 'two pointers'], 'is_description_translated': False, 'untranslated_description': '', 'time_limit': {'seconds': 0, 'nanos': 500000000}, 'memory_limit_bytes': 256000000, 'input_file': '', 'output_file': '' } ``` ### Data Fields - `name`: The name of the contest. Note that names could agree between different sources. - `description`: A natural language description of a programming problem. - `public_tests`: Public tests are those that are available before submitting a solution, typically as part of the description itself. Represented as a paired `input` and `output` that can be used to test potential solutions. They are therefore acceptable inputs to a model. 
- `private_tests`: Private tests are not visible before submitting a solution, so should not be made available as inputs to a model. - `generated_tests`: Generated tests are automatically generated by modifying inputs from public and private tests and validating using known correct solutions. - `source`: The original source of the problem, with possible values including `UNKNOWN_SOURCE` (0),`CODECHEF` (1), `CODEFORCES` (2), `HACKEREARTH` (3), `CODEJAM` (4), `ATCODER` (5) and `AIZU` (6). - `difficulty`: A representation of the difficulty of the problem with possible values including `UNKNOWN_DIFFICULTY` (0), `EASY` (1), `MEDIUM` (2), `HARD` (3), `HARDER` (4), `HARDEST` (5), `EXTERNAL` (6), `A` (7), `B` (8), `C` (9), `D` (10), `E` (11), `F` (12), `G` (13), `H` (14), `I` (15), `J` (16), `K` (17), `L` (18), `M` (19), `N` (20), `O` (21), `P` (22), `Q` (23), `R` (24), `S` (25), `T` (26), `U` (27) and `V` (28). Note that different sources use different, non-comparable gradings. For Codeforces problems, `cf_rating` is a more reliable measure of difficulty when available. - `solutions`: Correct solutions to the problem. Contrast with `incorrect_solutions` below. - `incorrect_solutions`: Incorrect solutions. - `cf_contest_id`: The Contest ID. Note that Contest ID is not monotonic with respect to time. - `cf_index`: Problem index, e.g. `"A"` or `"B"` or `"C"`. - `cf_points`: Points for the problem, e.g. `1000.0` - `cf_rating`: Problem rating (difficulty), e.g. `1100` - `cf_tags`: Problem tags, e.g. `['greedy', 'math']` - `is_description_translated`: Whether the problem was translated to English. - `untranslated_description`: The untranslated description is only available for translated problems. - `time_limit`: The time limit constraint to use when executing solutions. Represented as a dictionary with two keys, `seconds` and `nanos`. This field is None if not defined. - `memory_limit_bytes`: The memory limit constraint to use when executing solutions. 
- `input_file`: Most problems use stdin for IO. Some problems expect specific files to be used instead. - `output_file`: Most problems use stdout for IO. Some problems expect specific files to be used instead. All tests are represented as a paired `input` and `output` that can be used to test potential solutions and all solutions comprise a `language`, with possible values including `UNKNOWN_LANGUAGE` (0), `PYTHON` (1) (solutions written in PYTHON2), `CPP` (2), `PYTHON3` (3) and `JAVA` (4), and a `solution` string written in that `language`. The fields preceded with `cf_` denote extra meta-data for Codeforces problems. ### Data Splits The data is split into training, validation and test sets. The training set contains 13328 samples, the validation set 117 samples and the test set 165 samples. ## Dataset Creation ### Curation Rationale This dataset was created for fine-tuning AlphaCode models: > Models pre-trained on GitHub can generate good code and solve simple programming problems, but as shown in Appendix B.3 they can solve very few competitive programming problems. Fine-tuning the model on a dedicated competitive programming dataset is critical for performance. ### Source Data #### Initial Data Collection and Normalization The information on the data collection and normalization procedures can be found in Section 3.2 and Appendix B.2 of the paper. #### Who are the source language producers? The problems are scraped from the following platforms: [Aizu](https://judge.u-aizu.ac.jp), [AtCoder](https://atcoder.jp), [CodeChef](https://www.codechef.com), [Codeforces](https://codeforces.com) and [HackerEarth](https://www.hackerearth.com). Additionally, some data from the existing public competitive programming dataset Description2Code ([Caballero et al., 2016](https://github.com/ethancaballero/description2code)) and CodeNet ([Puri et al., 2021](https://arxiv.org/pdf/2105.12655.pdf)) is mixed into the training set.
### Annotations #### Annotation process The solutions are scraped alongside the problem descriptions. #### Who are the annotators? Same as the source data creators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals. ### Licensing Information This dataset is made available under the terms of the CC BY 4.0 license ([Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/legalcode)). Additional acknowledged contributions: * Codeforces materials are sourced from http://codeforces.com. * Description2Code materials are sourced from: [Description2Code Dataset](https://github.com/ethancaballero/description2code), licensed under the [MIT open source license](https://opensource.org/licenses/MIT), copyright not specified. * CodeNet materials are sourced from: [Project_CodeNet](https://github.com/IBM/Project_CodeNet), licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0), copyright not specified.
### Citation Information ```bibtex @article{li2022competition, title={Competition-Level Code Generation with AlphaCode}, author={Li, Yujia and Choi, David and Chung, Junyoung and Kushman, Nate and Schrittwieser, Julian and Leblond, R{\'e}mi and Eccles, Tom and Keeling, James and Gimeno, Felix and Dal Lago, Agustin and Hubert, Thomas and Choy, Peter and de Masson d'Autume, Cyprien and Babuschkin, Igor and Chen, Xinyun and Huang, Po-Sen and Welbl, Johannes and Gowal, Sven and Cherepanov, Alexey and Molloy, James and Mankowitz, Daniel and Sutherland Robson, Esme and Kohli, Pushmeet and de Freitas, Nando and Kavukcuoglu, Koray and Vinyals, Oriol}, journal={arXiv preprint arXiv:2203.07814}, year={2022} } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
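The `solutions` and `incorrect_solutions` fields hold parallel `language`/`solution` lists, so filtering by language is a simple zip. A hedged sketch (not part of the official release) using the language ids documented in the Data Fields section:

```python
# Language ids per the Data Fields section:
# 0 UNKNOWN_LANGUAGE, 1 PYTHON (Python 2), 2 CPP, 3 PYTHON3, 4 JAVA.
def solutions_in_language(example, lang_id):
    sols = example["solutions"]
    return [code for lang, code in zip(sols["language"], sols["solution"])
            if lang == lang_id]

# Tiny stand-in record shaped like a CodeContests example:
example = {"solutions": {"language": [2, 3, 2],
                         "solution": ["int main(){}", "print(1)", "int main(){return 0;}"]}}
print(solutions_in_language(example, 2))  # the two C++ entries
```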
deepmind/code_contests
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2203.07814", "arxiv:2105.12655", "region:us" ]
2022-07-19T15:02:55+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "codecontests", "pretty_name": "CodeContests"}
2023-06-11T11:22:30+00:00
7a73e5c5d9569f29a92fc65be56c3908ec280419
# Dataset Card for "relbert/conceptnet_high_confidence" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://home.ttic.edu/~kgimpel/commonsense.html](https://home.ttic.edu/~kgimpel/commonsense.html) - **Dataset:** High Confidence Subset of ConceptNet ### Dataset Summary The selected subset of ConceptNet used in [this work](https://home.ttic.edu/~kgimpel/commonsense.html), compiled to fine-tune the [RelBERT](https://github.com/asahi417/relbert) model. ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { "relation_type": "AtLocation", "positives": [["fish", "water"], ["cloud", "sky"], ["child", "school"], ... ], "negatives": [["pen", "write"], ["sex", "fun"], ["soccer", "sport"], ["fish", "school"], ... ] } ``` ### Data Splits | name |train|validation| |---------|----:|---------:| |conceptnet_high_confidence| 25 | 24| ### Number of Positive/Negative Word-pairs in each Split | relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) | |:-----------------|-------------------:|-------------------:|------------------------:|------------------------:| | AtLocation | 383 | 1768 | 97 | 578 | | CapableOf | 195 | 1790 | 73 | 600 | | Causes | 71 | 1797 | 26 | 595 | | CausesDesire | 9 | 1793 | 11 | 595 | | CreatedBy | 2 | 1796 | 0 | 0 | | DefinedAs | 0 | 0 | 2 | 595 | | Desires | 16 | 1794 | 12 | 595 | | HasA | 67 | 1814 | 17 | 595 | | HasFirstSubevent | 2 | 1796 | 0 | 0 | | HasLastSubevent | 2 | 1796 | 3 | 593 | | HasPrerequisite | 168 | 1803 | 57 | 592 | | HasProperty | 94 | 1801 | 39 | 605 | | HasSubevent | 125 | 1798 | 40 | 609 | | IsA | 310 | 1764 | 98 | 603 | | MadeOf | 17 | 1793 | 7 | 593 | | MotivatedByGoal | 14 | 1796 | 11 | 595 | | NotCapableOf | 15 | 1793 | 0 | 0 | | NotDesires | 4 | 1795 | 4 | 592 | | PartOf | 34 | 1801 | 7 | 593 | | ReceivesAction | 18 | 1793 | 8 | 593 | | SymbolOf | 0 | 0 | 2 | 596 | | UsedFor | 249 | 1815 | 81 |
588 | | SUM | 1795 | 35896 | 595 | 11305 | ### Citation Information ``` @InProceedings{P16-1137, author = "Li, Xiang and Taheri, Aynaz and Tu, Lifu and Gimpel, Kevin", title = "Commonsense Knowledge Base Completion", booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ", year = "2016", publisher = "Association for Computational Linguistics", pages = "1445--1455", location = "Berlin, Germany", doi = "10.18653/v1/P16-1137", url = "http://aclweb.org/anthology/P16-1137" } ```
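Each record bundles all positive and negative word pairs for one relation type, so a common first step is flattening it into labeled rows. A small sketch (my own, not the RelBERT training recipe) that turns a record into (head, tail, label) triples:

```python
def to_labeled_pairs(record):
    # label 1: the pair holds under record["relation_type"]; label 0: it does not.
    rows = [(h, t, 1) for h, t in record["positives"]]
    rows += [(h, t, 0) for h, t in record["negatives"]]
    return rows

record = {"relation_type": "AtLocation",
          "positives": [["fish", "water"], ["cloud", "sky"]],
          "negatives": [["pen", "write"]]}
print(to_labeled_pairs(record))
# [('fish', 'water', 1), ('cloud', 'sky', 1), ('pen', 'write', 0)]
```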
research-backup/conceptnet_high_confidence
[ "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "region:us" ]
2022-07-19T18:26:12+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "pretty_name": "ConceptNet with High Confidence"}
2022-09-20T00:13:24+00:00
41b8a9a3b3f7aab40340b983c8fd852240cf5fc5
# Dataset Card for "relbert/conceptnet" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://ojs.aaai.org/index.php/AAAI/article/view/11164](https://ojs.aaai.org/index.php/AAAI/article/view/11164) - **Dataset:** ConceptNet5 ### Dataset Summary ConceptNet5, compiled to fine-tune the [RelBERT](https://github.com/asahi417/relbert) model. ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { "relation_type": "AtLocation", "positives": [["fish", "water"], ["cloud", "sky"], ["child", "school"], ... ], "negatives": [["pen", "write"], ["sex", "fun"], ["soccer", "sport"], ["fish", "school"], ... ] } ``` ### Data Splits | name |train|validation| |---------|----:|---------:| |conceptnet| 33 | 25| ### Number of Positive/Negative Word-pairs in each Split | relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) | |:-----------------|-------------------:|-------------------:|------------------------:|------------------------:| | Antonym | 3175 | 206870 | 703 | 65330 | | AtLocation | 6974 | 203071 | 727 | 65306 | | CapableOf | 603 | 209442 | 0 | 0 | | Causes | 906 | 209139 | 83 | 65950 | | CausesDesire | 195 | 209850 | 30 | 66003 | | CreatedBy | 104 | 209941 | 4 | 66029 | | DefinedAs | 16 | 210029 | 2 | 66031 | | Desires | 374 | 209671 | 0 | 0 | | DistinctFrom | 1552 | 208493 | 426 | 65607 | | Entails | 277 | 209768 | 118 | 65915 | | HasA | 606 | 209439 | 10 | 66023 | | HasContext | 4664 | 205381 | 1936 | 64097 | | HasFirstSubevent | 66 | 209979 | 17 | 66016 | | HasLastSubevent | 82 | 209963 | 14 | 66019 | | HasPrerequisite | 586 | 209459 | 123 | 65910 | | HasProperty | 1397 | 208648 | 0 | 0 | | HasSubevent | 644 | 209401 | 64 | 65969 | | InstanceOf | 1 | 210044 | 0 | 0 | | IsA | 54028 | 156017 | 21122 | 44911 | | LocatedNear | 21 | 210024 | 3 | 66030 | | MadeOf | 221 | 209824 | 23 | 66010 | | MannerOf | 8762 | 201283 | 3747 | 62286 | |
MotivatedByGoal | 282 | 209763 | 35 | 65998 | | NotCapableOf | 17 | 210028 | 0 | 0 | | NotDesires | 235 | 209810 | 0 | 0 | | NotHasProperty | 74 | 209971 | 19 | 66014 | | PartOf | 6880 | 203165 | 2629 | 63404 | | ReceivesAction | 290 | 209755 | 0 | 0 | | RelatedTo | 61672 | 148373 | 11356 | 54677 | | SimilarTo | 82 | 209963 | 36 | 65997 | | SymbolOf | 1 | 210044 | 0 | 0 | | Synonym | 52261 | 157784 | 22391 | 43642 | | UsedFor | 2997 | 207048 | 415 | 65618 | | SUM | 210045 | 6.72144e+06 | 66033 | 1.58479e+06 | ### Citation Information ``` @inproceedings{speer2017conceptnet, title={Conceptnet 5.5: An open multilingual graph of general knowledge}, author={Speer, Robyn and Chin, Joshua and Havasi, Catherine}, booktitle={Thirty-first AAAI conference on artificial intelligence}, year={2017} } ```
research-backup/conceptnet
[ "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "region:us" ]
2022-07-19T18:27:44+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "pretty_name": "ConceptNet"}
2022-07-26T09:24:35+00:00
c5cd49c2881afa3525bbf9298f503934f3805f5c
# Dataset Card for lancaster_newsbooks ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/2531 - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** Tony McEnery ### Dataset Summary This corpus consists of two collections of seventeenth-century English "newsbooks". Both were drawn from the Thomason Tracts collection, which is held at the British Library and available in graphical form via Early English Books Online (EEBO). The construction of these keyboarded versions was in both cases funded by the British Academy. The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654).
This was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. This is version 1.0 of the corpus, released April 2007; it supersedes earlier versions circulated informally. For more information about the corpus, see www.ling.lancs.ac.uk/newsbooks ### Supported Tasks and Leaderboards `text-classification`: This dataset can be used to augment existing datasets to find stylistic differences between texts from different time periods. ### Languages The language in this dataset is English from 1654. The associated BCP-47 code is `en-GB`. ## Dataset Structure ### Data Instances ``` { 'id': 'PerfAcc170', 'text': "Another late fight in Scotland, betwixt Col. Morgan and the Highlanders; with the number that were slain and taken Prisoners. The removing of Lieut. Col. John Lilburn from the Tower of London. The readiness of our Fleet for new action, though Peace be agreed on with Holland and Denmark. The taking of several more Prizes at sea. An Order of the Commissioners for the Trial and Approbation of public Preachers. Several proceedings of His Highness the Lord Protector and his Council, and another Ordinance touching the adjourning of the Term. Together with variety of choice Intelligence from several Foreign parts. From Wednesday APRIL 5 TO Wednesday April 12. 1654. 
Many Addresses were made to his Highness the Lord Protector, in the name of the City and County of York, and other places, wherein they acknowledge the great blessing of God to this Nation, that they have so great, so good and able a Protector. This day the Sessions began in the Old Bailey, and one of those that committed the late Robbery on Black-Heath, being called to his Trial, he refused to plead; but more hereafter. This evening about 9 of the Clock, the Dutch Ambassadors signed and sealed the Ratification of the Articles of Peace so long spoken of; so did likewise the Commissioners appointed to treat with them by his Highness the Lord Protector. Paris April 11, 1654. The Cardinal de Retz being removed from Vincennes by the Marshal de la Mesteray, is now safe arrived at Nantes, and put into the Castle. The Court Emissaries give out that he is not to be long there, but in a few days to be set at liberty, only that his Majesty desireth satisfaction upon some certain points, although the main drift is to make him surrender his place of Archbishop of this City. 
The Commissioners of Languedoc cannot yet prevail in anything upon their Complaints, but are like the Commissioners of Catalonia, who hitherto have prevailed no further than to receive many fair words, but nothing effectual, the main work now in hand, is to find monies speedily for the setting forth of the Army, that they may be in the field as soon as may be, and to that end the Partisans are not wanting to find out new ways for exacting of monies, preferring large sums to be put into the King's Coffers, the difficulty lieth only in the effecting of it, by reason that the Country is in most places so exhausted of monies, that they are scarce able to live: The design for the King's Coronation is now on foot again, and if I am rightly informed, it will be done about the middle of May next, which being done, his Majesty shall go upon the borders and down to Picardy to forward his Army in their Action, so much the rather, by reason that the Prince of Conde, whom we hear was last week at Valenciennes, and then taking a view of his Army, is returned to Bruxels, there to confer with the Archduke Leopoldus for to obtain money and other necessaries for the march of his Army, that so they may fall to action as soon as the weather and season will give them leave, his Lady and son are still at Rocroy, where they are expecting some alteration to their present condition. The Earl of Harcourt hath not yet received any answer from the Court upon those proposals which he lately sent to the Court. We have news, that the Duke Francis hath at last accepted the command of his Brother the Duke of Lorrain's Army, and is expected there in a few days, which our Cardinal doth very well relish. The forces that were in the Country of Liege are now marching homewards, and are to be quartered in Lorrain. 
The great preparation for an Armado to go from Marseilles and Touloon, is much at a stand, only there are lately 5 men of War gone to Sea, and 3 more are to follow, but upon no design than to rob and plunder upon the sea, sparing scarce any they encounter, whether they be friends or foes. This day his Highness the Lord Protector and his Council, passed an Ordinance for adjourning of Easter Term, from and after the first Return thereof, called Quindena Pasch, until the first Return of Trinity Term, called Crastino Trinatatis. Dalkieth, April 3. Cap. Sherwin Commander of the Primrose, and Cap. Smith Commander of the Duchess, in their return from Orkney, took a Dutch vessel laden with French and Spanish Wines, linen Cloth, and other good commodities, bound for the West Indies; they sent her into Aberdeen. Some young Lairds and others purposing to glean a party of horse in Lothian, and repair to the enemy, are taken, and brought hither prisoners. Aberdeen, April 1. The Earl of Athol is come to Glencarn with about 700 horse and foot, Seaford and some new raised forces are daily expected to join with them. Glencarn with his whole force, consisting of 2000 horse and foot, is at Dingwel, two miles from Brahan, not undeserving the name of an Island, so that we hope to engage them there. In order whereunto Lieut. Col. Mitchell is marched towards Inverness with 9 companies of Foot, and Col. Morgan hath followed him with 5 troops of Col Rich his Regiment, and 4 troops of Dragoons; he intends to take Col. Tomlinson's Regiment, which is in his way, and to draw 5 companies of Foot out of Inverness. From Cows in the Isle of Wight, April 6. A private man of War hath, about two days since, taken and brought in hither two French vessels, one of which is laden with Salt, the other hath but little except ballast; Our Fleet is for the most part near St. Helens point and the rest as the Spits head, being in all near 100 sail, gallant ships, and bravely accommodated. 
One of our Frigates hath taken a Holland ship, and carried her to Portsmouth; she hath in her 8 Bales of Paper, and some small quantity of Indico. Many ships that were here, went away yesterday morning towards the Downs; and several Merchants' ships are at present here in this road, being detained by contrary winds; they expect some favourable Easterly gales, that so they may proceed on their intended voyages. Deal, April 7. A man of War of ours is this morning gone for Holland, to get the Ratification of the Peace made with them, and an Express from the Dutch Ambassador, touching the Agreement. Most part of the ships which remained in this Road, are gone up into the River of Thames; here is only some few left that are bound to the Southward. A Fleet consisting of about 40 or 50 sail of ships, great and small, passed by this place, which we suppose to be the Dunkirk fleet bound for London. Because many will not give credit to the Agreement of Peace between the Commonwealths of England and Holland, (though their Unbelief proceeds from several causes, some prejudicately fearing the worst, and others wishing and desiring rather than the Fountain of Blood may still be open) We can, and do assure you, That the Articles (as we said before) were signed and sealed by the Commissioners on both sides, on Wednesday night last, and within 14 days are to be signed and sealed by the Lord Protector, and the States of Holland, and then to publicly proclaimed and published, both in England and Holland in one day. The Agreement with Denmark is also taken in upon the Articles: And for satisfaction of the loss which our English Merchants sustained by that King's command, whose demands amount to about 150000l. it is referred to four Merchants, two whereof to be English, and the other two Dutch; which four Merchants shall have absolute power to determine those demands within the space of twenty days; the place where they are to sit, is Guildhall. 
As touching the business of Amboyna, it is referred to eight Commissioners, who have six months time to agree thereon, and in case they agree not, then Umpires are nominated to determine that business. Let those that delight themselves in blood, have blood to drink, for they are worthy. From Legorn, March 23. thus. This week in the sight of this City was a sore fight between two ships at Sea, the one Dutchman of War of 32 guns, and the other an English ship called the Expedition, who came from Zant with Currans; the fight lasted 6 hours, but night having parted them, both ships sunk; most of the men were saved, but nothing else, though the fight was near the shore. It is advertised from Cullen, That the Treaty between that Elector and the Spanish Commissioners, is brought to perfection, and signed, which is, That both French and Spanish shall have free passage through the Country of Liege, not committing any acts of hostility upon each other; and the Spaniards in point of satisfaction for the losses received from them and the Lorrainers, shall pay to the said Elector 200000 Rixdollars out of the Duke of Lorrain's estate, and for security of performance, the Lordship of Kerpen, and another in Gulick shall be put into his hands until full payment. From Poland thus. The General of the Cossacks hath delivered up three very considerable places to the Muscovite, and caused himself to be re baptized after the Muscovia manner, which is so ill resented by all sorts of people in that Country, that the Commanders sent to the King of Poland, That if he pleased to send them a general pardon for what they had done, and the rest of the Army, they will return with the major part of the Army into his Majesty's service; which hath so incensed the General, that having caused them to be apprehended he hath made each of them shorter by the head, which hath caused much heart burning among the people. 
Whereas many abuses and corruptions are crept into the ordinary course and administration of Justice, both in Law and Equity, the reformation whereof hath not yet been attained; Out of a tender care and desire that so necessary and good a work may at length be brought to effect, it is held convenient that so necessary and good a work may at length be brought to effect, it is held convenient that so necessary and good a work may at length be brought to effect, it is held convenient and necessary to adjourn part of the next Term of Easter; be if therefore Ordained by his Highness the Lord Protector, by and with the consent of his Council, That part of the said Term of Easter now next coming be adjourned, that is to say, from and after the first Return, called Quindena Pasch, unto the last Return of the said Easter Term, called Crastino Ascensionis; And all and every person or persons, which have cause, or commandment to appear in any of the Courts at Westminster, in or at any day or time, from and after the said Return, called Quindena Pasch, may tarry at their dwellings, or where their business shall lie, without resorting to any of the said Courts for that Cause, until the said last Return, called Crastino Ascensionis, without danger or forfeiture, penalty or contempt to be in that behalf. And be it also ordained by the Authority aforesaid, That Writs of Adjournment shall be directed to the Justices of the said Courts, and Barons of the Exchequer, giving them authority to adjourn the said part of the said Term of Easter, as aforesaid, that is to say, from and after the said first Return called Quindena Pasch, until the said last Return of the said Term, called Crastino Ascensionis, as before is said, and the said adjournment shall be made, as aforesaid. 
And be it further Ordained, That all Matters, Causes and Suits, depending in any of the said Courts, shall have continuance, and the parties shall have day, from the day of the said Adjournment, until the said Return of Crastino Ascensionis, as is aforesaid; and the Lord's Commissioners of the Great Seal are required to issue forth Writs accordingly. And be it further Ordained, That a former Ordinance of the sixth day of this instant April, for the Adjourning of part of the said Term, until the first Return of Trinity Term next, called Crastino Trinitatis, be from henceforth Repealed and void. And it is lastly Ordained by the Authority aforesaid, That the Sheriffs of London and Middlesex, and all other Sheriffs both in England and Wales, do forthwith proclaim and publish this Ordinance in the chief Market Towns and usual places within their several and respective Counties. Lieutenant Colonel John Lilburn being said to have again attempted something against the State, is removed from the Tower to be prisoner in some more remote place. The titular King of Scots is still at Paris, and of late something more merry than ordinary. The Deputies for Languedoc telling him, that if there were a Peace concluded with England, it would be well for all the Protestants in France; He made answer that he was glad of it, for it would then be the better for himself. This day was the Gaol delivery; three were hanged, one whereof died most desperately, and going up the Cart, drank a health to the Devil's Majesty: One was pressed last Saturday, and being afterwards heard to groan, was carried down to the Press-yard again to have the execution dispatched. 
The Commissioners for Approbation of public Ministers, sate at Whitehall, and divers Certificates were presented unto them in behalf of several particular persons, for approbation; and in regard that none hereafter should out of carelessness of partiality set their hands to a Certificate for any person that hereafter should out of carelessness or partiality let their hands to a Certificate for any person that hereafter may be found unworthy to be admitted, and so become prejudicial to the Church of Christ, and frustrate the intentions of our Governors which made this Ordinance; the said Commissioners do earnestly beseech all whom it may concern (in the bowels of Christ) as they tender the honour of the great God himself, whose servants we all are, the prejudice of the souls of his people purchased by the blood of his Son, the advancement and propagation of his Gospel, through all the parts of this Land and Nation, whereunto we belong, so to lend assistance both of their fervent prayers, and due informations, that thereby the work may be carried on more prosperously, and the Commissioners more encouraged to attend it. Signed in the name, and at the request of the Commissioners for Approbation of public Preachers. By Francis Rouse, Io. Arrowsmith. William Goss. Stephen Marshall. The last Letters from Edinburgh speak of another Engagement betwixt Col. Morgan, and the Enemy; but they tell us not the particulars, only they say, that the Enemy is once more dispersed, and driven further up into the mountains, with the loss of about 200 men. The peace with Holland being concluded (as you heard before) our Merchants are lading of goods on shipboard, as fast as Lighters can be gotten to carry them where the ships ride at anchor. We likewise hear of the like preparations in Holland for transporting of goods of several sorts hither. 
And now all the rest of Europe are at a stand, or at leastwise stand gazing upon us, and begin to cast about with themselves, what action may be great and considerable enough for to be undertaken next by those great Fleets, which are as ready for action as any opportunity can be to offer itself. How they will be disposed of Time will discover. London, Printed by E. Alsop 1654.", 'title': 'A Perfect Account, Issue 170'} ``` ### Data Fields ``` { "id": Unique identifier for that data point ("string"), "text": Text in that data point ("string"), "title": The title of the news article ("string") } ``` ### Data Splits Train: 303 ## Dataset Creation ### Curation Rationale The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, from the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654) and was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. ### Source Data #### Initial Data Collection and Normalization This corpus was created by the Department of Linguistics and English Language, Lancaster University. #### Who are the source language producers? The original data was human-generated from existing newsbooks. ### Annotations #### Annotation process [N/A] #### Who are the annotators? 
[N/A] ### Personal and Sensitive Information None, since this dataset is from 1654 ## Considerations for Using the Data ### Social Impact of Dataset This dataset provides an insight into the news and social systems from 17th century England ### Discussion of Biases The dataset is from the 17th century and some articles might reflect social biases of the time in terms of sexuality, gender, race, etc. ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators This corpus was created by the Department of Linguistics and English Language, Lancaster University. Project leader: Tony McEnery Corpus editor: Andrew Hardie ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License ### Citation Information @misc{20.500.12024/2531, title = {The Lancaster Newsbooks Corpus}, author = {Thomason, George, d. 1666}, url = {http://hdl.handle.net/20.500.12024/2531}, note = {Oxford Text Archive}, copyright = {Distributed by the University of Oxford under a Creative Commons Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported License.}, year = {2005} }
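The style-classification use case mentioned in the card can be sketched end-to-end. This is a hedged example: the record shape follows the card's Data Fields section, but the helper function and the period label are illustrative assumptions, not part of the dataset. In practice the records would come from `datasets.load_dataset("biglam/lancaster_newsbooks")`; a toy record is used here so the sketch stands alone.

```python
# Sketch: turning newsbook records into (text, period-label) pairs that can be
# mixed with a modern corpus for style classification. In practice:
#   from datasets import load_dataset
#   records = load_dataset("biglam/lancaster_newsbooks", split="train")
# Here we use one toy record shaped like the card's Data Fields section.
records = [
    {
        "id": "PerfAcc170",
        "text": "Another late fight in Scotland, betwixt Col. Morgan and the Highlanders...",
        "title": "A Perfect Account, Issue 170",
    },
]

def to_classification_pairs(records, label="1654"):
    """Attach a period label to each text (the label choice is ours, not the dataset's)."""
    return [(r["text"], label) for r in records]

pairs = to_classification_pairs(records)
print(pairs[0][1])  # -> 1654
```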
biglam/lancaster_newsbooks
[ "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "newsbooks", "1654", "lancaster", "oxford text", "region:us" ]
2022-07-19T18:48:58+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "Lancaster Newsbooks", "tags": ["newsbooks", "1654", "lancaster", "oxford text"]}
2022-08-18T15:03:54+00:00
6c18754cc3af5656edef386b34f37ef496788a33
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-7328461a-11225503
[ "autotrain", "evaluation", "region:us" ]
2022-07-19T20:48:49+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11", "metrics": ["perplexity"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-19T21:01:15+00:00
97d2dd14602e380348a4f29f4441e70a01858e1f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-73f27c66-11235504
[ "autotrain", "evaluation", "region:us" ]
2022-07-19T20:54:14+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11", "metrics": ["perplexity"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-21T04:32:04+00:00
7f0115a4b758a71d6473b8d085751692da2fef98
naver-clova-ix/cord-v2
[ "license:cc-by-4.0", "region:us" ]
2022-07-19T22:35:08+00:00
{"license": "cc-by-4.0"}
2022-07-19T22:43:33+00:00
0a6c80f0a7934f718fcc0d7b2f22fdf9440b231f
## Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit https://github.com/clovaai/donut ![image](https://github.com/clovaai/donut/blob/master/misc/sample_synthdog.png?raw=true) The links to the SynthDoG-generated datasets are here: - [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en): English, 0.5M. - [`synthdog-zh`](https://huggingface.co/datasets/naver-clova-ix/synthdog-zh): Chinese, 0.5M. - [`synthdog-ja`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja): Japanese, 0.5M. - [`synthdog-ko`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ko): Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see `./synthdog/README.md` and [our paper](#how-to-cite) for details. ## How to Cite If you find this work useful to you, please cite: ```bibtex @inproceedings{kim2022donut, title = {OCR-Free Document Understanding Transformer}, author = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun}, booktitle = {European Conference on Computer Vision (ECCV)}, year = {2022} } ```
naver-clova-ix/synthdog-zh
[ "region:us" ]
2022-07-19T23:42:55+00:00
{}
2024-01-31T23:56:24+00:00
5c895d0deb129102f9c2fe279eb456548e261c8a
## Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit https://github.com/clovaai/donut ![image](https://github.com/clovaai/donut/blob/master/misc/sample_synthdog.png?raw=true) The links to the SynthDoG-generated datasets are here: - [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en): English, 0.5M. - [`synthdog-zh`](https://huggingface.co/datasets/naver-clova-ix/synthdog-zh): Chinese, 0.5M. - [`synthdog-ja`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja): Japanese, 0.5M. - [`synthdog-ko`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ko): Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see `./synthdog/README.md` and [our paper](#how-to-cite) for details. ## How to Cite If you find this work useful to you, please cite: ```bibtex @inproceedings{kim2022donut, title = {OCR-Free Document Understanding Transformer}, author = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun}, booktitle = {European Conference on Computer Vision (ECCV)}, year = {2022} } ```
naver-clova-ix/synthdog-ja
[ "region:us" ]
2022-07-19T23:45:12+00:00
{}
2024-01-31T23:56:09+00:00
1e6c76a1a5f10aa967a60a880f7dbc06ac29a8d6
## Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit https://github.com/clovaai/donut ![image](https://github.com/clovaai/donut/blob/master/misc/sample_synthdog.png?raw=true) The links to the SynthDoG-generated datasets are here: - [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en): English, 0.5M. - [`synthdog-zh`](https://huggingface.co/datasets/naver-clova-ix/synthdog-zh): Chinese, 0.5M. - [`synthdog-ja`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja): Japanese, 0.5M. - [`synthdog-ko`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ko): Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see `./synthdog/README.md` and [our paper](#how-to-cite) for details. ## How to Cite If you find this work useful to you, please cite: ```bibtex @inproceedings{kim2022donut, title = {OCR-Free Document Understanding Transformer}, author = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun}, booktitle = {European Conference on Computer Vision (ECCV)}, year = {2022} } ```
naver-clova-ix/synthdog-ko
[ "region:us" ]
2022-07-19T23:45:45+00:00
{}
2024-01-31T23:55:41+00:00
672de36dac4dff8857b5b4f07443f721d0cada1a
miyoung/datasetTest
[ "license:afl-3.0", "region:us" ]
2022-07-20T00:42:34+00:00
{"license": "afl-3.0"}
2022-07-20T00:42:35+00:00
961402a28a0c436af83eab460132148053441208
from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom") model = AutoModel.from_pretrained("bigscience/bloom")
Willaim/H
[ "region:us" ]
2022-07-20T01:48:57+00:00
{}
2022-07-20T01:50:07+00:00
8acfecc725b172d1283aa50f67521ddc08b3c682
# ShahNegar (A Plotted version of The Shahnameh) This dataset is a plotted version of Ferdowsi's Shahnameh (which is a highly-regarded ancient set of Farsi poems) generated using DALL-E mini (aka [craiyon](https://www.craiyon.com/)). You can load this dataset with the code below: ```python from datasets import load_dataset dataset = load_dataset("sadrasabouri/ShahNegar") ``` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Paper:** - **Point of Contact:** [Sadra Sabouri](mailto:[email protected]) ### Dataset Summary This dataset contains more than 30K images with their corresponding text from the Shahnameh. For each Shahnameh paragraph, we generated at most 9 images. Images corresponding to the same paragraphs have the same `id` field. There was a human annotation post-process in which we removed some harmful/private generated images from the dataset. In the end, we were left with more than 30K 256 * 256 images. 
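Since entries generated from the same paragraph share an `id`, the images for one paragraph can be collected together. A minimal sketch with toy records; the field names follow this card, but in the real dataset `image` is a PIL image rather than the placeholder strings used here.

```python
from collections import defaultdict

# Sketch: grouping ShahNegar entries by their paragraph `id`, so the up-to-nine
# images generated for one Shahnameh paragraph end up together. Toy records are
# used here; real ones come from load_dataset("sadrasabouri/ShahNegar")["train"]
# and carry a PIL image in the "image" field.
records = [
    {"id": 0, "text": "He took up his abode in the mountains, ...", "image": "<img-0a>"},
    {"id": 0, "text": "He took up his abode in the mountains, ...", "image": "<img-0b>"},
    {"id": 1, "text": "...", "image": "<img-1a>"},
]

def images_per_paragraph(records):
    """Map each paragraph id to the list of images generated for it."""
    groups = defaultdict(list)
    for r in records:
        groups[r["id"]].append(r["image"])
    return dict(groups)

groups = images_per_paragraph(records)
print(len(groups[0]))  # -> 2
```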
### Supported Tasks and Leaderboards The main reason for open-sourcing this dataset is its artistic value, but it can also be used for the following tasks: + text-to-image + image-to-text (image captioning) ### Languages The Shahnameh was generally written in Farsi (Persian), but the translated version we used for this dataset - [satoor](https://www.sattor.com/english/Shahnameh.pdf) - was completely in English, with no alignments for the corresponding Farsi poem. We are planning to add another field to each dataset entry containing the corresponding Farsi poem as soon as possible. ## Dataset Structure ### Data Fields Here is an instance of our dataset: ```json { "image": <PIL Image Bytes>, "id": 0, "text": "He took up his abode in the mountains, and clad himself and his people in tiger-skins, and from him sprang all kindly nurture and the arts of clothing, till then unknown." } ``` + `image`: the image for the given text. + `id`: the id for the text (**Not for the image**). + `text`: the English text for the image. ### Data Splits This dataset has only one split (the `train` split). ## Dataset Creation The translated version of the Shahnameh was derived from the [satoor](https://www.sattor.com/english/Shahnameh.pdf) website. We first extracted texts from the pdf. After that, we divided paragraphs into sentences and gave each sentence to the DALL-E mini model through its online API, which generated nine images for each sentence. After annotation, we ended up with more than 30000 images. ### Annotations #### Annotation process Through the process of image generation, we noticed a bias in the DALL-E models towards the word `iran`: each sentence containing this word would yield pictures of Iran's political figures, which were usually totally irrelevant. The annotation process mainly focused on dealing with these pictures. We removed those images which seemed to be harmful to those figures and/or were irrelevant to the context. 
#### Who are the annotators? Mahsa Namdar and Sadra Sabouri were the annotators of this dataset. ### Personal and Sensitive Information Since the textual data is easily downloadable and the images were generated through an image generation model, there shouldn't be any personal information in this dataset. If you find anything harmful or violating anyone's personal information, please let us know. We will take proper action as soon as possible. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is released mainly for its artistic value. The process of generating images for the Shahnameh - which is one of the most important Farsi poem books - is our main contribution. The dataset is not only valuable for this purpose but can also serve as a dataset for image-to-text and text-to-image tasks. ### Discussion of Biases The dataset's possible biases come from the biases of DALL-E mini itself. It is actually good practice to check the dataset entries in order to find biases in that model. One worth mentioning in this work is the DALL-E mini model's bias for the word `iran`, which nearly always comes up with images of political figures of this country. ### Other Known Limitations There are constant debates in the literature about the limitations of machine-generated datasets. Some believe that since today's models are not perfect - and neither are their outputs - it wouldn't be a good idea to use these artificially generated datasets as input to a new model. They suggest that by doing so we are actually capping our accuracy at the accuracy of the model which produced the primary dataset. ## Additional Information ### Dataset Curators + Emad Fatemizadeh: The general idea of generating a graphical version of Farsi poems was first proposed by him. + Sadra Sabouri: He looked up a translated version of the Shahnameh, extracted and tokenized poems from it, and used the online DALL-E mini API to generate images from the poems. 
+ Mahsa Namdar: The annotation of the data as a post-processing step was carried out by her. ### Licensing Information MIT ### Citation Information [More Information Needed] ### Contributions Thanks to [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset.
sadrasabouri/ShahNegar
[ "task_categories:image-to-text", "task_categories:text-to-image", "task_ids:image-captioning", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-07-20T04:13:00+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-to-text", "text-to-image"], "task_ids": ["image-captioning"], "pretty_name": "ShahNegar"}
2022-10-21T10:54:05+00:00
471fb121de4f1806d7f0fd4dde685089c9cb2012
## Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit https://github.com/clovaai/donut ![image](https://github.com/clovaai/donut/blob/master/misc/sample_synthdog.png?raw=true) The links to the SynthDoG-generated datasets are here: - [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en): English, 0.5M. - [`synthdog-zh`](https://huggingface.co/datasets/naver-clova-ix/synthdog-zh): Chinese, 0.5M. - [`synthdog-ja`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja): Japanese, 0.5M. - [`synthdog-ko`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ko): Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see `./synthdog/README.md` and [our paper](#how-to-cite) for details. ## How to Cite If you find this work useful to you, please cite: ```bibtex @inproceedings{kim2022donut, title = {OCR-Free Document Understanding Transformer}, author = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun}, booktitle = {European Conference on Computer Vision (ECCV)}, year = {2022} } ```
naver-clova-ix/synthdog-en
[ "region:us" ]
2022-07-20T04:33:24+00:00
{}
2024-01-31T23:56:41+00:00
a5057855c7aa264709b35de7bd85258d943bec22
This Urdu sentiment dataset was formed by concatenating the following two datasets: https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus https://www.kaggle.com/datasets/akkefa/imdb-dataset-of-50k-movie-translated-urdu-reviews
hassan4830/urdu-binary-classification-data
[ "license:afl-3.0", "region:us" ]
2022-07-20T04:56:40+00:00
{"license": "afl-3.0"}
2022-07-21T08:40:56+00:00
59519e655088aa83999037b3ba8fa88d77eb3b83
annotations_creators: [] language: - en language_creators: [] license: [] multilinguality: [] pretty_name: HuggingFace GitHub Issues size_categories: [] source_datasets: [] tags: [] task_categories: - text-classification - text-retrieval task_ids: - multi-class-classification - multi-label-classification - document-retrieval
SakaiJun/github-issues
[ "region:us" ]
2022-07-20T06:23:42+00:00
{}
2022-07-20T06:37:59+00:00
fd526b15b744502f4e24b21126f543d845a8c59e
# Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place. ### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. 
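As a toy, library-free illustration (our own sketch, not part of the dataset or of any drift-monitoring tooling) of why mixing hotel reviews into the production split induces measurable drift, one can compare the vocabularies of a reference sample and a production sample:

```python
def jaccard_vocab_drift(reference_texts, production_texts):
    """Return 1 minus the Jaccard similarity of the two vocabularies:
    0.0 means identical vocabularies, 1.0 means fully disjoint ones."""
    ref_vocab = {w for t in reference_texts for w in t.lower().split()}
    prod_vocab = {w for t in production_texts for w in t.lower().split()}
    if not ref_vocab and not prod_vocab:
        return 0.0
    return 1.0 - len(ref_vocab & prod_vocab) / len(ref_vocab | prod_vocab)

# Movie-like reference texts vs hotel-flavoured production texts.
train = ["the movie was great", "a boring film"]
prod = ["the room was great", "a noisy hotel"]
print(jaccard_vocab_drift(train, train))  # 0.0 (no drift)
print(jaccard_vocab_drift(train, prod))   # 0.6 (vocabulary shift)
```

Production monitoring systems use more robust statistics (e.g. embedding distances or population stability indices), but the principle is the same.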
## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
arize-ai/fashion_mnist_quality_drift
[ "task_categories:image-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|imdb", "language:en", "license:mit", "region:us" ]
2022-07-20T06:31:58+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|imdb"], "task_categories": ["image-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "sentiment-classification-reviews-with-drift"}
2022-10-25T09:40:17+00:00
8116d3b3bedf70dcc6f755e461f5ab499ef13e18
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: ARTeLab/it5-summarization-ilpost * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@dishant16](https://huggingface.co/dishant16) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-6cd6bf3a-11245505
[ "autotrain", "evaluation", "region:us" ]
2022-07-20T06:44:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-ilpost", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-20T06:53:57+00:00
591e29480dfe46d7247cbe2e9d582ec97b8fb11e
voidful/DRCD
[ "license:cc-by-3.0", "region:us" ]
2022-07-20T07:16:09+00:00
{"license": "cc-by-3.0"}
2022-07-20T07:33:48+00:00
9e3c700a884eb823b3b6c9bd993f3197cdfdacb6
# Dataset Card for asvspoof2019 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://datashare.ed.ac.uk/handle/10283/3336 - **Repository:** [Needs More Information] - **Paper:** https://arxiv.org/abs/1911.01601 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This is a database used for the Third Automatic Speaker Verification Spoofing and Countermeasures Challenge, ASVspoof 2019 for short (http://www.asvspoof.org), organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman, and Andreas Nautsch in 2019. 
### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances ``` {'speaker_id': 'LA_0091', 'audio_file_name': 'LA_T_8529430', 'audio': {'path': 'D:/Users/80304531/.cache/huggingface/datasets/downloads/extracted/8cabb6d5c283b0ed94b2219a8d459fea8e972ce098ef14d8e5a97b181f850502/LA/ASVspoof2019_LA_train/flac/LA_T_8529430.flac', 'array': array([-0.00201416, -0.00234985, -0.0022583 , ..., 0.01309204, 0.01339722, 0.01461792], dtype=float32), 'sampling_rate': 16000}, 'system_id': 'A01', 'key': 1} ``` ### Data Fields Logical access (LA): - `speaker_id`: `LA_****`, a 4-digit speaker ID - `audio_file_name`: name of the audio file - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `system_id`: ID of the speech spoofing system (A01 - A19); for bonafide speech, the `system_id` is left blank ('-') - `key`: 'bonafide' for genuine speech, or 'spoof' for spoofed speech Physical access (PA): - `speaker_id`: `PA_****`, a 4-digit speaker ID - `audio_file_name`: name of the audio file - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `environment_id`: a triplet (S,R,D_s), where each element takes one letter from the set {a,b,c} as a categorical value, defined as | | a | b | c | | -------------------------------- | ------ | ------- | -------- | | S: Room size (square meters) | 2-5 | 5-10 | 10-20 | | R: T60 (ms) | 50-200 | 200-600 | 600-1000 | | D_s: Talker-to-ASV distance (cm) | 10-50 | 50-100 | 100-150 | - `attack_id`: a duple (D_a,Q), where each element takes one letter from the set {A,B,C} as a categorical value, defined as | | A | B | C | | ----------------------------------- | ------- | ------ | ----- | | D_a: Attacker-to-talker distance (cm) | 10-50 | 50-100 | > 100 | | Q: Replay device quality | perfect | high | low | For bonafide speech, `attack_id` is left blank ('-') - `key`: 'bonafide' for genuine speech, or 'spoof' for spoofed speech ### Data Splits | | Training set | Development set | Evaluation set | | -------- | ------------ | --------------- | -------------- | | Bonafide | 2580 | 2548 | 7355 | | Spoof | 22800 | 22296 | 63882 | | Total | 25380 | 24844 | 71237 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information This ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: http://opendatacommons.org/licenses/by/1.0/ ### Citation Information ``` @InProceedings{Todisco2019, Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection}, Author = {Todisco, Massimiliano and Wang, Xin and Sahidullah, Md and Delgado, H{\'e}ctor and Nautsch, Andreas and Yamagishi, Junichi and Evans, Nicholas and Kinnunen, Tomi and Lee, Kong Aik}, booktitle = {Proc. of Interspeech 2019}, Year = {2019} } ```
LanceaKing/asvspoof2019
[ "task_categories:audio-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|vctk", "language:en", "license:odc-by", "voice-anti-spoofing", "arxiv:1911.01601", "region:us" ]
2022-07-20T07:29:40+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["odc-by"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|vctk"], "task_categories": ["audio-classification"], "task_ids": [], "pretty_name": "asvspoof2019", "tags": ["voice-anti-spoofing"]}
2022-11-11T08:41:54+00:00
c0197df20a67b8ad636f63e4983e36208b3ea977
tokeron/Piyyut
[ "task_categories:text-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:heb", "license:afl-3.0", "metaphor-detection", "region:us" ]
2022-07-20T08:01:23+00:00
{"language": ["heb"], "license": "afl-3.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "tags": ["metaphor-detection"], "viewer": true}
2023-04-08T09:36:57+00:00
468d0b8716ec40f521f557a4617039975a3a16e4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion * Dataset: emotion * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-5a29f55d-11295506
[ "autotrain", "evaluation", "region:us" ]
2022-07-20T10:03:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion", "metrics": ["bertscore"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-20T10:04:02+00:00
f9189e3914ce04ed0d10de11d38c145c6ee58385
legotin/movielens-1m-ratings-standardized
[ "license:apache-2.0", "region:us" ]
2022-07-20T10:52:59+00:00
{"license": "apache-2.0"}
2022-07-20T12:58:58+00:00
bbb2a0157b760465002fd12a61af81b475cd387a
# Dataset Card for Multilingual European Datasets for Sensitive Entity Detection in the Legal Domain ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - ** Repository:** [Spanish](https://elrc-share.eu/repository/browse/mapa-anonymization-package-spanish/b550e1a88a8311ec9c1a00155d026706687917f92f64482587c6382175dffd76/), [Most](https://elrc-share.eu/repository/search/?q=mfsp:3222a6048a8811ec9c1a00155d0267067eb521077db54d6684fb14ce8491a391), [German, Portuguese, Slovak, Slovenian, Swedish](https://elrc-share.eu/repository/search/?q=mfsp:833df1248a8811ec9c1a00155d0267067685dcdb77064822b51cc16ab7b81a36) - **Paper:** de Gibert Bonet, O., García Pablos, A., Cuadros, M., & Melero, M. (2022). Spanish Datasets for Sensitive Entity Detection in the Legal Domain. Proceedings of the Language Resources and Evaluation Conference, June, 3751–3760. 
http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.400.pdf - **Leaderboard:** - **Point of Contact:** [Joel Niklaus](mailto:[email protected]) ### Dataset Summary The dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court decisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated for named entities following the guidelines of the [MAPA project](https://mapa-project.eu/), which foresees two annotation levels, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification. ### Supported Tasks and Leaderboards The dataset supports the task of Named Entity Recognition and Classification (NERC). ### Languages The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv ## Dataset Structure ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping. ### Data Fields For the annotation, the documents have been split into sentences. The annotation has been done at the token level. The files contain the following data fields: - `language`: language of the sentence - `type`: The document type of the sentence. Currently, only EUR-LEX is supported. - `file_name`: The document file name the sentence belongs to. - `sentence_number`: The number of the sentence inside its document. - `tokens`: The list of tokens in the sentence. - `coarse_grained`: The coarse-grained annotations for each token - `fine_grained`: The fine-grained annotations for each token As previously stated, the annotation has been conducted at a global and a more fine-grained level. 
The tagset used for the global and the fine-grained named entities is the following: - Address - Building - City - Country - Place - Postcode - Street - Territory - Amount - Unit - Value - Date - Year - Standard Abbreviation - Month - Day of the Week - Day - Calender Event - Person - Age - Email - Ethnic Category - Family Name - Financial - Given Name – Female - Given Name – Male - Health Insurance Number - ID Document Number - Initial Name - Marital Status - Medical Record Number - Nationality - Profession - Role - Social Security Number - Title - Url - Organisation - Time - Vehicle - Build Year - Colour - License Plate Number - Model - Type The final coarse grained tagset (in IOB notation) is the following: `['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']` The final fine grained tagset (in IOB notation) is the following: `[ 'O', 'B-BUILDING', 'I-BUILDING', 'B-CITY', 'I-CITY', 'B-COUNTRY', 'I-COUNTRY', 'B-PLACE', 'I-PLACE', 'B-TERRITORY', 'I-TERRITORY', 'I-UNIT', 'B-UNIT', 'B-VALUE', 'I-VALUE', 'B-YEAR', 'I-YEAR', 'B-STANDARD ABBREVIATION', 'I-STANDARD ABBREVIATION', 'B-MONTH', 'I-MONTH', 'B-DAY', 'I-DAY', 'B-AGE', 'I-AGE', 'B-ETHNIC CATEGORY', 'I-ETHNIC CATEGORY', 'B-FAMILY NAME', 'I-FAMILY NAME', 'B-INITIAL NAME', 'I-INITIAL NAME', 'B-MARITAL STATUS', 'I-MARITAL STATUS', 'B-PROFESSION', 'I-PROFESSION', 'B-ROLE', 'I-ROLE', 'B-NATIONALITY', 'I-NATIONALITY', 'B-TITLE', 'I-TITLE', 'B-URL', 'I-URL', 'B-TYPE', 'I-TYPE', ]` ### Data Splits Splits created by Joel Niklaus. 
| language | # train files | # validation files | # test files | # train sentences | # validation sentences | # test sentences | |:-----------|----------------:|---------------------:|---------------:|--------------------:|-------------------------:|-------------------:| | bg | 9 | 1 | 2 | 1411 | 166 | 560 | | cs | 9 | 1 | 2 | 1464 | 176 | 563 | | da | 9 | 1 | 2 | 1455 | 164 | 550 | | de | 9 | 1 | 2 | 1457 | 166 | 558 | | el | 9 | 1 | 2 | 1529 | 174 | 584 | | en | 9 | 1 | 2 | 893 | 98 | 408 | | es | 7 | 1 | 1 | 806 | 248 | 155 | | et | 9 | 1 | 2 | 1391 | 163 | 516 | | fi | 9 | 1 | 2 | 1398 | 187 | 531 | | fr | 9 | 1 | 2 | 1297 | 97 | 490 | | ga | 9 | 1 | 2 | 1383 | 165 | 515 | | hu | 9 | 1 | 2 | 1390 | 171 | 525 | | it | 9 | 1 | 2 | 1411 | 162 | 550 | | lt | 9 | 1 | 2 | 1413 | 173 | 548 | | lv | 9 | 1 | 2 | 1383 | 167 | 553 | | mt | 9 | 1 | 2 | 937 | 93 | 442 | | nl | 9 | 1 | 2 | 1391 | 164 | 530 | | pt | 9 | 1 | 2 | 1086 | 105 | 390 | | ro | 9 | 1 | 2 | 1480 | 175 | 557 | | sk | 9 | 1 | 2 | 1395 | 165 | 526 | | sv | 9 | 1 | 2 | 1453 | 175 | 539 | ## Dataset Creation ### Curation Rationale *„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classification] in Spanish in the legal domain. With the present contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and evaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted anonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022) ### Source Data #### Initial Data Collection and Normalization The dataset consists of documents taken from the EUR-Lex corpus, which is publicly available. No further information on the data collection process is given in de Gibert Bonet et al. (2022). #### Who are the source language producers? The source language producers are presumably lawyers. 
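Since both the coarse-grained and the fine-grained labels use IOB notation, a small helper (a sketch of ours, not shipped with the dataset) suffices to turn the per-token `coarse_grained` or `fine_grained` annotations into entity spans:

```python
def iob_to_spans(tags):
    """Decode a list of IOB tags into (label, start, end) spans,
    with an exclusive end index."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            if label is not None:          # close the previous entity
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag == "O":
            if label is not None:
                spans.append((label, start, i))
            start, label = None, None
    if label is not None:                  # entity running to the end
        spans.append((label, start, len(tags)))
    return spans

# Hypothetical token sequence with coarse-grained tags from the tagset above.
tags = ["B-ADDRESS", "O", "B-DATE", "I-DATE", "I-DATE"]
print(iob_to_spans(tags))  # [('ADDRESS', 0, 1), ('DATE', 2, 5)]
```

The resulting spans can then be paired with the `tokens` field to recover the entity surface forms.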
### Annotations #### Annotation process *"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme described in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...) and level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex, CPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using INCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium."* (de Gibert Bonet et al., 2022) #### Who are the annotators? Only one annotator conducted the annotation. No further information is provided in de Gibert Bonet et al. (2022). ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Note that the dataset at hand presents only a small portion of a bigger corpus as described in de Gibert Bonet et al. (2022). At the time of writing, only the annotated documents from the EUR-Lex corpus were available. Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the original dataset into the present jsonl-format. 
For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card. ## Additional Information ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:[email protected]) ; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:[email protected]) ; [Github](https://github.com/kapllan)). ### Licensing Information [Attribution 4.0 International (CC BY 4.0) ](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @article{DeGibertBonet2022, author = {{de Gibert Bonet}, Ona and {Garc{\'{i}}a Pablos}, Aitor and Cuadros, Montse and Melero, Maite}, journal = {Proceedings of the Language Resources and Evaluation Conference}, number = {June}, pages = {3751--3760}, title = {{Spanish Datasets for Sensitive Entity Detection in the Legal Domain}}, url = {https://aclanthology.org/2022.lrec-1.400}, year = {2022} } ``` ### Contributions Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
joelniklaus/mapa
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:multilingual", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pt", "language:ro", "language:sk", "language:sv", "license:cc-by-4.0", "named-entity-recognition-and-classification", "region:us" ]
2022-07-20T11:14:50+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["multilingual", "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hu", "it", "lt", "lv", "mt", "nl", "pt", "ro", "sk", "sv"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Spanish Datasets for Sensitive Entity Detection in the Legal Domain", "tags": ["named-entity-recognition-and-classification"]}
2022-10-25T15:17:09+00:00
e4d8ebdbd6644c78caac2655731820a7e07fd298
## advABSA An adversarial aspect-based sentiment analysis (ABSA) benchmark, dubbed [*adv*ABSA](https://arxiv.org/pdf/2207.08099.pdf), for both aspect-based sentiment classification (SC) and opinion extraction (OE). ### *adv*ABSA (Adversarial ABSA Benchmark) In response to the concerning robustness issue of ABSA, [Arts](https://aclanthology.org/2020.emnlp-main.292.pdf) was proposed, which contains datasets crafted only for adversarial evaluation on SC but not for OE. So we additionally craft datasets for adversarial evaluation on OE following their track. These gathered datasets form *adv*ABSA. That is, *adv*ABSA can be decomposed into two parts, where the first part is Arts-\[domain\]-SC reused from Arts and the second part is Arts-\[domain\]-OE newly produced by us. ### *std*ABSA (Standard ABSA Benchmark) In addition, we also provide *std*ABSA containing datasets from SemEval14 for standard evaluation, namely Sem14-\[domain\]-SC and Sem14-\[domain\]-OE. So corresponding performance drops can be measured properly. ### Citation If you find *adv*ABSA useful, please kindly star this repository and cite our paper as follows: ```bibtex @inproceedings{ma-etal-2022-aspect, title = "Aspect-specific Context Modeling for Aspect-based Sentiment Analysis", author = "Ma, Fang and Zhang, Chen and Zhang, Bo and Song, Dawei", booktitle = "NLPCC", month = "sep", year = "2022", address = "Guilin, China", url = "https://arxiv.org/pdf/2207.08099.pdf", } ``` ### Credits The benchmark is mainly processed by [Fang Ma](https://github.com/BD-MF).
becurrio/advABSA
[ "license:apache-2.0", "arxiv:2207.08099", "region:us" ]
2022-07-20T11:24:25+00:00
{"license": "apache-2.0"}
2022-07-21T04:57:48+00:00
88226971c2c3968d9bcef3eea281995c0313f108
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: tuner007/pegasus_summarizer * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Neez](https://huggingface.co/Neez) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8015d52c-11325509
[ "autotrain", "evaluation", "region:us" ]
2022-07-20T15:03:06+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "tuner007/pegasus_summarizer", "metrics": ["accuracy"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-20T16:31:44+00:00
02af6989833382fc594889cc1294954c46a74fe3
adamnik/event_detection_dataset
[ "license:mit", "region:us" ]
2022-07-20T18:17:53+00:00
{"license": "mit"}
2022-07-20T18:18:18+00:00
9862d1e870fe6dba4922d3d326c9c8b90a2ecad5
# Dataset Card for "relbert/lexical_relation_classification" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://aclanthology.org/P19-1169/](https://aclanthology.org/P19-1169/) - **Dataset:** Lexical Relation Classification ### Dataset Summary Five different datasets (`BLESS`, `CogALexV`, `EVALution`, `K&H+N`, `ROOT09`) for lexical relation classification used in [SphereRE](https://www.aclweb.org/anthology/P19-1169/). ### Data Splits The number of instances in each dataset is as follows. | name | train | validation | test | |---------------|------:|-------:|-----:| | `BLESS` | 18582 | 1327 | 6637 | | `CogALexV` | 3054 | - | 4260 | | `EVALution` | 5160 | 372 | 1846 | | `K&H+N` | 40256 | 2876 | 14377 | | `ROOT09` | 8933 | 638 | 3191 | ## Dataset Structure ### Data Instances An example looks as follows. ``` {"head": "turtle", "tail": "live", "relation": "event"} ``` The `head` and `tail` are the word pair and `relation` is the corresponding relation label. ### Citation Information ``` @inproceedings{wang-etal-2019-spherere, title = "{S}phere{RE}: Distinguishing Lexical Relations with Hyperspherical Relation Embeddings", author = "Wang, Chengyu and He, Xiaofeng and Zhou, Aoying", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1169", doi = "10.18653/v1/P19-1169", pages = "1727--1737", abstract = "Lexical relations describe how meanings of terms relate to each other. Typical examples include hypernymy, synonymy, meronymy, etc. Automatic distinction of lexical relations is vital for NLP applications, and also challenging due to the lack of contextual signals to discriminate between such relations. 
In this work, we present a neural representation learning model to distinguish lexical relations among term pairs based on Hyperspherical Relation Embeddings (SphereRE). Rather than learning embeddings for individual terms, the model learns representations of relation triples by mapping them to the hyperspherical embedding space, where relation triples of different lexical relations are well separated. Experiments over several benchmarks confirm SphereRE outperforms state-of-the-arts.", } ``` ### LICENSE The LICENSE of all the resources is [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purposes or individual research, but restricted for commercial use.
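As a quick illustration of the instance format above (the extra triples and the `label2id` mapping below are made up for illustration, not drawn from the dataset files), one might encode `(head, tail, relation)` records into integer-labelled classification examples like this:

```python
# Illustrative records in the card's instance format; only the first
# triple comes from the example above, the rest are invented.
records = [
    {"head": "turtle", "tail": "live", "relation": "event"},
    {"head": "turtle", "tail": "animal", "relation": "hyper"},
    {"head": "turtle", "tail": "shell", "relation": "mero"},
]

# Build a label vocabulary and encode each word pair with an integer label,
# the usual setup for lexical relation classification.
label2id = {rel: i for i, rel in enumerate(sorted({r["relation"] for r in records}))}
encoded = [((r["head"], r["tail"]), label2id[r["relation"]]) for r in records]

print(label2id)    # {'event': 0, 'hyper': 1, 'mero': 2}
print(encoded[0])  # (('turtle', 'live'), 0)
```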
relbert/lexical_relation_classification
[ "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:other", "region:us" ]
2022-07-20T21:45:48+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "pretty_name": "Lexical Relation Classification"}
2022-07-20T22:24:17+00:00
517e8e60404a2e2961bf28e0fd3631cd8424e81d
# Dataset Card for "relbert/relation_mapping" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://www.jair.org/index.php/jair/article/view/10583](https://www.jair.org/index.php/jair/article/view/10583) - **Dataset:** Relation Mapping ### Dataset Summary Relation Mapping is a task to choose optimal combination of word pairs (see more detail in the [paper](https://www.jair.org/index.php/jair/article/view/10583)). Relation mapping `M` is the set of bijective map in between two sets of terms (`A` and `B`): ``` [set `A`]: ("solar system", "sun", "planet", "mass", "attracts", "revolves", "gravity") [set `B`]: ("atom", "nucleus", "electron", "charge", "attracts", "revolves", "electromagnetism") [Relation Mapping `M`] * "solar system" -> "atom" * "sun" -> "nucleus" * "planet" -> "electron" * "mass" -> "charge" * "attracts" -> "attracts" * "revolves" -> "revolves" * "gravity" -> "electromagnetism" ``` ***[Relation Mapping Problem](https://www.jair.org/index.php/jair/article/view/10583)*** is the task to identify the mapping `M` given the sets of terms `A` and `B`. ## Dataset Structure ### Data Instances An example looks as follows. ``` { "id": "m10", "reference": ["seeing", "understanding"], "source": ["seeing", "light", "illuminating", "darkness", "view", "hidden"], "target": ["understanding", "knowledge", "explaining", "confusion", "interpretation", "secret"], "agreement": [68.2, 77.3, 86.4, 86.4, 68.2, 86.4], "pos": ["vbg", "nn", "vbg", "nn", "nn", "jj"], "target_random": ["knowledge", "interpretation", "explaining", "confusion", "understanding", "secret"] } ``` - `source`: A list of terms, which is the source of the relation mapping from. - `target_random`: A list of terms, where we want to find a mapping from `source` to. - `target`: A correctly ordered `target_random` that aligns with the `source`. 
Given `source` and `target_random`, the task is to predict the correct order of `target_random` so that it matches `target`. On average, each set contains 7 terms, so the total number of possible orderings is 7! = 5040. ### Data Splits | name |test| |---------|----:| |relation_mapping| 20 | ### Citation Information ``` @article{turney2008latent, title={The latent relation mapping engine: Algorithm and experiments}, author={Turney, Peter D}, journal={Journal of Artificial Intelligence Research}, volume={33}, pages={615--655}, year={2008} } ```
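To make the task concrete, here is a minimal sketch (our own illustration, not an official scorer; the `ordering_accuracy` helper is an assumption) that evaluates a candidate ordering of `target_random` against the gold `target`, using the `m10` instance shown above:

```python
from itertools import permutations
from math import factorial

# Fields copied from the `m10` instance above (a 6-term set).
target = ["understanding", "knowledge", "explaining", "confusion", "interpretation", "secret"]
target_random = ["knowledge", "interpretation", "explaining", "confusion", "understanding", "secret"]

def ordering_accuracy(predicted, gold):
    """Fraction of positions where the predicted ordering matches the gold one."""
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

# A solver must choose one of len(target_random)! orderings; with the
# average set size of 7 terms that is 7! = 5040 candidates, so even
# brute-force enumeration is feasible.
n_candidates = len(list(permutations(target_random)))
print(n_candidates, factorial(7))  # 720 candidates here; 5040 for 7 terms

# Leaving `target_random` unpermuted gets only half the positions right:
print(ordering_accuracy(target_random, target))  # 0.5
```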
relbert/relation_mapping
[ "multilinguality:monolingual", "size_categories:1<n<1K", "language:en", "license:other", "region:us" ]
2022-07-20T21:46:33+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1<n<1K"], "pretty_name": "Relation Mapping"}
2022-08-11T09:51:58+00:00
06cdd71aa5f3779efac159b56d9be175b6719a52
richartruddie/richartruddie
[ "license:apache-2.0", "region:us" ]
2022-07-21T04:42:42+00:00
{"license": "apache-2.0"}
2022-07-21T04:42:42+00:00
10c6f27014e29ecee20aaa336dc25412c0fedf81
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8bc70ef8-11355511
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T04:48:39+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-22T05:44:01+00:00
ff221b56ac6468869eb8b0630a01921263aae6e3
## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Installation](#installation) - [Install requirements](#install-requirements) - [Download settings](#download-settings) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.kietzmannlab.org/ecoset](https://www.kietzmannlab.org/ecoset/) - **Repository:** [https://codeocean.com/capsule/6266601/tree/v1](https://codeocean.com/capsule/6266601/tree/v1) - **Paper:** [https://www.pnas.org/doi/full/10.1073/pnas.2011417118](https://doi.org/10.1073/pnas.2011417118) - **Point of Contact:** [[email protected]]([email protected]) ### Dataset Summary Tired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images from 565 basic level categories, chosen to be both (i) frequent in linguistic usage, and (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’ is not). Ecoset is a typical image recognition dataset, combining images of objects with appropriate labels (one label per image). Importantly, ecoset is intended to provide higher ecological validity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content. 
For more information on the dataset, consider reading the [original publication](https://doi.org/10.1073/pnas.2011417118). Ecoset consists of train, test, and validation subsets, all of which are openly available to the user. ### Supported Tasks and Leaderboards Ecoset is a large multi-class, single-label object recognition image dataset (similar to ImageNet). ## Installation ### Install Requirements In order to work with ecoset, please make sure to install the Hugging Face `datasets` library: ```bash pip install datasets ``` If you want to work with the dataset in `datasets`, you might also want to install PIL (`pip install Pillow`) in order to work with image input. However, downloading the dataset will work even without PIL installed. ### Download Settings Please set `verification_mode="no_checks"` when downloading this dataset, otherwise the download will result in an error. Additionally, you may need to install `defusedxml` via pip (it is required by the `_generate_examples` method) to avoid permission errors: ```python from datasets import load_dataset dataset = load_dataset("kietzmannlab/ecoset", verification_mode="no_checks") ``` Optionally, a `cache_dir` can be specified, where the zip file will be downloaded and extracted: ```python from datasets import load_dataset dataset = load_dataset("kietzmannlab/ecoset", verification_mode="no_checks", cache_dir="/path/to/dir") ``` | NOTE: If you get errors like `FileNotFoundError: [Errno 2] No such file or directory: '<DATASET_PATH>'`, this is likely due to having previously downloaded the dataset and then cancelling the download. If this is the case for you, you can fix this error by manually removing the dataset path and reinstalling the dataset. | | --- | ## Dataset Structure We show detailed information for all the configurations of the dataset. Currently, there is only one setting (`Full`) available, containing all data. 
### Data Instances #### Full - **Size of downloaded dataset files:** 155 GB - **Total amount of disk used:** 311 GB ## Dataset Creation A total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to ensure an expected misclassification rate per category of <4%. ### Curation Rationale More information on the curation of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118). ### Source Data The source data is available under: [https://codeocean.com/capsule/6266601/tree/v1](https://codeocean.com/capsule/6266601/tree/v1) ### Annotations Each ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset. ### Personal and Sensitive Information The dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all images with an NSFW score above 0.8. For this dataset, only images with secured license information were used, which should prevent the inclusion of images without consent of the image's authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset Large-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content. 
Ecoset was created with the aim of reducing these biases and consequently improving the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118). ### Discussion of Biases Despite best efforts to provide an ecologically valid and overall less biased dataset, ecoset is still likely to contain biased data. The category selection of ecoset was based on human concreteness ratings and word frequencies in a corpus consisting of American television and film subtitles. This undoubtedly biases the category selection toward Western cultures. Image inclusion was based on the availability via Bing/Flickr search results as well as the existence of relevant ImageNet categories. Images depicting people, specifically the categories “man,” “woman,” and “child,” were not sampled according to census distributions (age, ethnicity, gender, etc.). ### Other Known Limitations In addition to points mentioned in [Discussion of Biases](#discussion-of-biases), ecoset image and category distributions do not reflect the naturalistic, egocentric visual input typically encountered in the everyday life of infants and adults. ## Additional Information ### Dataset Curators The corpus was put together by Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, Nikolaus Kriegeskorte, and Tim C. Kietzmann. ### Licensing Information Ecoset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license (cc-by-nc-sa-2.0). 
### Citation Information ``` @article{mehrer2021ecologically, title={An ecologically motivated image dataset for deep learning yields better models of human vision}, author={Mehrer, Johannes and Spoerer, Courtney J and Jones, Emer C and Kriegeskorte, Nikolaus and Kietzmann, Tim C}, journal={Proceedings of the National Academy of Sciences}, volume={118}, number={8}, pages={e2011417118}, year={2021}, publisher={National Acad Sciences} } ``` ### Contributions The ecoset dataloader and dataset card were created by [@DiGyt](https://github.com/DiGyt) on behalf of [@kietzmannlab](https://huggingface.co/kietzmannlab). For questions and suggestions feel free to reach out.
kietzmannlab/ecoset
[ "task_categories:image-classification", "task_ids:multi-class-classification", "task_ids:multi-class-image-classification", "source_datasets:original", "license:cc", "other-image-classification", "image-classification", "region:us" ]
2022-07-21T06:33:50+00:00
{"license": "cc", "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-classification", "multi-class-image-classification"], "paperswithcode_id": "ecoset", "pretty_name": "Ecoset", "tags": ["other-image-classification", "image-classification"]}
2024-02-02T19:13:47+00:00
9d7c3583cb446ef2e26c6fca24324e7dd295e238
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cestwc/cnn_dailymail-test50 * Config: cestwc--cnn_dailymail-test50 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Buckeyes2019](https://huggingface.co/Buckeyes2019) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cestwc__cnn_dailymail-test50-b9fb5faf-11395515
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T08:56:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cestwc/cnn_dailymail-test50"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "cestwc/cnn_dailymail-test50", "dataset_config": "cestwc--cnn_dailymail-test50", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-21T08:57:46+00:00
035943f67ab75602dc39ab84e279f27f10e80e1e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: cestwc/cnn_dailymail-test50 * Config: cestwc--cnn_dailymail-test50 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Buckeyes2019](https://huggingface.co/Buckeyes2019) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cestwc__cnn_dailymail-test50-b9fb5faf-11395514
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T08:56:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cestwc/cnn_dailymail-test50"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": [], "dataset_name": "cestwc/cnn_dailymail-test50", "dataset_config": "cestwc--cnn_dailymail-test50", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-21T08:58:16+00:00
d3e2e9677ffe643f58270fbcc7321bd8ac9fa598
nev/usdb-karaoke-animux
[ "license:cc-by-4.0", "region:us" ]
2022-07-21T09:47:55+00:00
{"license": "cc-by-4.0"}
2022-07-21T09:50:32+00:00
0f685a035621e4a9c17aa71437e1d6325144d5d4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-banking77-10fe815c-11415521
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T11:41:00+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["banking77"], "eval_info": {"task": "multi_class_classification", "model": "nickprock/distilbert-base-uncased-banking77-classification", "metrics": [], "dataset_name": "banking77", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-21T11:41:56+00:00
e83125a08d57be6c9e0aa40ad7f06ecb1d77adc5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-banking77-34727576-11425522
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T11:41:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["banking77"], "eval_info": {"task": "multi_class_classification", "model": "nickprock/distilbert-base-uncased-banking77-classification", "metrics": [], "dataset_name": "banking77", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-21T11:41:53+00:00
1f3971387a63eab5ed76d795c501249904f2161b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-banking77-9cb960fa-11435523
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T11:41:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["banking77"], "eval_info": {"task": "multi_class_classification", "model": "nickprock/distilbert-base-uncased-banking77-classification", "metrics": [], "dataset_name": "banking77", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-21T11:41:59+00:00
2ba19f47e9b5a645c1c2e9232c8abd69f91ec8df
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@jmsteen](https://huggingface.co/jmsteen) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-82ea4996-11445524
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T13:22:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-22T13:59:19+00:00
85a3e098ce748e1590a85b370b61a62e898d0bf5
acul3/pmd_indonesia
[ "license:cc-by-4.0", "region:us" ]
2022-07-21T14:33:42+00:00
{"license": "cc-by-4.0"}
2022-07-21T15:37:36+00:00
f39a0f32e1e09f34099c4b0ed22b35935e537cbc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-976d13e6-0b05-475e-9b4e-e8fbc174cfae-346
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T14:35:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T14:37:45+00:00
e66c0d2ce2bde245f0a64d8eea309b2f27e26c80
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d3ec9b9a-b64a-40a0-baff-3af478f604df-367
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T14:44:12+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T14:50:03+00:00
0a02e8200fb7a51296112bade2ab912df6f09361
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f2158b57-4f5f-457d-9656-edbe0fb0d311-398
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T14:58:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T15:01:11+00:00
127f37dff7cde0aad160e7e0343214ae6114046e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-e81e3618-f3e1-472b-97e0-2794cda0adb2-409
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T15:06:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T15:09:50+00:00
37906d94ced6a00549b67d7e5d5bd8b295042f5d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-df92c53c-2bfd-442d-8572-7541578e7feb-4110
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T15:19:59+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T15:23:07+00:00
e3d88e993898dafec8e57a66d67a24b757568ad5
calbert/hinglish-large
[ "task_categories:feature-extraction", "task_categories:fill-mask", "task_categories:sentence-similarity", "task_categories:text2text-generation", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "multilinguality:other-hindi-english-transliteration", "size_categories:100K<n<1M", "license:cc-by-4.0", "calbert", "code-mixing", "code-mixed", "hinglish", "india", "indic", "english", "hindi", "region:us" ]
2022-07-21T15:21:45+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual", "other-hindi-english-transliteration"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["feature-extraction", "fill-mask", "sentence-similarity", "text2text-generation"], "task_ids": ["masked-language-modeling"], "pretty_name": "IndicCorp Hinglish", "language_bcp47": ["en-hi"], "tags": ["calbert", "code-mixing", "code-mixed", "hinglish", "india", "indic", "english", "hindi"]}
2022-09-22T12:54:30+00:00
738a202f3044f0e5191aeee1061701c61f15e6cb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-9ec0b53a-81c5-4d01-88f6-bf53413cd1a8-4611
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T15:32:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T15:34:17+00:00
6d679cc141274969e47290ea5e6e6b3f25016591
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/distilbert-base-cased-distilled-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-9ec0b53a-81c5-4d01-88f6-bf53413cd1a8-4612
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T15:37:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T16:25:56+00:00
56bcdcb3662d0c7a9409485d4499472ab7302350
rjac/all-the-news-2-1-Component-ones-cluster-labels
[ "region:us" ]
2022-07-21T16:43:09+00:00
{}
2022-07-31T15:42:40+00:00
1c37d22eef2e4e729d8908c098b0362848f42c51
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-7c1a5e5f-11505530
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T16:43:24+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T16:47:03+00:00
4cef5e07f40409be5073c3f94d5d5e7ef5ce7f62
iuihgisgsd/KHGKJHKGH
[ "license:cc-by-sa-4.0", "region:us" ]
2022-07-21T16:58:38+00:00
{"license": "cc-by-sa-4.0"}
2022-07-21T16:58:38+00:00
42ab35c272ec2a3248521e36ffffed0115dab581
# Dataset Card for Auditor Sentiment ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) ## Dataset Description Auditor review sentiment collected by News Department - **Point of Contact:** Talked to COE for Auditing, currently [email protected] ### Dataset Summary Auditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment. 
### Supported Tasks and Leaderboards Sentiment Classification ### Languages English ## Dataset Structure ### Data Instances ``` "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .", "label": "negative" ``` ### Data Fields - sentence: a tokenized line from the dataset - label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0) ### Data Splits A 75/25 train/test split was created randomly. ## Dataset Creation ### Curation Rationale To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment models reached only 70% F1; this dataset was an attempt to improve upon that performance. ### Source Data #### Initial Data Collection and Normalization The corpus is composed of English financial news reports. #### Who are the source language producers? The source data was written by various auditors. ### Annotations #### Annotation process This release of the auditor reviews covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge of financial markets. The subset included here is the one where inter-annotator agreement was greater than 75%. #### Who are the annotators? They were pulled from the SME list; names are held by [email protected] ### Personal and Sensitive Information There is no personal or sensitive information in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases All annotators were from the same institution, so inter-annotator agreement should be interpreted with this in mind. ### Licensing Information License: Demo.Org Proprietary - DO NOT SHARE This dataset is based on the [financial phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset.
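The integer-to-string label scheme above can be sketched in a few lines of Python; the `ID2LABEL` name and `decode_labels` helper are illustrative, not part of the dataset:

```python
# Mapping that follows the card's label scheme:
# 'negative' = 0, 'neutral' = 1, 'positive' = 2.
ID2LABEL = {0: "negative", 1: "neutral", 2: "positive"}

def decode_labels(label_ids):
    """Map integer class ids to their sentiment names."""
    return [ID2LABEL[i] for i in label_ids]

print(decode_labels([0, 2, 1]))  # ['negative', 'positive', 'neutral']
```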
FinanceInc/auditor_sentiment
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "region:us" ]
2022-07-21T17:25:47+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "sentiment-classification"], "pretty_name": "Auditor_Sentiment"}
2022-07-21T18:03:51+00:00
795824409d295424e69005d881d5370f177265b8
annotations_creators: - no-annotation language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: structured song lyrics size_categories: [] source_datasets: [] tags: - lyrics task_categories: - text-generation task_ids: - language-modeling [Needs More Information] # Dataset Card for song_lyrics ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Structured song lyrics ### Supported Tasks and Leaderboards text generation ### Languages English ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source 
language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
nbsullivan/song_lyrics
[ "region:us" ]
2022-07-21T18:55:40+00:00
{}
2022-07-21T19:19:14+00:00
e670508f77f244a24a8bcf100f02011df9d8435b
[Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney): users issue a query in natural language, and the Midjourney bot returns AI-generated images that follow the given description. The raw dataset (with Discord messages) can be found on Kaggle: [Midjourney User Prompts & Generated Images (250k)](https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage). The authors of the scraped dataset have no affiliation with Midjourney. This HuggingFace dataset was [processed](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) from the raw Discord messages to solely include the text prompts issued by the user (thus excluding the generated images and any other metadata). It could be used, for instance, to fine-tune a large language model to produce or auto-complete creative prompts for image generation. Check out [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator), a GPT-2 model fine-tuned on this dataset.
succinctly/midjourney-prompts
[ "license:apache-2.0", "region:us" ]
2022-07-21T19:29:49+00:00
{"license": "apache-2.0"}
2022-07-22T00:49:16+00:00
b9190341f1939b12ce99c0b3120590e9d24033dc
# Dataset Card for "WikiArt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Artificio/WikiArt
[ "region:us" ]
2022-07-21T20:18:50+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "artist", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "style", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "embeddings_pca512", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1659296285.75, "num_examples": 103250}], "download_size": 1711766693, "dataset_size": 1659296285.75}}
2023-01-18T17:13:54+00:00
35a56f3c865a3b3abdc7e3386804fe2063efd6f2
# Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place. ### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English.
## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
arize-ai/cifar10_quality_drift
[ "task_categories:image-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|imdb", "language:en", "license:mit", "region:us" ]
2022-07-21T22:00:55+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|imdb"], "task_categories": ["image-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "sentiment-classification-reviews-with-drift"}
2022-10-25T09:40:25+00:00
3a0ac3296e467afae7bd4d6ffc6ab795af8904d9
# Dataset Card for NERDE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [NERDE repository](https://github.com/guipaiva/NERDE) - **Point of Contact:** [Guilherme P. Paiva](mailto:[email protected]) ### Dataset Summary NERDE is a dataset for Named Entity Recognition for Economic Defense. It was created in collaboration with LATITUDE/UnB Laboratory and the Administrative Council for Economic Defense (Cade) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language in the dataset is Brazilian Portuguese from legal documents. 
The BCP-47 code for Brazilian Portuguese is pt-BR ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@guipaiva](https://github.com/guipaiva) for adding this dataset.
Gpaiva/NERDE
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pt", "license:cc-by-4.0", "ner", "portuguese-ner", "economic-defense", "region:us" ]
2022-07-22T00:50:19+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["pt"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "NERDE", "tags": ["ner", "portuguese-ner", "economic-defense"]}
2022-07-28T00:27:18+00:00
04a24bc0667e9a45a51f0ada6681aebc35898723
ASCCCCCCCC/mix_info
[ "license:apache-2.0", "region:us" ]
2022-07-22T02:36:51+00:00
{"license": "apache-2.0"}
2022-07-22T02:41:12+00:00
49ea9e40149871828d02aed166988c67dcda75c4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: distilbert-base-uncased-finetuned-sst-2-english * Dataset: sst2 * Config: default * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Neez](https://huggingface.co/Neez) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-sst2-ee5c821a-11545531
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T05:30:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sst2"], "eval_info": {"task": "multi_class_classification", "model": "distilbert-base-uncased-finetuned-sst-2-english", "metrics": [], "dataset_name": "sst2", "dataset_config": "default", "dataset_split": "train", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-07-22T05:33:53+00:00
97197c4a27472a1cb112d4f384ba6f70e040b2a6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: tuner007/pegasus_summarizer * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Neez](https://huggingface.co/Neez) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-7c900a64-11555532
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T06:39:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "tuner007/pegasus_summarizer", "metrics": ["accuracy", "f1", "precision", "recall"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-23T21:08:35+00:00
d1b54f2b452230e082fbdc30fe42b0f96c44ff16
This dataset provides information on all the spaces (~6,200 at the time of the snapshot) created on [HuggingFace Spaces](https://huggingface.co/spaces) 🤗. Most of the data comes from a public API endpoint, while some of it is enriched by web scraping. The dataset is intended to provide a snapshot of the spaces and was last updated in the first week of *July-2022*. Along with the name of the space, the dataset consists of the following columns: - likes (number of likes on the space) - sdk (streamlit, gradio, or other) - status (whether the space was running successfully or had an error when the snapshot was taken) - total_commits (number of commits in the space) - last_commit (when the last commit happened) - community_interactions (number of interactions in the newly introduced Community tab) Apart from these, we have also added some post-processing columns (where the space was using gradio): - inputs (Image/Text/Slider, etc.) - outputs (Image/Audio/Textbox, etc.) - ai_ml_reqs (whether requirements.txt contains a popular ML dependency like torch, tensorflow, pandas, sklearn, scipy, etc.) Contributors: - [Abdullah Meda](https://www.linkedin.com/in/abdmeda/) - [Ayush Ranwa](https://twitter.com/Ayushranwa6) - [Deepak Rawat](https://twitter.com/dsr_ai) - [Kartik Godawat](https://twitter.com/kartik_godawat) Please reach out to us for any queries or discussions.
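As an example of working with these columns, the snapshot can be filtered with pandas; the rows below are made-up placeholders, and only the column names follow the dataset:

```python
import pandas as pd

# Made-up rows; only the column names (likes, sdk, status) follow the dataset.
spaces = pd.DataFrame(
    [
        {"name": "demo-a", "likes": 12, "sdk": "gradio", "status": "running"},
        {"name": "demo-b", "likes": 3, "sdk": "streamlit", "status": "error"},
        {"name": "demo-c", "likes": 7, "sdk": "gradio", "status": "running"},
    ]
)

# Keep only spaces that were running successfully at snapshot time,
# sorted by popularity.
running = spaces[spaces["status"] == "running"].sort_values("likes", ascending=False)
print(list(running["name"]))  # ['demo-a', 'demo-c']
```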
deepklarity/huggingface-spaces-dataset
[ "license:cc", "region:us" ]
2022-07-22T07:45:29+00:00
{"license": "cc"}
2022-07-22T08:10:17+00:00
def0f9aff0c7f41639cb13e0307cdb17d76965ec
ccpp/test1
[ "license:afl-3.0", "region:us" ]
2022-07-22T08:01:23+00:00
{"license": "afl-3.0"}
2022-07-22T08:01:23+00:00
f2f8f031c380b6d0ccd2a8102a40717e4a036884
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-7ad816c0-11585539
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:03+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:33:29+00:00
add96f0971c3921b3b77150838ef0d0494986fa9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-7ad816c0-11585538
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:03+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:34:17+00:00
7d2e66ed02c4ff5b893295433a4e2f9f7aaa3592
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/distilbert-base-cased-distilled-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-7ad816c0-11585540
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:33:32+00:00