Dataset viewer column summary:

| Column        | Type   | Min length | Max length |
|---------------|--------|-----------:|-----------:|
| sha           | string | 40         | 40         |
| text          | string | 0          | 13.4M      |
| id            | string | 2          | 117        |
| tags          | list   |            |            |
| created_at    | string | 25         | 25         |
| metadata      | string | 2          | 31.7M      |
| last_modified | string | 25         | 25         |
50e79c314a7603ebc92236b66a0973d51a00ed8c
# Dataset Card for JGLUE

[![CI](https://github.com/shunk031/huggingface-datasets_JGLUE/actions/workflows/ci.yaml/badge.svg)](https://github.com/shunk031/huggingface-datasets_JGLUE/actions/workflows/ci.yaml)
[![LREC2022 2022.lrec-1.317](https://img.shields.io/badge/LREC2022-2022.lrec--1.317-red)](https://aclanthology.org/2022.lrec-1.317)

This dataset loading script is developed on [GitHub](https://github.com/shunk031/huggingface-datasets_JGLUE). Please feel free to open an [issue](https://github.com/shunk031/huggingface-datasets_JGLUE/issues/new/choose) or [pull request](https://github.com/shunk031/huggingface-datasets_JGLUE/pulls).

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/yahoojapan/JGLUE
- **Repository:** https://github.com/shunk031/huggingface-datasets_JGLUE

### Dataset Summary

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jglue-japanese-general-language-understanding-evaluation):

> JGLUE, Japanese General Language Understanding Evaluation, is built to measure the general NLU ability in Japanese. JGLUE has been constructed from scratch without translation. We hope that JGLUE will facilitate NLU research in Japanese.
>
> JGLUE has been constructed by a joint research project of Yahoo Japan Corporation and Kawahara Lab at Waseda University.

### Supported Tasks and Leaderboards

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#tasksdatasets):

> JGLUE consists of the tasks of text classification, sentence pair classification, and QA. Each task consists of multiple datasets.

#### Supported Tasks

##### MARC-ja

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#marc-ja):

> MARC-ja is a dataset of the text classification task. This dataset is based on the Japanese portion of [Multilingual Amazon Reviews Corpus (MARC)](https://docs.opendata.aws/amazon-reviews-ml/readme.html) ([Keung+, 2020](https://aclanthology.org/2020.emnlp-main.369/)).

##### JCoLA

From [JCoLA's README.md](https://github.com/osekilab/JCoLA#jcola-japanese-corpus-of-linguistic-acceptability):

> JCoLA (Japanese Corpus of Linguistic Acceptability) is a novel dataset for targeted syntactic evaluations of language models in Japanese, which consists of 10,020 sentences with acceptability judgments by linguists. The sentences are manually extracted from linguistics journals, handbooks and textbooks. JCoLA is included in [JGLUE benchmark](https://github.com/yahoojapan/JGLUE) (Kurihara et al., 2022).
##### JSTS

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jsts):

> JSTS is a Japanese version of the STS (Semantic Textual Similarity) dataset. STS is a task to estimate the semantic similarity of a sentence pair. The sentences in JSTS and JNLI (described below) are extracted from the Japanese version of the MS COCO Caption Dataset, [the YJ Captions Dataset](https://github.com/yahoojapan/YJCaptions) ([Miyazaki and Shimizu, 2016](https://aclanthology.org/P16-1168/)).

##### JNLI

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jnli):

> JNLI is a Japanese version of the NLI (Natural Language Inference) dataset. NLI is a task to recognize the inference relation that a premise sentence has to a hypothesis sentence. The inference relations are entailment, contradiction, and neutral.

##### JSQuAD

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jsquad):

> JSQuAD is a Japanese version of [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) ([Rajpurkar+, 2018](https://aclanthology.org/P18-2124/)), one of the datasets of reading comprehension. Each instance in the dataset consists of a question regarding a given context (Wikipedia article) and its answer. JSQuAD is based on SQuAD 1.1 (there are no unanswerable questions). We used [the Japanese Wikipedia dump](https://dumps.wikimedia.org/jawiki/) as of 20211101.

##### JCommonsenseQA

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#jcommonsenseqa):

> JCommonsenseQA is a Japanese version of [CommonsenseQA](https://www.tau-nlp.org/commonsenseqa) ([Talmor+, 2019](https://aclanthology.org/N19-1421/)), which is a multiple-choice question answering dataset that requires commonsense reasoning ability. It is built using crowdsourcing with seeds extracted from the knowledge base [ConceptNet](https://conceptnet.io/).

#### Leaderboard

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#leaderboard):

> A leaderboard will be made public soon. The test set will be released at that time.

### Languages

The language data in JGLUE is in Japanese ([BCP-47 ja-JP](https://www.rfc-editor.org/info/bcp47)).
## Dataset Structure

### Data Instances

To load a specific configuration, pass its name to `load_dataset`:

#### MARC-ja

```python
from datasets import load_dataset

dataset = load_dataset("shunk031/JGLUE", name="MARC-ja")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['sentence', 'label', 'review_id'],
#         num_rows: 187528
#     })
#     validation: Dataset({
#         features: ['sentence', 'label', 'review_id'],
#         num_rows: 5654
#     })
# })
```

#### JCoLA

```python
from datasets import load_dataset

dataset = load_dataset("shunk031/JGLUE", name="JCoLA")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['uid', 'source', 'label', 'diacritic', 'sentence', 'original', 'translation', 'gloss', 'simple', 'linguistic_phenomenon'],
#         num_rows: 6919
#     })
#     validation: Dataset({
#         features: ['uid', 'source', 'label', 'diacritic', 'sentence', 'original', 'translation', 'gloss', 'simple', 'linguistic_phenomenon'],
#         num_rows: 865
#     })
#     validation_out_of_domain: Dataset({
#         features: ['uid', 'source', 'label', 'diacritic', 'sentence', 'original', 'translation', 'gloss', 'simple', 'linguistic_phenomenon'],
#         num_rows: 685
#     })
#     validation_out_of_domain_annotated: Dataset({
#         features: ['uid', 'source', 'label', 'diacritic', 'sentence', 'original', 'translation', 'gloss', 'simple', 'linguistic_phenomenon'],
#         num_rows: 685
#     })
# })
```

An example of the JCoLA dataset (validation - out of domain annotated) looks as follows:

```json
{
  "uid": 9109,
  "source": "Asano_and_Ura_2010",
  "label": 1,
  "diacritic": "g",
  "sentence": "太郎のゴミの捨て方について話した。",
  "original": "太郎のゴミの捨て方",
  "translation": "‘The way (for Taro) to throw out garbage’",
  "gloss": true,
  "linguistic_phenomenon": {
    "argument_structure": true,
    "binding": false,
    "control_raising": false,
    "ellipsis": false,
    "filler_gap": false,
    "island_effects": false,
    "morphology": false,
    "nominal_structure": false,
    "negative_polarity_concord_items": false,
    "quantifier": false,
    "verbal_agreement": false,
    "simple": false
  }
}
```

#### JSTS

```python
from datasets import load_dataset

dataset = load_dataset("shunk031/JGLUE", name="JSTS")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['sentence_pair_id', 'yjcaptions_id', 'sentence1', 'sentence2', 'label'],
#         num_rows: 12451
#     })
#     validation: Dataset({
#         features: ['sentence_pair_id', 'yjcaptions_id', 'sentence1', 'sentence2', 'label'],
#         num_rows: 1457
#     })
# })
```

An example of the JSTS dataset looks as follows:

```json
{
  "sentence_pair_id": "691",
  "yjcaptions_id": "127202-129817-129818",
  "sentence1": "街中の道路を大きなバスが走っています。 (A big bus is running on the road in the city.)",
  "sentence2": "道路を大きなバスが走っています。 (There is a big bus running on the road.)",
  "label": 4.4
}
```

#### JNLI

```python
from datasets import load_dataset

dataset = load_dataset("shunk031/JGLUE", name="JNLI")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['sentence_pair_id', 'yjcaptions_id', 'sentence1', 'sentence2', 'label'],
#         num_rows: 20073
#     })
#     validation: Dataset({
#         features: ['sentence_pair_id', 'yjcaptions_id', 'sentence1', 'sentence2', 'label'],
#         num_rows: 2434
#     })
# })
```

An example of the JNLI dataset looks as follows:

```json
{
  "sentence_pair_id": "1157",
  "yjcaptions_id": "127202-129817-129818",
  "sentence1": "街中の道路を大きなバスが走っています。 (A big bus is running on the road in the city.)",
  "sentence2": "道路を大きなバスが走っています。 (There is a big bus running on the road.)",
  "label": "entailment"
}
```

#### JSQuAD

```python
from datasets import load_dataset

dataset = load_dataset("shunk031/JGLUE", name="JSQuAD")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['id', 'title', 'context', 'question', 'answers', 'is_impossible'],
#         num_rows: 62859
#     })
#     validation: Dataset({
#         features: ['id', 'title', 'context', 'question', 'answers', 'is_impossible'],
#         num_rows: 4442
#     })
# })
```

An example of the JSQuAD dataset looks as follows:

```json
{
  "id": "a1531320p0q0",
  "title": "東海道新幹線",
  "context": "東海道新幹線 [SEP] 1987 年(昭和 62 年)4 月 1 日の国鉄分割民営化により、JR 東海が運営を継承した。西日本旅客鉄道(JR 西日本)が継承した山陽新幹線とは相互乗り入れが行われており、東海道新幹線区間のみで運転される列車にも JR 西日本所有の車両が使用されることがある。2020 年(令和 2 年)3 月現在、東京駅 - 新大阪駅間の所要時間は最速 2 時間 21 分、最高速度 285 km/h で運行されている。",
  "question": "2020 年(令和 2 年)3 月現在、東京駅 - 新大阪駅間の最高速度はどのくらいか。",
  "answers": {
    "text": ["285 km/h"],
    "answer_start": [182]
  },
  "is_impossible": false
}
```

#### JCommonsenseQA

```python
from datasets import load_dataset

dataset = load_dataset("shunk031/JGLUE", name="JCommonsenseQA")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['q_id', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'choice4', 'label'],
#         num_rows: 8939
#     })
#     validation: Dataset({
#         features: ['q_id', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'choice4', 'label'],
#         num_rows: 1119
#     })
# })
```

An example of the JCommonsenseQA dataset looks as follows:

```json
{
  "q_id": 3016,
  "question": "会社の最高責任者を何というか? (What do you call the chief executive officer of a company?)",
  "choice0": "社長 (president)",
  "choice1": "教師 (teacher)",
  "choice2": "部長 (manager)",
  "choice3": "バイト (part-time worker)",
  "choice4": "部下 (subordinate)",
  "label": 0
}
```

### Data Fields

#### MARC-ja

- `sentence`: text of the review
- `label`: label of the review (positive or negative)
- `review_id`: ID of the review

#### JSTS

- `sentence_pair_id`: ID of the sentence pair
- `yjcaptions_id`: sentence ids in yjcaptions (explained below)
- `sentence1`: first sentence
- `sentence2`: second sentence
- `label`: sentence similarity, from 5 (equivalent meaning) to 0 (completely different meaning)

##### Explanation for `yjcaptions_id`

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#explanation-for-yjcaptions_id), there are the following two cases:

1. sentence pairs in one image: `(image id)-(sentence1 id)-(sentence2 id)`
   - e.g., 723-844-847
   - a sentence id starting with "g" means a sentence generated by a crowdworker (e.g., 69501-75698-g103): only for JNLI
2. sentence pairs in two images: `(image id of sentence1)_(image id of sentence2)-(sentence1 id)-(sentence2 id)`
   - e.g., 91337_217583-96105-91680

#### JCoLA

From [JCoLA's README.md](https://github.com/osekilab/JCoLA#data-description) and [JCoLA's paper](https://arxiv.org/abs/2309.12676):

- `uid`: unique id of the sentence
- `source`: author and the year of publication of the source article
- `label`: acceptability judgement label (0 for unacceptable, 1 for acceptable)
- `diacritic`: acceptability judgement as originally notated in the source article
- `sentence`: sentence (modified by the author if needed)
- `original`: original sentence as presented in the source article
- `translation`: English translation of the sentence as presented in the source article (if any)
- `gloss`: gloss of the sentence as presented in the source article (if any)
- `linguistic_phenomenon`
  - `argument_structure`: acceptability judgements based on the order of arguments and case marking
  - `binding`: acceptability judgements based on the binding of noun phrases
  - `control_raising`: acceptability judgements based on predicates that are categorized as control or raising
  - `ellipsis`: acceptability judgements based on the possibility of omitting elements in the sentences
  - `filler_gap`: acceptability judgements based on the dependency between the moved element and the gap
  - `island_effects`: acceptability judgements based on the restrictions on filler-gap dependencies such as wh-movements
  - `morphology`: acceptability judgements based on morphology
  - `nominal_structure`: acceptability judgements based on the internal structure of noun phrases
  - `negative_polarity_concord_items`: acceptability judgements based on the restrictions on where negative polarity/concord items (NPIs/NCIs) can appear
  - `quantifier`: acceptability judgements based on the distribution of quantifiers such as floating quantifiers
  - `verbal_agreement`: acceptability judgements based on the dependency between subjects and verbs
  - `simple`: acceptability judgements that do not have marked syntactic structures

#### JNLI

- `sentence_pair_id`: ID of the sentence pair
- `yjcaptions_id`: sentence ids in the yjcaptions
- `sentence1`: premise sentence
- `sentence2`: hypothesis sentence
- `label`: inference relation

#### JSQuAD

- `title`: title of a Wikipedia article
- `paragraphs`: a set of paragraphs
- `qas`: a set of pairs of a question and its answer
- `question`: question
- `id`: id of a question
- `answers`: a set of answers
- `text`: answer text
- `answer_start`: start position (character index)
- `is_impossible`: all the values are false
- `context`: a concatenation of the title and paragraph

#### JCommonsenseQA

- `q_id`: ID of the question
- `question`: question
- `choice{0..4}`: choice
- `label`: correct choice id

### Data Splits

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE/blob/main/README.md#tasksdatasets):

> Only train/dev sets are available now, and the test set will be available after the leaderboard is made public.

From [JCoLA's paper](https://arxiv.org/abs/2309.12676):

> The in-domain data is split into training data (6,919 instances), development data (865 instances), and test data (865 instances). On the other hand, the out-of-domain data is only used for evaluation, and divided into development data (685 instances) and test data (686 instances).
| Task                         | Dataset        |   Train |         Dev |        Test |
|------------------------------|----------------|--------:|------------:|------------:|
| Text Classification          | MARC-ja        | 187,528 |       5,654 |       5,639 |
|                              | JCoLA          |   6,919 | 865† / 685‡ | 865† / 685‡ |
| Sentence Pair Classification | JSTS           |  12,451 |       1,457 |       1,589 |
|                              | JNLI           |  20,073 |       2,434 |       2,508 |
| Question Answering           | JSQuAD         |  62,859 |       4,442 |       4,420 |
|                              | JCommonsenseQA |   8,939 |       1,119 |       1,118 |

> JCoLA: † in domain. ‡ out of domain.

## Dataset Creation

### Curation Rationale

From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):

> JGLUE is designed to cover a wide range of GLUE and SuperGLUE tasks and consists of three kinds of tasks: text classification, sentence pair classification, and question answering.

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

- The source language producers are users of Amazon (MARC-ja), crowd-workers of [Yahoo! Crowdsourcing](https://crowdsourcing.yahoo.co.jp/) (JSTS, JNLI and JCommonsenseQA), writers of the Japanese Wikipedia (JSQuAD), and crowd-workers of [Lancers](https://www.lancers.jp/) (JCoLA).

### Annotations

#### Annotation process

##### MARC-ja

From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):

> As one of the text classification datasets, we build a dataset based on the Multilingual Amazon Reviews Corpus (MARC) (Keung et al., 2020). MARC is a multilingual corpus of product reviews with 5-level star ratings (1-5) on the Amazon shopping site. This corpus covers six languages, including English and Japanese. For JGLUE, we use the Japanese part of MARC and to make it easy for both humans and computers to judge a class label, we cast the text classification task as a binary classification task, where 1- and 2-star ratings are converted to “negative”, and 4 and 5 are converted to “positive”. We do not use reviews with a 3-star rating.
>
> One of the problems with MARC is that it sometimes contains data where the rating diverges from the review text. This happens, for example, when a review with positive content is given a rating of 1 or 2. These data degrade the quality of our dataset. To improve the quality of the dev/test instances used for evaluation, we crowdsource a positive/negative judgment task for approximately 12,000 reviews. We adopt only reviews with the same votes from 7 or more out of 10 workers and assign a label of the maximum votes to these reviews. We divide the resulting reviews into dev/test data.
>
> We obtained 5,654 and 5,639 instances for the dev and test data, respectively, through the above procedure. For the training data, we extracted 187,528 instances directly from MARC without performing the cleaning procedure because of the large number of training instances. The statistics of MARC-ja are listed in Table 2. For the evaluation metric for MARC-ja, we use accuracy because it is a binary classification task of texts.

##### JCoLA

From [JCoLA's paper](https://arxiv.org/abs/2309.12676):

> ### 3 JCoLA
>
> In this study, we introduce JCoLA (Japanese Corpus of Linguistic Acceptability), which will be the first large-scale acceptability judgment task dataset focusing on Japanese. JCoLA consists of sentences from textbooks and handbooks on Japanese syntax, as well as from journal articles on Japanese syntax that are published in JEAL (Journal of East Asian Linguistics), one of the prestigious journals in theoretical linguistics.
>
> #### 3.1 Data Collection
>
> Sentences in JCoLA were collected from prominent textbooks and handbooks focusing on Japanese syntax. In addition to the main text, example sentences included in the footnotes were also considered for collection. We also collected acceptability judgments from journal articles on Japanese syntax published in JEAL (Journal of East Asian Linguistics), one of the prestigious journals in theoretical linguistics. Specifically, we examined all the articles published in JEAL between 2006 and 2015 (133 papers in total), and extracted 2,252 acceptability judgments from 26 papers on Japanese syntax (Table 2). Acceptability judgments include sentences in appendices and footnotes, but not sentences presented for analyses of syntactic structures (e.g. sentences with brackets to show their syntactic structures). As a result, a total of 11,984 example sentences were collected. Using this as a basis, JCoLA was constructed through the methodology explained in the following sections.

##### JSTS and JNLI

From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):

> For the sentence pair classification datasets, we construct a semantic textual similarity (STS) dataset, JSTS, and a natural language inference (NLI) dataset, JNLI.
>
> ### Overview
>
> STS is a task of estimating the semantic similarity of a sentence pair. Gold similarity is usually assigned as an average of the integer values 0 (completely different meaning) to 5 (equivalent meaning) assigned by multiple workers through crowdsourcing.
>
> NLI is a task of recognizing the inference relation that a premise sentence has to a hypothesis sentence. Inference relations are generally defined by three labels: “entailment”, “contradiction”, and “neutral”. Gold inference relations are often assigned by majority voting after collecting answers from multiple workers through crowdsourcing.
>
> For the STS and NLI tasks, STS-B (Cer et al., 2017) and MultiNLI (Williams et al., 2018) are included in GLUE, respectively. As Japanese datasets, JSNLI (Yoshikoshi et al., 2020) is a machine translated dataset of the NLI dataset SNLI (Stanford NLI), and JSICK (Yanaka and Mineshima, 2021) is a human translated dataset of the STS/NLI dataset SICK (Marelli et al., 2014). As mentioned in Section 1, these have problems originating from automatic/manual translations. To solve this problem, we construct STS/NLI datasets in Japanese from scratch. We basically extract sentence pairs in JSTS and JNLI from the Japanese version of the MS COCO Caption Dataset (Chen et al., 2015), the YJ Captions Dataset (Miyazaki and Shimizu, 2016). Most of the sentence pairs in JSTS and JNLI overlap, allowing us to analyze the relationship between similarities and inference relations for the same sentence pairs like SICK and JSICK.
>
> The similarity value in JSTS is assigned a real number from 0 to 5 as in STS-B. The inference relation in JNLI is assigned from the above three labels as in SNLI and MultiNLI. The definitions of the inference relations are also based on SNLI.
>
> ### Method of Construction
>
> Our construction flow for JSTS and JNLI is shown in Figure 1. Basically, two captions for the same image of YJ Captions are used as sentence pairs. For these sentence pairs, similarities and NLI relations of entailment and neutral are obtained by crowdsourcing. However, it is difficult to collect sentence pairs with low similarity and contradiction relations from captions for the same image. To solve this problem, we collect sentence pairs with low similarity from captions for different images. We collect contradiction relations by asking workers to write contradictory sentences for a given caption.
>
> The detailed construction procedure for JSTS and JNLI is described below.
>
> 1. We crowdsource an STS task using two captions for the same image from YJ Captions. We ask five workers to answer the similarity between two captions and take the mean value as the gold similarity. We delete sentence pairs with a large variance in the answers because such pairs have poor answer quality. We performed this task on 16,000 sentence pairs and deleted sentence pairs with a similarity variance of 1.0 or higher, resulting in the collection of 10,236 sentence pairs with gold similarity. We refer to this collected data as JSTS-A.
> 2. To collect sentence pairs with low similarity, we crowdsource the same STS task as Step 1 using sentence pairs of captions for different images. We conducted this task on 4,000 sentence pairs and collected 2,970 sentence pairs with gold similarity. We refer to this collected data as JSTS-B.
> 3. For JSTS-A, we crowdsource an NLI task. Since inference relations are directional, we obtain inference relations in both directions for sentence pairs. As mentioned earlier, it is difficult to collect instances of contradiction from JSTS-A, which was collected from the captions of the same images, and thus we collect instances of entailment and neutral in this step. We collect inference relation answers from 10 workers. If six or more people give the same answer, we adopt it as the gold label if it is entailment or neutral. To obtain inference relations in both directions for JSTS-A, we performed this task on 20,472 sentence pairs, twice as many as JSTS-A. As a result, we collected inference relations for 17,501 sentence pairs. We refer to this collected data as JNLI-A. We do not use JSTS-B for the NLI task because it is difficult to define and determine the inference relations between captions of different images.
> 4. To collect NLI instances of contradiction, we crowdsource a task of writing four contradictory sentences for each caption in YJCaptions. From the written sentences, we remove sentence pairs with an edit distance of 0.75 or higher to remove low-quality sentences, such as short sentences and sentences with low relevance to the original sentence. Furthermore, we perform a one-way NLI task with 10 workers to verify whether the created sentence pairs are contradictory. Only the sentence pairs answered as contradiction by at least six workers are adopted. Finally, since the contradiction relation has no direction, we automatically assign contradiction in the opposite direction of the adopted sentence pairs. Using 1,800 captions, we acquired 7,200 sentence pairs, from which we collected 3,779 sentence pairs to which we assigned the one-way contradiction relation. By automatically assigning the contradiction relation in the opposite direction, we doubled the number of instances to 7,558. We refer to this collected data as JNLI-C.
> 5. For the 3,779 sentence pairs collected in Step 4, we crowdsource an STS task, assigning similarity and filtering in the same way as in Steps 1 and 2. In this way, we collected 2,303 sentence pairs with gold similarity from 3,779 pairs. We refer to this collected data as JSTS-C.
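The variance-based filtering and majority-vote aggregation described in Steps 1-3 above reduce to a few lines of statistics. The following is an illustrative sketch only, not code released by the JGLUE authors; in particular, whether "variance" refers to the population or the sample estimator is an assumption here.

```python
import statistics
from collections import Counter

def aggregate_similarity(ratings, max_variance=1.0):
    """Gold STS similarity: mean of the 5 worker ratings (Steps 1-2).

    Pairs whose ratings vary too much are dropped. Population variance is
    an assumption; the paper does not specify which estimator was used.
    """
    if statistics.pvariance(ratings) >= max_variance:
        return None  # pair is discarded
    return statistics.mean(ratings)

def aggregate_inference(answers, min_votes=6):
    """Gold NLI label: adopted only when >= 6 of the 10 workers agree (Step 3)."""
    label, votes = Counter(answers).most_common(1)[0]
    return label if votes >= min_votes else None

print(aggregate_similarity([3, 4, 4, 5, 3]))                      # 3.8
print(aggregate_inference(["entailment"] * 7 + ["neutral"] * 3))  # entailment
```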
##### JSQuAD

From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):

> As QA datasets, we build a Japanese version of SQuAD (Rajpurkar et al., 2016), one of the datasets of reading comprehension, and a Japanese version of CommonsenseQA, which is explained in the next section.
>
> Reading comprehension is the task of reading a document and answering questions about it. Many reading comprehension evaluation sets have been built in English, followed by those in other languages or multilingual ones.
>
> In Japanese, reading comprehension datasets for quizzes (Suzuki et al., 2018) and those in the driving domain (Takahashi et al., 2019) have been built, but none are in the general domain. We use Wikipedia to build a dataset for the general domain. The construction process is basically based on SQuAD 1.1 (Rajpurkar et al., 2016).
>
> First, to extract high-quality articles from Wikipedia, we use Nayuki, which estimates the quality of articles on the basis of hyperlinks in Wikipedia. We randomly chose 822 articles from the top-ranked 10,000 articles. For example, the articles include “熊本県 (Kumamoto Prefecture)” and “フランス料理 (French cuisine)”. Next, we divide an article into paragraphs, present each paragraph to crowdworkers, and ask them to write questions and answers that can be answered if one understands the paragraph. Figure 2 shows an example of JSQuAD. We ask workers to write two additional answers for the dev and test sets to make the system evaluation robust.

##### JCommonsenseQA

From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):

> ### Overview
>
> JCommonsenseQA is a Japanese version of CommonsenseQA (Talmor et al., 2019), which consists of five-choice QA to evaluate commonsense reasoning ability. Figure 3 shows examples of JCommonsenseQA. In the same way as CommonsenseQA, JCommonsenseQA is built using crowdsourcing with seeds extracted from the knowledge base ConceptNet (Speer et al., 2017). ConceptNet is a multilingual knowledge base that consists of triplets of two concepts and their relation. The triplets are directional and represented as (source concept, relation, target concept), for example (bullet train, AtLocation, station).
>
> ### Method of Construction
>
> The construction flow for JCommonsenseQA is shown in Figure 4. First, we collect question sets (QSs) from ConceptNet, each of which consists of a source concept and three target concepts that have the same relation to the source concept. Next, for each QS, we crowdsource a task of writing a question with only one target concept as the answer and a task of adding two distractors. We describe the detailed construction procedure for JCommonsenseQA below, showing how it differs from CommonsenseQA.
>
> 1. We collect Japanese QSs from ConceptNet. CommonsenseQA uses only forward relations (source concept, relation, target concept) excluding general ones such as “RelatedTo” and “IsA”. JCommonsenseQA similarly uses a set of 22 relations, excluding general ones, but the direction of the relations is bidirectional to make the questions more diverse. In other words, we also use relations in the opposite direction (source concept, relation⁻¹, target concept). With this setup, we extracted 43,566 QSs with Japanese source/target concepts and randomly selected 7,500 from them.
> 2. Some low-quality questions in CommonsenseQA contain distractors that can be considered to be an answer. To improve the quality of distractors, we add the following two processes that are not adopted in CommonsenseQA. First, if three target concepts of a QS include a spelling variation or a synonym of one another, this QS is removed. To identify spelling variations, we use the word ID of the morphological dictionary Juman Dic. Second, we crowdsource a task of judging whether target concepts contain a synonym. As a result, we adopted 5,920 QSs from 7,500.
> 3. For each QS, we crowdsource a task of writing a question sentence in which only one from the three target concepts is an answer. In the example shown in Figure 4, “駅 (station)” is an answer, and the others are distractors. To remove low-quality question sentences, we remove the following question sentences.
>    - Question sentences that contain a choice word (this is because such a question is easily solved).
>    - Question sentences that contain the expression “XX characters” (XX is a number).
>    - Improperly formatted question sentences that do not end with “?”.
>    - As a result, 5,920 × 3 = 17,760 question sentences were created, from which we adopted 15,310 by removing inappropriate question sentences.
> 4. In CommonsenseQA, when adding distractors, one is selected from ConceptNet, and the other is created by crowdsourcing. In JCommonsenseQA, to have a wider variety of distractors, two distractors are created by crowdsourcing instead of selecting from ConceptNet. To improve the quality of the questions, we remove questions whose added distractors fall into one of the following categories:
>    - Distractors are included in a question sentence.
>    - Distractors overlap with one of existing choices.
>    - As a result, distractors were added to the 15,310 questions, of which we adopted 13,906.
> 5. We asked three crowdworkers to answer each question and adopt only those answered correctly by at least two workers. As a result, we adopted 11,263 out of the 13,906 questions.

#### Who are the annotators?

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE/blob/main/README.md#tasksdatasets):

> We use Yahoo! Crowdsourcing for all crowdsourcing tasks in constructing the datasets.

From [JCoLA's paper](https://arxiv.org/abs/2309.12676):

> As a reference for the upper limit of accuracy in JCoLA, human acceptability judgment experiments were conducted on Lancers with a subset of the JCoLA data.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

From [JGLUE's paper](https://aclanthology.org/2022.lrec-1.317/):

> We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

From [JCoLA's paper](https://arxiv.org/abs/2309.12676):

> All the sentences included in JCoLA have been extracted from textbooks, handbooks and journal articles on theoretical syntax. Therefore, those sentences are guaranteed to be theoretically meaningful, making JCoLA a challenging dataset. However, the distribution of linguistic phenomena directly reflects that of the source literature and thus turns out to be extremely skewed. Indeed, as can be seen in Table 3, while the number of sentences exceeds 100 for most linguistic phenomena, there are several linguistic phenomena for which there are only about 10 sentences. In addition, since it is difficult to force language models to interpret sentences given specific contexts, those sentences whose unacceptability depends on contexts were inevitably removed from JCoLA. This removal process resulted in the deletion of unacceptable sentences from some linguistic phenomena (such as ellipsis), consequently skewing the balance between acceptable and unacceptable sentences (with a higher proportion of acceptable sentences).

## Additional Information

- [Building the Japanese language understanding benchmark JGLUE: releasing evaluation datasets for NLP models (Yahoo! JAPAN Tech Blog, in Japanese)](https://techblog.yahoo.co.jp/entry/2022122030379907/)

### Dataset Curators

#### MARC-ja

- Keung, Phillip, et al. "The Multilingual Amazon Reviews Corpus." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020.

#### JCoLA

- Someya, Sugimoto, and Oseki. "JCoLA: Japanese Corpus of Linguistic Acceptability." arXiv preprint arXiv:2309.12676 (2023).

#### JSTS and JNLI

- Miyazaki, Takashi, and Nobuyuki Shimizu. "Cross-lingual image caption generation." Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2016.

#### JSQuAD

The JGLUE authors curated the original data for JSQuAD from the Japanese Wikipedia dump.

#### JCommonsenseQA

In the same way as CommonsenseQA, JCommonsenseQA is built using crowdsourcing with seeds extracted from the knowledge base ConceptNet.

### Licensing Information

#### JGLUE

From [JGLUE's README.md](https://github.com/yahoojapan/JGLUE#license):

> This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

#### JCoLA

From [JCoLA's README.md](https://github.com/osekilab/JCoLA#license):

> The text in this corpus is excerpted from the published works, and copyright (where applicable) remains with the original authors or publishers. We expect that research use within Japan is legal under fair use, but make no guarantee of this.
### Citation Information

#### JGLUE

```bibtex
@inproceedings{kurihara-lrec-2022-jglue,
  title={JGLUE: Japanese general language understanding evaluation},
  author={Kurihara, Kentaro and Kawahara, Daisuke and Shibata, Tomohide},
  booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
  pages={2957--2966},
  year={2022},
  url={https://aclanthology.org/2022.lrec-1.317/}
}
```

```bibtex
@inproceedings{kurihara-nlp-2022-jglue,
  title={JGLUE: 日本語言語理解ベンチマーク},
  author={栗原健太郎 and 河原大輔 and 柴田知秀},
  booktitle={言語処理学会第 28 回年次大会},
  pages={2023--2028},
  year={2022},
  url={https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E8-4.pdf},
  note={in Japanese}
}
```

#### MARC-ja

```bibtex
@inproceedings{marc_reviews,
  title={The Multilingual Amazon Reviews Corpus},
  author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},
  year={2020}
}
```

#### JCoLA

```bibtex
@article{someya-arxiv-2023-jcola,
  title={JCoLA: Japanese Corpus of Linguistic Acceptability},
  author={Taiga Someya and Yushi Sugimoto and Yohei Oseki},
  year={2023},
  eprint={2309.12676},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```bibtex
@inproceedings{someya-nlp-2022-jcola,
  title={日本語版 CoLA の構築},
  author={染谷 大河 and 大関 洋平},
  booktitle={言語処理学会第 28 回年次大会},
  pages={1872--1877},
  year={2022},
  url={https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E7-1.pdf},
  note={in Japanese}
}
```

#### JSTS and JNLI

```bibtex
@inproceedings{miyazaki2016cross,
  title={Cross-lingual image caption generation},
  author={Miyazaki, Takashi and Shimizu, Nobuyuki},
  booktitle={Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={1780--1790},
  year={2016}
}
```

### Contributions

Thanks to [Kentaro Kurihara](https://twitter.com/kkurihara_cs), [Daisuke Kawahara](https://twitter.com/daisukekawahar1), and [Tomohide Shibata](https://twitter.com/stomohide) for creating the JGLUE dataset.
Thanks to [Taiga Someya](https://twitter.com/T0a8i0g9a) for creating the JCoLA dataset.
shunk031/JGLUE
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:sentence-similarity", "task_categories:text-classification", "task_ids:multiple-choice-qa", "task_ids:open-domain-qa", "task_ids:multi-class-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:ja", "license:cc-by-4.0", "MARC", "CoLA", "STS", "NLI", "SQuAD", "CommonsenseQA", "arxiv:2309.12676", "region:us" ]
2023-02-27T08:31:09+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["ja"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["original"], "task_categories": ["multiple-choice", "question-answering", "sentence-similarity", "text-classification"], "task_ids": ["multiple-choice-qa", "open-domain-qa", "multi-class-classification", "sentiment-classification"], "pretty_name": "JGLUE", "tags": ["MARC", "CoLA", "STS", "NLI", "SQuAD", "CommonsenseQA"]}
2023-09-26T11:41:51+00:00
e1d0531e474b2c7e22fd5fa29d2f980debb29ab2
Implicature corpus

```bib
@article{george2020conversational,
  title={Conversational implicatures in English dialogue: Annotated dataset},
  author={George, Elizabeth Jasmi and Mamidi, Radhika},
  journal={Procedia Computer Science},
  volume={171},
  pages={2316--2323},
  year={2020},
  publisher={Elsevier}
}
```

Augmented with generated distractors ([Colab notebook](https://colab.research.google.com/drive/1ix0FgwzPAjQkIQA2E3ctlylvcmya7vGy?usp=sharing)) for tasksource:

```bib
@article{sileo2023tasksource,
  title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
  author={Sileo, Damien},
  url={https://arxiv.org/abs/2301.05948},
  journal={arXiv preprint arXiv:2301.05948},
  year={2023}
}
```
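A minimal loading sketch; the repository id comes from this record, while the configuration and split names are not documented on the card and are assumptions:

```python
from datasets import load_dataset

# Assumption: a default configuration with a "train" split.
dataset = load_dataset("tasksource/implicatures")
print(dataset["train"][0])
```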
tasksource/implicatures
[ "license:gpl", "arxiv:2301.05948", "region:us" ]
2023-02-27T08:57:17+00:00
{"license": "gpl"}
2023-02-27T09:01:42+00:00
7fc8ba185af3039a569b4a4083f44d2e4250c956
# Dataset Card for "donut-docvqa-concert1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aymanechilah/donut-docvqa-concert1
[ "region:us" ]
2023-02-27T10:36:43+00:00
{"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "ground_truth", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 246710917.83032492, "num_examples": 249}, {"name": "test", "num_bytes": 27742593.16967509, "num_examples": 28}], "download_size": 73786135, "dataset_size": 274453511.0}}
2023-02-27T10:36:55+00:00
63f2d6ef47be7832a6c20a6a238dab7029c5b7c9
# Dataset Card for "summarize-from-feedback" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuggingFaceH4/summarize-from-feedback
[ "region:us" ]
2023-02-27T10:50:52+00:00
{"dataset_info": {"features": [{"name": "meta", "struct": [{"name": "id", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "site", "dtype": "string"}, {"name": "article", "dtype": "string"}]}, {"name": "responses", "list": [{"name": "text", "dtype": "string"}, {"name": "policy", "dtype": "string"}, {"name": "note", "dtype": "string"}]}, {"name": "label", "dtype": "int32"}, {"name": "worker", "dtype": "string"}, {"name": "batch", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "extra", "struct": [{"name": "confidence", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 172539153, "num_examples": 92858}, {"name": "validation", "num_bytes": 170579710, "num_examples": 86086}], "download_size": 43943406, "dataset_size": 343118863}}
2023-02-27T11:56:24+00:00
06b74fb30ea77051503b57d8400c800fd497b965
# Dataset Card for "balanced_augmented_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Jsevisal/balanced_augmented_dataset
[ "region:us" ]
2023-02-27T11:21:13+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "gestures", "sequence": "string"}, {"name": "label", "sequence": {"class_label": {"names": {"0": "B-BUT", "1": "I-BUT", "2": "B-CALM_DOWN", "3": "I-CALM_DOWN", "4": "B-COME_ON", "5": "I-COME_ON", "6": "B-EMPHATIC", "7": "I-EMPHATIC", "8": "B-ENTHUSIASTIC", "9": "I-ENTHUSIASTIC", "10": "B-EXPLAIN", "11": "I-EXPLAIN", "12": "B-FRONT", "13": "I-FRONT", "14": "B-GREET", "15": "I-GREET", "16": "B-ITERATE", "17": "I-ITERATE", "18": "B-NEUTRAL", "19": "I-NEUTRAL", "20": "B-NO", "21": "I-NO", "22": "B-NO_GESTURE", "23": "I-NO_GESTURE", "24": "B-OTHER_PEER", "25": "I-OTHER_PEER", "26": "B-PLEASE", "27": "I-PLEASE", "28": "B-QUESTION", "29": "I-QUESTION", "30": "B-SELF", "31": "I-SELF", "32": "B-SORRY", "33": "I-SORRY", "34": "B-THANKS", "35": "I-THANKS", "36": "B-THINKING", "37": "I-THINKING", "38": "B-THIRD_PERSON", "39": "I-THIRD_PERSON", "40": "B-YES", "41": "I-YES"}}}}], "splits": [{"name": "train", "num_bytes": 165787.0, "num_examples": 504}, {"name": "test", "num_bytes": 53037.0, "num_examples": 121}], "download_size": 40359, "dataset_size": 218824.0}}
2023-09-14T10:31:51+00:00
1d93779d60195d4e25cb552ce096f41cfb3c2f4b
leo214gamer/satono
[ "license:openrail", "region:us" ]
2023-02-27T11:33:14+00:00
{"license": "openrail"}
2023-03-10T04:14:11+00:00
512509ffaed10c913f9dd8975938d290fd55855e
# Dataset Card for "go-emotion-dk-autotranlated-10k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RJuro/go-emotion-dk-autotranlated-10k
[ "region:us" ]
2023-02-27T12:59:56+00:00
{"dataset_info": {"features": [{"name": "text_en", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "admiration", "1": "amusement", "2": "anger", "3": "annoyance", "4": "approval", "5": "caring", "6": "confusion", "7": "curiosity", "8": "desire", "9": "disappointment", "10": "disapproval", "11": "disgust", "12": "embarrassment", "13": "excitement", "14": "fear", "15": "gratitude", "16": "grief", "17": "joy", "18": "love", "19": "nervousness", "20": "neutral", "21": "optimism", "22": "pride", "23": "realization", "24": "relief", "25": "remorse", "26": "sadness", "27": "surprise"}}}}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2916184, "num_examples": 9000}, {"name": "test", "num_bytes": 159317, "num_examples": 500}, {"name": "valid", "num_bytes": 162379, "num_examples": 500}], "download_size": 1662215, "dataset_size": 3237880}}
2023-02-27T13:00:06+00:00
17d61a19643ca7dafa5a0c2f62d6415962e39c07
# Dataset Card for "go-emotion-dk-autotranlated-10k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Korsholm22/go-emotion-dk-autotranlated-10k
[ "region:us" ]
2023-02-27T13:00:01+00:00
{"dataset_info": {"features": [{"name": "text_en", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "admiration", "1": "amusement", "2": "anger", "3": "annoyance", "4": "approval", "5": "caring", "6": "confusion", "7": "curiosity", "8": "desire", "9": "disappointment", "10": "disapproval", "11": "disgust", "12": "embarrassment", "13": "excitement", "14": "fear", "15": "gratitude", "16": "grief", "17": "joy", "18": "love", "19": "nervousness", "20": "neutral", "21": "optimism", "22": "pride", "23": "realization", "24": "relief", "25": "remorse", "26": "sadness", "27": "surprise"}}}}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2905258, "num_examples": 9000}, {"name": "test", "num_bytes": 163076, "num_examples": 500}, {"name": "valid", "num_bytes": 169546, "num_examples": 500}], "download_size": 1662396, "dataset_size": 3237880}}
2023-02-27T13:00:14+00:00
221003b6900b8eaaa9a6c3e3aaa0bf78812dc51d
Coming soon...!

# Dataset Card for Dataset Name

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
deepsynthbody/conditional-polyp-diffusion
[ "doi:10.57967/hf/0430", "region:us" ]
2023-02-27T13:24:47+00:00
{}
2023-03-11T00:31:35+00:00
47d570a83071ec6828d512adef3bc5e2657fa78d
h2o/environmental_crowdfuding_campaigns
[ "language:en", "region:us" ]
2023-02-27T14:23:49+00:00
{"language": ["en"]}
2023-02-27T16:12:17+00:00
2faa83218cb9053440548e161aaaaae96d1e7b73
Belongsx/atacom_human_record
[ "size_categories:100M<n<1B", "language:en", "license:mit", "human_robot_interaction", "robotics", "safe_reinforcement_learning", "region:us" ]
2023-02-27T14:47:00+00:00
{"language": ["en"], "license": "mit", "size_categories": ["100M<n<1B"], "pretty_name": "Human Record", "tags": ["human_robot_interaction", "robotics", "safe_reinforcement_learning"]}
2023-02-27T14:53:24+00:00
b41a243c5e6c1b8b3a0fd26a2cf20aacfa463620
# Dataset Card for "test-_8" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
juancopi81/test-_8
[ "region:us" ]
2023-02-27T15:15:12+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "playlist_title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1052419.0, "num_examples": 3}], "download_size": 1054186, "dataset_size": 1052419.0}}
2023-02-27T15:15:23+00:00
7994136fcc7ca478da220bde3404900510247977
BiJaDadowy/Gjhfbhg
[ "license:openrail", "region:us" ]
2023-02-27T15:15:18+00:00
{"license": "openrail"}
2023-02-27T15:15:18+00:00
b820a352ea594a1540f46e42bf5a9714f1a69fb1
Adapting/Abstracts-for-Clustering
[ "license:mit", "region:us" ]
2023-02-27T15:24:38+00:00
{"license": "mit"}
2023-02-28T14:13:52+00:00
a9f95c5cadd648bae49357cb784f510f9a4996e2
# Dataset Card for "UC_Merced_LandUse_MultiLabel" ## Dataset Description - **Paper:** [Bag-of-visual-words and spatial extensions for land-use classification](https://dl.acm.org/doi/pdf/10.1145/1869790.1869829) - **Paper:** [Multilabel Remote Sensing Image Retrieval Using a Semisupervised Graph-Theoretic Method](https://ieeexplore.ieee.org/iel7/36/4358825/08089668.pdf) ### Licensing Information Public Domain; “Map services and data available from U.S. Geological Survey, National Geospatial Program.” ## Citation Information Imagery: [Bag-of-visual-words and spatial extensions for land-use classification](https://dl.acm.org/doi/pdf/10.1145/1869790.1869829) Multilabels: [Multilabel Remote Sensing Image Retrieval Using a Semisupervised Graph-Theoretic Method](https://ieeexplore.ieee.org/iel7/36/4358825/08089668.pdf) ``` @inproceedings{yang2010bag, title = {Bag-of-visual-words and spatial extensions for land-use classification}, author = {Yang, Yi and Newsam, Shawn}, year = 2010, booktitle = {Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems}, pages = {270--279} } @article{8089668, title = {Multilabel Remote Sensing Image Retrieval Using a Semisupervised Graph-Theoretic Method}, author = {Chaudhuri, Bindita and Demir, Begüm and Chaudhuri, Subhasis and Bruzzone, Lorenzo}, year = 2018, journal = {IEEE Transactions on Geoscience and Remote Sensing}, volume = 56, number = 2, pages = {1144--1158}, doi = {10.1109/TGRS.2017.2760909} } ```
jonathan-roberts1/UC_Merced_LandUse_MultiLabel
[ "license:other", "region:us" ]
2023-02-27T15:54:34+00:00
{"license": "other", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "sequence": {"class_label": {"names": {"0": "airplane", "1": "bare soil", "2": "buildings", "3": "cars", "4": "chaparral", "5": "court", "6": "dock", "7": "field", "8": "grass", "9": "mobile home", "10": "pavement", "11": "sand", "12": "sea", "13": "ship", "14": "tanks", "15": "trees", "16": "water"}}}}], "splits": [{"name": "train", "num_bytes": 438859816.5, "num_examples": 2100}], "download_size": 416309630, "dataset_size": 438859816.5}}
2023-04-03T15:33:24+00:00
733f3c4f48046646326b688097127fe875db0c07
# Dataset Card for funsd-vqa

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/munish0838/funsd_vqa
- **Repository:** https://github.com/munish0838/FUNSD
- **Point of Contact:** [email protected]

### Dataset Summary

This dataset has been processed for use with the Donut model for DocVQA fine-tuning on the FUNSD dataset. The final dataset is in `.jsonl` file format.

### Languages

- English

## Dataset Structure

### Data Fields

- `id`: name of the image file/JSON file
- `file_name`: path of the image file
- `questions`: array of all questions corresponding to the image
- `words`: list of all words present in the image
- `bounding_boxes`: bounding boxes of all words
- `answers`: array of all answers corresponding to the image
- `ground_truth`: contains `gt_parses` in the format Donut requires for processing

## Dataset Creation

Refer to this GitHub repo: https://github.com/munish0838/FUNSD

### Source Data

https://guillaumejaume.github.io/FUNSD/
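Since the processed data ships as `.jsonl`, records can be read one JSON object per line. A sketch; the file name `train.jsonl` is an assumption, and the field names follow the list above:

```python
import json

with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one record per line
        print(record["file_name"], len(record["questions"]), "questions")
        break  # inspect only the first record
```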
munish0838/funsd-vqa
[ "task_categories:document-question-answering", "size_categories:n<1K", "language:en", "license:openrail", "region:us" ]
2023-02-27T16:01:49+00:00
{"language": ["en"], "license": "openrail", "size_categories": ["n<1K"], "task_categories": ["document-question-answering"]}
2024-02-15T03:31:07+00:00
10d052310cbb1b10211b1cb119a1184e1090cdf3
# Dataset Card for "test-audio" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
juancopi81/test-audio
[ "region:us" ]
2023-02-27T16:10:20+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "playlist_title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1052419.0, "num_examples": 3}], "download_size": 0, "dataset_size": 1052419.0}}
2023-03-03T13:52:47+00:00
c47490c6f28c70bc01d5d275029e152835e9da40
# Dataset Card for "VISBank_CleaneParsed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Yamei/VISBank_Final
[ "region:us" ]
2023-02-27T16:21:54+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "paper_id", "dtype": "int64"}, {"name": "abstract", "dtype": "string"}, {"name": "authors", "list": [{"name": "first", "dtype": "string"}, {"name": "middle", "sequence": "string"}, {"name": "last", "dtype": "string"}, {"name": "suffix", "dtype": "string"}]}, {"name": "year", "dtype": "float64"}, {"name": "arxiv_id", "dtype": "string"}, {"name": "acl_id", "dtype": "string"}, {"name": "pmc_id", "dtype": "string"}, {"name": "pubmed_id", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "venue", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "mag_id", "dtype": "string"}, {"name": "outbound_citations", "sequence": "string"}, {"name": "inbound_citations", "sequence": "string"}, {"name": "has_outbound_citations", "dtype": "bool"}, {"name": "has_inbound_citations", "dtype": "bool"}, {"name": "has_pdf_parse", "dtype": "bool"}, {"name": "s2_url", "dtype": "string"}, {"name": "has_pdf_body_text", "dtype": "float64"}, {"name": "has_pdf_parsed_abstract", "dtype": "float64"}, {"name": "has_pdf_parsed_body_text", "dtype": "float64"}, {"name": "has_pdf_parsed_bib_entries", "dtype": "float64"}, {"name": "has_pdf_parsed_ref_entries", "dtype": "float64"}, {"name": "entities", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 254427395, "num_examples": 125745}], "download_size": 133946624, "dataset_size": 254427395}}
2023-03-04T20:18:07+00:00
37fb78be2f9ed923c7ad9543bb3bb19d98057f65
# Dataset Card for Dataset Name

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[train]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
michaelpenaariet/PIdemo
[ "size_categories:n<1K", "language:en", "region:us" ]
2023-02-27T17:13:27+00:00
{"language": ["en"], "size_categories": ["n<1K"]}
2023-03-15T20:41:48+00:00
fb0a43d61e014f91c69423c9fd72ccef91c2b8e8
jancco/TestDataset1
[ "license:unknown", "region:us" ]
2023-02-27T18:14:49+00:00
{"license": "unknown"}
2023-02-27T18:14:49+00:00
e6010aa8b32734e2f2dbc11ffdeebb01f2a6858d
# Dataset Card for "MLRSNet" ## Dataset Description - **Paper:** [MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding](https://www.sciencedirect.com/science/article/pii/S0924271620302677) ### Licensing Information CC BY 4.0 ## Citation Information [MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding](https://www.sciencedirect.com/science/article/pii/S0924271620302677) ``` @article{qi2020mlrsnet, title = {MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding}, author = {Qi, Xiaoman and Zhu, Panpan and Wang, Yuebin and Zhang, Liqiang and Peng, Junhuan and Wu, Mengfan and Chen, Jialong and Zhao, Xudong and Zang, Ning and Mathiopoulos, P Takis}, year = 2020, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, publisher = {Elsevier}, volume = 169, pages = {337--350} } ```
jonathan-roberts1/MLRSNet
[ "license:cc-by-4.0", "region:us" ]
2023-02-27T18:19:58+00:00
{"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "sequence": {"class_label": {"names": {"0": "airplane", "1": "airport", "2": "bare soil", "3": "baseball diamond", "4": "basketball court", "5": "beach", "6": "bridge", "7": "buildings", "8": "cars", "9": "chaparral", "10": "cloud", "11": "containers", "12": "crosswalk", "13": "dense residential area", "14": "desert", "15": "dock", "16": "factory", "17": "field", "18": "football field", "19": "forest", "20": "freeway", "21": "golf course", "22": "grass", "23": "greenhouse", "24": "gully", "25": "habor", "26": "intersection", "27": "island", "28": "lake", "29": "mobile home", "30": "mountain", "31": "overpass", "32": "park", "33": "parking lot", "34": "parkway", "35": "pavement", "36": "railway", "37": "railway station", "38": "river", "39": "road", "40": "roundabout", "41": "runway", "42": "sand", "43": "sea", "44": "ships", "45": "snow", "46": "snowberg", "47": "sparse residential area", "48": "stadium", "49": "swimming pool", "50": "tanks", "51": "tennis court", "52": "terrace", "53": "track", "54": "trail", "55": "transmission tower", "56": "trees", "57": "water", "58": "wetland", "59": "wind turbine"}}}}], "splits": [{"name": "train", "num_bytes": 1327782862.875, "num_examples": 109161}], "download_size": 1304951717, "dataset_size": 1327782862.875}}
2023-04-03T15:34:12+00:00
7f6497517d7aec4fe9369eeeefe72bb5bd765b7d
akshayylr/skull_xray
[ "license:openrail", "region:us" ]
2023-02-27T18:23:08+00:00
{"license": "openrail"}
2023-03-13T01:55:12+00:00
5928ac565d22df80e018bee0c5ad1e25b9e70dc9
# Dataset Card for "captionary-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
eeshclusive/captionary-dataset
[ "region:us" ]
2023-02-27T18:46:46+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55042409.0, "num_examples": 162}, {"name": "test", "num_bytes": 16034324.0, "num_examples": 51}], "download_size": 14804192, "dataset_size": 71076733.0}}
2023-02-27T19:40:17+00:00
83e9cc641daf76a2a8f7605a60fa4e92b0597762
pln-udelar/uy22
[ "language:es", "license:mit", "region:us" ]
2023-02-27T19:30:24+00:00
{"language": ["es"], "license": "mit", "pretty_name": "uy22"}
2023-02-28T01:50:29+00:00
13c1c1529bda712c4432388f63d56fd53c306e79
nadlej/reuters15k
[ "task_categories:tabular-classification", "size_categories:10K<n<100K", "reuters15k", "reuters", "tabular", "region:us" ]
2023-02-27T19:49:18+00:00
{"size_categories": ["10K<n<100K"], "task_categories": ["tabular-classification"], "tags": ["reuters15k", "reuters", "tabular"]}
2023-03-01T18:52:02+00:00
3abe1044f9048306ce710180d8fefd38bf85b1ec
# Dataset Card for "biomed-fr-pubmed-en" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rntc/biomed-fr-pubmed-en
[ "region:us" ]
2023-02-27T20:24:10+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4562571188, "num_examples": 15561370}, {"name": "validation", "num_bytes": 46015018, "num_examples": 157186}], "download_size": 3088461733, "dataset_size": 4608586206}}
2023-02-27T20:51:43+00:00
c6dab4788cb6b2e62c4f462063f4f56c78a95180
# Adult The [Adult dataset](https://archive.ics.uci.edu/ml/datasets/Adult) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets). A census dataset describing personal characteristics of individuals and whether their income exceeds a threshold. # Configurations and tasks | **Configuration** | **Task** | Description | |-------------------|---------------------------|-----------------------------------------------------------------| | encoding | | Encoding dictionary showing original values of encoded features.| | income | Binary classification | Classify the person's income as over or under the threshold. | | income-no race | Binary classification | As `income`, but the `race` feature is removed. | | race | Multiclass classification | Predict the race of the individual. | # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/adult", "income")["train"] ``` # Features The target feature changes according to the selected configuration and is always the last column in the dataset. |**Feature** |**Type** | **Description** | |-------------------------------|-----------|------------------------------------------------------------| |`age` |`[int64]` | Age of the person. | |`capital_gain` |`[float64]`| Capital gained by the person. | |`capital_loss` |`[float64]`| Capital lost by the person. | |`education` |`[int8]` | Education level: the higher, the more educated the person. | |`final_weight` |`[int64]` | | |`hours_worked_per_week` |`[int64]` | Hours worked per week. | |`marital_status` |`[string]` | Marital status of the person. | |`native_country` |`[string]` | Native country of the person. | |`occupation` |`[string]` | Job of the person. | |`race` |`[string]` | Race of the person. | |`relationship` |`[string]` | | |`is_male` |`[bool]` | `True` if the person is a man, `False` otherwise. | |`workclass` |`[string]` | Type of job of the person. | |**over_threshold** |`int8` | `1` for income `>= 50k$`, `0` otherwise. |
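A quick sanity check on the `income` task; a minimal sketch assuming only the `datasets` library and the documented `over_threshold` target:

```python
from datasets import load_dataset

# Load the binary income task; the target is the last column, `over_threshold`
dataset = load_dataset("mstz/adult", "income")["train"]

# Inspect the class balance: 1 means income >= 50k$
positives = sum(dataset["over_threshold"])
print(f"{positives}/{len(dataset)} examples over the threshold")
```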
mstz/adult
[ "task_categories:tabular-classification", "size_categories:10K<n<100K", "language:en", "license:cc", "adult", "tabular_classification", "binary_classification", "multiclass_classification", "UCI", "region:us" ]
2023-02-27T21:17:48+00:00
{"language": ["en"], "license": "cc", "size_categories": ["10K<n<100K"], "task_categories": ["tabular-classification"], "pretty_name": "Adult", "tags": ["adult", "tabular_classification", "binary_classification", "multiclass_classification", "UCI"], "configs": ["encoding", "income", "income-no race", "race"]}
2023-04-15T10:37:47+00:00
644b26ae9efcb63bcb0faaee6f79e96d70f967ea
joaovaladaresf2f/teste
[ "license:openrail", "region:us" ]
2023-02-27T22:59:36+00:00
{"license": "openrail"}
2023-02-27T22:59:36+00:00
287c90b7e555d2a8e67baf0f392e8424235eee37
# Dataset Card for "ImageNet15_animals_unbalanced_augmented1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVdatasets/ImageNet15_animals_unbalanced_augmented1
[ "region:us" ]
2023-02-27T23:05:15+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "Italian greyhound", "1": "coyote, prairie wolf, brush wolf, Canis latrans", "2": "beagle", "3": "Rottweiler", "4": "hyena, hyaena", "5": "Greater Swiss Mountain dog", "6": "triceratops", "7": "French bulldog", "8": "red wolf, maned wolf, Canis rufus, Canis niger", "9": "Egyptian cat", "10": "Chihuahua", "11": "Irish terrier", "12": "tiger cat", "13": "white wolf, Arctic wolf, Canis lupus tundrarum", "14": "timber wolf, grey wolf, gray wolf, Canis lupus"}}}}, {"name": "img", "dtype": "image"}], "splits": [{"name": "validation", "num_bytes": 60570468.125, "num_examples": 1439}, {"name": "train", "num_bytes": 161485444.02117264, "num_examples": 3681}], "download_size": 222111550, "dataset_size": 222055912.14617264}}
2023-02-27T23:05:28+00:00
93aa4f6e6d06371cb51f76e4414fa91b059905ef
# Dataset Card for "latin_english_parallel" 101k translation pairs between Latin and English, split 99/1/1 as train/test/val. These have been collected roughly 66% from the Loeb Classical Library and 34% from the Vulgate translation. For those that were gathered from the Loeb Classical Library, alignment was performd manually between Source and Target sequences. Each sample is annotated with the index and file (and therefore author/work) that the sample is from. If you find errors, please feel free to submit a PR to fix them. ![alt text](distribution.png)
grosenthal/latin_english_translation
[ "task_categories:translation", "size_categories:10K<n<100K", "language:la", "language:en", "license:mit", "doi:10.57967/hf/0903", "region:us" ]
2023-02-28T00:10:51+00:00
{"language": ["la", "en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["translation"], "pretty_name": "Latin to English Translation Pairs", "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "la", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39252644, "num_examples": 99343}, {"name": "test", "num_bytes": 405056, "num_examples": 1014}, {"name": "valid", "num_bytes": 392886, "num_examples": 1014}], "download_size": 25567350, "dataset_size": 40050586}}
2023-07-17T20:59:06+00:00
8d51ad6c40af4095a750d23e6c684c3e8573bb34
# Dataset Card for "audio-diffusion-512" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tvergho/audio-diffusion-512
[ "region:us" ]
2023-02-28T00:16:35+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 896860831.5, "num_examples": 6964}], "download_size": 895892605, "dataset_size": 896860831.5}}
2023-02-28T01:03:36+00:00
442ce64039b5234704f0447107aa80987fccb922
# Dataset Card for "presidents" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tuana/presidents
[ "region:us" ]
2023-02-28T00:51:03+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "content_type", "dtype": "string"}, {"name": "meta", "struct": [{"name": "url", "dtype": "string"}, {"name": "_split_id", "dtype": "int64"}]}, {"name": "id_hash_keys", "sequence": "string"}, {"name": "score", "dtype": "null"}, {"name": "embedding", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 9366886, "num_examples": 5529}], "download_size": 4997888, "dataset_size": 9366886}}
2023-02-28T01:06:47+00:00
b72078fb93374c7f338d896ce9410513d8793c37
# Dataset Card for "guilbert_tok" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekjr1/guilbert_tok
[ "region:us" ]
2023-02-28T02:14:23+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "emotion_name", "dtype": {"class_label": {"names": {"0": "Adoring", "1": "Amused", "2": "Angry", "3": "Annoyed", "4": "Caring", "5": "Confused", "6": "Curious", "7": "Disappointed", "8": "Disgusted", "9": "Embarrassed", "10": "Excited", "11": "Guilty", "12": "Happy", "13": "Heartbroken", "14": "Loving", "15": "Nervous", "16": "Passionate", "17": "Proud", "18": "Relieved", "19": "Sad", "20": "Sorry", "21": "Surprised", "22": "Thoughtful"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "word_ids", "sequence": "int64"}], "splits": [{"name": "validation", "num_bytes": 41853427, "num_examples": 5561}, {"name": "train", "num_bytes": 984129684, "num_examples": 130680}, {"name": "test", "num_bytes": 20945009, "num_examples": 2781}], "download_size": 48041275, "dataset_size": 1046928120}}
2023-02-28T02:14:45+00:00
6965b3e3e0bd23606a5a04b5b9b85957cef7e58c
# Dataset Card for "maestro" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tvergho/maestro
[ "region:us" ]
2023-02-28T02:27:28+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 8059364821.5, "num_examples": 59668}], "download_size": 8051660600, "dataset_size": 8059364821.5}}
2023-02-28T04:05:57+00:00
b1b50ac0de05f0e0bf1088f1fae918921d4a3102
# Dataset Card for "VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_detection_caption_module_ns_1000_open_ended" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_detection_caption_module_ns_1000_open_ended
[ "region:us" ]
2023-02-28T03:31:57+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 145355, "num_examples": 1000}], "download_size": 55086, "dataset_size": 145355}}
2023-02-28T03:31:59+00:00
cb8f7b6b3b9b903ec049fa48b9914867aa20e303
Luckydepaula/piramides
[ "license:openrail", "region:us" ]
2023-02-28T03:35:35+00:00
{"license": "openrail"}
2023-02-28T03:35:35+00:00
25442daa2332b813612fad2029a8a8493bf248a2
EarthnDusk/Gambit_Dataset_and_Lora
[ "task_categories:text-to-image", "size_categories:1K<n<10K", "language:en", "license:creativeml-openrail-m", "comics", "region:us" ]
2023-02-28T03:57:49+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "pretty_name": "Gambit Lora Dataset", "tags": ["comics"]}
2023-03-01T07:31:51+00:00
0770f3084f9d960be398d9be26c4d5b62dd69b6d
# Dataset Card for "LEGIT-2023" Label key: - 0 or 1: word 0 or 1 is more legible, other unknown - 2: both words are equally legible - 3: neither word is legible
dvsth/LEGIT
[ "region:us" ]
2023-02-28T04:10:36+00:00
{"dataset_info": {"features": [{"name": "choice", "dtype": "int64"}, {"name": "k", "dtype": "int64"}, {"name": "k1", "dtype": "int64"}, {"name": "n", "dtype": "float64"}, {"name": "n1", "dtype": "float64"}, {"name": "word", "dtype": "string"}, {"name": "word0", "dtype": "string"}, {"name": "word1", "dtype": "string"}, {"name": "model0", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "img0", "dtype": "image"}, {"name": "img1", "dtype": "image"}], "splits": [{"name": "test", "num_bytes": 3686021.0, "num_examples": 3712}, {"name": "train", "num_bytes": 14024307.25, "num_examples": 14283}, {"name": "valid", "num_bytes": 3184961.75, "num_examples": 3237}], "download_size": 17726271, "dataset_size": 20895290.0}}
2023-02-28T05:19:24+00:00
eec02cd0f1c057c8a4ab8377d8c86efe11b702a0
# Dataset Card for "big-animal-dataset-high-res-embedding" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Isamu136/big-animal-dataset-high-res-embedding
[ "region:us" ]
2023-02-28T04:11:45+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}, {"name": "l14_embeddings", "sequence": "float32"}, {"name": "moco_vitb_imagenet_embeddings", "sequence": "float32"}, {"name": "moco_vitb_imagenet_embeddings_without_last_layer", "sequence": "float32"}, {"name": "ibot_b_16_embedding", "sequence": "float32"}, {"name": "ibot_b_16_last_self_attn", "sequence": "float32"}, {"name": "midas_dpt_swin2_large_384", "dtype": "image"}, {"name": "subject_noun", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3744432126.3, "num_examples": 26180}], "download_size": 3795367998, "dataset_size": 3744432126.3}}
2023-03-12T15:55:07+00:00
d9615d7f084f00bf265c8d327e5a4e03c8b3eb68
KevinG/small-wikipedia
[ "license:openrail", "region:us" ]
2023-02-28T05:31:22+00:00
{"license": "openrail"}
2023-02-28T05:31:22+00:00
13c4daeb3ac77b75cd25137946fdca9f6e138161
HuggingFaceH4/self-instruct-eval
[ "license:apache-2.0", "region:us" ]
2023-02-28T06:04:52+00:00
{"license": "apache-2.0"}
2023-02-28T06:05:58+00:00
2069fb706f36c6db8edad904fa2174e62514f4a8
# Dataset Card for "avatar_captioned-augmented" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jlbaker361/avatar_captioned-augmented
[ "region:us" ]
2023-02-28T07:29:17+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "src", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1616992137.25, "num_examples": 6894}], "download_size": 1616179230, "dataset_size": 1616992137.25}}
2023-03-19T04:48:08+00:00
4a64766145e61c4b24739314fbdd0fbe2df5d629
# Dataset Card for "internship-midi-data-science" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
roszcz/internship-midi-data-science
[ "region:us" ]
2023-02-28T07:33:02+00:00
{"dataset_info": {"features": [{"name": "notes", "struct": [{"name": "end", "sequence": "float64"}, {"name": "pitch", "sequence": "int64"}, {"name": "start", "sequence": "float64"}, {"name": "velocity", "sequence": "int64"}]}, {"name": "control_changes", "struct": [{"name": "number", "sequence": "int64"}, {"name": "time", "sequence": "float64"}, {"name": "value", "sequence": "int64"}]}, {"name": "user", "dtype": "string"}, {"name": "record_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 23504548, "num_examples": 6}], "download_size": 7471835, "dataset_size": 23504548}}
2023-02-28T07:34:05+00:00
e0b1a1e87ed21a453b5f6f4ecdc57073b0aa9ce2
# Testing
LangChainHub-Prompts/testing
[ "region:us" ]
2023-02-28T07:34:51+00:00
{}
2023-02-28T07:37:24+00:00
6e6981e64ec0e99b410704d25630fc20e71f93f1
A collection of images of Anya Taylor-Joy for the [PromptHero Academy Students](https://prompthero.com/academy/courses). ![anyataylorjoy (00).jpg](https://s3.amazonaws.com/moonup/production/uploads/1677573205992-63265d019f9d19bfd4f45031.jpeg)
prompthero-diffusion-models/anyataylorjoy
[ "region:us" ]
2023-02-28T08:26:16+00:00
{}
2023-03-20T16:09:38+00:00
a7ef3c1cf117237f3c3e6addd2a26d88fa32803c
tasksource/dynahate
[ "license:gpl", "region:us" ]
2023-02-28T08:39:13+00:00
{"license": "gpl"}
2023-02-28T08:39:47+00:00
38076f31ea05129b751c81e825b27eb19b7275f8
A collection of images of Six N. Five studio for the [PromptHero Academy Students](https://prompthero.com/academy/courses). <img src="https://s3.amazonaws.com/moonup/production/uploads/1677574020748-63265d019f9d19bfd4f45031.jpeg" width="30%"/>
prompthero-diffusion-models/sixnfive
[ "region:us" ]
2023-02-28T08:44:13+00:00
{}
2023-03-07T22:59:04+00:00
231abb2b08b44349e26df3ec61d95d0a0e0562c5
# Unsupervised malay speakers from youtube videos 10492 unique speakers with at least 75 hours of voice activities. Steps to reproduce at https://github.com/huseinzol05/malaya-speech/blob/master/data/youtube/process-youtube.ipynb ## how-to 1. Download and extract [processed-youtube.tar.gz](processed-youtube.tar.gz), each processed videos saved as pickle, `{video_name}.pkl`. 2. Each pickle file got, ```python [{'wav_data': '/home/husein/ssd2/processed-youtube-v2/"Abam_peluk_saya_lama_atas_pentas_akhir_MLM"-_Ali_Puteh_menangis_imbau_saat_manis_dengan_arwah_abang-_MdgGr7VD7w/0.mp3', 'timestamp': datetime.datetime(2023, 3, 2, 18, 45, 45, 778042), 'asr_model': ('kenapa tak mahu bangun kau abang', [0.5325799628135358], [309, 9, 399, 633, 108, 252]), 'classification_model': (array([ 3.02432757e-03, -3.64390127e-02, 2.93319039e-02, -2.84599233e-02, -5.04244901e-02, 6.03185333e-02, 7.04260264e-03, 7.36895157e-03, 2.41034012e-02, -3.31214964e-02, -1.61228217e-02, -1.92081463e-02, -1.77928973e-02, 1.05488757e-02, 5.11314301e-03, 2.08497643e-02, 2.80407351e-02, -1.34683009e-02, 1.10213496e-02, -5.76948654e-03, 2.11171638e-02, -3.10498872e-03, 1.60899870e-02, -2.22061612e-02, -3.09270490e-02, 1.03673469e-02, 2.29822248e-02, 5.44358939e-02, -9.44061391e-03, 3.24469656e-02, -1.40673192e-02, 6.55731931e-03, 1.94134321e-02, 2.31755860e-02, -8.62774719e-03, -3.72681394e-03, -3.17485556e-02, -1.12474747e-02, 1.65595114e-02, 2.31244415e-02, 3.28784771e-02, 8.52510054e-03, -6.41896739e-04, 3.13562714e-03, -3.15982029e-02, 1.72785181e-03, 1.58039071e-02, -9.93900001e-03, 2.03248486e-02, -2.98949536e-02, 3.53759155e-02, 3.06809470e-02, -3.68881435e-03, -3.98267582e-02, -2.07101982e-02, 2.51877047e-02, -2.51530181e-03, 1.06034977e-02, 1.24978041e-02, 2.35916697e-03, 1.31300613e-02, -1.62451845e-02, -2.09861826e-02, 3.17490734e-02, -1.18532358e-02, 4.25735563e-02, 4.17908467e-02, 1.21251179e-03, -3.85571155e-03, -9.50544327e-03, -7.37808086e-03, 2.63940021e-02, 1.09219365e-02, 3.05683501e-02, -4.08848785e-02, -1.71920974e-02, -1.46033484e-02, -3.29053291e-05, 3.84788848e-02, -7.86552951e-03, 1.01251132e-03, 2.72140447e-02, 2.52339337e-02, 3.39004360e-02, -1.38184745e-02, 2.60320995e-02, -1.01425601e-02, -1.16012329e-02, 4.30319924e-03, -1.01203052e-02, -4.66396799e-03, -2.64480542e-02, 3.44322808e-02, -4.64622118e-03, 1.06053520e-02, 1.37923108e-02, -2.05409434e-03, -1.19995829e-02, 2.10450366e-02, -2.87155900e-03, -1.39515549e-02, -1.51185887e-02, 2.29053162e-02, -1.78178120e-02, 1.95855577e-03, 2.37271357e-02, 2.80657201e-03, -6.08753460e-03, -2.01220363e-02, 3.22612897e-02, 1.82474777e-02, 5.31493872e-02, -7.08705634e-02, 2.76431069e-03, 1.03597697e-02, -3.53837833e-02, 1.38167264e-02, -5.91275143e-03, 1.84398554e-02, 6.05177172e-02, 1.14565976e-02, 1.56977493e-02, -1.82731878e-02, -4.58574407e-02, -1.08330613e-02, -1.16500622e-02, -1.19803764e-04, 6.48374185e-02, -1.21538760e-03, -5.41793741e-02, 1.38867721e-02, 3.52845751e-02, -2.08288375e-02, 1.03750750e-02, -2.17110049e-02, 2.29265504e-02, -1.21381739e-02, -1.47071329e-03, -4.36875001e-02, -2.25690063e-02, -4.16939743e-02, -8.39853752e-03, -2.06098761e-02, 2.30504461e-02, 3.48615423e-02, -4.18495797e-02, -2.41985917e-03, -3.18994140e-03, 1.22078639e-02, -9.50168632e-03, -1.97298196e-03, 1.30731370e-02, 2.07234323e-02, 1.08521534e-02, 2.30542179e-02, -2.54045837e-02, 1.45645533e-02, -1.08493539e-02, -1.30415503e-02, 3.29123251e-02, 3.46204527e-02, 2.58748885e-04, -1.28235819e-03, -1.32823242e-02, 5.47284493e-03, -2.62062326e-02, 2.31803600e-02, 
-2.04505119e-02, 2.32407395e-02, 2.12946888e-02, -1.28869051e-02, -6.81399694e-03, 5.68802692e-02, 4.31004271e-04, 1.67261921e-02, 2.93559525e-02, 1.32581135e-02, -9.03073605e-03, -9.38207190e-03, 1.74718127e-02, 1.72506981e-02, 5.02267219e-02, -1.32851647e-02, 5.07321544e-02, -1.87530685e-02, 4.18599546e-02, 1.50075918e-02, -2.61102356e-02, -1.59594957e-02, 1.36823149e-03, -9.64679196e-03, 1.71130225e-02], dtype=float32), 'speaker 0')}] ``` - all mp3 files are postprocessed using https://malaya-speech.readthedocs.io/en/latest/load-noise-reduction.html and https://malaya-speech.readthedocs.io/en/latest/load-speech-enhancement.html - `wav_data` is the path to the audio file; rewrite the prefix to point at your extracted directory. - `asr_model` is the prediction from the best model we have, `conformer-medium`, returning `(text, probability, subwords)`, https://malaya-speech.readthedocs.io/en/latest/load-stt-transducer-model-pt.html - `classification_model` is the prediction from the NEMO TITANET Large speaker verification model, https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/titanet_large, with streaming speaker similarity, https://malaya-speech.readthedocs.io/en/latest/huggingface-repository.html 3. Group similar speakers using a PageRank-style method (scipy.sparse.linalg.gmres), - at 90% similarity, the 10492 unique speakers reduce to 6085, https://github.com/huseinzol05/malaya-speech/blob/master/data/youtube/mapping-youtube-speakers-90.json - at 85% similarity, the 10492 unique speakers reduce to 4312, https://github.com/huseinzol05/malaya-speech/blob/master/data/youtube/mapping-youtube-speakers-85.json - at 80% similarity, the 10492 unique speakers reduce to 2912, https://github.com/huseinzol05/malaya-speech/blob/master/data/youtube/mapping-youtube-speakers-80.json Speaker names are defined as, ```python import os import pickle pkl = 'filename.pkl' with open(pkl, 'rb') as fopen: data = pickle.load(fopen) filename = os.path.split(pkl)[1].replace('.pkl', '') for result in data: speaker = result['classification_model'][1] speaker_name = f'{filename}-{speaker}' actual_speaker = unique_speakers[speaker_name] ``` Check example at https://github.com/huseinzol05/malaya-speech/blob/master/data/youtube/calculate-lengths-80.ipynb
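The `unique_speakers` mapping in the snippet above comes from one of the published grouping files; a minimal sketch, assuming the JSON maps `'{filename}-{speaker}'` keys to grouped speaker ids (inspect the file to confirm the schema):

```python
import json

# Hypothetical: load the 80%-similarity speaker grouping linked above
with open('mapping-youtube-speakers-80.json') as fopen:
    unique_speakers = json.load(fopen)

# If the assumption holds, ~2912 grouped speakers should remain
print(len(set(unique_speakers.values())))
```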
mesolitica/unsupervised-malay-youtube-speaker-diarization
[ "language:ms", "region:us" ]
2023-02-28T09:17:34+00:00
{"language": ["ms"]}
2023-03-04T13:05:16+00:00
e6adb4bf004a8e1c498b009262ef47bbc834431f
# Dataset Card for Beyond Words ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://labs.loc.gov/ - **Repository:** https://github.com/LibraryOfCongress/newspaper-navigator - **Paper:** https://arxiv.org/abs/2005.01583 - **Leaderboard:** - **Point of Contact:** [email protected] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @inproceedings{10.1145/3340531.3412767, author = {Lee, Benjamin Charles Germain and Mears, Jaime and Jakeway, Eileen and Ferriter, Meghan and Adams, Chris and Yarasavage, Nathan and Thomas, Deborah and Zwaard, Kate and Weld, Daniel S.}, title = {The Newspaper Navigator Dataset: Extracting Headlines and Visual Content from 16 Million Historic Newspaper Pages in Chronicling America}, year = {2020}, isbn = {9781450368599}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3340531.3412767}, doi = {10.1145/3340531.3412767}, abstract = {Chronicling America is a product of the National Digital Newspaper Program, a partnership between the Library of Congress and the National Endowment for the Humanities to digitize historic American newspapers. Over 16 million pages have been digitized to date, complete with high-resolution images and machine-readable METS/ALTO OCR. Of considerable interest to Chronicling America users is a semantified corpus, complete with extracted visual content and headlines. 
To accomplish this, we introduce a visual content recognition model trained on bounding box annotations collected as part of the Library of Congress's Beyond Words crowdsourcing initiative and augmented with additional annotations including those of headlines and advertisements. We describe our pipeline that utilizes this deep learning model to extract 7 classes of visual content: headlines, photographs, illustrations, maps, comics, editorial cartoons, and advertisements, complete with textual content such as captions derived from the METS/ALTO OCR, as well as image embeddings. We report the results of running the pipeline on 16.3 million pages from the Chronicling America corpus and describe the resulting Newspaper Navigator dataset, the largest dataset of extracted visual content from historic newspapers ever produced. The Newspaper Navigator dataset, finetuned visual content recognition model, and all source code are placed in the public domain for unrestricted re-use.}, booktitle = {Proceedings of the 29th ACM International Conference on Information &amp; Knowledge Management}, pages = {3055–3062}, numpages = {8}, keywords = {digital humanities, dataset, chronicling america, newspaper navigator, document analysis, information retrieval, digital libraries and archives, public domain, historic newspapers}, location = {Virtual Event, Ireland}, series = {CIKM '20} } ``` ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
biglam/loc_beyond_words
[ "task_categories:object-detection", "size_categories:1K<n<10K", "license:cc0-1.0", "lam", "newspapers", "document-layout", "arxiv:2005.01583", "region:us" ]
2023-02-28T09:34:42+00:00
{"license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection"], "pretty_name": "Beyond Words", "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "sequence": [{"name": "bw_id", "dtype": "string"}, {"name": "category_id", "dtype": {"class_label": {"names": {"0": "Photograph", "1": "Illustration", "2": "Map", "3": "Comics/Cartoon", "4": "Editorial Cartoon", "5": "Headline", "6": "Advertisement"}}}}, {"name": "image_id", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "iscrowd", "dtype": "bool"}]}], "splits": [{"name": "train", "num_bytes": 2854507, "num_examples": 2846}, {"name": "validation", "num_bytes": 731782, "num_examples": 712}], "download_size": 1200053819, "dataset_size": 3586289}, "tags": ["lam", "newspapers", "document-layout"]}
2023-03-01T11:46:54+00:00
a60f228d836fbbf990d11b4590c60b3e7f2aec2d
PoQuaD dataset
clarin-pl/poquad
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:cc-by-4.0", "region:us" ]
2023-02-28T09:46:17+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pl"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"], "pretty_name": "PoQuaD"}
2023-07-04T09:50:43+00:00
35bc7244c311becb06acb0b246b37d868b9fd86c
# Dataset Card for "sidewalk-imagery" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zhu123/sidewalk-imagery
[ "region:us" ]
2023-02-28T09:51:21+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 86083036.0, "num_examples": 10}], "download_size": 7144967, "dataset_size": 86083036.0}}
2023-02-28T09:51:25+00:00
e287a9fb356c04881fc0e2745330c562eac951c5
# Dataset Card for Pong-v4-expert-MCTS ## Table of Contents - [Supported Tasks and Baseline](#supported-tasks-and-baseline) - [Data Usage](#data-usage) - [Data Description](#data-description) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Additional Information](#additional-information) - [Who are the source data producers?](#who-are-the-source-data-producers) - [Social Impact of Dataset](#social-impact-of-dataset) - [Known Limitations](#known-limitations) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Supported Tasks and Baseline - This dataset supports training of the [Procedure Cloning (PC)](https://arxiv.org/abs/2205.10816) algorithm. - Baselines when the decision sequence length is 0: | Train loss | Test Acc | Reward | | -------------------------------------------------- | -------- | ------ | | ![feature](./sup_loss.png) | 0.90 | 20 | - Baselines when the decision sequence length is 4: | Train action loss | Train hidden state loss | Train acc (auto-regressive mode) | Reward | | ----------------------------------------------------- | ------------------------------------------------- | --------------------------------------------------- | ------ | | ![feature](./action_loss.png) | ![feature](./hs_loss.png) | ![feature](./train_acc.png) | -21 | ## Data Usage ### Data description This dataset includes 8 episodes of the Pong-v4 environment. The expert policy is [EfficientZero](https://arxiv.org/abs/2111.00210), which is able to generate MCTS hidden states. Because each observation comes with its MCTS hidden state, this dataset is suitable for Imitation Learning methods that learn from sequences, such as PC. ### Data Fields - `obs`: An Array3D containing observations from 8 trajectories of an evaluated agent. The data type is uint8 and each value is in [0, 255]. The shape of this tensor is [96, 96, 3], i.e., the channel dimension is the last dimension. - `actions`: An integer containing actions from 8 trajectories of an evaluated agent. This value is from 0 to 5. Details about this environment can be viewed at [Pong - Gym Documentation](https://www.gymlibrary.dev/environments/atari/pong/). - `hidden_state`: An Array3D containing the corresponding hidden states generated by EfficientZero, from 8 trajectories of an evaluated agent. The data type is float32. This is an example of loading the data using an iterator: ```python from safetensors import safe_open def generate_examples(filepath): data = {} with safe_open(filepath, framework="pt", device="cpu") as f: for key in f.keys(): data[key] = f.get_tensor(key) for idx in range(len(data['obs'])): yield idx, { 'observation': data['obs'][idx], 'action': data['actions'][idx], 'hidden_state': data['hidden_state'][idx], } ``` ### Data Splits There is only a training set for this dataset, as evaluation is undertaken by interacting with a simulator. ### Initial Data Collection and Normalization - This dataset was collected by an EfficientZero policy. - The standard for expert data is that each of the 8 episodes achieves a return over 20. - No normalization is applied beforehand (i.e., each observation value is a uint8 scalar in [0, 255]). ## Additional Information ### Who are the source data producers? 
[@kxzxvbk](https://huggingface.co/kxzxvbk) ### Social Impact of Dataset - This dataset can be used for Imitation Learning, especially for algorithms that learn from sequences. - Very few open-source datasets currently exist for MCTS-based policies. - This dataset can potentially promote research on sequence-based imitation learning algorithms. ### Known Limitations - This dataset is only used for academic research. - For any commercial use or other cooperation, please contact: [email protected] ### Licensing Information This dataset is under [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @misc{Pong-v4-expert-MCTS, title={{Pong-v4-expert-MCTS: OpenDILab} A dataset for Procedure Cloning algorithm using Pong-v4.}, author={Pong-v4-expert-MCTS Contributors}, publisher = {huggingface}, howpublished = {\url{https://huggingface.co/datasets/OpenDILabCommunity/Pong-v4-expert-MCTS}}, year={2023}, } ``` ### Contributions This dataset is partially based on the following repos, many thanks to their pioneering work: - https://github.com/opendilab/DI-engine - https://github.com/opendilab/LightZero Anyone who wants to contribute to this dataset, please view the [doc](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards).
OpenDILabCommunity/Pong-v4-expert-MCTS
[ "license:apache-2.0", "arxiv:2205.10816", "arxiv:2111.00210", "region:us" ]
2023-02-28T10:23:18+00:00
{"license": "apache-2.0"}
2023-04-21T08:21:34+00:00
3995d7f74b51ac8d954293d3277b9398fb0484d7
# PDB Sequences This dataset contains 780,163 protein sequences from the [RCSB Protein Data Bank](https://www.rcsb.org/).
ronig/pdb_sequences
[ "license:pddl", "region:us" ]
2023-02-28T10:46:21+00:00
{"license": "pddl"}
2023-06-24T17:33:17+00:00
37b6e9b78459bcae1667cf7c8f527a8392aa4c53
# Dataset Card ## Table of Contents - [Dataset Card](#dataset-card) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** -- - **Repository:** [github.com/pstuerner/ukraine-liveblog-data](https://github.com/pstuerner/ukraine-liveblog-data) - **Paper:** -- - **Leaderboard:** -- - **Point of Contact:** [email protected] ### Dataset Summary The "ukraine-liveblog" dataset contains a collection of news articles published on the liveblog of the popular German news website, tagesschau.de. The dataset covers the period from February 2022 to February 2023, and includes every news feed published during this time that covers the ongoing war in Ukraine. ### Supported Tasks and Leaderboards -- ### Languages The language of the dataset is German. ## Dataset Structure ### Data Instances Here is a JSON-formatted example of a typical instance in the "German Articles about the War in Ukraine" dataset: This example consists of a headline and the corresponding text separated by a colon. The headline reads "Warum Waffenlieferungen in Ostdeutschland skeptisch gesehen werden" (Why Weapons Deliveries are Viewed Skeptically in East Germany), and the text provides additional details and analysis about the topic. This format is consistent across the dataset and allows for easy identification and extraction of key information. ``` { "text": "Warum Waffenlieferungen in Ostdeutschland skeptisch gesehen werden: Die Debatten um Waffenlieferungen für die Ukraine stoßen in Ostdeutschland meist auf Ablehnung. Das lässt sich aber nicht allein mit Russlandfreundlichkeit erklären, sagt Politikwissenschaftlerin Sarah Pagung." ... } ``` ### Data Fields The "ukraine-liveblog" dataset includes the following fields: - `text`: The main body of the article, written in German. (string) ### Data Splits The dataset has been split into two sets: a training set and a validation set. The training set contains 90% of the data, or 15,083 instances, and the validation set contains the remaining 10%, or 1,676 instances. 
| | train | validation | |-------------------------|------:|-----------:| | Input Sentences | 15083 | 1676 | | Average Sentence Length | 768 | 674 | ## Dataset Creation ### Curation Rationale The creation of the dataset was motivated by a number of factors, such as the need to collect and analyze information about the conflict in Ukraine, understand how the conflict is being reported in German media, and provide a resource for NLP enthusiasts to fine-tune GPT2 on additional German data. ### Source Data The liveblog on tagesschau.de about the war in Ukraine. #### Initial Data Collection and Normalization The dataset was built using a custom Python script that leverages the newspaper and beautifulsoup4 libraries. The script was designed to scrape data from the liveblog about the war in Ukraine on tagesschau.de, starting from the latest day of the liveblog and working backwards until it reaches the first day of the liveblog. #### Who are the source language producers? The articles were written by Tagesschau reporters. ### Annotations -- #### Annotation process -- #### Who are the annotators? -- ### Personal and Sensitive Information All information is publicly available and doesn't include any personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset -- ### Discussion of Biases -- ### Other Known Limitations -- ## Additional Information ### Dataset Curators -- ### Licensing Information -- ### Citation Information -- ### Contributions --
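For quick experimentation, a minimal loading sketch (split names follow the repository metadata: `train` and `test`):

```python
from datasets import load_dataset

# Each example carries a single German `text` field: "headline: body"
dataset = load_dataset("pstuerner/ukraine-liveblog")
print(dataset["train"][0]["text"][:200])
```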
pstuerner/ukraine-liveblog
[ "task_categories:text-generation", "language:de", "german-gpt2", "region:us" ]
2023-02-28T11:09:53+00:00
{"language": ["de"], "task_categories": ["text-generation"], "pretty_name": "German Articles about the War in Ukraine", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11818583, "num_examples": 15083}, {"name": "test", "num_bytes": 1152954, "num_examples": 1676}], "download_size": 7404260, "dataset_size": 12971537}, "tags": ["german-gpt2"]}
2023-02-28T13:20:20+00:00
bb512e9cabbeceed9966f05970c31c06320a61d2
earmas/yomismo
[ "size_categories:n<1K", "language:es", "license:openrail", "region:us" ]
2023-02-28T11:50:12+00:00
{"language": ["es"], "license": "openrail", "size_categories": ["n<1K"], "pretty_name": "yomismo"}
2023-02-28T11:52:07+00:00
f7fa96cae0d2b218e602d530813d02757810e0e5
# Dataset Card for "enwiki20230101-minilml6v2-avgembeddings" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lsb/enwiki20230101-minilml6v2-avgembeddings
[ "region:us" ]
2023-02-28T12:07:18+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "avg_embed", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 31116288935, "num_examples": 6593739}], "download_size": 26588145966, "dataset_size": 31116288935}}
2023-02-28T13:01:08+00:00
50a35d2b3e8a6da3623b38f81745f958266a0422
awacke1/LOINC-Panels-and-Forms
[ "license:mit", "region:us" ]
2023-02-28T12:13:37+00:00
{"license": "mit"}
2023-02-28T12:29:13+00:00
7b23ba64ac437feabd4379db67e164af8a771e22
awacke1/LOINC-Clinical-Terminology
[ "license:mit", "region:us" ]
2023-02-28T12:16:52+00:00
{"license": "mit"}
2023-02-28T12:17:47+00:00
95bb6e51319293c14391a6e0cf087d27a73f5593
awacke1/SNOMED-Clinical-Terminology
[ "license:mit", "region:us" ]
2023-02-28T12:17:45+00:00
{"license": "mit"}
2023-02-28T13:38:37+00:00
927cb5f0c1e9295ad721a17220322e54e9f53740
ICD10-Clinical-Terminology: a PyArrow fast-search demonstration for context AI MMoE
awacke1/ICD10-Clinical-Terminology
[ "license:mit", "region:us" ]
2023-02-28T12:20:41+00:00
{"license": "mit"}
2024-01-31T01:34:01+00:00
4209c5aa9ffaee467cc727e33e35bff7ac80a212
awacke1/OMS-Clinical-Terminology
[ "license:mit", "region:us" ]
2023-02-28T12:24:42+00:00
{"license": "mit"}
2023-02-28T12:26:31+00:00
085fdc50fc7d2fad63d3733118bd80976fb7c2dd
# Dataset Card for "data11" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hts98/data11
[ "region:us" ]
2023-02-28T12:52:35+00:00
{"dataset_info": {"features": [{"name": "input_length", "dtype": "int64"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}, {"name": "labels_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2275090376.0, "num_examples": 2366}, {"name": "test", "num_bytes": 569229384.0, "num_examples": 592}], "download_size": 2212617419, "dataset_size": 2844319760.0}}
2023-02-28T12:56:06+00:00
a34c548277084e8c43a0d0c226cd0c3c10cb651a
Alegzandra/REDv2_EN
[ "license:mit", "region:us" ]
2023-02-28T13:00:16+00:00
{"license": "mit"}
2023-02-28T13:00:54+00:00
221cb39ccd7284a23fa3d0bc35da7726a0b5852f
# Dataset Card for Project Gutenberg - Multilanguage eBooks A collection of non-English language eBooks (7,907 books, about 75-80% of all the ES, DE, FR, NL, IT, PT, HU books available on the site) from the Project Gutenberg site with metadata removed. Originally collected for https://github.com/LAION-AI/Open-Assistant | LANG | EBOOKS | |----|----| | ES | 717 | | DE | 1735 | | FR | 2863 | | NL | 904 | | IT | 692 | | PT | 501 | | HU | 495 | The METADATA column contains catalogue meta information on each book as a serialized JSON: | key | original column | |----|----| | language | - | | text_id | Text# unique book identifier on Project Gutenberg as *int* | | title | Title of the book as *string* | | issued | Issued date as *string* | | authors | Authors as *string*, comma separated, sometimes with dates | | subjects | Subjects as *string*, various formats | | locc | LoCC code as *string* | | bookshelves | Bookshelves as *string*, optional | ## Source data **How was the data generated?** - A crawler (see Open-Assistant repository) downloaded the raw HTML code for each eBook based on its **Text#** id in the Gutenberg catalogue (if available) - The metadata and the body of text are not clearly separated, so an additional parser attempts to split them, then removes transcriber's notes and e-book related information from the body of text (text clearly marked as copyrighted or malformed was skipped and not collected) - The body of cleaned TEXT as well as the catalogue METADATA is then saved as a parquet file, with all columns being strings **Copyright notice:** - Some of the books are copyrighted! The crawler ignored all books with an English copyright header by utilizing a regex expression, but make sure to check the metadata for each book manually to ensure they are okay to use in your country! More information on copyright: https://www.gutenberg.org/help/copyright.html and https://www.gutenberg.org/policy/permission.html - Project Gutenberg has the following requests when using books without metadata: _Books obtained from the Project Gutenberg site should have the following legal note next to them: "This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook."_
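Since METADATA is stored as a serialized JSON string, it needs to be parsed after loading; a minimal sketch using the documented columns:

```python
import json

from datasets import load_dataset

dataset = load_dataset("sedthh/gutenberg_multilang", split="train")

row = dataset[0]
meta = json.loads(row["METADATA"])  # keys as listed in the table above
print(meta["language"], meta["title"], meta["authors"])
```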
sedthh/gutenberg_multilang
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:es", "language:de", "language:fr", "language:nl", "language:it", "language:pt", "language:hu", "license:mit", "project gutenberg", "e-book", "gutenberg.org", "region:us" ]
2023-02-28T13:25:31+00:00
{"language": ["es", "de", "fr", "nl", "it", "pt", "hu"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "pretty_name": "Project Gutenberg eBooks in different languages", "dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}, {"name": "METADATA", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3127780102, "num_examples": 7907}], "download_size": 1911528348, "dataset_size": 3127780102}, "tags": ["project gutenberg", "e-book", "gutenberg.org"]}
2023-03-16T14:22:26+00:00
a3beb1e148910f03335648c47508d739d124a924
```bib @inproceedings{mutual, title = "MuTual: A Dataset for Multi-Turn Dialogue Reasoning", author = "Cui, Leyang and Wu, Yu and Liu, Shujie and Zhang, Yue and Zhou, Ming" , booktitle = "Proceedings of the 58th Conference of the Association for Computational Linguistics", year = "2020", publisher = "Association for Computational Linguistics", } ```
tasksource/mutual
[ "region:us" ]
2023-02-28T13:26:25+00:00
{}
2023-02-28T13:27:49+00:00
d4c5cf7870e6fbfb40f03673a4e3e980f92b60ee
# Dataset card for Leetcode Performance Dataset
Saauan/leetcode-performance
[ "task_categories:text-generation", "size_categories:n<1K", "license:cc0-1.0", "region:us" ]
2023-02-28T13:51:41+00:00
{"license": "cc0-1.0", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "Leetcode performance dataset"}
2023-03-01T11:03:09+00:00
ea0275a074c5f120229bde506fed97e3fc91a456
# Dataset Card for "push_to_hub_empty" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
polinaeterna/push_to_hub_empty
[ "region:us" ]
2023-02-28T14:03:57+00:00
{"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 48, "num_examples": 3}], "download_size": 1300, "dataset_size": 48}, "configs_kwargs": {"config_name": "default", "data_dir": "default"}}
2023-02-28T14:04:03+00:00
28973b04f28fd7be4a6186a042bc26159d4366ca
# Dataset Card for Project Gutenberg - English Language eBooks A collection of English language eBooks (48,284 rows, 80%+ of all English language books available on the site) from the Project Gutenberg site with metadata removed. Originally collected for https://github.com/LAION-AI/Open-Assistant (follows the OpenAssistant training format) The METADATA column contains catalogue meta information on each book as a serialized JSON: | key | original column | |----|----| | language | - | | text_id | Text# unique book identifier on Project Gutenberg as *int* | | title | Title of the book as *string* | | issued | Issued date as *string* | | authors | Authors as *string*, comma separated, sometimes with dates | | subjects | Subjects as *string*, various formats | | locc | LoCC code as *string* | | bookshelves | Bookshelves as *string*, optional | ## Source data **How was the data generated?** - A crawler (see Open-Assistant repository) downloaded the raw HTML code for each eBook based on its **Text#** id in the Gutenberg catalogue (if available) - The metadata and the body of text are not clearly separated, so an additional parser attempts to split them, then removes transcriber's notes and e-book related information from the body of text (text clearly marked as copyrighted or malformed was skipped and not collected) - The body of cleaned TEXT as well as the catalogue METADATA is then saved as a parquet file, with all columns being strings **Copyright notice:** - Some of the books are copyrighted! The crawler ignored all books with an English copyright header by utilizing a regex expression, but make sure to check the metadata for each book manually to ensure they are okay to use in your country! More information on copyright: https://www.gutenberg.org/help/copyright.html and https://www.gutenberg.org/policy/permission.html - Project Gutenberg has the following requests when using books without metadata: _Books obtained from the Project Gutenberg site should have the following legal note next to them: "This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook."_
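The English dump is large (~18 GB of text), so streaming can be preferable to a full download; a minimal sketch:

```python
from datasets import load_dataset

# Stream instead of downloading the full ~18 GB of shards
dataset = load_dataset("sedthh/gutenberg_english", split="train", streaming=True)

first = next(iter(dataset))
print(first["METADATA"])  # serialized JSON, same schema as the table above
```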
sedthh/gutenberg_english
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:mit", "project gutenberg", "e-book", "gutenberg.org", "region:us" ]
2023-02-28T14:15:24+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Project Gutenberg eBooks in English", "dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}, {"name": "METADATA", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18104255935, "num_examples": 48284}], "download_size": 10748877194, "dataset_size": 18104255935}, "tags": ["project gutenberg", "e-book", "gutenberg.org"]}
2023-03-17T09:50:22+00:00
1aa47a029af3b80d70c71ed6924d1170c7283a5e
# Pseudolabeled Malay YouTube audiobooks using Whisper Large V3 Notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/speech-to-text-semisupervised/youtube-audiobook 1. Audio is split into 10-second utterances using WebRTC VAD. ## how-to Download the files, ```bash wget https://huggingface.co/datasets/mesolitica/semisupervised-audiobook/resolve/main/bukan-kerana-aku-5secs-noisy.tar.gz wget https://huggingface.co/datasets/mesolitica/semisupervised-audiobook/resolve/main/bukan-kerana-aku-noisy.tar.gz wget https://huggingface.co/datasets/mesolitica/semisupervised-audiobook/resolve/main/harry-potter-5secs-noisy.tar.gz wget https://huggingface.co/datasets/mesolitica/semisupervised-audiobook/resolve/main/harry-potter-noisy.tar.gz wget https://huggingface.co/datasets/mesolitica/semisupervised-audiobook/resolve/main/teme-5secs-noisy.tar.gz wget https://huggingface.co/datasets/mesolitica/semisupervised-audiobook/resolve/main/teme-noisy.tar.gz wget https://huggingface.co/datasets/mesolitica/semisupervised-audiobook/resolve/main/semisupervised-audiobook-part1.json wget https://huggingface.co/datasets/mesolitica/semisupervised-audiobook/resolve/main/semisupervised-audiobook-part2.json ```
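After downloading, the pseudolabel files can be inspected directly; a minimal sketch (the exact JSON schema is an assumption, check the file before relying on it):

```python
import json

# Hypothetical peek at one of the pseudolabel files listed above
with open('semisupervised-audiobook-part1.json') as fopen:
    data = json.load(fopen)

print(type(data), len(data))  # inspect the top-level structure before use
```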
mesolitica/semisupervised-audiobook
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "language:ms", "region:us" ]
2023-02-28T14:41:41+00:00
{"language": ["ms"], "task_categories": ["automatic-speech-recognition", "text-to-speech"]}
2024-01-01T04:21:23+00:00
1ec192e529973f8c73e46d65f0a8b10109ab5d74
# Dataset Card for "push_to_hub_config_none_be56a8b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
polinaeterna/push_to_hub_config_none_be56a8b
[ "region:us" ]
2023-02-28T15:10:26+00:00
{"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 48, "num_examples": 3}], "download_size": 950, "dataset_size": 48}, "configs_kwargs": {"config_name": "default", "data_dir": "default"}}
2023-02-28T15:10:36+00:00
21e3a6e5a8a4ecff6f08b815d9380b2ac5c08b8d
# Dataset Card for "dog_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuvalkirstain/dog_small
[ "region:us" ]
2023-02-28T15:14:28+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog"}}}}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3450627.0, "num_examples": 100}, {"name": "validation", "num_bytes": 3450627.0, "num_examples": 100}, {"name": "test", "num_bytes": 3450627.0, "num_examples": 100}], "download_size": 10146339, "dataset_size": 10351881.0}}
2023-02-28T15:14:32+00:00
5bf046e5ecd15b7bca0edff7eb38e594e2e1f88f
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
ArkLade/housemix1
[ "task_categories:zero-shot-classification", "size_categories:n<1K", "license:openrail", "region:us" ]
2023-02-28T15:16:21+00:00
{"license": "openrail", "size_categories": ["n<1K"], "task_categories": ["zero-shot-classification"], "pretty_name": "tiny_demo"}
2023-03-10T22:22:07+00:00
a62162cc5d869af69bf3d84b1bd2db898c0f6b5d
# Dataset Card for "AID_MultiLabel" ## Dataset Description - **Paper:** [AID: A benchmark data set for performance evaluation of aerial scene classification](https://ieeexplore.ieee.org/iel7/36/4358825/07907303.pdf) - **Paper:** [Relation Network for Multi-label Aerial Image Classification]() ### Licensing Information CC0: Public Domain ## Citation Information Imagery: [AID: A benchmark data set for performance evaluation of aerial scene classification](https://ieeexplore.ieee.org/iel7/36/4358825/07907303.pdf) Multilabels: [Relation Network for Multi-label Aerial Image Classification](https://ieeexplore.ieee.org/iel7/36/4358825/08986556.pdf) ``` @article{xia2017aid, title = {AID: A benchmark data set for performance evaluation of aerial scene classification}, author = {Xia, Gui-Song and Hu, Jingwen and Hu, Fan and Shi, Baoguang and Bai, Xiang and Zhong, Yanfei and Zhang, Liangpei and Lu, Xiaoqiang}, year = 2017, journal = {IEEE Transactions on Geoscience and Remote Sensing}, publisher = {IEEE}, volume = 55, number = 7, pages = {3965--3981} } @article{hua2019relation, title = {Relation Network for Multi-label Aerial Image Classification}, author = {Hua, Yuansheng and Mou, Lichao and Zhu, Xiao Xiang}, year = {DOI:10.1109/TGRS.2019.2963364}, journal = {IEEE Transactions on Geoscience and Remote Sensing} } ```
jonathan-roberts1/AID_MultiLabel
[ "task_categories:image-classification", "task_categories:zero-shot-image-classification", "license:cc0-1.0", "region:us" ]
2023-02-28T15:22:36+00:00
{"license": "cc0-1.0", "task_categories": ["image-classification", "zero-shot-image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "sequence": {"class_label": {"names": {"0": "airplane", "1": "bare soil", "2": "buildings", "3": "cars", "4": "chaparral", "5": "court", "6": "dock", "7": "field", "8": "grass", "9": "mobile home", "10": "pavement", "11": "sand", "12": "sea", "13": "ship", "14": "tanks", "15": "trees", "16": "water"}}}}], "splits": [{"name": "train", "num_bytes": 278244208, "num_examples": 3000}], "download_size": 278126146, "dataset_size": 278244208}}
2023-04-03T15:38:58+00:00
6e65beb2574aa414985af412848c14bda810eeb2
# Dataset Card for "banking77-topics-setfit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dvilasuero/banking77-topics-setfit
[ "region:us" ]
2023-02-28T15:45:50+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Other", "1": "atm", "2": "balance", "3": "card", "4": "exchange rate", "5": "pin", "6": "top up", "7": "transfer"}}}}], "splits": [{"name": "train", "num_bytes": 9783.2, "num_examples": 156}, {"name": "test", "num_bytes": 2445.8, "num_examples": 39}], "download_size": 10209, "dataset_size": 12229.0}}
2023-02-28T15:54:34+00:00
677ea849b9b0abc29fe97d599093d293e2f39569
# Dataset Card for "MultiScene" ## Dataset Description - **Paper** [MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images](https://ieeexplore.ieee.org/iel7/36/4358825/09537917.pdf) - **Split** Clean ### Split Information This HuggingFace dataset repository contains just the 'Clean' split. ### Licensing Information MIT. ## Citation Information [MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images](https://ieeexplore.ieee.org/iel7/36/4358825/09537917.pdf) ``` @article{hua2021multiscene, title = {MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images}, author = {Hua, Y. and Mou, L. and Jin, P. and Zhu, X. X.}, year = {in press}, journal = {IEEE Transactions on Geoscience and Remote Sensing} } ```
jonathan-roberts1/MultiScene
[ "task_categories:image-classification", "task_categories:zero-shot-image-classification", "license:mit", "region:us" ]
2023-02-28T16:13:48+00:00
{"license": "mit", "task_categories": ["image-classification", "zero-shot-image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "sequence": {"class_label": {"names": {"0": "apron", "1": "baseball field", "2": "basketball field", "3": "beach", "4": "bridge", "5": "cemetery", "6": "commercial", "7": "farmland", "8": "woodland", "9": "golf course", "10": "greenhouse", "11": "helipad", "12": "lake or pond", "13": "oil field", "14": "orchard", "15": "parking lot", "16": "park", "17": "pier", "18": "port", "19": "quarry", "20": "railway", "21": "residential", "22": "river", "23": "roundabout", "24": "runway", "25": "soccer", "26": "solar panel", "27": "sparse shrub", "28": "stadium", "29": "storage tank", "30": "tennis court", "31": "train station", "32": "wastewater plant", "33": "wind turbine", "34": "works", "35": "sea"}}}}], "splits": [{"name": "train", "num_bytes": 867506522, "num_examples": 14000}], "download_size": 867005851, "dataset_size": 867506522}}
2023-04-03T15:15:59+00:00
7b134e040da471d795aeeef22635bc529606201b
# Dataset Card for semantic-domains-greek-lemmatized

## Dataset Description

- **Point of Contact:** https://huggingface.co/ryderwishart / https://github.com/ryderwishart

### Dataset Summary

Semantic domains aligned to tokens, broken down by sentence. Tokens have been lemmatized according to data in [Clear-Bible/macula-greek](https://github.com/Clear-Bible/macula-greek). Domains are based on Louw and Nida's semantic domains for the Greek New Testament.

### Languages

Greek, Hellenistic Greek, Koine Greek, Greek of the New Testament

## Dataset Structure

### Data Instances

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'tags', 'labels'],
        num_rows: 6408
    })
    test: Dataset({
        features: ['tokens', 'tags', 'labels'],
        num_rows: 801
    })
    eval: Dataset({
        features: ['tokens', 'tags', 'labels'],
        num_rows: 802
    })
})
```

### Data Fields

`tokens`: plaintext words (split only by whitespace); e.g.,

```
['δέ', 'ὁ', 'ἀποκρίνομαι', 'εἷς', 'αὐτός', 'λέγω', 'ἑταῖρος', 'οὐ', 'ἀδικέω', 'σύ', 'οὐχί', 'δηνάριον', 'συμφωνέω', 'ἐγώ']
```

`tags`: integer IDs for each semantic domain (use these for training a model).

`labels`: label strings for each tag; e.g.,

```
['89.124', '92.24', '33.28', '92.22', '92.11', '33.69', '34.16', '69.3', '88.128 88.22', '92.6', '69.12', '6.75', '31.15', '92.1']
```

### Data Splits

Data is split into train (75%), test (12.5%), and evaluation (12.5%) splits.

## Dataset Creation

Greek words are based on the Nestle1904 base text, which is in the public domain. More information about the meanings of the semantic domain labels can be found online [here](https://www.laparola.net/greco/louwnida.php), or by consulting Louw and Nida's lexicon.

## Considerations for Using the Data

### Social Impact of Dataset

This data may be used to further Christ's kingdom and glorify God.

### Other Known Limitations

Louw and Nida's semantic domains have some known limitations, discussed [in this paper](https://academic.oup.com/ijl/article/31/4/394/5070421).
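A minimal usage sketch, assuming the repository loads directly with `datasets.load_dataset`; the field names come from the Data Fields section above.

```python
from datasets import load_dataset

ds = load_dataset("ryderwishart/semantic-domains-greek-lemmatized")

example = ds["train"][0]

# Pair each lemmatized token with its Louw-Nida domain label.
for token, label in zip(example["tokens"], example["labels"]):
    print(f"{token}\t{label}")

# `tags` holds the integer domain IDs intended for model training.
print(example["tags"][:5])
```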
ryderwishart/semantic-domains-greek-lemmatized
[ "task_categories:token-classification", "size_categories:1K<n<10K", "language:el", "region:us" ]
2023-02-28T16:20:50+00:00
{"language": ["el"], "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "pretty_name": "Semantic Domains of the Greek New Testament (Lemmatized)"}
2023-02-28T16:42:48+00:00
77ab3dc8620bbe91593f3094643d0034d97c0478
# Dataset Card for "70000_method2test_tokonized_ForCausalLM" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Minata/70000_method2test_tokonized_ForCausalLM
[ "region:us" ]
2023-02-28T16:30:07+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 598079592, "num_examples": 89694}], "download_size": 109394438, "dataset_size": 598079592}}
2023-02-28T19:26:58+00:00
ebab508fca45759713df14887b41f9d3fcd2a2a3
rlacombe/ClimateNet
[ "license:mit", "region:us" ]
2023-02-28T16:51:20+00:00
{"license": "mit"}
2023-04-01T00:18:13+00:00
c296aa9b4baa4de60e36cee7b57333b41746a1ac
This dataset is a fork of [https://huggingface.co/datasets/iapp_wiki_qa_squad](https://huggingface.co/datasets/iapp_wiki_qa_squad), made for Open Assistant.

Pull request: [Add iapp_wiki_qa_squad to datasets #1903](https://github.com/LAION-AI/Open-Assistant/pull/1903)
wannaphong/iapp_wiki_qa_squad_oa
[ "language:th", "license:mit", "Open Assistant", "region:us" ]
2023-02-28T17:14:56+00:00
{"language": ["th"], "license": "mit", "tags": ["Open Assistant"]}
2023-02-28T17:30:16+00:00
73c3e2d6acd363dcd79a9641a09e8e258d04df01
# Dataset Card for "biomed-fr-small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rntc/biomed-fr-small
[ "region:us" ]
2023-02-28T17:29:18+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 289402750, "num_examples": 1393084}, {"name": "validation", "num_bytes": 2851257, "num_examples": 14072}], "download_size": 180185566, "dataset_size": 292254007}}
2023-02-28T18:05:21+00:00
8432a6615939d947fce807716ed89ace20befbdd
First attempt at data generation for Toolformer with retrieval, calculator, and calendar tools. Don't expect too much magic.

The C4 (en) variant was used to generate this data.

How to parse these: each item in the dataset comes with three components:

- `file_index`: index of the streamed C4 (en) file
- `text`: the complete text input to generation
- `x_outputs`: a list of `[score, token index, API call, API return]` entries, where the token index is relative to the GPT-J tokenizer
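A minimal parsing sketch, assuming the repository loads via `datasets.load_dataset` with a `train` split, and that each `x_outputs` entry follows the four-element layout described above.

```python
from datasets import load_dataset

ds = load_dataset("dmayhem93/toolformer_raw_v0", split="train")  # split name is an assumption

item = ds[0]
print(item["file_index"])   # index of the streamed C4 (en) file
print(item["text"][:200])   # beginning of the full generation input

# Each entry is assumed to be [score, token_index, api_call, api_return],
# with token_index relative to the GPT-J tokenizer.
for score, token_index, api_call, api_return in item["x_outputs"]:
    print(score, token_index, api_call, api_return)
```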
dmayhem93/toolformer_raw_v0
[ "region:us" ]
2023-02-28T17:50:11+00:00
{}
2023-02-28T19:12:34+00:00
630ba77147a6edd5a968b80409c9b0d6ba477d35
promact/shabbattime
[ "license:openrail", "region:us" ]
2023-02-28T17:50:51+00:00
{"license": "openrail"}
2023-02-28T17:52:22+00:00
c32cf1ea353dd5953d126102cd5750f06d9f4018
# Dataset Card for "text2text_translation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
davidberenstein1957/text2text_translation
[ "region:us" ]
2023-02-28T18:25:19+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5907.416666666667, "num_examples": 19}, {"name": "test", "num_bytes": 1554.5833333333333, "num_examples": 5}], "download_size": 12717, "dataset_size": 7462.0}}
2023-02-28T18:25:29+00:00
fe71e356c4673560fb0eef74be9e2408a782462a
# Dataset Card for "ImageNet15_animals_unbalanced_aug1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVdatasets/ImageNet15_animals_unbalanced_aug1
[ "region:us" ]
2023-02-28T18:47:49+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "Italian_greyhound", "1": "Coyote", "2": "Beagle", "3": "Rottweiler", "4": "Hyena", "5": "Greater_Swiss_Mountain_dog", "6": "Triceratops", "7": "French_bulldog", "8": "Red_wolf", "9": "Egyptian_cat", "10": "Chihuahua", "11": "Irish_terrier", "12": "Tiger_cat", "13": "White_wolf", "14": "Timber_wolf"}}}}, {"name": "img", "dtype": "image"}, {"name": "is_generated", "dtype": "bool"}], "splits": [{"name": "validation", "num_bytes": 60570648.125, "num_examples": 1439}, {"name": "train", "num_bytes": 174270537.875, "num_examples": 3705}], "download_size": 234762621, "dataset_size": 234841186.0}}
2023-02-28T18:48:02+00:00
df763835149d57dc61a8ba7c50ae4b428ce77720
# Dataset Card for "ImageNet15_animals_unbalanced_aug2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVdatasets/ImageNet15_animals_unbalanced_aug2
[ "region:us" ]
2023-02-28T18:55:29+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "Italian_greyhound", "1": "Coyote", "2": "Beagle", "3": "Rottweiler", "4": "Hyena", "5": "Greater_Swiss_Mountain_dog", "6": "Triceratops", "7": "French_bulldog", "8": "Red_wolf", "9": "Egyptian_cat", "10": "Chihuahua", "11": "Irish_terrier", "12": "Tiger_cat", "13": "White_wolf", "14": "Timber_wolf"}}}}, {"name": "img", "dtype": "image"}, {"name": "is_generated", "dtype": "bool"}], "splits": [{"name": "validation", "num_bytes": 60570648.125, "num_examples": 1439}, {"name": "train", "num_bytes": 186912186.125, "num_examples": 3735}], "download_size": 247404644, "dataset_size": 247482834.25}}
2023-02-28T18:55:41+00:00
d30adfbc901341a9c893028e38a2e923b7e36e22
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Question Answering
* Model: huggingface-course/bert-finetuned-squad
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@awacke1](https://huggingface.co/awacke1) for evaluating this model.
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-e7f19d-38131101061
[ "autotrain", "evaluation", "region:us" ]
2023-02-28T18:56:10+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "huggingface-course/bert-finetuned-squad", "metrics": ["accuracy", "bleu"], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2023-02-28T18:57:05+00:00
8e4da24c7537deb31b25a750603d2542ecb308c5
# Dataset Card for "semantic-domains-greek" See dataset card for [semantic-domains-greek-lemmatized](https://huggingface.co/datasets/ryderwishart/semantic-domains-greek-lemmatized). The only difference between these datasets is that this dataset (semantic-domains-greek) does not use lemmatized tokens.
ryderwishart/semantic-domains-greek
[ "region:us" ]
2023-02-28T19:01:21+00:00
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": "int64"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3694559, "num_examples": 6408}, {"name": "test", "num_bytes": 444391, "num_examples": 801}, {"name": "eval", "num_bytes": 366650, "num_examples": 802}], "download_size": 1334579, "dataset_size": 4505600}}
2023-03-02T19:33:54+00:00
f47fc4902afddd4ab64cee449229fa916efa36ee
# Dataset Card for "flores200_val_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bri25yu/flores200_val_test
[ "region:us" ]
2023-02-28T19:08:17+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "source_lang", "dtype": "string"}, {"name": "target_lang", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "val", "num_bytes": 2132022.3333333335, "num_examples": 5000}, {"name": "test", "num_bytes": 4264044.666666667, "num_examples": 10000}], "download_size": 4975535, "dataset_size": 6396067.0}}
2023-03-01T00:12:01+00:00
e809d631e29266f5e4bc84bae767c8439d97dc77
kndSet & kndSet_good_only: 宵崎奏 (~160p; 原图) yada_train_v1: ai生成图片, 含bad anatomy tagging (1024*1560; 原图) onimai: - danbooru + wd tags, 按概率排序后去重: - `onii-chan wa oshimai!` → `onimai` - `oyama mahiro`, `hozuki kaede`, `oyama mihari`
trojblue/public_data
[ "license:bigscience-openrail-m", "region:us" ]
2023-02-28T19:13:44+00:00
{"license": "bigscience-openrail-m"}
2023-10-03T18:29:37+00:00
bcc742e044525849586077272f24837906582e34
Druna/images
[ "license:eupl-1.1", "region:us" ]
2023-02-28T19:43:30+00:00
{"license": "eupl-1.1"}
2023-03-24T18:10:02+00:00
2a986401c5c733a102693d3ed833c3431b780747
# Dataset Card for "toolformer-v0-postprocessed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dmayhem93/toolformer-v0-postprocessed
[ "region:us" ]
2023-02-28T19:50:26+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 79229133, "num_examples": 2245}], "download_size": 33861921, "dataset_size": 79229133}}
2023-02-28T19:50:45+00:00
1c82309733e073b2c8052f62c227d24eab15ca3b
# Dataset Card for "coco-500" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
comet-team/coco-500
[ "region:us" ]
2023-02-28T20:00:33+00:00
{"dataset_info": {"features": [{"name": "row-id", "dtype": "int32"}, {"name": "ID", "dtype": "int32"}, {"name": "Image", "dtype": "image"}, {"name": "Score", "dtype": "float32"}, {"name": "Confidence", "dtype": "float32"}, {"name": "Filename", "dtype": "string"}, {"name": "Category 5", "dtype": "string"}, {"name": "Category 10", "dtype": "string"}, {"name": "Image--metadata", "dtype": "large_string"}], "splits": [{"name": "train", "num_bytes": 247000470.0, "num_examples": 500}], "download_size": 246448541, "dataset_size": 247000470.0}}
2023-02-28T20:06:22+00:00
fc54dde41ab160c941f29be4c5c2709490c04e73
512x512
xkkkk/regularization_images
[ "region:us" ]
2023-02-28T20:04:47+00:00
{}
2023-03-16T11:13:13+00:00
00dd4cff441611bf7dc9e83edb622ebe9e6e4b67
# Dataset Card for "maestro-v1-sustain" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
roszcz/maestro-v1-sustain
[ "region:us" ]
2023-02-28T20:38:48+00:00
{"dataset_info": {"features": [{"name": "notes", "struct": [{"name": "duration", "sequence": "float64"}, {"name": "end", "sequence": "float64"}, {"name": "pitch", "sequence": "int64"}, {"name": "start", "sequence": "float64"}, {"name": "velocity", "sequence": "int64"}]}, {"name": "composer", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "year", "dtype": "int64"}, {"name": "midi_filename", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 29686362, "num_examples": 177}, {"name": "validation", "num_bytes": 25599834, "num_examples": 137}, {"name": "train", "num_bytes": 226534277, "num_examples": 962}], "download_size": 87287914, "dataset_size": 281820473}}
2023-04-23T12:35:49+00:00
8458ebc86baf82518bed96d9bca39373892acf2d
# Dataset Card for "cmu_wiki_qa" A filtered / cleaned version of the http://www.cs.cmu.edu/~ark/QA-data/ Q&A dataset, which provides manually-generated factoid questions from Wikipedia articles. **Acknowledgments** These data were collected by Noah Smith, Michael Heilman, Rebecca Hwa, Shay Cohen, Kevin Gimpel, and many students at Carnegie Mellon University and the University of Pittsburgh between 2008 and 2010. Their research project was supported by NSF IIS-0713265 (to Smith), an NSF Graduate Research Fellowship (to Heilman), NSF IIS-0712810 and IIS-0745914 (to Hwa), and Institute of Education Sciences, U.S. Department of Education R305B040063 (to Carnegie Mellon). [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sedthh/cmu_wiki_qa
[ "task_categories:question-answering", "task_categories:summarization", "size_categories:1K<n<10K", "language:en", "license:mit", "Carnegie Mellon University", "University of Pittsburgh", "Wikipedia", "Q&A", "region:us" ]
2023-02-28T20:46:15+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "summarization"], "pretty_name": "Question-Answer Dataset", "dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}, {"name": "METADATA", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 410246, "num_examples": 1610}], "download_size": 105516, "dataset_size": 410246}, "tags": ["Carnegie Mellon University", "University of Pittsburgh", "Wikipedia", "Q&A"]}
2023-02-28T20:46:45+00:00
b3b3e1b104d413b97d68985b466713f8b09eecde
# Dataset Card for "ubuntu_dialogue_qa" Filtered the Ubuntu dialogue chatlogs from https://www.kaggle.com/datasets/rtatman/ubuntu-dialogue-corpus to include Q&A pairs **ONLY** **Acknowledgements** This dataset was ORIGINALLY collected by Ryan Lowe, Nissan Pow , Iulian V. Serban† and Joelle Pineau. It is made available here under the Apache License, 2.0. If you use this data in your work, please include the following citation: Ryan Lowe, Nissan Pow, Iulian V. Serban and Joelle Pineau, "The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems", SIGDial 2015. URL: http://www.sigdial.org/workshops/conference16/proceedings/pdf/SIGDIAL40.pdf
sedthh/ubuntu_dialogue_qa
[ "task_categories:question-answering", "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:mit", "ubuntu", "forum", "linux", "chat", "region:us" ]
2023-02-28T20:49:12+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-generation"], "pretty_name": "Q&A from the Ubuntu Dialogue Corpus", "dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}, {"name": "METADATA", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4021291, "num_examples": 16181}], "download_size": 2157548, "dataset_size": 4021291}, "tags": ["ubuntu", "forum", "linux", "chat"]}
2023-02-28T20:50:15+00:00
eefe292fe4eec3bcc82a59c662bb8380510356cf
Isotonic/human_assistant_conversation
[ "task_categories:text-generation", "task_categories:conversational", "size_categories:100K<n<1M", "language:en", "language:es", "language:zh", "license:afl-3.0", "region:us" ]
2023-02-28T20:59:35+00:00
{"language": ["en", "es", "zh"], "license": "afl-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2724591096.91667, "num_examples": 1494223}, {"name": "test", "num_bytes": 681148230.08333, "num_examples": 373556}], "download_size": 1996990227, "dataset_size": 3405739327.0}}
2023-08-31T06:31:15+00:00
fc50699d275ed86a072f1166f7390473176719fd
This is the blind eval dataset of high-quality, diverse, human-written instructions with demonstrations. We will be using this for step 3 evaluations in our RLHF pipeline.
HuggingFaceH4/instruction-dataset
[ "license:apache-2.0", "region:us" ]
2023-02-28T21:26:43+00:00
{"license": "apache-2.0"}
2023-02-28T22:30:11+00:00
24ff0e69deecb40acaec51ea770c82d01b1a5e08
pradeep12/qwertyqwe
[ "license:openrail", "region:us" ]
2023-02-28T21:45:15+00:00
{"license": "openrail"}
2023-02-28T21:45:58+00:00
c39c8850d2443422d0e8f2fa1672f658bb96639a
speedoflight/Shapez-io-shape-thing-dataset
[ "license:unlicense", "region:us" ]
2023-02-28T23:25:08+00:00
{"license": "unlicense"}
2023-02-28T23:35:15+00:00