Columns: sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts
f35bbd040beb2795b1d712c518e4f474083ab44a
|
# Dataset Card for LLeQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [maastrichtlawtech/lleqa](https://github.com/maastrichtlawtech/lleqa)
- **Paper:** [Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models](https://arxiv.org/abs/2309.17050)
- **Point of Contact:** [Maastricht Law & Tech Lab]([email protected])
### Dataset Summary
The Long-form Legal Question Answering (LLeQA) dataset is a French-native expert-annotated dataset for studying legal question answering. LLeQA builds upon [BSARD](https://huggingface.co/datasets/maastrichtlawtech/bsard), an information retrieval dataset comprising 1,108 legal questions labeled with relevant provisions from a corpus of 22,633 Belgian law articles, and enhances it in two ways:
1. We introduce 760 new legal questions (+69%) and 5,308 additional statutory articles (+23%).
2. We supplement the data with new types of annotations, including an exhaustive taxonomy for the question, the jurisdictions concerned, the exact paragraph-level references within the relevant articles, and a comprehensive answer written by seasoned legal professionals.
Owing to the rich variety of its annotations, LLeQA serves as a multifaceted resource that extends its utility beyond legal question answering and has the potential to catalyze significant progress in various legal tasks, such as legal inquiry classification, legal topic modeling, and legal information retrieval.
### Supported Tasks and Leaderboards
- `question-answering`: The dataset can be used to train a model for long-form question-answering (LFQA) in the legal domain, which consists in comprehensively answering a short legal question in free form based on a given context of one or several statutory articles. Success on this task is typically measured by achieving high [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) or [METEOR](https://huggingface.co/spaces/evaluate-metric/meteor) scores, even though these metrics are not always correlated with human judgment (a small evaluation sketch follows this list).
- `text-retrieval`: The dataset can be used to train a model for information retrieval (IR) in the legal domain, which consists in retrieving relevant statutory articles based on a given legal question. Success on this task is typically measured by achieving high [recall](https://huggingface.co/spaces/evaluate-metric/recall) and [precision](https://huggingface.co/spaces/evaluate-metric/precision) scores at various cut-offs.
- `text-classification`: The dataset can be used to train a model for text classification in the legal domain, which consists in classifying a legal question into a predefined set of topics. Success on this task is typically measured by achieving high [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) scores.
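To illustrate how the LFQA metrics above can be computed, here is a minimal sketch using the Hugging Face `evaluate` library; the prediction and reference strings are placeholders rather than actual LLeQA samples.

```python
import evaluate

# Placeholder model output and gold answer (not taken from LLeQA).
predictions = ["Oui, le juge fixe la durée de la pension dans le jugement de divorce."]
references = ["Oui, c'est le juge qui fixe cette limite dans le jugement de divorce."]

# Long-form answers are typically scored with ROUGE and METEOR.
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")
print(rouge.compute(predictions=predictions, references=references))
print(meteor.compute(predictions=predictions, references=references))
```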
### Languages
The text in the dataset is in French, as spoken in Wallonia and the Brussels-Capital region. The associated BCP-47 code is `fr-BE`.
## Dataset Structure
### Data Instances
A `question` sample typically comprises a unique identifier (*int*), the question itself (*str*), the regions concerned (*List[str]*), related topics (*List[str]*), the IDs of the relevant articles from the knowledge corpus (*List[int]*), the exact paragraphs within those articles that are relevant to the question (*List[str]*), and a comprehensive expert-written answer (*str*). Below is an example of such a sample from the LLeQA test set:
```json
{
"id":696,
"question":"Je souhaite divorcer pour cause de désunion irrémédiable. Puis-je fixer une limite dans le temps pour la pension alimentaire ?",
"regions":["Région wallonne", "Région de Bruxelles-Capitale", "Région flamande"],
"topics":["Famille, Obligations alimentaires, Les pensions alimentaires (entre époux/ex-époux), Pensions alimentaires dans le cadre d'une procédure de divorce, Procédure de divorce pour cause de désunion irrémédiable"],
"article_ids":[3604],
"paragraph_ids":["3604§4", "3604§10"],
"answer":"Oui, c'est le juge qui fixe cette limite dans le jugement de divorce. En principe, la durée de la pension alimentaire après divorce est limitée au maximum à la durée du mariage. Mais le juge peut la fixer pour une durée plus courte. Il décide toujours en fonction de la situation concrète des ex-conjoints. A l’expiration de ce délai, le juge peut prolonger le paiement de la pension alimentaire. Celui qui reçoit la pension alimentaire doit prouver qu'à cause de circonstances exceptionnelles et pour des raisons indépendantes de sa volonté, il est toujours dans un état de besoin. L'obligation de payer la pension alimentaire prend également fin si : celui qui reçoit la pension alimentaire se remarie ou fait une déclaration de cohabitation légale. Dans ce cas, il perd automatiquement son droit à la pension alimentaire après divorce, sauf si le jugement de divorce prévoit autre chose ; celui qui reçoit la pension alimentaire vit maritalement avec une autre personne. Dans ce cas, le juge peut décider de mettre fin à la pension alimentaire ; celui qui reçoit la pension alimentaire décède. Dans ce cas, le paiement de la pension alimentaire prend automatiquement fin.",
}
```
An `article` sample typically contains a unique identifier (*int*), a legislative reference (*str*), the authority that issued the article (*str*), a description resulting from the concatenated headings of the sections the article belongs to (*str*), the individual headings of these sections (*str*), the article number in the statute (*str*), the full content of the article (*str*), and the content of its individual paragraphs (*Dict[str]*). Below is an example of such a sample from the knowledge corpus:
```json
{
"id":3604,
"reference":"Art. 301, Code civil (Livre I, Titre VI, Chapitre IV)",
"authority":"federale",
"description":"Des personnes, Du divorce, Des effets du divorce",
"article_no":"301",
"code":"Code civil",
"book":"Des personnes",
"part":null,
"act":"Du divorce",
"chapter":"Des effets du divorce",
"section":null,
"subsection":null,
"article":"§ 1er. Les époux peuvent convenir à tout moment de la pension alimentaire éventuelle, du montant de celle-ci et des modalités selon lesquelles le montant convenu pourrait être revu.§ 2. A défaut de la convention visée au § 1er, le tribunal de la famillepeut, dans le jugement prononçant le divorce ou lors d'une décision ultérieure, accorder, à la demande de l'époux dans le besoin, une pension alimentaire à charge de l'autre époux.Le tribunal peut refuser de faire droit à la demande de pension si le défendeur prouve que le demandeur a commis une faute grave ayant rendu impossible la poursuite de la vie commune.En aucun cas, la pension alimentaire n'est accordée au conjoint reconnu coupable d'un fait visé aux articles 375, 398 à 400, 402, 403 ou 405 du Code pénal, commis contre la personne du défendeur, ou d'une tentative de commettre un fait visé aux articles 375, 393, 394 ou 397 du même Code contre cette même personne.Par dérogation à l'article 4 du titre préliminaire du Code de procédure pénale, le juge peut, en attendant que la décision sur l'action publique soit coulée en force de chose jugée, allouer au demandeur une pension provisionnelle, en tenant compte de toutes les circonstances de la cause. Il peut subordonner l'octroi de cette pension provisionnelle à la constitution d'une garantie qu'il détermine et dont il fixe les modalités.§ 3. Le tribunal fixe le montant de la pension alimentaire qui doit couvrir au moins l'état de besoin du bénéficiaire.Il tient compte des revenus et possibilités des conjoints et de la dégradation significative de la situation économique du bénéficiaire. Pour apprécier cette dégradation, le juge se fonde notamment sur la durée du mariage, l'âge des parties, leur comportement durant le mariage quant à l'organisation de leurs besoins, la charge des enfants pendant la vie commune ou après celle-ci. Le juge peut décider le cas échéant que la pension sera dégressive et déterminer dans quelle mesure elle le sera.La pension alimentaire ne peut excéder le tiers des revenus du conjoint débiteur.§ 4. La durée de la pension ne peut être supérieure à celle du mariage.En cas de circonstances exceptionnelles, si le bénéficiaire démontre qu'à l'expiration du délai visé à l'alinéa 1er, il reste, pour des raisons indépendantes de sa volonté, dans un état de besoin, le tribunal peut prolonger le délai. Dans ce cas, le montant de la pension correspond au montant nécessaire pour couvrir l'état de besoin du bénéficiaire.§ 5. Si le défendeur prouve que l'état de besoin du demandeur résulte d'une décision prise unilatéralement par celui-ci, et sans que les besoins de la famille aient justifié ce choix, il peut être dispensé de payer la pension ou n'être tenu que de payer une pension réduite.§ 6. Le tribunal qui accorde la pension constate que celle-ci est adaptée de plein droit aux fluctuations de l'indice des prix à la consommation.Le montant de base de la pension correspond à l'indice des prix à la consommation du mois au cours duquel le jugement ou l'arrêt prononçant le divorce est coulé en force de chose jugée, à moins que le tribunal n'en décide autrement. 
Tous les douze mois, le montant de la pension est adapté en fonction de la hausse ou de la baisse de l'indice des prix à la consommation du mois correspondant.Ces modifications sont appliquées à la pension dès l'échéance qui suit la publication au Moniteur belge de l'indice nouveau à prendre en considération.Le tribunal peut, dans certains cas, appliquer un autre système d'adaptation de la pension au coût de la vie.§ 7. Sauf si les parties ont convenu expressément le contraire, le tribunal peut, ultérieurement, à la demande d'une des parties, augmenter, réduire ou supprimer la pension, si, à la suite de circonstances nouvelles et indépendantes de la volonté des parties, son montant n'est plus adapté.De même, si à la suite de la dissolution du mariage, la liquidation-partage du patrimoine commun ou de l'indivision ayant existé entre les époux entraîne une modification de leur situation financière qui justifie une adaptation de la pension alimentaire ayant fait l'objet d'un jugement ou d'une convention intervenus avant l'établissement de comptes de la liquidation, le tribunal peut adapter la pension, 2.§ 8. La pension peut à tout moment être remplacée, de l'accord des parties, par un capital homologué par le tribunal. A la demande du débiteur de la pension, le tribunal peut également accorder à tout moment la capitalisation.§ 9. Les époux ne peuvent pas renoncer aux droits à la pension alimentaire avant la dissolution du mariage.Ils peuvent néanmoins transiger, en cours de procédure, sur le montant de cette pension 5.§ 10. La pension n'est plus due au décès du débiteur, mais le bénéficiaire peut demander des aliments à charge de la succession aux conditions prévues à l'article 205bis, § 1er et §§ 3 à 6 .La pension prend, en toute hypothèse, définitivement fin en cas de remariage du bénéficiaire de la pension ou au moment où ce dernier fait une déclaration de cohabitation légale, sauf convention contraire des parties.Le juge peut mettre fin à la pension lorsque le bénéficiaire vit maritalement avec une autre personne.§ 11. Le tribunal peut décider qu'en cas de défaut d'exécution par le débiteur de son obligation de paiement, le bénéficiaire de la pension sera autorisé à percevoir les revenus de celui-ci ou ceux des biens qu'il administre en vertu de leur régime matrimonial, ainsi que toutes autres sommes qui lui sont dues par des tiers.Cette décision est opposable à tout tiers débiteur, actuel ou futur, sur la notification qui leur en est faite par le greffier à la requête du demandeur.§ 12. 1.",
"paragraphs":{
"1":"§ 1er. Les époux peuvent convenir à tout moment de la pension alimentaire éventuelle, du montant de celle-ci et des modalités selon lesquelles le montant convenu pourrait être revu",
"2":"§ 2. A défaut de la convention visée au § 1er, le tribunal de la famillepeut, dans le jugement prononçant le divorce ou lors d'une décision ultérieure, accorder, à la demande de l'époux dans le besoin, une pension alimentaire à charge de l'autre époux.Le tribunal peut refuser de faire droit à la demande de pension si le défendeur prouve que le demandeur a commis une faute grave ayant rendu impossible la poursuite de la vie commune.En aucun cas, la pension alimentaire n'est accordée au conjoint reconnu coupable d'un fait visé aux articles 375, 398 à 400, 402, 403 ou 405 du Code pénal, commis contre la personne du défendeur, ou d'une tentative de commettre un fait visé aux articles 375, 393, 394 ou 397 du même Code contre cette même personne.Par dérogation à l'article 4 du titre préliminaire du Code de procédure pénale, le juge peut, en attendant que la décision sur l'action publique soit coulée en force de chose jugée, allouer au demandeur une pension provisionnelle, en tenant compte de toutes les circonstances de la cause. Il peut subordonner l'octroi de cette pension provisionnelle à la constitution d'une garantie qu'il détermine et dont il fixe les modalités",
"3":"§ 3. Le tribunal fixe le montant de la pension alimentaire qui doit couvrir au moins l'état de besoin du bénéficiaire.Il tient compte des revenus et possibilités des conjoints et de la dégradation significative de la situation économique du bénéficiaire. Pour apprécier cette dégradation, le juge se fonde notamment sur la durée du mariage, l'âge des parties, leur comportement durant le mariage quant à l'organisation de leurs besoins, la charge des enfants pendant la vie commune ou après celle-ci. Le juge peut décider le cas échéant que la pension sera dégressive et déterminer dans quelle mesure elle le sera.La pension alimentaire ne peut excéder le tiers des revenus du conjoint débiteur",
"4":"§ 4. La durée de la pension ne peut être supérieure à celle du mariage.En cas de circonstances exceptionnelles, si le bénéficiaire démontre qu'à l'expiration du délai visé à l'alinéa 1er, il reste, pour des raisons indépendantes de sa volonté, dans un état de besoin, le tribunal peut prolonger le délai. Dans ce cas, le montant de la pension correspond au montant nécessaire pour couvrir l'état de besoin du bénéficiaire",
"5":"§ 5. Si le défendeur prouve que l'état de besoin du demandeur résulte d'une décision prise unilatéralement par celui-ci, et sans que les besoins de la famille aient justifié ce choix, il peut être dispensé de payer la pension ou n'être tenu que de payer une pension réduite",
"6":"§ 6. Le tribunal qui accorde la pension constate que celle-ci est adaptée de plein droit aux fluctuations de l'indice des prix à la consommation.Le montant de base de la pension correspond à l'indice des prix à la consommation du mois au cours duquel le jugement ou l'arrêt prononçant le divorce est coulé en force de chose jugée, à moins que le tribunal n'en décide autrement. Tous les douze mois, le montant de la pension est adapté en fonction de la hausse ou de la baisse de l'indice des prix à la consommation du mois correspondant.Ces modifications sont appliquées à la pension dès l'échéance qui suit la publication au Moniteur belge de l'indice nouveau à prendre en considération.Le tribunal peut, dans certains cas, appliquer un autre système d'adaptation de la pension au coût de la vie",
"7":"§ 7. Sauf si les parties ont convenu expressément le contraire, le tribunal peut, ultérieurement, à la demande d'une des parties, augmenter, réduire ou supprimer la pension, si, à la suite de circonstances nouvelles et indépendantes de la volonté des parties, son montant n'est plus adapté.De même, si à la suite de la dissolution du mariage, la liquidation-partage du patrimoine commun ou de l'indivision ayant existé entre les époux entraîne une modification de leur situation financière qui justifie une adaptation de la pension alimentaire ayant fait l'objet d'un jugement ou d'une convention intervenus avant l'établissement de comptes de la liquidation, le tribunal peut adapter la pension, 2",
"8":"§ 8. La pension peut à tout moment être remplacée, de l'accord des parties, par un capital homologué par le tribunal. A la demande du débiteur de la pension, le tribunal peut également accorder à tout moment la capitalisation",
"9":"§ 9. Les époux ne peuvent pas renoncer aux droits à la pension alimentaire avant la dissolution du mariage.Ils peuvent néanmoins transiger, en cours de procédure, sur le montant de cette pension 5",
"10":"§ 10. La pension n'est plus due au décès du débiteur, mais le bénéficiaire peut demander des aliments à charge de la succession aux conditions prévues à l'article 205bis, § 1er et §§ 3 à 6 .La pension prend, en toute hypothèse, définitivement fin en cas de remariage du bénéficiaire de la pension ou au moment où ce dernier fait une déclaration de cohabitation légale, sauf convention contraire des parties.Le juge peut mettre fin à la pension lorsque le bénéficiaire vit maritalement avec une autre personne",
"11":"§ 11. Le tribunal peut décider qu'en cas de défaut d'exécution par le débiteur de son obligation de paiement, le bénéficiaire de la pension sera autorisé à percevoir les revenus de celui-ci ou ceux des biens qu'il administre en vertu de leur régime matrimonial, ainsi que toutes autres sommes qui lui sont dues par des tiers.Cette décision est opposable à tout tiers débiteur, actuel ou futur, sur la notification qui leur en est faite par le greffier à la requête du demandeur"
}
}
```
### Data Fields
- The `question` samples have the following fields:
- `id`: an *int32* feature corresponding to a unique ID number for the question.
- `question`: a *string* feature corresponding to the question.
- `regions`: a *list of strings* feature of regions concerned by the question.
- `topics`: a *list of strings* feature of topics related to the question.
- `article_ids`: a *list of ints* feature of article IDs from the knowledge corpus relevant to the question.
- `paragraph_ids`: a *list of strings* feature of the exact paragraph IDs within the articles that are relevant to the question.
- `answer`: a *string* feature corresponding to the comprehensive answer to the question.
- The `article` samples have the following fields:
- `id`: an *int32* feature corresponding to a unique ID number for the article.
- `reference`: a *string* feature corresponding to the legislative reference of the article.
- `authority`: a *string* feature corresponding to the authority that issued the article (either *"regional"* or *"federal"*).
- `description`: a *string* feature corresponding to the concatenated headings of the article.
- `article_no`: a *string* feature corresponding to the article number in the statute.
- `code`: a *string* feature corresponding to the law code to which the article belongs.
- `book`: a *string* feature corresponding to the book to which the article belongs.
- `part`: a *string* feature corresponding to the part to which the article belongs.
- `act`: a *string* feature corresponding to the act to which the article belongs.
- `chapter`: a *string* feature corresponding to the chapter to which the article belongs.
- `section`: a *string* feature corresponding to the section to which the article belongs.
- `subsection`: a *string* feature corresponding to the subsection to which the article belongs.
- `article`: a *string* feature corresponding to the full content of the article.
- `paragraphs`: a *dict of strings* feature corresponding to the content of the individual paragraphs of the article (see the resolution sketch below).
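To make the cross-references between questions and articles concrete, the sketch below shows how a question's `paragraph_ids` (e.g. `"3604§4"`) can be resolved against the `paragraphs` dict of the corresponding article. The helper name and the `articles_by_id` lookup are illustrative, not part of the dataset.

```python
def resolve_paragraphs(question, articles_by_id):
    """Return the paragraph texts referenced by a question.

    `articles_by_id` is assumed to map an article `id` to its sample.
    Paragraph IDs follow the pattern "<article_id>§<paragraph_no>", e.g. "3604§4".
    """
    texts = []
    for pid in question["paragraph_ids"]:
        article_id, paragraph_no = pid.split("§")
        article = articles_by_id[int(article_id)]
        texts.append(article["paragraphs"][paragraph_no])
    return texts
```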
### Data Splits
The LLeQA dataset is split into train, dev, and test sets with an approximate 80/10/10 ratio. The number of `question` samples in each set is given below:
| | Train | Dev | Test |
| ----- | ------ | ---- | ----- |
| LLeQA | 1472 | 201 | 195 |
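For reference, the splits and the knowledge corpus can be loaded with the Hugging Face `datasets` library as sketched below; the configuration names follow the file layout declared in this repository's metadata, and because the dataset is gated you may first need to request access and authenticate (e.g. via `huggingface-cli login`).

```python
from datasets import load_dataset

# Expert-annotated questions (train/validation/test) and the corpus of statutory articles.
questions = load_dataset("maastrichtlawtech/lleqa", "questions")
corpus = load_dataset("maastrichtlawtech/lleqa", "corpus", split="corpus")

print(questions)               # DatasetDict with train, validation, and test splits
print(corpus[0]["reference"])  # a legislative reference, e.g. "Art. 301, Code civil (...)"
```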
## Dataset Creation
### Curation Rationale
The dataset is intended to be used by researchers to build and evaluate IR and QA models in the legal domain. It should not be regarded as a reliable source of legal information at this point in time, as both the questions and articles correspond to an outdated version of Belgian law from May 2023 (the time of dataset collection). For up-to-date legal information, users are advised to consult daily updated official legal resources (e.g., the Belgian Official Gazette).
### Source Data
#### Initial Data Collection and Normalization
The collection process of LLeQA involves three main stages. First, we gather and refine annotated legal questions. Then, we build an expansive corpus of supportive statutory articles drawn from Belgian legislation. Finally, we enrich the question annotations by generating paragraph-level references within relevant articles. We elaborate upon each of these steps below. Please refer to the paper for more details.
#### Who are the source language producers?
Speakers were not directly approached for inclusion in this dataset and thus could not be asked for demographic information. Questions were collected, anonymized, and reformulated by Belgian jurists from [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe). Therefore, no direct information about the speakers' age, gender distribution, or socioeconomic status is available. However, it is expected that most, but not all, of the speakers are adults (18+ years), speak French as a native language, and live in Wallonia or the Brussels-Capital region.
### Annotations
#### Annotation process
We partner with [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe), a Belgian non-profit organization that endeavors to make the law comprehensible and accessible to the most vulnerable. To this end, the organization maintains a rich website featuring thousands of legal questions commonly posed by Belgian citizens. Each question comes with its own individual page, encompassing one or more categorizations, references to relevant legislative statutes, and a detailed answer written in layman's terms by experienced jurists. Practically, their legal clarification process consists of four steps. First, they select a common legal issue based on the numerous support requests they receive every day. Then, they define a new anonymized "model" question on that issue expressed in simple terms, phrased as closely as possible to how a layperson would ask it. Next, the jurists search the Belgian law for articles that help answer the model question and reference them. Finally, they write a comprehensive answer in a language that is understandable by the general public.
#### Who are the annotators?
A total of six Belgian jurists from [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe) contributed to annotating the questions. All have a law degree from a Belgian university and years of experience in providing legal advice and clarifications of the law. They range in age from 30 to 60 years, include one man and five women, gave their ethnicity as white European, speak French as a native language, and represent the upper middle class based on income level.
### Personal and Sensitive Information
The questions represent informal, asynchronous, edited, written language with an average length of 15 words. None of them contains hateful, aggressive, or inappropriate language, as they were all reviewed and reworded by Droits Quotidiens to be neutral, anonymous, and comprehensive. The legal articles represent formal, written language with a median length of 84 words (yet 1,500+ articles exceed 500 words).
## Considerations for Using the Data
### Social Impact of Dataset
We believe LLeQA can serve as a robust foundation for advancements in interpretable, long-form legal question answering, thereby contributing to the democratization of legal access.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
- It is essential to note that not all legal questions can be answered with statutes alone. For instance, the question "Can I evict my tenants if they make too much noise?" might not have a detailed answer within the statutory law that quantifies a specific noise threshold at which eviction is allowed. Instead, the landlord should probably rely more on case law and find precedents similar to their current situation (e.g., the tenant throws two parties a week until 2 a.m.). Hence, some questions are better suited than others to the statutory article retrieval task, and the domain of the less suitable ones remains to be determined.
## Additional Information
### Dataset Curators
The dataset was created by Antoine Louis during work done at the Law & Tech lab of Maastricht University, with the help of jurists from [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe).
### Licensing Information
LLeQA is distributed under a gated access for research purposes only and is licensed under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@inproceedings{louis2024interpretable,
title = {Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models},
author = {Louis, Antoine and Van Dijck, Gijs and Spanakis, Gerasimos},
booktitle = {Proceedings of the 38th AAAI Conference on Artificial Intelligence},
year = {2024},
address = {Vancouver, Canada},
publisher = {AAAI Press},
url = {https://arxiv.org/abs/2309.17050},
pages = {tba}
}
```
### Contributions
Thanks to [@antoinelouis](https://huggingface.co/antoinelouis) for adding this dataset.
|
maastrichtlawtech/lleqa
|
[
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_ids:closed-domain-qa",
"task_ids:document-question-answering",
"task_ids:document-retrieval",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:fr",
"license:cc-by-nc-sa-4.0",
"legal",
"arxiv:2309.17050",
"region:us"
] |
2023-09-27T12:31:22+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["fr"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-retrieval", "text-classification"], "task_ids": ["closed-domain-qa", "document-question-answering", "document-retrieval", "topic-classification"], "paperswithcode_id": "lleqa", "pretty_name": "LLeQA", "tags": ["legal"], "configs": [{"config_name": "corpus", "data_files": [{"split": "corpus", "path": "articles.json"}]}, {"config_name": "questions", "data_files": [{"split": "train", "path": "questions_train.json"}, {"split": "validation", "path": "questions_dev.json"}, {"split": "test", "path": "questions_test.json"}]}, {"config_name": "negatives", "data_files": [{"split": "bm25", "path": "negatives/negatives_bm25.json"}, {"split": "me5", "path": "negatives/negatives_me5_large.json"}]}], "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Job Title": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}}
|
2024-01-12T23:02:01+00:00
|
[
"2309.17050"
] |
[
"fr"
] |
32cfbc147a5cbf3b74c8d402a0aaf674ae5da22e
|
# Dataset Card for SpeechCommands
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=speech-commands-enrichment_only)
- **GitHub:** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage:** [tensorflow.org/datasets](https://www.tensorflow.org/datasets/catalog/speech_commands)
- **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://arxiv.org/pdf/1804.03209.pdf)
- **Leaderboard:** [More Information Needed]
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=speech-commands-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [SpeechCommands Dataset](https://huggingface.co/datasets/speech_commands).
### Explore the Dataset
There are two configurations of the dataset: **enrichment_only** provides the enrichments calculated by Renumics using the MIT AST transformer, while **raw_and_enrichment_combined** provides a concatenation of the original Speech Commands data and the enrichments.
The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets[audio]
```
> **_Notice:_** On Linux, the non-Python dependency libsndfile must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information.
Load the dataset from huggingface in your notebook and start exploring with a simple view:
```python
import datasets
from renumics import spotlight
from renumics.spotlight.layouts import debug_classification

# Load the combined configuration and join all splits into a single view.
dataset = datasets.load_dataset("renumics/speech_commands_enrichment_only", "raw_and_enrichment_combined")
joined_dataset = datasets.concatenate_datasets([dataset["train"], dataset["validation"], dataset["test"]])

# Pre-configured layout for debugging a classification model.
layout = debug_classification(
    label="label_string",
    prediction="prediction",
    embedding="embedding_reduced",
    features=["label", "prediction", "probability"],
    inspect={"audio": spotlight.Audio},
)
dtypes = {
    "audio": spotlight.Audio,
    "embedding_reduced": spotlight.Embedding,
}
spotlight.show(
    joined_dataset,
    dtype=dtypes,
    layout=layout,
)
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
As a plug-and-play option, you can check out the Hugging Face Space: [Hugging Face Space for speech enrichment](https://huggingface.co/spaces/renumics/speech_commands_enrichment_space)
Alternatively, you can run the notebook `exploration.ipynb` locally.
### SpeechCommands Dataset
This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands, and are spoken by a
variety of different speakers. This data set is designed to help train simple
machine learning models. It is covered in more detail at [https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).
Version 0.01 of the data set (configuration `"v0.01"`) was released on August 3rd 2017 and contains
64,727 audio files.
Version 0.02 of the data set (configuration `"v0.02"`) was released on April 11th 2018 and
contains 105,829 audio files.
### Supported Tasks and Leaderboards
* `keyword-spotting`: the dataset can be used to train and evaluate keyword
spotting systems. The task is to detect preregistered keywords by classifying utterances
into a predefined set of words. The task is usually performed on-device to ensure a fast
response time. Thus, accuracy, model size, and inference time are all crucial. A small accuracy sketch using the enrichment columns is shown below.
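As a small illustration (not an official evaluation script), the enrichment columns can be used to check the baseline model's accuracy directly; the column names follow the `enrichment_only` configuration provided by this repository.

```python
from datasets import load_dataset

# Load only the lightweight enrichment columns (no audio is downloaded or decoded).
test = load_dataset("renumics/speech_commands_enrichment_only", "enrichment_only", split="test")

# Compare the stored baseline predictions against the ground-truth labels.
correct = sum(p == l for p, l in zip(test["prediction_string"], test["label_string"]))
print(f"Baseline accuracy on the test split: {correct / len(test):.3f}")
```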
### Languages
The language data in SpeechCommands is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
Example of a core word (`"label"` is a word, `"is_unknown"` is `False`):
```python
{
"file": "no/7846fd85_nohash_0.wav",
"audio": {
"path": "no/7846fd85_nohash_0.wav",
"array": array([ -0.00021362, -0.00027466, -0.00036621, ..., 0.00079346,
0.00091553, 0.00079346]),
"sampling_rate": 16000
},
"label": 1, # "no"
"is_unknown": False,
"speaker_id": "7846fd85",
"utterance_id": 0
}
```
Example of an auxiliary word (`"label"` is a word, `"is_unknown"` is `True`):
```python
{
"file": "tree/8b775397_nohash_0.wav",
"audio": {
"path": "tree/8b775397_nohash_0.wav",
"array": array([ -0.00854492, -0.01339722, -0.02026367, ..., 0.00274658,
0.00335693, 0.0005188]),
"sampling_rate": 16000
},
"label": 28, # "tree"
"is_unknown": True,
"speaker_id": "1b88bf70",
"utterance_id": 0
}
```
Example of background noise (`_silence_`) class:
```python
{
"file": "_silence_/doing_the_dishes.wav",
"audio": {
"path": "_silence_/doing_the_dishes.wav",
"array": array([ 0. , 0. , 0. , ..., -0.00592041,
-0.00405884, -0.00253296]),
"sampling_rate": 16000
},
"label": 30, # "_silence_"
"is_unknown": False,
"speaker_id": "None",
"utterance_id": 0 # doesn't make sense here
}
```
### Data Fields
* `file`: relative audio filename inside the original archive.
* `audio`: dictionary containing a relative audio filename,
a decoded audio array, and the sampling rate. Note that when accessing
the audio column: `dataset[0]["audio"]` the audio is automatically decoded
and resampled to `dataset.features["audio"].sampling_rate`.
Decoding and resampling of a large number of audios might take a significant
amount of time. Thus, it is important to first query the sample index before
the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred
over `dataset["audio"][0]` (see the snippet after this list).
* `label`: either word pronounced in an audio sample or background noise (`_silence_`) class.
Note that it's an integer value corresponding to the class name.
* `is_unknown`: whether a word is auxiliary. It is `False` if a word is a core word or `_silence_`,
and `True` if a word is an auxiliary word.
* `speaker_id`: unique id of a speaker. It is `None` if the label is `_silence_`.
* `utterance_id`: incremental id of a word utterance within the same speaker.
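The access-order advice above can be made concrete with a short sketch (assuming the `raw_and_enrichment_combined` configuration of this repository):

```python
from datasets import load_dataset

ds = load_dataset("renumics/speech_commands_enrichment_only", "raw_and_enrichment_combined", split="test")

# Preferred: select the row first, then access the "audio" column,
# so that only this single file is decoded and resampled.
sample = ds[0]["audio"]
print(sample["sampling_rate"], len(sample["array"]))

# Anti-pattern: ds["audio"][0] would decode every audio file in the split first.
```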
### Data Splits
The dataset has two versions (= configurations): `"v0.01"` and `"v0.02"`. `"v0.02"`
contains more words (see section [Source Data](#source-data) for more details).
| | train | validation | test |
|----- |------:|-----------:|-----:|
| v0.01 | 51093 | 6799 | 3081 |
| v0.02 | 84848 | 9982 | 4890 |
Note that in the train and validation sets, examples of the `_silence_` class are longer than 1 second.
You can use the following code to sample 1-second examples from the longer ones:
```python
def sample_noise(example):
    # Use this function to extract random 1-second slices of each _silence_ utterance,
    # e.g. inside `torch.utils.data.Dataset.__getitem__()`.
    from random import randint

    # `_silence_` is stored as an integer class label (30 in the example above).
    if example["label"] == 30:
        audio = example["audio"]
        sampling_rate = audio["sampling_rate"]
        random_offset = randint(0, len(audio["array"]) - sampling_rate - 1)
        audio["array"] = audio["array"][random_offset : random_offset + sampling_rate]
    return example
```
## Dataset Creation
### Curation Rationale
The primary goal of the dataset is to provide a way to build and test small
models that can detect a single word from a set of target words and differentiate it
from background noise or unrelated speech with as few false positives as possible.
### Source Data
#### Initial Data Collection and Normalization
The audio files were collected using crowdsourcing, see
[aiyprojects.withgoogle.com/open_speech_recording](https://github.com/petewarden/extract_loudest_section)
for some of the open source audio collection code that was used. The goal was to gather examples of
people speaking single-word commands, rather than conversational sentences, so
they were prompted for individual words over the course of a five-minute
session.
In version 0.01, thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". Other words are considered to be auxiliary (in current implementation
it is marked by `True` value of `"is_unknown"` feature). Their function is to teach a model to distinguish core words
from unrecognized ones.
The `_silence_` label contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise.
#### Who are the source language producers?
The audio files were collected using crowdsourcing.
### Annotations
#### Annotation process
Labels are drawn from a list of words prepared in advance.
Speakers were prompted for individual words over the course of a five-minute
session.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons BY 4.0 License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)).
### Citation Information
```
@article{speechcommandsv2,
author = { {Warden}, P.},
title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1804.03209},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
year = 2018,
month = apr,
url = {https://arxiv.org/abs/1804.03209},
}
```
### Contributions
[More Information Needed]
|
renumics/speech_commands_enrichment_only
|
[
"task_categories:audio-classification",
"task_ids:keyword-spotting",
"annotations_creators:other",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:extended|speech_commands",
"language:en",
"license:cc-by-4.0",
"spotlight",
"enriched",
"renumics",
"enhanced",
"audio",
"classification",
"extended",
"arxiv:1804.03209",
"region:us"
] |
2023-09-27T12:37:24+00:00
|
{"annotations_creators": ["other"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "10K<n<100K"], "source_datasets": ["extended|speech_commands"], "task_categories": ["audio-classification"], "task_ids": ["keyword-spotting"], "pretty_name": "SpeechCommands", "config_names": ["v0.01", "v0.02"], "tags": ["spotlight", "enriched", "renumics", "enhanced", "audio", "classification", "extended"], "dataset_info": [{"config_name": "enrichment_only", "features": [{"name": "label_string", "dtype": "string"}, {"name": "probability", "dtype": "float64"}, {"name": "probability_vector", "sequence": "float32"}, {"name": "prediction", "dtype": "int64"}, {"name": "prediction_string", "dtype": "string"}, {"name": "embedding_reduced", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 8763867, "num_examples": 51093}, {"name": "validation", "num_bytes": 1165942, "num_examples": 6799}, {"name": "test", "num_bytes": 528408, "num_examples": 3081}], "download_size": 0, "dataset_size": 10458217}, {"config_name": "raw_and_enrichment_combined", "features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": {"class_label": {"names": {"0": "yes", "1": "no", "2": "up", "3": "down", "4": "left", "5": "right", "6": "on", "7": "off", "8": "stop", "9": "go", "10": "zero", "11": "one", "12": "two", "13": "three", "14": "four", "15": "five", "16": "six", "17": "seven", "18": "eight", "19": "nine", "20": "bed", "21": "bird", "22": "cat", "23": "dog", "24": "happy", "25": "house", "26": "marvin", "27": "sheila", "28": "tree", "29": "wow", "30": "_silence_"}}}}, {"name": "is_unknown", "dtype": "bool"}, {"name": "speaker_id", "dtype": "string"}, {"name": "utterance_id", "dtype": "int8"}, {"name": "logits", "sequence": "float64"}, {"name": "embedding", "sequence": "float32"}, {"name": "label_string", "dtype": "string"}, {"name": "probability", "dtype": "float64"}, {"name": "probability_vector", "sequence": "float32"}, {"name": "prediction", "dtype": "int64"}, {"name": "prediction_string", "dtype": "string"}, {"name": "embedding_reduced", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1803565876.375, "num_examples": 51093}, {"name": "validation", "num_bytes": 240795605.125, "num_examples": 6799}, {"name": "test", "num_bytes": 109673146.875, "num_examples": 3081}], "download_size": 0, "dataset_size": 2154034628.375}], "configs": [{"config_name": "enrichment_only", "data_files": [{"split": "train", "path": "enrichment_only/train-*"}, {"split": "validation", "path": "enrichment_only/validation-*"}, {"split": "test", "path": "enrichment_only/test-*"}]}, {"config_name": "raw_and_enrichment_combined", "data_files": [{"split": "train", "path": "raw_and_enrichment_combined/train-*"}, {"split": "validation", "path": "raw_and_enrichment_combined/validation-*"}, {"split": "test", "path": "raw_and_enrichment_combined/test-*"}]}]}
|
2023-09-28T11:25:09+00:00
|
[
"1804.03209"
] |
[
"en"
] |
TAGS
#task_categories-audio-classification #task_ids-keyword-spotting #annotations_creators-other #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-extended|speech_commands #language-English #license-cc-by-4.0 #spotlight #enriched #renumics #enhanced #audio #classification #extended #arxiv-1804.03209 #region-us
|
Dataset Card for SpeechCommands
===============================
Dataset Description
-------------------
* Homepage: Renumics Homepage
* GitHub Spotlight
* Dataset Homepage URL
* Paper: Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition
* Leaderboard:
### Dataset Summary
Data-centric AI principles have become increasingly important for real-world use cases.
At Renumics we believe that classical benchmark datasets and competitions should be extended to reflect this development.
This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
This dataset is an enriched version of the SpeechCommands Dataset.
### Explore the Dataset
There are two configurations of the dataset: Enrichment only provides the enrichments calculated by Renumics using the MIT AST transformer, while raw\_and\_enrichment\_combined provides a concatenated dataset of the original speech commands and the enrichment.
The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool Renumics Spotlight enables that with just a few lines of code:
Install datasets and Spotlight via pip:
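For example (assuming the PyPI package name `renumics-spotlight`; adjust to your environment):

```
pip install renumics-spotlight datasets[audio]
```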
> *Notice:* On Linux, non-Python dependency on libsndfile package must be installed manually. See Datasets - Installation for more information.
Load the dataset from huggingface in your notebook and start exploring with a simple view:
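A minimal sketch of such a view (the exact Spotlight call signature and the `dtype` hints are assumptions and may differ between Spotlight versions):

```python
from datasets import load_dataset
from renumics import spotlight

ds = load_dataset("renumics/speech_commands_enrichment_only", "raw_and_enrichment_combined", split="test")
df = ds.to_pandas()

# Open the interactive Spotlight view; column types are hinted so audio and embeddings render properly.
spotlight.show(df, dtype={"audio": spotlight.Audio, "embedding_reduced": spotlight.Embedding})
```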
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
As a plug and play option, you can check out the Huggingface space: Huggingface Space for speech enrichment
Alternatively, you can run the notebook URL locally.
### SpeechCommands Dataset
This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands, and are spoken by a
variety of different speakers. This data set is designed to help train simple
machine learning models. It is covered in more detail at URL
Version 0.01 of the data set (configuration '"v0.01"') was released on August 3rd 2017 and contains
64,727 audio files.
Version 0.02 of the data set (configuration '"v0.02"') was released on April 11th 2018 and
contains 105,829 audio files.
### Supported Tasks and Leaderboards
* 'keyword-spotting': the dataset can be used to train and evaluate keyword
spotting systems. The task is to detect preregistered keywords by classifying utterances
into a predefined set of words. The task is usually performed on-device for the
fast response time. Thus, accuracy, model size, and inference time are all crucial.
### Languages
The language data in SpeechCommands is in English (BCP-47 'en').
Dataset Structure
-----------------
### Data Instances
Example of a core word ('"label"' is a word, '"is\_unknown"' is 'False'):
Example of an auxiliary word ('"label"' is a word, '"is\_unknown"' is 'True')
Example of background noise ('*silence*') class:
### Data Fields
* 'file': relative audio filename inside the original archive.
* 'audio': dictionary containing a relative audio filename,
a decoded audio array, and the sampling rate. Note that when accessing
the audio column: 'dataset[0]["audio"]' the audio is automatically decoded
and resampled to 'dataset.features["audio"].sampling\_rate'.
Decoding and resampling of a large number of audios might take a significant
amount of time. Thus, it is important to first query the sample index before
the '"audio"' column, i.e. 'dataset[0]["audio"]' should always be preferred
over 'dataset["audio"][0]'.
* 'label': either word pronounced in an audio sample or background noise ('*silence*') class.
Note that it's an integer value corresponding to the class name.
* 'is\_unknown': if a word is auxiliary. Equals to 'False' if a word is a core word or '*silence*',
'True' if a word is an auxiliary word.
* 'speaker\_id': unique id of a speaker. Equals to 'None' if label is '*silence*'.
* 'utterance\_id': incremental id of a word utterance within the same speaker.
### Data Splits
The dataset has two versions (= configurations): '"v0.01"' and '"v0.02"'. '"v0.02"'
contains more words (see section Source Data for more details).
Note that in train and validation sets examples of '*silence*' class are longer than 1 second.
You can use the following code to sample 1-second examples from the longer ones:
Dataset Creation
----------------
### Curation Rationale
The primary goal of the dataset is to provide a way to build and test small
models that can detect a single word from a set of target words and differentiate it
from background noise or unrelated speech with as few false positives as possible.
### Source Data
#### Initial Data Collection and Normalization
The audio files were collected using crowdsourcing, see
URL
for some of the open source audio collection code that was used. The goal was to gather examples of
people speaking single-word commands, rather than conversational sentences, so
they were prompted for individual words over the course of a five minute
session.
In version 0.01, thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". Other words are considered to be auxiliary (in current implementation
it is marked by 'True' value of '"is\_unknown"' feature). Their function is to teach a model to distinguish core words
from unrecognized ones.
The '*silence*' label contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise.
#### Who are the source language producers?
The audio files were collected using crowdsourcing.
### Annotations
#### Annotation process
Labels are the list of words prepared in advance.
Speakers were prompted for individual words over the course of a five minute
session.
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
Creative Commons BY 4.0 License (CC-BY-4.0), see URL.
### Contributions
|
[
"### Dataset Summary\n\n\nData-centric AI principles have become increasingly important for real-world use cases. \n\nAt Renumics we believe that classical benchmark datasets and competitions should be extended to reflect this development.\n\n\nThis is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:\n\n\n1. Enable new researchers to quickly develop a profound understanding of the dataset.\n2. Popularize data-centric AI principles and tooling in the ML community.\n3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.\n\n\nThis dataset is an enriched version of the SpeechCommands Dataset.",
"### Explore the Dataset\n\n\nThere are two configurations of the dataset: Enrichment only provides the enrichments calculated by Renumics using the MIT AST transformer, while raw\\_and\\_enrichment\\_combined provides a concatenated dataset of the original speech commands and the enrichment.\n\n\nThe enrichments allow you to quickly gain insights into the dataset. The open source data curation tool Renumics Spotlight enables that with just a few lines of code:\n\n\nInstall datasets and Spotlight via pip:\n\n\n\n> \n> *Notice:* On Linux, non-Python dependency on libsndfile package must be installed manually. See Datasets - Installation for more information.\n> \n> \n> \n\n\nLoad the dataset from huggingface in your notebook and start exploring with a simple view:\n\n\nYou can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.\n\n\nAs a plug and play option, you can check out the Huggingface space: Huggingface Space for speech enrichment\n\n\nAlternatively, you can run the notebook URL locally.",
"### SpeechCommands Dataset\n\n\nThis is a set of one-second .wav audio files, each containing a single spoken\nEnglish word or background noise. These words are from a small set of commands, and are spoken by a\nvariety of different speakers. This data set is designed to help train simple\nmachine learning models. It is covered in more detail at URL\n\n\nVersion 0.01 of the data set (configuration '\"v0.01\"') was released on August 3rd 2017 and contains\n64,727 audio files.\n\n\nVersion 0.02 of the data set (configuration '\"v0.02\"') was released on April 11th 2018 and\ncontains 105,829 audio files.",
"### Supported Tasks and Leaderboards\n\n\n* 'keyword-spotting': the dataset can be used to train and evaluate keyword\nspotting systems. The task is to detect preregistered keywords by classifying utterances\ninto a predefined set of words. The task is usually performed on-device for the\nfast response time. Thus, accuracy, model size, and inference time are all crucial.",
"### Languages\n\n\nThe language data in SpeechCommands is in English (BCP-47 'en').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample of a core word ('\"label\"' is a word, '\"is\\_unknown\"' is 'False'):\n\n\nExample of an auxiliary word ('\"label\"' is a word, '\"is\\_unknown\"' is 'True')\n\n\nExample of background noise ('*silence*') class:",
"### Data Fields\n\n\n* 'file': relative audio filename inside the original archive.\n* 'audio': dictionary containing a relative audio filename,\na decoded audio array, and the sampling rate. Note that when accessing\nthe audio column: 'dataset[0][\"audio\"]' the audio is automatically decoded\nand resampled to 'dataset.features[\"audio\"].sampling\\_rate'.\nDecoding and resampling of a large number of audios might take a significant\namount of time. Thus, it is important to first query the sample index before\nthe '\"audio\"' column, i.e. 'dataset[0][\"audio\"]' should always be preferred\nover 'dataset[\"audio\"][0]'.\n* 'label': either word pronounced in an audio sample or background noise ('*silence*') class.\nNote that it's an integer value corresponding to the class name.\n* 'is\\_unknown': if a word is auxiliary. Equals to 'False' if a word is a core word or '*silence*',\n'True' if a word is an auxiliary word.\n* 'speaker\\_id': unique id of a speaker. Equals to 'None' if label is '*silence*'.\n* 'utterance\\_id': incremental id of a word utterance within the same speaker.",
"### Data Splits\n\n\nThe dataset has two versions (= configurations): '\"v0.01\"' and '\"v0.02\"'. '\"v0.02\"'\ncontains more words (see section Source Data for more details).\n\n\n\nNote that in train and validation sets examples of '*silence*' class are longer than 1 second.\nYou can use the following code to sample 1-second examples from the longer ones:\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe primary goal of the dataset is to provide a way to build and test small\nmodels that can detect a single word from a set of target words and differentiate it\nfrom background noise or unrelated speech with as few false positives as possible.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe audio files were collected using crowdsourcing, see\nURL\nfor some of the open source audio collection code that was used. The goal was to gather examples of\npeople speaking single-word commands, rather than conversational sentences, so\nthey were prompted for individual words over the course of a five minute\nsession.\n\n\nIn version 0.01 thirty different words were recoded: \"Yes\", \"No\", \"Up\", \"Down\", \"Left\",\n\"Right\", \"On\", \"Off\", \"Stop\", \"Go\", \"Zero\", \"One\", \"Two\", \"Three\", \"Four\", \"Five\", \"Six\", \"Seven\", \"Eight\", \"Nine\",\n\"Bed\", \"Bird\", \"Cat\", \"Dog\", \"Happy\", \"House\", \"Marvin\", \"Sheila\", \"Tree\", \"Wow\".\n\n\nIn version 0.02 more words were added: \"Backward\", \"Forward\", \"Follow\", \"Learn\", \"Visual\".\n\n\nIn both versions, ten of them are used as commands by convention: \"Yes\", \"No\", \"Up\", \"Down\", \"Left\",\n\"Right\", \"On\", \"Off\", \"Stop\", \"Go\". Other words are considered to be auxiliary (in current implementation\nit is marked by 'True' value of '\"is\\_unknown\"' feature). Their function is to teach a model to distinguish core words\nfrom unrecognized ones.\n\n\nThe '*silence*' label contains a set of longer audio clips that are either recordings or\na mathematical simulation of noise.",
"#### Who are the source language producers?\n\n\nThe audio files were collected using crowdsourcing.",
"### Annotations",
"#### Annotation process\n\n\nLabels are the list of words prepared in advances.\nSpeakers were prompted for individual words over the course of a five minute\nsession.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons BY 4.0 License ((CC-BY-4.0)[URL",
"### Contributions"
] |
[
"TAGS\n#task_categories-audio-classification #task_ids-keyword-spotting #annotations_creators-other #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-extended|speech_commands #language-English #license-cc-by-4.0 #spotlight #enriched #renumics #enhanced #audio #classification #extended #arxiv-1804.03209 #region-us \n",
"### Dataset Summary\n\n\nData-centric AI principles have become increasingly important for real-world use cases. \n\nAt Renumics we believe that classical benchmark datasets and competitions should be extended to reflect this development.\n\n\nThis is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:\n\n\n1. Enable new researchers to quickly develop a profound understanding of the dataset.\n2. Popularize data-centric AI principles and tooling in the ML community.\n3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.\n\n\nThis dataset is an enriched version of the SpeechCommands Dataset.",
"### Explore the Dataset\n\n\nThere are two configurations of the dataset: Enrichment only provides the enrichments calculated by Renumics using the MIT AST transformer, while raw\\_and\\_enrichment\\_combined provides a concatenated dataset of the original speech commands and the enrichment.\n\n\nThe enrichments allow you to quickly gain insights into the dataset. The open source data curation tool Renumics Spotlight enables that with just a few lines of code:\n\n\nInstall datasets and Spotlight via pip:\n\n\n\n> \n> *Notice:* On Linux, non-Python dependency on libsndfile package must be installed manually. See Datasets - Installation for more information.\n> \n> \n> \n\n\nLoad the dataset from huggingface in your notebook and start exploring with a simple view:\n\n\nYou can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.\n\n\nAs a plug and play option, you can check out the Huggingface space: Huggingface Space for speech enrichment\n\n\nAlternatively, you can run the notebook URL locally.",
"### SpeechCommands Dataset\n\n\nThis is a set of one-second .wav audio files, each containing a single spoken\nEnglish word or background noise. These words are from a small set of commands, and are spoken by a\nvariety of different speakers. This data set is designed to help train simple\nmachine learning models. It is covered in more detail at URL\n\n\nVersion 0.01 of the data set (configuration '\"v0.01\"') was released on August 3rd 2017 and contains\n64,727 audio files.\n\n\nVersion 0.02 of the data set (configuration '\"v0.02\"') was released on April 11th 2018 and\ncontains 105,829 audio files.",
"### Supported Tasks and Leaderboards\n\n\n* 'keyword-spotting': the dataset can be used to train and evaluate keyword\nspotting systems. The task is to detect preregistered keywords by classifying utterances\ninto a predefined set of words. The task is usually performed on-device for the\nfast response time. Thus, accuracy, model size, and inference time are all crucial.",
"### Languages\n\n\nThe language data in SpeechCommands is in English (BCP-47 'en').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample of a core word ('\"label\"' is a word, '\"is\\_unknown\"' is 'False'):\n\n\nExample of an auxiliary word ('\"label\"' is a word, '\"is\\_unknown\"' is 'True')\n\n\nExample of background noise ('*silence*') class:",
"### Data Fields\n\n\n* 'file': relative audio filename inside the original archive.\n* 'audio': dictionary containing a relative audio filename,\na decoded audio array, and the sampling rate. Note that when accessing\nthe audio column: 'dataset[0][\"audio\"]' the audio is automatically decoded\nand resampled to 'dataset.features[\"audio\"].sampling\\_rate'.\nDecoding and resampling of a large number of audios might take a significant\namount of time. Thus, it is important to first query the sample index before\nthe '\"audio\"' column, i.e. 'dataset[0][\"audio\"]' should always be preferred\nover 'dataset[\"audio\"][0]'.\n* 'label': either word pronounced in an audio sample or background noise ('*silence*') class.\nNote that it's an integer value corresponding to the class name.\n* 'is\\_unknown': if a word is auxiliary. Equals to 'False' if a word is a core word or '*silence*',\n'True' if a word is an auxiliary word.\n* 'speaker\\_id': unique id of a speaker. Equals to 'None' if label is '*silence*'.\n* 'utterance\\_id': incremental id of a word utterance within the same speaker.",
"### Data Splits\n\n\nThe dataset has two versions (= configurations): '\"v0.01\"' and '\"v0.02\"'. '\"v0.02\"'\ncontains more words (see section Source Data for more details).\n\n\n\nNote that in train and validation sets examples of '*silence*' class are longer than 1 second.\nYou can use the following code to sample 1-second examples from the longer ones:\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe primary goal of the dataset is to provide a way to build and test small\nmodels that can detect a single word from a set of target words and differentiate it\nfrom background noise or unrelated speech with as few false positives as possible.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe audio files were collected using crowdsourcing, see\nURL\nfor some of the open source audio collection code that was used. The goal was to gather examples of\npeople speaking single-word commands, rather than conversational sentences, so\nthey were prompted for individual words over the course of a five minute\nsession.\n\n\nIn version 0.01 thirty different words were recoded: \"Yes\", \"No\", \"Up\", \"Down\", \"Left\",\n\"Right\", \"On\", \"Off\", \"Stop\", \"Go\", \"Zero\", \"One\", \"Two\", \"Three\", \"Four\", \"Five\", \"Six\", \"Seven\", \"Eight\", \"Nine\",\n\"Bed\", \"Bird\", \"Cat\", \"Dog\", \"Happy\", \"House\", \"Marvin\", \"Sheila\", \"Tree\", \"Wow\".\n\n\nIn version 0.02 more words were added: \"Backward\", \"Forward\", \"Follow\", \"Learn\", \"Visual\".\n\n\nIn both versions, ten of them are used as commands by convention: \"Yes\", \"No\", \"Up\", \"Down\", \"Left\",\n\"Right\", \"On\", \"Off\", \"Stop\", \"Go\". Other words are considered to be auxiliary (in current implementation\nit is marked by 'True' value of '\"is\\_unknown\"' feature). Their function is to teach a model to distinguish core words\nfrom unrecognized ones.\n\n\nThe '*silence*' label contains a set of longer audio clips that are either recordings or\na mathematical simulation of noise.",
"#### Who are the source language producers?\n\n\nThe audio files were collected using crowdsourcing.",
"### Annotations",
"#### Annotation process\n\n\nLabels are the list of words prepared in advances.\nSpeakers were prompted for individual words over the course of a five minute\nsession.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons BY 4.0 License ((CC-BY-4.0)[URL",
"### Contributions"
] |
[
146,
182,
280,
150,
95,
30,
88,
346,
99,
58,
4,
366,
20,
5,
33,
9,
50,
7,
8,
14,
6,
20,
5
] |
[
"passage: TAGS\n#task_categories-audio-classification #task_ids-keyword-spotting #annotations_creators-other #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-extended|speech_commands #language-English #license-cc-by-4.0 #spotlight #enriched #renumics #enhanced #audio #classification #extended #arxiv-1804.03209 #region-us \n### Dataset Summary\n\n\nData-centric AI principles have become increasingly important for real-world use cases. \n\nAt Renumics we believe that classical benchmark datasets and competitions should be extended to reflect this development.\n\n\nThis is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:\n\n\n1. Enable new researchers to quickly develop a profound understanding of the dataset.\n2. Popularize data-centric AI principles and tooling in the ML community.\n3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.\n\n\nThis dataset is an enriched version of the SpeechCommands Dataset.",
"passage: ### Explore the Dataset\n\n\nThere are two configurations of the dataset: Enrichment only provides the enrichments calculated by Renumics using the MIT AST transformer, while raw\\_and\\_enrichment\\_combined provides a concatenated dataset of the original speech commands and the enrichment.\n\n\nThe enrichments allow you to quickly gain insights into the dataset. The open source data curation tool Renumics Spotlight enables that with just a few lines of code:\n\n\nInstall datasets and Spotlight via pip:\n\n\n\n> \n> *Notice:* On Linux, non-Python dependency on libsndfile package must be installed manually. See Datasets - Installation for more information.\n> \n> \n> \n\n\nLoad the dataset from huggingface in your notebook and start exploring with a simple view:\n\n\nYou can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.\n\n\nAs a plug and play option, you can check out the Huggingface space: Huggingface Space for speech enrichment\n\n\nAlternatively, you can run the notebook URL locally.### SpeechCommands Dataset\n\n\nThis is a set of one-second .wav audio files, each containing a single spoken\nEnglish word or background noise. These words are from a small set of commands, and are spoken by a\nvariety of different speakers. This data set is designed to help train simple\nmachine learning models. It is covered in more detail at URL\n\n\nVersion 0.01 of the data set (configuration '\"v0.01\"') was released on August 3rd 2017 and contains\n64,727 audio files.\n\n\nVersion 0.02 of the data set (configuration '\"v0.02\"') was released on April 11th 2018 and\ncontains 105,829 audio files.### Supported Tasks and Leaderboards\n\n\n* 'keyword-spotting': the dataset can be used to train and evaluate keyword\nspotting systems. The task is to detect preregistered keywords by classifying utterances\ninto a predefined set of words. The task is usually performed on-device for the\nfast response time. Thus, accuracy, model size, and inference time are all crucial.### Languages\n\n\nThe language data in SpeechCommands is in English (BCP-47 'en').\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nExample of a core word ('\"label\"' is a word, '\"is\\_unknown\"' is 'False'):\n\n\nExample of an auxiliary word ('\"label\"' is a word, '\"is\\_unknown\"' is 'True')\n\n\nExample of background noise ('*silence*') class:",
"passage: ### Data Fields\n\n\n* 'file': relative audio filename inside the original archive.\n* 'audio': dictionary containing a relative audio filename,\na decoded audio array, and the sampling rate. Note that when accessing\nthe audio column: 'dataset[0][\"audio\"]' the audio is automatically decoded\nand resampled to 'dataset.features[\"audio\"].sampling\\_rate'.\nDecoding and resampling of a large number of audios might take a significant\namount of time. Thus, it is important to first query the sample index before\nthe '\"audio\"' column, i.e. 'dataset[0][\"audio\"]' should always be preferred\nover 'dataset[\"audio\"][0]'.\n* 'label': either word pronounced in an audio sample or background noise ('*silence*') class.\nNote that it's an integer value corresponding to the class name.\n* 'is\\_unknown': if a word is auxiliary. Equals to 'False' if a word is a core word or '*silence*',\n'True' if a word is an auxiliary word.\n* 'speaker\\_id': unique id of a speaker. Equals to 'None' if label is '*silence*'.\n* 'utterance\\_id': incremental id of a word utterance within the same speaker.### Data Splits\n\n\nThe dataset has two versions (= configurations): '\"v0.01\"' and '\"v0.02\"'. '\"v0.02\"'\ncontains more words (see section Source Data for more details).\n\n\n\nNote that in train and validation sets examples of '*silence*' class are longer than 1 second.\nYou can use the following code to sample 1-second examples from the longer ones:\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe primary goal of the dataset is to provide a way to build and test small\nmodels that can detect a single word from a set of target words and differentiate it\nfrom background noise or unrelated speech with as few false positives as possible.### Source Data"
] |
2d19557de12ecdb7d95629bc92a01d4dba1eba94
|
# Dataset Card for "train_1m-chinese-zhtw"
## Content
Contains roughly 1 million Chinese instruction records generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Example
```
{
    "instruction": "判斷給定的文章是否符合語法規則。如果不符合,請提供修改建議。下面是一篇文章的開頭: 為了探討這個主題,本文將提供一系列資料和例項,以證明這一觀點,",
    "input": "",
    "output": "這個開頭符合語法規則。"
}
```
### Fields:
```
instruction: the instruction text
input: the input (empty for all records in this dataset)
output: the output text
```
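A minimal loading sketch with the `datasets` library (repository id and field names taken from this card):

```python
from datasets import load_dataset

ds = load_dataset("erhwenkuo/train_1m-chinese-zhtw", split="train")

sample = ds[0]
print(sample["instruction"])
print(sample["output"])  # "input" is empty for every record in this dataset
```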
## Usage Restrictions
This dataset and any derivatives produced from it may only be used for research purposes; it must not be used commercially or for any other purpose that could harm society.
The dataset does not represent the position, interests, or views of any party and is unrelated to claims of any kind by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset.
|
erhwenkuo/train_1m-chinese-zhtw
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"alpaca",
"fine-tune",
"region:us"
] |
2023-09-27T12:53:42+00:00
|
{"language": ["zh"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 422333552, "num_examples": 917424}], "download_size": 290105331, "dataset_size": 422333552}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["alpaca", "fine-tune"]}
|
2023-09-27T13:46:09+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us
|
# Dataset Card for "train_1m-chinese-zhtw"
## Content
Contains roughly 1 million Chinese instruction records generated by the BELLE project.
## Example
### Fields:
## Usage Restrictions
This dataset and any derivatives produced from it may only be used for research purposes; it must not be used commercially or for any other purpose that could harm society.
The dataset does not represent the position, interests, or views of any party and is unrelated to claims of any kind by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset.
|
[
"# Dataset Card for \"train_1m-chinese-zhtw\"",
"## 內容\n包含約 100 萬條由 BELLE 專案產生的中文指令(instruction)資料。",
"## 範例",
"### 欄位:",
"## 使用限制\n\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us \n",
"# Dataset Card for \"train_1m-chinese-zhtw\"",
"## 內容\n包含約 100 萬條由 BELLE 專案產生的中文指令(instruction)資料。",
"## 範例",
"### 欄位:",
"## 使用限制\n\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
[
41,
17,
24,
4,
6,
82
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us \n# Dataset Card for \"train_1m-chinese-zhtw\"## 內容\n包含約 100 萬條由 BELLE 專案產生的中文指令(instruction)資料。## 範例### 欄位:## 使用限制\n\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
f0b38a44f01128f5b42b02d5b7412dd0656005e8
|
# Dataset Card for Common Voice Corpus 13.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:[email protected])
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
print(next(iter(cv_13)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_13), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_13, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_13, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
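For instance, a CTC fine-tuning run could be launched roughly as follows (script and flag names follow the `transformers` speech-recognition example; treat model choice and hyper-parameters as placeholders):

```bash
python run_speech_recognition_ctc.py \
    --model_name_or_path="facebook/wav2vec2-xls-r-300m" \
    --dataset_name="mozilla-foundation/common_voice_13_0" \
    --dataset_config_name="hi" \
    --text_column_name="sentence" \
    --output_dir="./wav2vec2-xls-r-common-voice-13-hi" \
    --num_train_epochs="15" \
    --per_device_train_batch_size="16" \
    --learning_rate="3e-4" \
    --do_train --do_eval
```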
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
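As a short illustration of the recommended access pattern (reusing the non-streaming `cv_13` object from the loading example above):

```python
sample = cv_13[0]        # query the sample index first ...
audio = sample["audio"]  # ... then access the "audio" column, so only this sample is decoded
print(sample["sentence"])
print(audio["sampling_rate"], audio["array"].shape)
```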
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, and train sets all contain data that has been reviewed and deemed of high quality, split into dev, test, and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_13_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
fmagot01/common_voice_13_0_dv_preprocessed
|
[
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] |
2023-09-27T13:10:46+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["10K<n<100K"], "ar": ["100K<n<1M"], "as": ["1K<n<10K"], "ast": ["1K<n<10K"], "az": ["n<1K"], "ba": ["100K<n<1M"], "bas": ["1K<n<10K"], "be": ["1M<n<10M"], "bg": ["10K<n<100K"], "bn": ["1M<n<10M"], "br": ["10K<n<100K"], "ca": ["1M<n<10M"], "ckb": ["100K<n<1M"], "cnh": ["1K<n<10K"], "cs": ["100K<n<1M"], "cv": ["10K<n<100K"], "cy": ["100K<n<1M"], "da": ["10K<n<100K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "dyu": ["n<1K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["1M<n<10M"], "es": ["1M<n<10M"], "et": ["10K<n<100K"], "eu": ["100K<n<1M"], "fa": ["100K<n<1M"], "fi": ["10K<n<100K"], "fr": ["100K<n<1M"], "fy-NL": ["100K<n<1M"], "ga-IE": ["10K<n<100K"], "gl": ["10K<n<100K"], "gn": ["1K<n<10K"], "ha": ["10K<n<100K"], "hi": ["10K<n<100K"], "hsb": ["1K<n<10K"], "hu": ["10K<n<100K"], "hy-AM": ["1K<n<10K"], "ia": ["10K<n<100K"], "id": ["10K<n<100K"], "ig": ["1K<n<10K"], "is": ["n<1K"], "it": ["100K<n<1M"], "ja": ["100K<n<1M"], "ka": ["10K<n<100K"], "kab": ["100K<n<1M"], "kk": ["1K<n<10K"], "kmr": ["10K<n<100K"], "ko": ["1K<n<10K"], "ky": ["10K<n<100K"], "lg": ["100K<n<1M"], "lo": ["n<1K"], "lt": ["10K<n<100K"], "lv": ["10K<n<100K"], "mdf": ["n<1K"], "mhr": ["100K<n<1M"], "mk": ["n<1K"], "ml": ["1K<n<10K"], "mn": ["10K<n<100K"], "mr": ["10K<n<100K"], "mrj": ["10K<n<100K"], "mt": ["10K<n<100K"], "myv": ["1K<n<10K"], "nan-tw": ["10K<n<100K"], "ne-NP": ["n<1K"], "nl": ["10K<n<100K"], "nn-NO": ["n<1K"], "oc": ["1K<n<10K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["100K<n<1M"], "quy": ["n<1K"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["10K<n<100K"], "ru": ["100K<n<1M"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sat": ["n<1K"], "sc": ["1K<n<10K"], "sk": ["10K<n<100K"], "skr": ["1K<n<10K"], "sl": ["10K<n<100K"], "sr": ["1K<n<10K"], "sv-SE": ["10K<n<100K"], "sw": ["100K<n<1M"], "ta": ["100K<n<1M"], "th": ["100K<n<1M"], "ti": ["n<1K"], "tig": ["n<1K"], "tk": ["1K<n<10K"], "tok": ["10K<n<100K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "tw": ["n<1K"], "ug": ["10K<n<100K"], "uk": ["10K<n<100K"], "ur": ["100K<n<1M"], "uz": ["100K<n<1M"], "vi": ["10K<n<100K"], "vot": ["n<1K"], "yo": ["1K<n<10K"], "yue": ["10K<n<100K"], "zh-CN": ["100K<n<1M"], "zh-HK": ["100K<n<1M"], "zh-TW": ["100K<n<1M"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 13.0", "language_bcp47": ["ab", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "dyu", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "gl", "gn", "ha", "hi", "hsb", "hu", "hy-AM", "ia", "id", "ig", "is", "it", "ja", "ka", "kab", "kk", "kmr", "ko", "ky", "lg", "lo", "lt", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mrj", "mt", "myv", "nan-tw", "ne-NP", "nl", "nn-NO", "oc", "or", "pa-IN", "pl", "pt", "quy", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "skr", "sl", "sr", "sv-SE", "sw", "ta", "th", "ti", "tig", "tk", "tok", "tr", "tt", "tw", "ug", "uk", "ur", "uz", "vi", "vot", "yo", "yue", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."}
|
2023-09-27T14:32:11+00:00
|
[
"1912.06670"
] |
[] |
TAGS
#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us
|
# Dataset Card for Common Voice Corpus 13.0
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: URL
- Point of Contact: Vaibhav Srivastav
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
Take a look at the Languages page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
Autoevaluate Leaderboard
### Languages
## How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).
### Local
### Streaming
To find out more about loading and preparing audio datasets, head over to URL
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.
## Dataset Structure
### Data Instances
A typical data point comprises the 'path' to the audio file and its 'sentence'.
Additional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.
### Data Fields
'client_id' ('string'): An id for which client (voice) made the recording
'path' ('string'): The path to the audio file
'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
'sentence' ('string'): The sentence the user was prompted to speak
'up_votes' ('int64'): How many upvotes the audio file has received from reviewers
'down_votes' ('int64'): How many downvotes the audio file has received from reviewers
'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')
'gender' ('string'): The gender of the speaker
'accent' ('string'): Accent of the speaker
'locale' ('string'): The locale of the speaker
'segment' ('string'): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Public Domain, CC-0
|
e48a57058d04179d507e3f5e69a1a8ab08713863
|
# Dataset Card for Common Voice Corpus 13.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:[email protected])
### Dataset Summary
The Common Voice dataset consists of unique MP3 recordings, each paired with a corresponding text file.
Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
print(next(iter(cv_13)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_13), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_13, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
dataloader = DataLoader(cv_13, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
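As a minimal sketch of the behaviour described above (the Hindi config and the 16 kHz target rate are just illustrative choices, not recommendations from this card):

```python
from datasets import Audio, load_dataset

cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")

# Query the sample index first, then the "audio" column, so only this file is decoded.
sample = cv_13[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)

# Optionally re-cast the column so samples are decoded at 16 kHz instead of the native 48 kHz.
cv_13 = cv_13.cast_column("audio", Audio(sampling_rate=16_000))
```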
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes confirming that it is of high quality.
The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that it is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits contain data that has been reviewed and deemed of high quality.
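As a minimal sketch (on the Hugging Face Hub the dev portion is typically exposed under the split name `validation`; availability of the `other` and `invalidated` splits can vary, so treat the names below as assumptions to verify for your language config):

```python
from datasets import load_dataset

train = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
dev = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="validation")
test = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="test")
```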
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_13_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
ssahir/common_voice_13_0_dv_preprocessed
|
[
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] |
2023-09-27T13:46:16+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["10K<n<100K"], "ar": ["100K<n<1M"], "as": ["1K<n<10K"], "ast": ["1K<n<10K"], "az": ["n<1K"], "ba": ["100K<n<1M"], "bas": ["1K<n<10K"], "be": ["1M<n<10M"], "bg": ["10K<n<100K"], "bn": ["1M<n<10M"], "br": ["10K<n<100K"], "ca": ["1M<n<10M"], "ckb": ["100K<n<1M"], "cnh": ["1K<n<10K"], "cs": ["100K<n<1M"], "cv": ["10K<n<100K"], "cy": ["100K<n<1M"], "da": ["10K<n<100K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "dyu": ["n<1K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["1M<n<10M"], "es": ["1M<n<10M"], "et": ["10K<n<100K"], "eu": ["100K<n<1M"], "fa": ["100K<n<1M"], "fi": ["10K<n<100K"], "fr": ["100K<n<1M"], "fy-NL": ["100K<n<1M"], "ga-IE": ["10K<n<100K"], "gl": ["10K<n<100K"], "gn": ["1K<n<10K"], "ha": ["10K<n<100K"], "hi": ["10K<n<100K"], "hsb": ["1K<n<10K"], "hu": ["10K<n<100K"], "hy-AM": ["1K<n<10K"], "ia": ["10K<n<100K"], "id": ["10K<n<100K"], "ig": ["1K<n<10K"], "is": ["n<1K"], "it": ["100K<n<1M"], "ja": ["100K<n<1M"], "ka": ["10K<n<100K"], "kab": ["100K<n<1M"], "kk": ["1K<n<10K"], "kmr": ["10K<n<100K"], "ko": ["1K<n<10K"], "ky": ["10K<n<100K"], "lg": ["100K<n<1M"], "lo": ["n<1K"], "lt": ["10K<n<100K"], "lv": ["10K<n<100K"], "mdf": ["n<1K"], "mhr": ["100K<n<1M"], "mk": ["n<1K"], "ml": ["1K<n<10K"], "mn": ["10K<n<100K"], "mr": ["10K<n<100K"], "mrj": ["10K<n<100K"], "mt": ["10K<n<100K"], "myv": ["1K<n<10K"], "nan-tw": ["10K<n<100K"], "ne-NP": ["n<1K"], "nl": ["10K<n<100K"], "nn-NO": ["n<1K"], "oc": ["1K<n<10K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["100K<n<1M"], "quy": ["n<1K"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["10K<n<100K"], "ru": ["100K<n<1M"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sat": ["n<1K"], "sc": ["1K<n<10K"], "sk": ["10K<n<100K"], "skr": ["1K<n<10K"], "sl": ["10K<n<100K"], "sr": ["1K<n<10K"], "sv-SE": ["10K<n<100K"], "sw": ["100K<n<1M"], "ta": ["100K<n<1M"], "th": ["100K<n<1M"], "ti": ["n<1K"], "tig": ["n<1K"], "tk": ["1K<n<10K"], "tok": ["10K<n<100K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "tw": ["n<1K"], "ug": ["10K<n<100K"], "uk": ["10K<n<100K"], "ur": ["100K<n<1M"], "uz": ["100K<n<1M"], "vi": ["10K<n<100K"], "vot": ["n<1K"], "yo": ["1K<n<10K"], "yue": ["10K<n<100K"], "zh-CN": ["100K<n<1M"], "zh-HK": ["100K<n<1M"], "zh-TW": ["100K<n<1M"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 13.0", "language_bcp47": ["ab", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "dyu", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "gl", "gn", "ha", "hi", "hsb", "hu", "hy-AM", "ia", "id", "ig", "is", "it", "ja", "ka", "kab", "kk", "kmr", "ko", "ky", "lg", "lo", "lt", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mrj", "mt", "myv", "nan-tw", "ne-NP", "nl", "nn-NO", "oc", "or", "pa-IN", "pl", "pt", "quy", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "skr", "sl", "sr", "sv-SE", "sw", "ta", "th", "ti", "tig", "tk", "tok", "tr", "tt", "tw", "ug", "uk", "ur", "uz", "vi", "vot", "yo", "yue", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."}
|
2023-09-27T13:47:43+00:00
|
[
"1912.06670"
] |
[] |
TAGS
#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us
|
# Dataset Card for Common Voice Corpus 13.0
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: URL
- Point of Contact: Vaibhav Srivastav
### Dataset Summary
The Common Voice dataset consists of unique MP3 recordings, each paired with a corresponding text file.
Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
Take a look at the Languages page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
Autoevaluate Leaderboard
### Languages
## How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).
### Local
### Streaming
To find out more about loading and preparing audio datasets, head over to URL
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.
## Dataset Structure
### Data Instances
A typical data point comprises the 'path' to the audio file and its 'sentence'.
Additional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.
### Data Fields
'client_id' ('string'): An id for which client (voice) made the recording
'path' ('string'): The path to the audio file
'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
'sentence' ('string'): The sentence the user was prompted to speak
'up_votes' ('int64'): How many upvotes the audio file has received from reviewers
'down_votes' ('int64'): How many downvotes the audio file has received from reviewers
'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')
'gender' ('string'): The gender of the speaker
'accent' ('string'): Accent of the speaker
'locale' ('string'): The locale of the speaker
'segment' ('string'): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes confirming that it is of high quality.
The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that it is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits contain data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Public Domain, CC-0
|
[
"# Dataset Card for Common Voice Corpus 13.0",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav",
"### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.",
"### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Autoevaluate Leaderboard",
"### Languages",
"## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).",
"### Local",
"### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field",
"### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPublic Domain, CC-0"
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us \n",
"# Dataset Card for Common Voice Corpus 13.0",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav",
"### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.",
"### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Autoevaluate Leaderboard",
"### Languages",
"## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).",
"### Local",
"### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field",
"### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPublic Domain, CC-0"
] |
[
87,
10,
124,
34,
108,
32,
4,
190,
3,
22,
36,
6,
77,
378,
145,
233,
5,
7,
4,
10,
10,
5,
5,
9,
42,
8,
41,
8,
7,
5,
6,
11
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us \n# Dataset Card for Common Voice Corpus 13.0## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Autoevaluate Leaderboard### Languages",
"passage: ## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).### Local### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.## Dataset Structure### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"passage: ### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.## Considerations for Using the Data"
] |
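The two passages above walk through downloading, streaming, and preprocessing Common Voice 13; the snippet below is a minimal sketch of those steps with the `datasets` library, assuming the corpus lives at `mozilla-foundation/common_voice_13_0` (a gated repository, so accepting its terms on the Hub and logging in may be required).

```python
from datasets import load_dataset

# Repo id assumed from context; the dataset is gated, so you may need to be
# logged in (huggingface-cli login) and to have accepted its terms of use.
cv_hi = load_dataset(
    "mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True
)

def normalise(example):
    """Preprocessing advised above: drop stray quotation marks and make sure
    the sentence ends in punctuation."""
    sentence = example["sentence"].strip().strip('"“”')
    if sentence and sentence[-1] not in ".?!":
        sentence += "."
    example["sentence"] = sentence
    return example

cv_hi = cv_hi.map(normalise)
print(next(iter(cv_hi))["sentence"])
```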
cdc8633330d373a29b4293e0f79546e4bdbf2a43
|
# The OE Dataset!

A dataset consisting of synthetic and real images annotated with instance segmentation masks for testing sim-to-real model performance for robotic manipulation.
### Dataset Summary
The OE Dataset is a collection of synthetic and real images of 3D-printed OE logos. Each image is annotated with instance segmentation masks. The dataset explicitly marks synthetic samples based on their creation method (either photorealistic synthetic samples or domain randomized samples) to facilitate sim-to-real performance tests on different synthetic datasets.
### Supported Tasks and Leaderboards
The dataset supports tasks such as semantic segmentation, instance segmentation, object detection, image classification, and testing sim-to-real transfer.
## Dataset Structure
### Data Instances
The instances of the dataset are 1920x1080x3 images in PNG format. The annotations are 1920x1080x4 PNG images representing the instance segmentation masks, where each instance is associated with a unique color.
### Data Fields
The data fields are:
1) 'image': 1920x1080x3 PNG image
2) 'mask': 1920x1080x4 PNG image
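Since every instance is encoded as a unique colour in the RGBA mask, per-instance binary masks can be recovered with a few lines of NumPy. A minimal sketch, assuming the masks are read as PIL images (the exact feature types are not spelled out in this card):
```python
import numpy as np
from PIL import Image

def split_instances(mask_img: Image.Image) -> list[np.ndarray]:
    """Turn a 1920x1080x4 instance mask (one unique colour per instance)
    into a list of boolean masks, one per instance."""
    mask = np.array(mask_img)                      # shape (1080, 1920, 4)
    colours = np.unique(mask.reshape(-1, 4), axis=0)
    instances = []
    for colour in colours:
        if colour[3] == 0:                         # assumed: transparent = background
            continue
        instances.append(np.all(mask == colour, axis=-1))
    return instances

# Example: masks = split_instances(Image.open("some_mask.png"))
```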
### Data Splits
The dataset contains training and validation splits for all image collections (real images, photorealistic synthetic images, domain randomized synthetic images) to facilitate cross-domain testing.
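The card does not spell out the exact configuration or split identifiers, so the names below are placeholders; a sketch of loading one image collection with the `datasets` library:
```python
from datasets import load_dataset

# "real" and "train" are placeholder config/split names; check the repository
# files or the dataset viewer for the identifiers actually used.
ds = load_dataset("ABC-iRobotics/oe_dataset", "real", split="train")
sample = ds[0]
image, mask = sample["image"], sample["mask"]   # fields described above
```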
## Dataset Creation
### Curation Rationale
The dataset was created to provide a testbed for examining the effects of fine-tuning instance segmentation models on synthetic data (using various sim-to-real approaches).
### Source Data
The data is generated using two methods:
- Real images are recorded using a robotic setup and automatically annotated using the method proposed in [[1]](https://ieeexplore.ieee.org/abstract/document/9922852)
- Synthetic samples are generated using Blender and annotated using the [Blender Annotation Tool (BAT)](https://github.com/ABC-iRobotics/blender_annotation_tool)
### Citation Information
OE Dataset:
```bibtex
@ARTICLE{10145828,
author={Károly, Artúr István and Tirczka, Sebestyén and Gao, Huijun and Rudas, Imre J. and Galambos, Péter},
journal={IEEE Transactions on Cybernetics},
title={Increasing the Robustness of Deep Learning Models for Object Segmentation: A Framework for Blending Automatically Annotated Real and Synthetic Data},
year={2023},
volume={},
number={},
pages={1-14},
doi={10.1109/TCYB.2023.3276485}}
```
Automatically annotating real images with instance segmentation masks using a robotic arm:
```bibtex
@INPROCEEDINGS{9922852,
author={Károly, Artúr I. and Károly, Ármin and Galambos, Péter},
booktitle={2022 IEEE 10th Jubilee International Conference on Computational Cybernetics and Cyber-Medical Systems (ICCC)},
title={Automatic Generation and Annotation of Object Segmentation Datasets Using Robotic Arm},
year={2022},
volume={},
number={},
pages={000063-000068},
doi={10.1109/ICCC202255925.2022.9922852}}
```
Synthetic dataset generation and annotation method:
```bibtex
@INPROCEEDINGS{9780790,
author={Károly, Artúr I. and Galambos, Péter},
booktitle={2022 IEEE 20th Jubilee World Symposium on Applied Machine Intelligence and Informatics (SAMI)},
title={Automated Dataset Generation with Blender for Deep Learning-based Object Segmentation},
year={2022},
volume={},
number={},
pages={000329-000334},
doi={10.1109/SAMI54271.2022.9780790}}
```
Other related publications:
```bibtex
@INPROCEEDINGS{10029564,
author={Károly, Artúr I. and Tirczka, Sebestyén and Piricz, Tamás and Galambos, Péter},
booktitle={2022 IEEE 22nd International Symposium on Computational Intelligence and Informatics and 8th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Science and Robotics (CINTI-MACRo)},
title={Robotic Manipulation of Pathological Slides Powered by Deep Learning and Classical Image Processing},
year={2022},
volume={},
number={},
pages={000387-000392},
doi={10.1109/CINTI-MACRo57952.2022.10029564}}
```
```bibtex
@Article{app13010525,
AUTHOR = {Károly, Artúr István and Galambos, Péter},
TITLE = {Task-Specific Grasp Planning for Robotic Assembly by Fine-Tuning GQCNNs on Automatically Generated Synthetic Data},
JOURNAL = {Applied Sciences},
VOLUME = {13},
YEAR = {2023},
NUMBER = {1},
ARTICLE-NUMBER = {525},
URL = {https://www.mdpi.com/2076-3417/13/1/525},
ISSN = {2076-3417},
ABSTRACT = {In modern robot applications, there is often a need to manipulate previously unknown objects in an unstructured environment. The field of grasp-planning deals with the task of finding grasps for a given object that can be successfully executed with a robot. The predicted grasps can be evaluated according to certain criteria, such as analytical metrics, similarity to human-provided grasps, or the success rate of physical trials. The quality of a grasp also depends on the task which will be carried out after the grasping is completed. Current task-specific grasp planning approaches mostly use probabilistic methods, which utilize categorical task encoding. We argue that categorical task encoding may not be suitable for complex assembly tasks. This paper proposes a transfer-learning-based approach for task-specific grasp planning for robotic assembly. The proposed method is based on an automated pipeline that quickly and automatically generates a small-scale task-specific synthetic grasp dataset using Graspit! and Blender. This dataset is utilized to fine-tune pre-trained grasp quality convolutional neural networks (GQCNNs). The aim is to train GQCNNs that can predict grasps which do not result in a collision when placing the objects. Consequently, this paper focuses on the geometric feasibility of the predicted grasps and does not consider the dynamic effects. The fine-tuned GQCNNs are evaluated using the Moveit! Task Constructor motion planning framework, which enables the automated inspection of whether the motion planning for a task is feasible given a predicted grasp and, if not, which part of the task is responsible for the failure. Our results suggest that fine-tuning GQCNN models can result in superior grasp-planning performance (0.9 success rate compared to 0.65) in the context of an assembly task. Our method can be used to rapidly attain new task-specific grasp policies for flexible robotic assembly applications.},
DOI = {10.3390/app13010525}
}
```
|
ABC-iRobotics/oe_dataset
|
[
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_categories:robotics",
"task_ids:instance-segmentation",
"task_ids:semantic-segmentation",
"annotations_creators:machine-generated",
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"vision",
"image segmentation",
"instance segmentation",
"object detection",
"synthetic",
"sim-to-real",
"region:us"
] |
2023-09-27T13:58:22+00:00
|
{"annotations_creators": ["machine-generated"], "language": ["en"], "license": "gpl-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection", "image-segmentation", "robotics"], "task_ids": ["instance-segmentation", "semantic-segmentation"], "pretty_name": "OE Dataset", "tags": ["vision", "image segmentation", "instance segmentation", "object detection", "synthetic", "sim-to-real"]}
|
2023-10-05T18:25:48+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-object-detection #task_categories-image-segmentation #task_categories-robotics #task_ids-instance-segmentation #task_ids-semantic-segmentation #annotations_creators-machine-generated #size_categories-1K<n<10K #language-English #license-gpl-3.0 #vision #image segmentation #instance segmentation #object detection #synthetic #sim-to-real #region-us
|
# The OE Dataset!
!OE demo
A dataset consisting of synthetic and real images annotated with instance segmentation masks for testing sim-to-real model performance for robotic manipulation.
### Dataset Summary
The OE Dataset is a collection of synthetic and real images of 3D-printed OE logos. Each image is annotated with instance segmentation masks. The dataset explicitly marks synthetic samples based on their creation method (either photorealistic synthetic samples or domain randomized samples) to facilitate sim-to-real performance tests on different synthetic datasets.
### Supported Tasks and Leaderboards
The dataset supports tasks such as semantic segmentation, instance segmentation, object detection, image classification, and testing sim-to-real transfer.
## Dataset Structure
### Data Instances
The instances of the dataset are 1920x1080x3 images in PNG format. The annotations are 1920x1080x4 PNG images representing the instance segmentation masks, where each instance is associated with a unique color.
### Data Fields
The data fields are:
1) 'image': 1920x1080x3 PNG image
2) 'mask': 1920x1080x4 PNG image
### Data Splits
The dataset contains training and validation splits for all image collections (real images, photorealistic synthetic images, domain randomized synthetic images) to facilitate cross-domain testing.
## Dataset Creation
### Curation Rationale
The dataset was created to provide a testbed for examining the effects of fine-tuning instance segmentation models on synthetic data (using various sim-to-real approaches).
### Source Data
The data is generated using two methods:
- Real images are recorded using a robotic setup and automatically annotated using the method proposed in [[1]](URL
- Synthetic samples are generated using Blender and annotated using the Blender Annotation Tool (BAT)
OE Dataset:
Automatically annotating real images with instance segmentation masks using a robotic arm:
Synthetic dataset generation and annotation method:
Other related publications:
|
[
"# The OE Dataset!\n\n!OE demo\n\nA dataset consisting of synthetic and real images annotated with instance segmentation masks for testing sim-to-real model performance for robotic manipulation.",
"### Dataset Summary\n\nThe OE Dataset is a collection of synthetic and real images of 3D-printed OE logos. Each image is annotated with instance segmentation masks. The dataset explicitly marks synthetic samples based on their creation method (either photorealistic synthetic samples or domain randomized samples) to facilitate sim-to-real performance tests on different synthetic datasets.",
"### Supported Tasks and Leaderboards\n\nThe dataset supports tasks such as semantic segmentation, instance segmentation, object detection, image classification, and testing sim-to-real transfer.",
"## Dataset Structure",
"### Data Instances\n\nThe instances of the dataset are 1920x1080x3 images in PNG format. The annotations are 1920x1080x4 PNG images representing the instance segmentation masks, where each instance is associated with a unique color.",
"### Data Fields\n\nThe data fields are:\n\n1) 'image': 1920x1080x3 PNG image\n2) 'mask': 1920x1080x4 PNG image",
"### Data Splits\n\nThe dataset contains training and validation splits for all image collections (real images, photorealistic synthetic images, domain randomized synthetic images) to facilitate cross-domain testing.",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was created to provide a testbed for examining the effects of fine-tuning instance segmentation models on synthetic data (using various sim-to-real approaches).",
"### Source Data\n\nThe data is generated using two methods:\n\n- Real images are recorded using a robotic setup and automatically annotated using the method proposed in [[1]](URL\n- Synthetic samples are generated using Blender and annotated using the Blender Annotation Tool (BAT)\n\n\n\nOE Dataset:\n\n\nAutomatically annotating real images with instance segmentation masks using a robotic arm:\n\n\nSynthetic dataset generation and annotation method:\n\n\nOther related publications:"
] |
[
"TAGS\n#task_categories-object-detection #task_categories-image-segmentation #task_categories-robotics #task_ids-instance-segmentation #task_ids-semantic-segmentation #annotations_creators-machine-generated #size_categories-1K<n<10K #language-English #license-gpl-3.0 #vision #image segmentation #instance segmentation #object detection #synthetic #sim-to-real #region-us \n",
"# The OE Dataset!\n\n!OE demo\n\nA dataset consisting of synthetic and real images annotated with instance segmentation masks for testing sim-to-real model performance for robotic manipulation.",
"### Dataset Summary\n\nThe OE Dataset is a collection of synthetic and real images of 3D-printed OE logos. Each image is annotated with instance segmentation masks. The dataset explicitly marks synthetic samples based on their creation method (either photorealistic synthetic samples or domain randomized samples) to facilitate sim-to-real performance tests on different synthetic datasets.",
"### Supported Tasks and Leaderboards\n\nThe dataset supports tasks such as semantic segmentation, instance segmentation, object detection, image classification, and testing sim-to-real transfer.",
"## Dataset Structure",
"### Data Instances\n\nThe instances of the dataset are 1920x1080x3 images in PNG format. The annotations are 1920x1080x4 PNG images representing the instance segmentation masks, where each instance is associated with a unique color.",
"### Data Fields\n\nThe data fields are:\n\n1) 'image': 1920x1080x3 PNG image\n2) 'mask': 1920x1080x4 PNG image",
"### Data Splits\n\nThe dataset contains training and validation splits for all image collections (real images, photorealistic synthetic images, domain randomized synthetic images) to facilitate cross-domain testing.",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was created to provide a testbed for examining the effects of fine-tuning instance segmentation models on synthetic data (using various sim-to-real approaches).",
"### Source Data\n\nThe data is generated using two methods:\n\n- Real images are recorded using a robotic setup and automatically annotated using the method proposed in [[1]](URL\n- Synthetic samples are generated using Blender and annotated using the Blender Annotation Tool (BAT)\n\n\n\nOE Dataset:\n\n\nAutomatically annotating real images with instance segmentation masks using a robotic arm:\n\n\nSynthetic dataset generation and annotation method:\n\n\nOther related publications:"
] |
[
124,
47,
99,
45,
6,
57,
38,
50,
5,
48,
106
] |
[
"passage: TAGS\n#task_categories-object-detection #task_categories-image-segmentation #task_categories-robotics #task_ids-instance-segmentation #task_ids-semantic-segmentation #annotations_creators-machine-generated #size_categories-1K<n<10K #language-English #license-gpl-3.0 #vision #image segmentation #instance segmentation #object detection #synthetic #sim-to-real #region-us \n# The OE Dataset!\n\n!OE demo\n\nA dataset consisting of synthetic and real images annotated with instance segmentation masks for testing sim-to-real model performance for robotic manipulation.### Dataset Summary\n\nThe OE Dataset is a collection of synthetic and real images of 3D-printed OE logos. Each image is annotated with instance segmentation masks. The dataset explicitly marks synthetic samples based on their creation method (either photorealistic synthetic samples or domain randomized samples) to facilitate sim-to-real performance tests on different synthetic datasets.### Supported Tasks and Leaderboards\n\nThe dataset supports tasks such as semantic segmentation, instance segmentation, object detection, image classification, and testing sim-to-real transfer.## Dataset Structure### Data Instances\n\nThe instances of the dataset are 1920x1080x3 images in PNG format. The annotations are 1920x1080x4 PNG images representing the instance segmentation masks, where each instance is associated with a unique color.### Data Fields\n\nThe data fields are:\n\n1) 'image': 1920x1080x3 PNG image\n2) 'mask': 1920x1080x4 PNG image### Data Splits\n\nThe dataset contains training and validation splits for all image collections (real images, photorealistic synthetic images, domain randomized synthetic images) to facilitate cross-domain testing.## Dataset Creation"
] |
6270b370cef7b6fca5dfde8c04ffdbc638dce213
|
# Dataset Card for "generated_chat_0.4m-chinese-zhtw"
## 內容
包含約 40 萬條由 [BELLE](https://github.com/LianjiaTech/BELLE) 專案產生的個人化角色對話數據,包含角色介紹。
注意:此資料集是由ChatGPT產生的,未經嚴格校驗,題目或解題過程可能包含錯誤。使用過程中請注意這一點。
## 範例
```
{
"instruction": "於以下角色訊息完成一段對話\nAlice是一位年輕的律師,她正在為一個涉及瀕危物種保護的案件做準備。Bob是一位當地的居民,他所在的社區經常受到該瀕危物種的干擾。\n",
"input": "",
"output": "\nAlice: 你好Bob,我是瀕危物種保護案的律師,很高興見到你。我們前幾天發給你的文件你看到了嗎?\nBob: 看到了,你們說要禁止我們在該物種的常住地採集蘑菇,這很不公平。\nAlice:保護瀕危物種對整個生態系統都很重要,如果我們的舉動損害了這個物種的生存,那麼整個生態系統都將遭受損失。 \nBob: 我理解您的立場,但是我們一直以來都依靠這個物種來維持我們的經濟生活,現在我們要怎麼辦? \nAlice: 我理解這對當地社區的影響,但是現在我們必須找到一種既能保護這個物種,也同時使您的經濟生活不受影響的解決方案。你有任何想法嗎?我們可以一起來想辦法。\n"
}
```
### 欄位
```
instruction: 指令
input: 輸入(此資料集均為空)
output: 輸出
```
## 限制和使用限制
我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。
由於數據是由 *ChatGPT* 產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。
本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。
***
# Generated_Chat_0.4M
## Contents
Includes approx. 400k Personalized Character Dialogue generated by BELLE, with character Introduction.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The quizzes or answers might contain errors. Please take this in mind when using this subset.
## Sample
```
{
"instruction": "於以下角色訊息完成一段對話\nAlice是一位年輕的律師,她正在為一個涉及瀕危物種保護的案件做準備。Bob是一位當地的居民,他所在的社區經常受到該瀕危物種的干擾。\n",
"input": "",
"output": "\nAlice: 你好Bob,我是瀕危物種保護案的律師,很高興見到你。我們前幾天發給你的文件你看到了嗎?\nBob: 看到了,你們說要禁止我們在該物種的常住地採集蘑菇,這很不公平。\nAlice:保護瀕危物種對整個生態系統都很重要,如果我們的舉動損害了這個物種的生存,那麼整個生態系統都將遭受損失。 \nBob: 我理解您的立場,但是我們一直以來都依靠這個物種來維持我們的經濟生活,現在我們要怎麼辦? \nAlice: 我理解這對當地社區的影響,但是現在我們必須找到一種既能保護這個物種,也同時使您的經濟生活不受影響的解決方案。你有任何想法嗎?我們可以一起來想辦法。\n"
}
```
### Schema
```
instruction: 指令
input: 輸入(此資料集均為空)
output: 輸出
```
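Because `input` is always empty, each row maps directly onto a single-turn (prompt, completion) pair; a minimal sketch of loading the data and reshaping it for supervised fine-tuning:
```python
from datasets import load_dataset

ds = load_dataset("erhwenkuo/generated_chat_0.4m-chinese-zhtw", split="train")

def to_pair(example):
    # "input" is empty throughout this dataset, so the role description plus
    # dialogue instruction serves as the prompt and "output" as the target reply.
    return {"prompt": example["instruction"], "completion": example["output"]}

sft_ds = ds.map(to_pair, remove_columns=["instruction", "input", "output"])
print(sft_ds[0]["prompt"][:80])
```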
## Limitation and Usage Limits
We require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project.
|
erhwenkuo/generated_chat_0.4m-chinese-zhtw
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"region:us"
] |
2023-09-27T14:03:04+00:00
|
{"language": ["zh"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 505991941, "num_examples": 396004}], "download_size": 287027695, "dataset_size": 505991941}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T14:45:28+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #region-us
|
# Dataset Card for "generated_chat_0.4m-chinese-zhtw"
## 內容
包含約 40 萬條由 BELLE 專案產生的個人化角色對話數據,包含角色介紹。
注意:此資料集是由ChatGPT產生的,未經嚴格校驗,題目或解題過程可能包含錯誤。使用過程中請注意這一點。
## 範例
### 欄位
## 限制和使用限制
我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。
由於數據是由 *ChatGPT* 產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。
本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。
*
# Generated_Chat_0.4M
## Contents
Includes approx. 400k Personalized Character Dialogue generated by BELLE, with character Introduction.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The quizzes or answers might contain errors. Please take this in mind when using this subset.
## Sample
### Schema
## Limitation and Usage Limits
We require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project.
|
[
"# Dataset Card for \"generated_chat_0.4m-chinese-zhtw\"",
"## 內容\n包含約 40 萬條由 BELLE 專案產生的個人化角色對話數據,包含角色介紹。\n\n注意:此資料集是由ChatGPT產生的,未經嚴格校驗,題目或解題過程可能包含錯誤。使用過程中請注意這一點。",
"## 範例",
"### 欄位",
"## 限制和使用限制\n我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n\n由於數據是由 *ChatGPT* 產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。\n\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。\n\n\n*",
"# Generated_Chat_0.4M",
"## Contents\nIncludes approx. 400k Personalized Character Dialogue generated by BELLE, with character Introduction.\n\nNote: this subset was generated by *ChatGPT* and was not strictly verified. The quizzes or answers might contain errors. Please take this in mind when using this subset.",
"## Sample",
"### Schema",
"## Limitation and Usage Limits\nWe require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.\n\nSince this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.\n\nThis dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #region-us \n",
"# Dataset Card for \"generated_chat_0.4m-chinese-zhtw\"",
"## 內容\n包含約 40 萬條由 BELLE 專案產生的個人化角色對話數據,包含角色介紹。\n\n注意:此資料集是由ChatGPT產生的,未經嚴格校驗,題目或解題過程可能包含錯誤。使用過程中請注意這一點。",
"## 範例",
"### 欄位",
"## 限制和使用限制\n我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n\n由於數據是由 *ChatGPT* 產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。\n\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。\n\n\n*",
"# Generated_Chat_0.4M",
"## Contents\nIncludes approx. 400k Personalized Character Dialogue generated by BELLE, with character Introduction.\n\nNote: this subset was generated by *ChatGPT* and was not strictly verified. The quizzes or answers might contain errors. Please take this in mind when using this subset.",
"## Sample",
"### Schema",
"## Limitation and Usage Limits\nWe require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.\n\nSince this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.\n\nThis dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project."
] |
[
34,
20,
62,
4,
5,
134,
8,
73,
3,
4,
158
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #region-us \n# Dataset Card for \"generated_chat_0.4m-chinese-zhtw\"## 內容\n包含約 40 萬條由 BELLE 專案產生的個人化角色對話數據,包含角色介紹。\n\n注意:此資料集是由ChatGPT產生的,未經嚴格校驗,題目或解題過程可能包含錯誤。使用過程中請注意這一點。## 範例### 欄位## 限制和使用限制\n我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n\n由於數據是由 *ChatGPT* 產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。\n\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。\n\n\n*# Generated_Chat_0.4M## Contents\nIncludes approx. 400k Personalized Character Dialogue generated by BELLE, with character Introduction.\n\nNote: this subset was generated by *ChatGPT* and was not strictly verified. The quizzes or answers might contain errors. Please take this in mind when using this subset.## Sample### Schema## Limitation and Usage Limits\nWe require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.\n\nSince this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.\n\nThis dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project."
] |
a0d5683392256f580797000a233e3cb9beb4bad9
|
# Dataset Card for "multiturn_chat_0.8m-chinese-zhtw"
## 內容
包含約 80 萬條由 [BELLE](https://github.com/LianjiaTech/BELLE) 專案所產生的 *Human* 與 *Assistant* 的多輪對話。
注意:此資料集是由 ChatGPT 產生的,未經嚴格校驗,內容可能包含錯誤。使用過程中請注意這一點。
**instruction** 中包含多輪對話的上文內容,以 *Human:* 和 *Assistant:* 區分,**output** 中包含當前 *Assistant* 角色的回答。
## 範例
```
{
"instruction": "Human:請根據題材「健身房」,寫一篇800字的文章。 \nAssistant:好的,以下是由我生成的800字「健身房」主題文章:\n在現代社會,人們越來越珍惜健康和身材。健身房就是一個人們可以鍛鍊身體的場所,也是一個社交的場所。在健身房,人們可以透過器材訓練,有氧運動和瑜珈等方式來達到健身的目的。 \n對於喜歡肌力訓練的人來說,選擇健身房的一個原因是因為那裡的器材更加完備,可以滿足訓練的需求。當然,不只是體力上的訓練,健身房也是鍛鍊毅力和耐力的地方。因為每一次運動的過程都需要堅持和勇氣。 \n而對於那些想要減肥的人來說,健身房也是非常重要的。有氧運動例如慢跑,跳操,游泳等都是非常有效的減肥方式。而且因為健身房的壓力比較大,所以很容易讓人保持運動的熱情和動力。 \n有些人則會選擇瑜珈館作為健身的場所,因為它們通常比較安靜,能夠讓人們放鬆。瑜珈有紓解壓力,提升柔軟度,強化肌肉的好處,而且更適合不喜歡高強度訓練的人。 \n總的來說,健身房和瑜珈館是現代社會不可或缺的健身場所。無論您是肌力訓練愛好者,減重追求者還是想要放鬆的人,您都可以在這裡找到適合自己的運動方式。 \nHuman:請從上文摘取三個句子進行評分。 \nAssistant:",
"input": "",
"output": "好的。以下是我摘取的三個句子:\n1. \"在健身房,人們可以透過器材訓練,有氧運動和瑜珈等方式來達到健身的目的。 \"\n2. \"而對於想要減肥的人來說,健身房也是非常重要的。 \"\n3. \"總的來說,健身房和瑜珈館是現代社會不可或缺的健身場所。 \"\n網頁端評分介面還沒準備好,不能在這裡為你進行打分,請見諒。"
}
```
### 欄位
```
instruction: 指令
input: 輸入(此資料集均為空)
output: 輸出
```
## 限制和使用限制
我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。
由於數據是由*ChatGPT*產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。
本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。
***
# Multiturn Chat 0.8M
## Contents
Includes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please take this in mind when using this subset.
**instruction** contains history dialog context, distinguishable by *Human:* and *Assistant:*, **output** contains the current reply by *assistant*.
## Sample
```
{
"instruction": "Human:請根據題材「健身房」,寫一篇800字的文章。 \nAssistant:好的,以下是由我生成的800字「健身房」主題文章:\n在現代社會,人們越來越珍惜健康和身材。健身房就是一個人們可以鍛鍊身體的場所,也是一個社交的場所。在健身房,人們可以透過器材訓練,有氧運動和瑜珈等方式來達到健身的目的。 \n對於喜歡肌力訓練的人來說,選擇健身房的一個原因是因為那裡的器材更加完備,可以滿足訓練的需求。當然,不只是體力上的訓練,健身房也是鍛鍊毅力和耐力的地方。因為每一次運動的過程都需要堅持和勇氣。 \n而對於那些想要減肥的人來說,健身房也是非常重要的。有氧運動例如慢跑,跳操,游泳等都是非常有效的減肥方式。而且因為健身房的壓力比較大,所以很容易讓人保持運動的熱情和動力。 \n有些人則會選擇瑜珈館作為健身的場所,因為它們通常比較安靜,能夠讓人們放鬆。瑜珈有紓解壓力,提升柔軟度,強化肌肉的好處,而且更適合不喜歡高強度訓練的人。 \n總的來說,健身房和瑜珈館是現代社會不可或缺的健身場所。無論您是肌力訓練愛好者,減重追求者還是想要放鬆的人,您都可以在這裡找到適合自己的運動方式。 \nHuman:請從上文摘取三個句子進行評分。 \nAssistant:",
"input": "",
"output": "好的。以下是我摘取的三個句子:\n1. \"在健身房,人們可以透過器材訓練,有氧運動和瑜珈等方式來達到健身的目的。 \"\n2. \"而對於想要減肥的人來說,健身房也是非常重要的。 \"\n3. \"總的來說,健身房和瑜珈館是現代社會不可或缺的健身場所。 \"\n網頁端評分介面還沒準備好,不能在這裡為你進行打分,請見諒。"
}
```
### Schema
```
instruction: 指令
input: 輸入(此資料集均為空)
output: 輸出
```
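Since the dialogue history is packed into `instruction` with `Human:` / `Assistant:` markers and `output` holds the current assistant reply, the turns can be recovered with simple string splitting. A rough sketch (the `to_turns` helper is illustrative, not part of the dataset):
```python
import re
from datasets import load_dataset

ds = load_dataset("erhwenkuo/multiturn_chat_0.8m-chinese-zhtw", split="train")

def to_turns(example):
    """Rebuild (role, content) turns from the packed instruction + output."""
    text = example["instruction"] + example["output"]
    parts = re.split(r"(Human:|Assistant:)", text)
    turns, role = [], None
    for part in parts:
        if part in ("Human:", "Assistant:"):
            role = part.rstrip(":").lower()
        elif role is not None and part.strip():
            turns.append({"role": role, "content": part.strip()})
    return {"turns": turns}

print(to_turns(ds[0])["turns"][:2])
```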
## Limitation and Usage Limits
We require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project.
|
erhwenkuo/multiturn_chat_0.8m-chinese-zhtw
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"alpaca",
"fine-tune",
"region:us"
] |
2023-09-27T14:07:30+00:00
|
{"language": ["zh"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 956384448, "num_examples": 831036}], "download_size": 588338923, "dataset_size": 956384448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["alpaca", "fine-tune"]}
|
2023-09-27T14:41:32+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us
|
# Dataset Card for "multiturn_chat_0.8m-chinese-zhtw"
## 內容
包含約 80 萬條由 BELLE 專案所產生的 *Human* 與 *Assistant* 的多輪對話。
注意:此資料集是由 ChatGPT 產生的,未經嚴格校驗,內容可能包含錯誤。使用過程中請注意這一點。
instruction 中包含多輪對話的上文內容,以 *Human:* 和 *Assistant:* 區分,output 中包含當前 *Assistant* 角色的回答。
## 範例
### 欄位
## 限制和使用限制
我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。
由於數據是由*ChatGPT*產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。
本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。
*
# Multiturn Chat 0.8M
## Contents
Includes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please take this in mind when using this subset.
instruction contains history dialog context, distinguishable by *Human:* and *Assistant:*, output contains the current reply by *assistant*.
## Sample
### Schema
## Limitation and Usage Limits
We require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project.
|
[
"# Dataset Card for \"multiturn_chat_0.8m-chinese-zhtw\"",
"## 內容\n\n包含約 80 萬條由 BELLE 專案所產生的 *Human* 與 *Assistant* 的多輪對話。\n\n注意:此資料集是由 ChatGPT 產生的,未經嚴格校驗,內容可能包含錯誤。使用過程中請注意這一點。\n\ninstruction 中包含多輪對話的上文內容,以 *Human:* 和 *Assistant:* 區分,output 中包含當前 *Assistant* 角色的回答。",
"## 範例",
"### 欄位",
"## 限制和使用限制\n我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n\n由於數據是由*ChatGPT*產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。\n\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。\n\n\n\n*",
"# Multiturn Chat 0.8M",
"## Contents\nIncludes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*.\n\nNote: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please take this in mind when using this subset.\n\ninstruction contains history dialog context, distinguishable by *Human:* and *Assistant:*, output contains the current reply by *assistant*.",
"## Sample",
"### Schema",
"## Limitation and Usage Limits\nWe require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.\n\nSince this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.\n\nThis dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us \n",
"# Dataset Card for \"multiturn_chat_0.8m-chinese-zhtw\"",
"## 內容\n\n包含約 80 萬條由 BELLE 專案所產生的 *Human* 與 *Assistant* 的多輪對話。\n\n注意:此資料集是由 ChatGPT 產生的,未經嚴格校驗,內容可能包含錯誤。使用過程中請注意這一點。\n\ninstruction 中包含多輪對話的上文內容,以 *Human:* 和 *Assistant:* 區分,output 中包含當前 *Assistant* 角色的回答。",
"## 範例",
"### 欄位",
"## 限制和使用限制\n我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n\n由於數據是由*ChatGPT*產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。\n\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。\n\n\n\n*",
"# Multiturn Chat 0.8M",
"## Contents\nIncludes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*.\n\nNote: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please take this in mind when using this subset.\n\ninstruction contains history dialog context, distinguishable by *Human:* and *Assistant:*, output contains the current reply by *assistant*.",
"## Sample",
"### Schema",
"## Limitation and Usage Limits\nWe require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.\n\nSince this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.\n\nThis dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project."
] |
[
41,
21,
112,
4,
5,
133,
8,
106,
3,
4,
158
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us \n# Dataset Card for \"multiturn_chat_0.8m-chinese-zhtw\"## 內容\n\n包含約 80 萬條由 BELLE 專案所產生的 *Human* 與 *Assistant* 的多輪對話。\n\n注意:此資料集是由 ChatGPT 產生的,未經嚴格校驗,內容可能包含錯誤。使用過程中請注意這一點。\n\ninstruction 中包含多輪對話的上文內容,以 *Human:* 和 *Assistant:* 區分,output 中包含當前 *Assistant* 角色的回答。## 範例### 欄位## 限制和使用限制\n我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n\n由於數據是由*ChatGPT*產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。\n\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。\n\n\n\n*# Multiturn Chat 0.8M## Contents\nIncludes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*.\n\nNote: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please take this in mind when using this subset.\n\ninstruction contains history dialog context, distinguishable by *Human:* and *Assistant:*, output contains the current reply by *assistant*.## Sample### Schema"
] |
16526b16a436791ea79df6c6da3382911a58ea9b
|
# Dataset Card for "librispeech_asr-noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
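The card body is still a stub, but the repository metadata (reproduced further down in this record) defines one config per noise type and evaluation set (e.g. `test-pub-noise`, `validation-white-noise`) with one split per signal-to-noise ratio (`40` down to `0`, then `minus5`, `minus10`). A hedged sketch of loading one of them:
```python
from datasets import load_dataset

# Config and split names are taken from the repository metadata; each split is
# an SNR level in dB ("40" ... "0", "minus5", "minus10").
noisy_test = load_dataset(
    "distil-whisper/librispeech_asr-noise", "test-pub-noise", split="5"
)
sample = noisy_test[0]
print(sample["id"], sample["text"])
```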
|
distil-whisper/librispeech_asr-noise
|
[
"region:us"
] |
2023-09-27T14:14:14+00:00
|
{"dataset_info": [{"config_name": "test-pub-noise", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "40", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "35", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "30", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "25", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "20", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "15", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "10", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "5", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "0", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "minus5", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "minus10", "num_bytes": 2517727265.74, "num_examples": 2620}], "download_size": 9029521258, "dataset_size": 27694999923.13999}, {"config_name": "test-white-noise", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "40", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "35", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "30", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "25", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "20", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "15", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "10", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "5", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "0", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "minus5", "num_bytes": 2517727265.74, "num_examples": 2620}, {"name": "minus10", "num_bytes": 2517727265.74, "num_examples": 2620}], "download_size": 15639888311, "dataset_size": 27694999923.13999}, {"config_name": "validation-pub-noise", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "40", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "35", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "30", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "25", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "20", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "15", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "10", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "5", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "0", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "minus5", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "minus10", "num_bytes": 2313039107.07, "num_examples": 2703}], "download_size": 15441254231, "dataset_size": 25443430177.77}, {"config_name": "validation-white-noise", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "40", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "35", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "30", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "25", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "20", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "15", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "10", "num_bytes": 2313039107.07, 
"num_examples": 2703}, {"name": "5", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "0", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "minus5", "num_bytes": 2313039107.07, "num_examples": 2703}, {"name": "minus10", "num_bytes": 2313039107.07, "num_examples": 2703}], "download_size": 15581612447, "dataset_size": 25443430177.77}], "configs": [{"config_name": "test-pub-noise", "data_files": [{"split": "40", "path": "test-pub-noise/40-*"}, {"split": "35", "path": "test-pub-noise/35-*"}, {"split": "30", "path": "test-pub-noise/30-*"}, {"split": "25", "path": "test-pub-noise/25-*"}, {"split": "20", "path": "test-pub-noise/20-*"}, {"split": "15", "path": "test-pub-noise/15-*"}, {"split": "10", "path": "test-pub-noise/10-*"}, {"split": "5", "path": "test-pub-noise/5-*"}, {"split": "0", "path": "test-pub-noise/0-*"}, {"split": "minus5", "path": "test-pub-noise/minus5-*"}, {"split": "minus10", "path": "test-pub-noise/minus10-*"}]}, {"config_name": "test-white-noise", "data_files": [{"split": "40", "path": "test-white-noise/40-*"}, {"split": "35", "path": "test-white-noise/35-*"}, {"split": "30", "path": "test-white-noise/30-*"}, {"split": "25", "path": "test-white-noise/25-*"}, {"split": "20", "path": "test-white-noise/20-*"}, {"split": "15", "path": "test-white-noise/15-*"}, {"split": "10", "path": "test-white-noise/10-*"}, {"split": "5", "path": "test-white-noise/5-*"}, {"split": "0", "path": "test-white-noise/0-*"}, {"split": "minus5", "path": "test-white-noise/minus5-*"}, {"split": "minus10", "path": "test-white-noise/minus10-*"}]}, {"config_name": "validation-pub-noise", "data_files": [{"split": "40", "path": "validation-pub-noise/40-*"}, {"split": "35", "path": "validation-pub-noise/35-*"}, {"split": "30", "path": "validation-pub-noise/30-*"}, {"split": "25", "path": "validation-pub-noise/25-*"}, {"split": "20", "path": "validation-pub-noise/20-*"}, {"split": "15", "path": "validation-pub-noise/15-*"}, {"split": "10", "path": "validation-pub-noise/10-*"}, {"split": "5", "path": "validation-pub-noise/5-*"}, {"split": "0", "path": "validation-pub-noise/0-*"}, {"split": "minus5", "path": "validation-pub-noise/minus5-*"}, {"split": "minus10", "path": "validation-pub-noise/minus10-*"}]}, {"config_name": "validation-white-noise", "data_files": [{"split": "40", "path": "validation-white-noise/40-*"}, {"split": "35", "path": "validation-white-noise/35-*"}, {"split": "30", "path": "validation-white-noise/30-*"}, {"split": "25", "path": "validation-white-noise/25-*"}, {"split": "20", "path": "validation-white-noise/20-*"}, {"split": "15", "path": "validation-white-noise/15-*"}, {"split": "10", "path": "validation-white-noise/10-*"}, {"split": "5", "path": "validation-white-noise/5-*"}, {"split": "0", "path": "validation-white-noise/0-*"}, {"split": "minus5", "path": "validation-white-noise/minus5-*"}, {"split": "minus10", "path": "validation-white-noise/minus10-*"}]}]}
|
2023-09-27T14:56:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "librispeech_asr-noise"
More Information needed
|
[
"# Dataset Card for \"librispeech_asr-noise\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"librispeech_asr-noise\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"librispeech_asr-noise\"\n\nMore Information needed"
] |
a10dc04dfa25b6fdd27fb8e8619ba3bd56c315ef
|
# Dataset Card for "train_2m-chinese-zhtw"
## 內容
包含約 200 萬條由 [BELLE](https://github.com/LianjiaTech/BELLE) 專案所產生的中文指令(instruction)資料。
## 範例
```
{
"instruction": "將以下三個句子組合成一個有意義的段落。\n狗是人類最好的朋友。它們非常聰明,可以進行各種活動。如果你喜歡散步,狗可以成為你一起散步的夥伴。",
"input": "",
"output": "狗是人類最好的朋友,它們非常聰明,可以進行各種活動。如果你喜歡散步,狗可以成為你一起散步的伙伴。出門散步是一種良好的鍛煉方式,而有狗的陪伴會讓散步變得更有趣,並且有狗在身邊也能給你帶來安全感。所以,擁有一隻狗作為你的伙伴,可以幫助你變得更加積極主動和健康。"
}
```
### 欄位:
```
instruction: 指令
input: 輸入(此資料集均為空)
output: 輸出
```
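With roughly two million rows, it can be convenient to stream the data rather than download everything up front; a minimal sketch:
```python
from datasets import load_dataset

# Stream the single "train" split instead of materialising ~2M rows locally.
ds = load_dataset("erhwenkuo/train_2m-chinese-zhtw", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example["instruction"][:40], "->", example["output"][:40])
    if i == 2:
        break
```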
## 使用限制
僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。
本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。
|
erhwenkuo/train_2m-chinese-zhtw
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"alpaca",
"fine-tune",
"region:us"
] |
2023-09-27T14:26:14+00:00
|
{"language": ["zh"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1857012653, "num_examples": 2000000}], "download_size": 1134473798, "dataset_size": 1857012653}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["alpaca", "fine-tune"]}
|
2023-09-27T14:48:57+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #alpaca #fine-tune #region-us
|
# Dataset Card for "train_2m-chinese-zhtw"
## 內容
包含約 200 萬條由 BELLE 專案目產生的中文指令(instruction)資料。
## 範例
### 欄位:
## 使用限制
僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。
本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。
|
[
"# Dataset Card for \"train_2m-chinese-zhtw\"",
"## 內容\n包含約 200 萬條由 BELLE 專案目產生的中文指令(instruction)資料。",
"## 範例",
"### 欄位:",
"## 使用限制\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #alpaca #fine-tune #region-us \n",
"# Dataset Card for \"train_2m-chinese-zhtw\"",
"## 內容\n包含約 200 萬條由 BELLE 專案目產生的中文指令(instruction)資料。",
"## 範例",
"### 欄位:",
"## 使用限制\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
[
41,
18,
25,
4,
6,
82
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #alpaca #fine-tune #region-us \n# Dataset Card for \"train_2m-chinese-zhtw\"## 內容\n包含約 200 萬條由 BELLE 專案目產生的中文指令(instruction)資料。## 範例### 欄位:## 使用限制\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
7b4f5b61de1f8f11f143aa59f90349d79b8d16e5
|
# Dataset Card for Evaluation run of mistralai/Mistral-7B-v0.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mistralai/Mistral-7B-v0.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:02:14.153054](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1/blob/main/results_2023-12-02T13-02-14.153054.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3707354056103108,
"acc_stderr": 0.013304267705458433
},
"harness|gsm8k|5": {
"acc": 0.3707354056103108,
"acc_stderr": 0.013304267705458433
}
}
```
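The aggregated numbers above should also be retrievable programmatically from the "results" configuration mentioned earlier; the config name and split below are taken from the card's own description and may differ in practice:
```python
from datasets import load_dataset

# "results" is the aggregated-results configuration described above; as with the
# per-task configs, the "train" split is documented as pointing to the latest run.
results = load_dataset(
    "open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1",
    "results",
    split="train",
)
print(results[0])
```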
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1
|
[
"region:us"
] |
2023-09-27T14:31:20+00:00
|
{"pretty_name": "Evaluation run of mistralai/Mistral-7B-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:02:14.153054](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1/blob/main/results_2023-12-02T13-02-14.153054.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3707354056103108,\n \"acc_stderr\": 0.013304267705458433\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3707354056103108,\n \"acc_stderr\": 0.013304267705458433\n }\n}\n```", "repo_url": "https://huggingface.co/mistralai/Mistral-7B-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|arc:challenge|25_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T23_48_21.884715", "path": ["**/details_harness|drop|3_2023-10-25T23-48-21.884715.parquet"]}, {"split": "2023_10_26T01_29_53.089924", "path": ["**/details_harness|drop|3_2023-10-26T01-29-53.089924.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-26T01-29-53.089924.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T23_48_21.884715", "path": ["**/details_harness|gsm8k|5_2023-10-25T23-48-21.884715.parquet"]}, {"split": "2023_10_26T01_29_53.089924", "path": ["**/details_harness|gsm8k|5_2023-10-26T01-29-53.089924.parquet"]}, {"split": "2023_12_01T11_13_53.246042", "path": ["**/details_harness|gsm8k|5_2023-12-01T11-13-53.246042.parquet"]}, {"split": "2023_12_02T13_01_55.687268", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-01-55.687268.parquet"]}, {"split": "2023_12_02T13_02_14.153054", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-02-14.153054.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-02-14.153054.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hellaswag|10_2023-09-27T15-30-59.039834.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hellaswag|10_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-27T15-30-59.039834.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-27T15-30-59.039834.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-27T15-30-59.039834.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-27T15-30-59.039834.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-27T15-30-59.039834.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-27T15-30-59.039834.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T23_48_21.884715", "path": ["**/details_harness|winogrande|5_2023-10-25T23-48-21.884715.parquet"]}, {"split": "2023_10_26T01_29_53.089924", "path": ["**/details_harness|winogrande|5_2023-10-26T01-29-53.089924.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-26T01-29-53.089924.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_27T15_30_59.039834", "path": ["results_2023-09-27T15-30-59.039834.parquet"]}, {"split": "2023_10_25T23_48_21.884715", "path": ["results_2023-10-25T23-48-21.884715.parquet"]}, {"split": "2023_10_26T01_29_53.089924", "path": ["results_2023-10-26T01-29-53.089924.parquet"]}, {"split": "2023_12_01T11_13_53.246042", "path": ["results_2023-12-01T11-13-53.246042.parquet"]}, {"split": "2023_12_02T13_01_55.687268", "path": ["results_2023-12-02T13-01-55.687268.parquet"]}, {"split": "2023_12_02T13_02_14.153054", "path": ["results_2023-12-02T13-02-14.153054.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-02-14.153054.parquet"]}]}]}
|
2023-12-02T13:02:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of mistralai/Mistral-7B-v0.1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model mistralai/Mistral-7B-v0.1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
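A minimal sketch of that loading step, assuming the standard `datasets` API; the repository ID below is illustrative (the actual repository URL is elided in this card), while the configuration name and the "latest" split are taken from the file listing in this card's metadata:

```python
from datasets import load_dataset

# Illustrative repository ID (the actual repository URL is elided in this card);
# "harness_winogrande_5" and the "latest" split appear in the metadata above.
details = load_dataset(
    "open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1",
    "harness_winogrande_5",
    split="latest",
)
print(details[0])

# The aggregated metrics live in the "results" configuration.
results = load_dataset(
    "open-llm-leaderboard/details_mistralai__Mistral-7B-v0.1",
    "results",
    split="latest",
)
```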
## Latest results
These are the latest results from run 2023-12-02T13:02:14.153054 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of mistralai/Mistral-7B-v0.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mistralai/Mistral-7B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:02:14.153054(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mistralai/Mistral-7B-v0.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mistralai/Mistral-7B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:02:14.153054(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mistralai/Mistral-7B-v0.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model mistralai/Mistral-7B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:02:14.153054(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8476444269e71fb0b18c06a54ea863b2b389c614
|
# Dataset Card for "NFT-70M_image"
## Dataset summary
The *NFT-70M_image* dataset is a companion for our released [**NFT-70M_transactions**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_transactions) dataset,
which is the largest and most up-to-date collection of Non-Fungible Token (NFT) transactions between 2021 and 2023 sourced from [OpenSea](https://opensea.io).
As we also reported in the "Data anonymization" section of the dataset card of *NFT-70M_transactions*,
the URLs of NFT image data were replaced by identifiers mapping to numerical vectors that constitute an encrypted representation (i.e., embeddings)
of the image contents obtained via the [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) neural network model.
*Note about this dataset version: embedded-image data in this dataset only include jpg and png formats and correspond to the Collection headers (i.e., collection_image field in the NFT-70M_transactions dataset).
Upcoming versions will include all NFT embedded-image data.*
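For illustration, a minimal sketch of consuming the companion embeddings with the `datasets` library; the `id` and `emb` field names follow this repository's declared schema, and the cosine-similarity comparison is an illustrative choice, not something prescribed by the dataset itself:

```python
import numpy as np
from datasets import load_dataset

# "id" and "emb" follow this repository's dataset configuration;
# streaming avoids downloading the full split up front.
ds = load_dataset("MLNTeam-Unical/NFT-70M_image", split="train", streaming=True)

it = iter(ds)
first, second = next(it), next(it)

a = np.asarray(first["emb"], dtype=np.float32)
b = np.asarray(second["emb"], dtype=np.float32)

# Cosine similarity between two collection-image embeddings (illustrative choice).
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(first["id"], second["id"], round(cosine, 4))
```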
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: https://doi.org/10.1145/3539618.3591821
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: https://doi.org/10.48550/arXiv.2303.17031
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: https://doi.org/10.1145/3543507.3583520
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge the [OpenSea API](https://docs.opensea.io/reference/api-overview).
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.*
|
MLNTeam-Unical/NFT-70M_image
|
[
"task_categories:time-series-forecasting",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:text-to-image",
"task_categories:text-retrieval",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-nc-4.0",
"Non-fungible Tokens",
"Crypto",
"Web3",
"Art",
"Multimodal Learning",
"region:us"
] |
2023-09-27T14:35:31+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["10M<n<100M"], "task_categories": ["time-series-forecasting", "text-classification", "feature-extraction", "text-generation", "zero-shot-classification", "text2text-generation", "sentence-similarity", "image-classification", "image-to-text", "text-to-image", "text-retrieval"], "pretty_name": "NFT-70M_image", "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "emb", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 585722532, "num_examples": 189923}], "download_size": 703210305, "dataset_size": 585722532}, "tags": ["Non-fungible Tokens", "Crypto", "Web3", "Art", "Multimodal Learning"]}
|
2023-10-02T15:51:33+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-time-series-forecasting #task_categories-text-classification #task_categories-feature-extraction #task_categories-text-generation #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-sentence-similarity #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-text-retrieval #size_categories-10M<n<100M #language-English #license-cc-by-nc-4.0 #Non-fungible Tokens #Crypto #Web3 #Art #Multimodal Learning #region-us
|
# Dataset Card for "NFT-70M_image"
## Dataset summary
The *NFT-70M_image* dataset is a companion for our released NFT-70M_transactions dataset,
which is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from OpenSea.
As we also reported in the "Data anonymization" section of the dataset card of *NFT-70M_transactions*,
the URLs of NFT images data were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings)
of the image contents obtained via the google/vit-base-patch16-224 neural network model.
*Note about this dataset version: embedded-image data in this dataset only include jpg and png formats and correspond to the Collection headers (i.e., collection_image field in the NFT-70M_transactions dataset).
Upcoming versions will include all NFT embedded-image data.*
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: URL
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: URL
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: URL
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge OpenSea API.
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.*
|
[
"# Dataset Card for \"NFT-70M_image\"",
"## Dataset summary\nThe *NFT-70M_image* dataset is a companion for our released NFT-70M_transactions dataset, \nwhich is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from OpenSea. \n\nAs we also reported in the \"Data anonymization\" section of the dataset card of *NFT-70M_transactions*, \nthe URLs of NFT images data were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings) \nof the image contents obtained via the google/vit-base-patch16-224 neural network model.\n\n*Note about this dataset version: embedded-image data in this dataset only include jpg and png formats and correspond to the Collection headers (i.e., collection_image field in the NFT-70M_transactions dataset). \nUpcoming versions will include all NFT embedded-image data.*",
"## Ethical use of data and informed consent\nThis data repository is made available for research and informational purposes only. \n\nAny finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists. \n\n*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*",
"## Liability statement\nThe authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository. \nUsers of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:\n(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards; \n(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.\n\nThe authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset. \n\n*By accessing and using this dataset, users acknowledge and accept this disclaimer.*"
] |
[
"TAGS\n#task_categories-time-series-forecasting #task_categories-text-classification #task_categories-feature-extraction #task_categories-text-generation #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-sentence-similarity #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-text-retrieval #size_categories-10M<n<100M #language-English #license-cc-by-nc-4.0 #Non-fungible Tokens #Crypto #Web3 #Art #Multimodal Learning #region-us \n",
"# Dataset Card for \"NFT-70M_image\"",
"## Dataset summary\nThe *NFT-70M_image* dataset is a companion for our released NFT-70M_transactions dataset, \nwhich is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from OpenSea. \n\nAs we also reported in the \"Data anonymization\" section of the dataset card of *NFT-70M_transactions*, \nthe URLs of NFT images data were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings) \nof the image contents obtained via the google/vit-base-patch16-224 neural network model.\n\n*Note about this dataset version: embedded-image data in this dataset only include jpg and png formats and correspond to the Collection headers (i.e., collection_image field in the NFT-70M_transactions dataset). \nUpcoming versions will include all NFT embedded-image data.*",
"## Ethical use of data and informed consent\nThis data repository is made available for research and informational purposes only. \n\nAny finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists. \n\n*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*",
"## Liability statement\nThe authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository. \nUsers of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:\n(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards; \n(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.\n\nThe authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset. \n\n*By accessing and using this dataset, users acknowledge and accept this disclaimer.*"
] |
[
188,
13,
237,
96,
242
] |
[
"passage: TAGS\n#task_categories-time-series-forecasting #task_categories-text-classification #task_categories-feature-extraction #task_categories-text-generation #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-sentence-similarity #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-text-retrieval #size_categories-10M<n<100M #language-English #license-cc-by-nc-4.0 #Non-fungible Tokens #Crypto #Web3 #Art #Multimodal Learning #region-us \n# Dataset Card for \"NFT-70M_image\"## Dataset summary\nThe *NFT-70M_image* dataset is a companion for our released NFT-70M_transactions dataset, \nwhich is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from OpenSea. \n\nAs we also reported in the \"Data anonymization\" section of the dataset card of *NFT-70M_transactions*, \nthe URLs of NFT images data were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings) \nof the image contents obtained via the google/vit-base-patch16-224 neural network model.\n\n*Note about this dataset version: embedded-image data in this dataset only include jpg and png formats and correspond to the Collection headers (i.e., collection_image field in the NFT-70M_transactions dataset). \nUpcoming versions will include all NFT embedded-image data.*"
] |
450b19a39e178ff9a1822c221e2c4f9a6269cbe8
|
# Dataset Card for "train_0.5m-chinese-zhtw"
## Contents
Contains roughly 500,000 Chinese instruction records generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Example
```
{
"instruction": "給定一個文字輸入,將其中的所有數字加1。\n“明天的會議在9點開始,記得準時到達。 ”\n",
"input": "",
"output": "「明天的會議在10點開始,記得準時到達。 ”"
}
```
### Fields:
```
instruction: the instruction
input: the input (empty for every record in this dataset)
output: the output
```
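A minimal loading sketch with the `datasets` library; the column names match the schema declared for this repository:

```python
from datasets import load_dataset

# "instruction", "input", and "output" are the columns declared for this dataset;
# "input" is empty for every record.
ds = load_dataset("erhwenkuo/train_0.5m-chinese-zhtw", split="train")

example = ds[0]
print(example["instruction"])
print(example["output"])
```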
## Usage Restrictions
This dataset, and any derivative works produced using it, may only be used for research purposes; it must not be used for commercial purposes or for any other purposes that could bring harm to society.
This dataset does not represent the position, interests, or views of any party, and is unrelated to any type of claim by any group. This project assumes no responsibility for any damage or dispute arising from the use of this dataset.
|
erhwenkuo/train_0.5m-chinese-zhtw
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"alpaca",
"fine-tune",
"region:us"
] |
2023-09-27T14:55:31+00:00
|
{"language": ["zh"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 265980267, "num_examples": 519255}], "download_size": 183812396, "dataset_size": 265980267}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["alpaca", "fine-tune"]}
|
2023-09-27T14:59:00+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us
|
# Dataset Card for "train_0.5m-chinese-zhtw"
## 內容
包含約 50 萬條由 BELLE 專案產生的中文指令資料。
## 範例
### 欄位:
## 使用限制
僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。
本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。
|
[
"# Dataset Card for \"train_0.5m-chinese-zhtw\"",
"## 內容\n包含約 50 萬條由 BELLE 專案產生的中文指令資料。",
"## 範例",
"### 欄位:",
"## 使用限制\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us \n",
"# Dataset Card for \"train_0.5m-chinese-zhtw\"",
"## 內容\n包含約 50 萬條由 BELLE 專案產生的中文指令資料。",
"## 範例",
"### 欄位:",
"## 使用限制\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
[
41,
18,
21,
4,
6,
82
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #alpaca #fine-tune #region-us \n# Dataset Card for \"train_0.5m-chinese-zhtw\"## 內容\n包含約 50 萬條由 BELLE 專案產生的中文指令資料。## 範例### 欄位:## 使用限制\n僅允許將此資料集及使用此資料集產生的衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。\n本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集所帶來的任何損害、糾紛,本專案不承擔任何責任。"
] |
f9283b5faec87ffe380d7b58c610e80e1ffe489a
|
# Dataset Card for "94c40829"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/94c40829
|
[
"region:us"
] |
2023-09-27T14:58:34+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 271, "num_examples": 10}], "download_size": 1428, "dataset_size": 271}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T14:58:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "94c40829"
More Information needed
|
[
"# Dataset Card for \"94c40829\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"94c40829\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"94c40829\"\n\nMore Information needed"
] |
ee55c6bfd082757d466ee128e758d98db77059f7
|
# Dataset Card for "634fb531"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/634fb531
|
[
"region:us"
] |
2023-09-27T15:00:39+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 273, "num_examples": 10}], "download_size": 1461, "dataset_size": 273}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T15:00:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "634fb531"
More Information needed
|
[
"# Dataset Card for \"634fb531\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"634fb531\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"634fb531\"\n\nMore Information needed"
] |
f8d33d7101a8ae9657a33bd7d342f79ede543f53
|
# Dataset Card for "3ec30f64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/3ec30f64
|
[
"region:us"
] |
2023-09-27T15:06:02+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 218, "num_examples": 10}], "download_size": 1399, "dataset_size": 218}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T15:06:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "3ec30f64"
More Information needed
|
[
"# Dataset Card for \"3ec30f64\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"3ec30f64\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"3ec30f64\"\n\nMore Information needed"
] |
128717986517527293c8c87460b0f7d21bf52580
|
# Dataset Card for "wav2vec2_03F_IEMOCAP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MonKira/wav2vec2_03F_IEMOCAP
|
[
"region:us"
] |
2023-09-27T15:10:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1465109436, "num_examples": 4995}, {"name": "val", "num_bytes": 147717436, "num_examples": 536}], "download_size": 1220635662, "dataset_size": 1612826872}}
|
2023-09-27T15:12:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wav2vec2_03F_IEMOCAP"
More Information needed
|
[
"# Dataset Card for \"wav2vec2_03F_IEMOCAP\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wav2vec2_03F_IEMOCAP\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wav2vec2_03F_IEMOCAP\"\n\nMore Information needed"
] |
f0d08a4c9edf07384c1b24ec6d67152d3f29dc39
|
# Dataset Card for "squad_rare_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_rare_v4_train_30_eval_10
|
[
"region:us"
] |
2023-09-27T15:14:05+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 546548, "num_examples": 368}, {"name": "validation", "num_bytes": 49683, "num_examples": 50}], "download_size": 104892, "dataset_size": 596231}}
|
2023-09-27T15:18:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_rare_v4_train_30_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
3b35ff885a23480cf608a7a9a2bc3a158f1b3908
|
# Dataset Card for "squad_wrong_rare_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_wrong_rare_v4_train_30_eval_10
|
[
"region:us"
] |
2023-09-27T15:14:15+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 546548, "num_examples": 368}, {"name": "validation", "num_bytes": 50213, "num_examples": 50}], "download_size": 105441, "dataset_size": 596761}}
|
2023-09-27T15:18:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_wrong_rare_v4_train_30_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_wrong_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_wrong_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
[
6,
30
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_wrong_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
d58eb0f63165f930a5cda95e9c88e485a1b3c6b2
|
# Dataset Card for "squad_no_rare_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_no_rare_v4_train_30_eval_10
|
[
"region:us"
] |
2023-09-27T15:14:24+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 546548, "num_examples": 368}, {"name": "validation", "num_bytes": 48145, "num_examples": 50}], "download_size": 104416, "dataset_size": 594693}}
|
2023-09-27T15:18:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_no_rare_v4_train_30_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_no_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_no_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
[
6,
29
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_rare_v4_train_30_eval_10\"\n\nMore Information needed"
] |
cea04b88d3fa8221ed0f9fac529a7ff68306558e
|
# Dataset Card for "squad_no_rare_strict_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_no_rare_strict_v4_train_30_eval_10
|
[
"region:us"
] |
2023-09-27T15:14:33+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 541741, "num_examples": 368}, {"name": "validation", "num_bytes": 48145, "num_examples": 50}], "download_size": 104279, "dataset_size": 589886}}
|
2023-09-27T15:18:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_no_rare_strict_v4_train_30_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_no_rare_strict_v4_train_30_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_no_rare_strict_v4_train_30_eval_10\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_rare_strict_v4_train_30_eval_10\"\n\nMore Information needed"
] |
5175bedfd8fc99c80da3a2eb33d92a93502ed40a
|
# Dataset Card for "Bactrian-Spanish-Clean-Light"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rodr16020/Bactrian-Spanish-Clean-Light
|
[
"region:us"
] |
2023-09-27T15:22:00+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5191106, "num_examples": 3000}], "download_size": 2646581, "dataset_size": 5191106}}
|
2023-09-27T15:22:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Bactrian-Spanish-Clean-Light"
More Information needed
|
[
"# Dataset Card for \"Bactrian-Spanish-Clean-Light\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Bactrian-Spanish-Clean-Light\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Bactrian-Spanish-Clean-Light\"\n\nMore Information needed"
] |
68ea26e8c5518dc76c6948712d79ad6eed759697
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
DoctorSlimm/mozart-api-demo-pages
|
[
"region:us"
] |
2023-09-27T16:15:39+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
|
2023-10-04T18:54:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
f10ec4101c1cd53cd0b929deab60644ef98eac65
|
# Bangumi Image Base of Lycoris Recoil
This is the image base of the bangumi Lycoris Recoil; we detected 31 characters and 2,149 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
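For convenience, a minimal download sketch is shown below. The cluster id `27` is taken from the preview table that follows, and the local output directory name is an assumption; any other listed cluster id works the same way.

```python
from huggingface_hub import hf_hub_download
import zipfile

# Minimal sketch: fetch the archive of one character cluster and unpack it locally.
# Cluster "27" is one of the ids listed in the preview table below; the output
# directory name is an arbitrary choice.
archive = hf_hub_download(
    repo_id="BangumiBase/lycorisrecoil",
    filename="27/dataset.zip",
    repo_type="dataset",
)
with zipfile.ZipFile(archive) as zf:
    zf.extractall("lycorisrecoil_27")
```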
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 22 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 67 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 17 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 117 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 120 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 21 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 79 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 36 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 16 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 24 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 11 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 21 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 11 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 10 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 10 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 118 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 10 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 54 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 50 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 23 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 10 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 9 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 407 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 13 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 102 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 9 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 27 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 510 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 33 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 27 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 165 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/lycorisrecoil
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-27T16:18:53+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T11:40:36+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Lycoris Recoil
====================================
This is the image base of the bangumi Lycoris Recoil; we detected 31 characters and 2,149 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
fb78973681807165523621153d1d67db7a469e73
|
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gonzalopolavieja/guanaco-llama2-1k
|
[
"region:us"
] |
2023-09-27T16:50:30+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 966693, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T16:50:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "guanaco-llama2-1k"
More Information needed
|
[
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
27b35e53f33a0e6bf595213ac97438411bfa6e45
|
# Dataset mix from:
+ databricks/databricks-dolly-15k
+ ewof/alpaca-instruct-unfiltered
+ garage/bAInd_Open-Platypus
+ gbharti/finance-alpaca
+ Honkware/oasst1-alpaca
+ medical/chat
+ pankajmathur/WizardLM_Orca
+ teknium/GPTeacher-General-Instruct
+ LIMA
+ Chain-of-Thought
+ Dynosaur/dynosaur-full
+ nam194_vietnews
+ quora_chat
+ stackoverflow_chat
# Dataset Creation:
+ The source language dataset was translated into Vietnamese using the OpenAI GPT-3.5 API.
+ About 2% of the translations contained translation errors and were skipped.
+ The remaining translations were merged into a single dataset for fine-tuning.
# Important Notes:
+ This dataset was translated by a machine learning model, and may contain errors or inaccuracies.
+ About 2% of the original dataset could not be processed automatically and was skipped.
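A minimal loading sketch is shown below. The `instruction`/`input`/`output` column names and the `train` split come from this repository's `dataset_info`; the prompt template itself is only an illustrative assumption, not necessarily the format used for fine-tuning.

```python
from datasets import load_dataset

# Stream the train split (roughly 1.9M examples) instead of downloading it all at once.
ds = load_dataset("infCapital/viet-llama2-ft", split="train", streaming=True)

row = next(iter(ds))
# Illustrative prompt template only; adapt it to your fine-tuning setup.
prompt = (
    f"### Instruction:\n{row['instruction']}\n\n"
    f"### Input:\n{row['input']}\n\n"
    f"### Response:\n{row['output']}"
)
print(prompt[:500])
```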
|
infCapital/viet-llama2-ft
|
[
"region:us"
] |
2023-09-27T16:58:35+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2088825146, "num_examples": 1932833}], "download_size": 874832201, "dataset_size": 2088825146}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-28T05:38:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset mix from:
+ databricks/databricks-dolly-15k
+ ewof/alpaca-instruct-unfiltered
+ garage/bAInd_Open-Platypus
+ gbharti/finance-alpaca
+ Honkware/oasst1-alpaca
+ medical/chat
+ pankajmathur/WizardLM_Orca
+ teknium/GPTeacher-General-Instruct
+ LIMA
+ Chain-of-Thought
+ Dynosaur/dynosaur-full
+ nam194_vietnews
+ quora_chat
+ stackoverflow_chat
# Dataset Creation:
+ The source language dataset was translated into Vietnamese using the OpenAI GPT-3.5 API.
+ About 2% of the translations contained translation errors and were skipped.
+ The remaining translations were merged into a single dataset for fine-tuning.
# Important Notes:
+ This dataset was translated by a machine learning model, and may contain errors or inaccuracies.
+ About 2% of the original dataset could not be processed automatically and was skipped.
|
[
"# Dataset mix from:\n\n+ databricks/databricks-dolly-15k\n+ ewof/alpaca-instruct-unfiltered\n+ garage/bAInd_Open-Platypus\n+ gbharti/finance-alpaca\n+ Honkware/oasst1-alpaca\n+ medical/chat\n+ pankajmathur/WizardLM_Orca\n+ teknium/GPTeacher-General-Instruct\n+ LIMA\n+ Chain-of-Thought\n+ Dynosaur/dynosaur-full\n+ nam194_vietnews\n+ quora_chat\n+ stackoverflow_chat",
"# Dataset Creation:\n\n+ The source language dataset was translated into Vietnamese using the OpenAI GPT-3.5 API.\n+ 2% of the translations got translation errors. These translations were skipped.\n+ The remaining translations were merged into 1 main dataset for Fine-Tuning",
"# Important Notes:\n\n+ This dataset was translated by a machine learning model, and may contain errors or inaccuracies.\n+ 2% of the original dataset could not be processed automatically and were skipped."
] |
[
"TAGS\n#region-us \n",
"# Dataset mix from:\n\n+ databricks/databricks-dolly-15k\n+ ewof/alpaca-instruct-unfiltered\n+ garage/bAInd_Open-Platypus\n+ gbharti/finance-alpaca\n+ Honkware/oasst1-alpaca\n+ medical/chat\n+ pankajmathur/WizardLM_Orca\n+ teknium/GPTeacher-General-Instruct\n+ LIMA\n+ Chain-of-Thought\n+ Dynosaur/dynosaur-full\n+ nam194_vietnews\n+ quora_chat\n+ stackoverflow_chat",
"# Dataset Creation:\n\n+ The source language dataset was translated into Vietnamese using the OpenAI GPT-3.5 API.\n+ 2% of the translations got translation errors. These translations were skipped.\n+ The remaining translations were merged into 1 main dataset for Fine-Tuning",
"# Important Notes:\n\n+ This dataset was translated by a machine learning model, and may contain errors or inaccuracies.\n+ 2% of the original dataset could not be processed automatically and were skipped."
] |
[
6,
136,
66,
48
] |
[
"passage: TAGS\n#region-us \n# Dataset mix from:\n\n+ databricks/databricks-dolly-15k\n+ ewof/alpaca-instruct-unfiltered\n+ garage/bAInd_Open-Platypus\n+ gbharti/finance-alpaca\n+ Honkware/oasst1-alpaca\n+ medical/chat\n+ pankajmathur/WizardLM_Orca\n+ teknium/GPTeacher-General-Instruct\n+ LIMA\n+ Chain-of-Thought\n+ Dynosaur/dynosaur-full\n+ nam194_vietnews\n+ quora_chat\n+ stackoverflow_chat# Dataset Creation:\n\n+ The source language dataset was translated into Vietnamese using the OpenAI GPT-3.5 API.\n+ 2% of the translations got translation errors. These translations were skipped.\n+ The remaining translations were merged into 1 main dataset for Fine-Tuning# Important Notes:\n\n+ This dataset was translated by a machine learning model, and may contain errors or inaccuracies.\n+ 2% of the original dataset could not be processed automatically and were skipped."
] |
97dcd0bc6243ec1f7c53b95d31f284f640d24257
|
# Dataset Card for "52b331a6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/52b331a6
|
[
"region:us"
] |
2023-09-27T17:08:07+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 189, "num_examples": 10}], "download_size": 1383, "dataset_size": 189}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T17:08:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "52b331a6"
More Information needed
|
[
"# Dataset Card for \"52b331a6\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"52b331a6\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"52b331a6\"\n\nMore Information needed"
] |
e264a8030c179a79b276dad4da36954c46ac5bc7
|
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Vrushali/guanaco-llama2-1k
|
[
"region:us"
] |
2023-09-27T17:40:26+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "Question", "dtype": "string"}, {"name": "Answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1333386, "num_examples": 1000}], "download_size": 565663, "dataset_size": 1333386}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T17:40:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "guanaco-llama2-1k"
More Information needed
|
[
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
ba4abe0b0a7328db41b47936446dc98303f4baf3
|
This dataset is generated by [Lilac](http://lilacml.com) for a HuggingFace Space: [huggingface.co/spaces/lilacai/lilac](https://huggingface.co/spaces/lilacai/lilac).
Original dataset: [https://huggingface.co/datasets/vikp/textbook_quality_programming](https://huggingface.co/datasets/vikp/textbook_quality_programming)
Lilac dataset config:
```yaml
namespace: lilac
name: textbook_quality_programming
source:
dataset_name: vikp/textbook_quality_programming
source_name: huggingface
embeddings:
- path:
- outline
- '*'
embedding: gte-small
- path:
- concepts
- '*'
embedding: gte-small
- path: markdown
embedding: gte-small
signals:
- path:
- outline
- '*'
signal:
signal_name: pii
- path:
- outline
- '*'
signal:
signal_name: text_statistics
- path:
- outline
- '*'
signal:
signal_name: near_dup
- path:
- outline
- '*'
signal:
signal_name: lang_detection
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path:
- concepts
- '*'
signal:
signal_name: pii
- path:
- concepts
- '*'
signal:
signal_name: text_statistics
- path:
- concepts
- '*'
signal:
signal_name: near_dup
- path:
- concepts
- '*'
signal:
signal_name: lang_detection
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path: markdown
signal:
signal_name: pii
- path: markdown
signal:
signal_name: text_statistics
- path: markdown
signal:
signal_name: near_dup
- path: markdown
signal:
signal_name: lang_detection
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path:
- outline
- '*'
signal:
signal_name: cluster_dbscan
- path:
- concepts
- '*'
signal:
signal_name: cluster_dbscan
- path: markdown
signal:
signal_name: cluster_dbscan
- path:
- outline
- '*'
signal:
embedding: gte-small
signal_name: cluster_hdbscan
- path:
- concepts
- '*'
signal:
embedding: gte-small
signal_name: cluster_hdbscan
- path: markdown
signal:
embedding: gte-small
signal_name: cluster_hdbscan
settings:
ui:
media_paths:
- - outline
- '*'
- - concepts
- '*'
- markdown
markdown_paths:
- markdown
tags:
- machine-learning
```
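As a quick way to see what the config above computes, one can parse it with PyYAML and list the signal applied to each path. This is a minimal sketch; the local file name `lilac_config.yml` is an assumption (save the YAML above under any name you like).

```python
import yaml  # pip install pyyaml

# Minimal sketch: parse the Lilac config shown above (saved locally as
# lilac_config.yml, an assumed file name) and list the signal per path.
with open("lilac_config.yml") as f:
    config = yaml.safe_load(f)

for entry in config.get("signals", []):
    sig = entry["signal"]
    # Concept signals carry a concept_name; plain signals only a signal_name.
    label = sig.get("concept_name", sig["signal_name"])
    print(entry["path"], "->", label)
```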
|
lilacai/lilac-textbook_quality_programming
|
[
"region:us"
] |
2023-09-27T17:52:17+00:00
|
{}
|
2023-12-07T13:57:20+00:00
|
[] |
[] |
TAGS
#region-us
|
This dataset is generated by Lilac for a HuggingFace Space: URL
Original dataset: URL
Lilac dataset config:
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
423086585e36fe8a772c5a5be95c493c5de1429a
|
# Dataset Card for SlimPajama-6B-embedded
This is a copy of [DKYoon/SlimPajama-6B](https://huggingface.co/datasets/DKYoon/SlimPajama-6B), together with embeddings generated by [thenlper/gte-large](https://huggingface.co/thenlper/gte-large).
There are 5.49 million examples of text, a representative random sample of [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B). Each text is associated with a 1024-dimensional embedding vector that is meant to represent the semantic content. The vectors were generated by average-pooling (max-pooling dataset to come in the future).
This dataset is intended to help with downstream tasks such as reversing embeddings, interpreting embedding spaces, and creating adapters between embedding models.
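A minimal inspection sketch is shown below. The split and column names are not stated on this card, so the snippet treats them as unknowns to be discovered at load time rather than hard-coding any field name.

```python
from datasets import load_dataset

# Stream the dataset (about 5.49M rows) rather than downloading it in full.
ds = load_dataset("sproos/SlimPajama-6B-embedded", streaming=True)
print(ds)  # lists the available splits

first_split = next(iter(ds.values()))
row = next(iter(first_split))
# Look for the text column and the 1024-dimensional embedding column described above.
print(sorted(row.keys()))
```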
|
sproos/SlimPajama-6B-embedded
|
[
"region:us"
] |
2023-09-27T18:06:00+00:00
|
{}
|
2023-09-27T18:36:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for SlimPajama-6B-embedded
This is a copy of DKYoon/SlimPajama-6B, together with embeddings generated by thenlper/gte-large.
There are 5.49 million examples of text, a representative random sample of SlimPajama-627B. Each text is associated with a 1024-dimensional embedding vector that is meant to represent the semantic content. The vectors were generated by average-pooling (max-pooling dataset to come in the future).
This dataset is intended to help with downstream tasks such as reversing embeddings, interpreting embedding spaces, and creating adapters between embedding models.
|
[
"# Dataset Card for SlimPajama-6B-embedded\n\nThis is a copy of DKYoon/SlimPajama-6B, together with embeddings generated by thenlper/gte-large. \nThere are 5.49 million examples of text, a representative random sample of SlimPajama-627B. Each text is associated with a 1024-dimensional embedding vector that is meant to represent the semantic content. The vectors were generated by average-pooling (max-pooling dataset to come in the future).\n\nThis dataset is intended to help with downstream tasks such as reverse-embeddings, interpreting embedding spaces and creating adapters between embeddings models."
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for SlimPajama-6B-embedded\n\nThis is a copy of DKYoon/SlimPajama-6B, together with embeddings generated by thenlper/gte-large. \nThere are 5.49 million examples of text, a representative random sample of SlimPajama-627B. Each text is associated with a 1024-dimensional embedding vector that is meant to represent the semantic content. The vectors were generated by average-pooling (max-pooling dataset to come in the future).\n\nThis dataset is intended to help with downstream tasks such as reverse-embeddings, interpreting embedding spaces and creating adapters between embeddings models."
] |
[
6,
157
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for SlimPajama-6B-embedded\n\nThis is a copy of DKYoon/SlimPajama-6B, together with embeddings generated by thenlper/gte-large. \nThere are 5.49 million examples of text, a representative random sample of SlimPajama-627B. Each text is associated with a 1024-dimensional embedding vector that is meant to represent the semantic content. The vectors were generated by average-pooling (max-pooling dataset to come in the future).\n\nThis dataset is intended to help with downstream tasks such as reverse-embeddings, interpreting embedding spaces and creating adapters between embeddings models."
] |
e2f68b8226b748e1758eca2335769557fd350305
|
# Dataset Card for "7f0dfe44"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/7f0dfe44
|
[
"region:us"
] |
2023-09-27T18:12:08+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 212, "num_examples": 10}], "download_size": 1370, "dataset_size": 212}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T18:12:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "7f0dfe44"
More Information needed
|
[
"# Dataset Card for \"7f0dfe44\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"7f0dfe44\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"7f0dfe44\"\n\nMore Information needed"
] |
abf99ba2de1d5c93e777016bb2634248c7f366e5
|
# Mnist-Ambiguous
This dataset contains mnist-like images, but with an unclear ground truth. For each image, there are two classes which could be considered true.
Robust and uncertainty-aware DNNs should thus detect and flag these issues.
### Features
Same as mnist, the supervised dataset has an `image` (28x28 int array) and a `label` (int).
Additionally, the following features are exposed for your convenience:
- `text_label` (str): A textual representation of the probabilistic label, e.g. `p(0)=0.54, p(5)=0.46`
- `p_label` (list of floats): Ground-Truth probabilities for each class (two nonzero values for our ambiguous images)
- `is_ambiguous` (bool): Flag indicating if this is one of our ambiguous images (see 'splits' below)
### Splits
We provide four splits:
- `test`: 10'000 ambiguous images
- `train`: 10'000 ambiguous images - adding ambiguous images to the training set makes sure test-time ambiguous images are in-distribution.
- `test_mixed`: 20'000 images, consisting of the (shuffled) concatenation of our ambiguous `test` set and the nominal mnist test set by LeCun et al.
- `train_mixed`: 70'000 images, consisting of the (shuffled) concatenation of our ambiguous `training` and the nominal training set.
Note that the ambiguous test images are highly ambiguous (i.e., the two classes have very similar ground-truth likelihoods),
whereas the training set images allow for more unbalanced ambiguity.
This is to make the training set more closely connected to the nominal data, while still keeping the test set clearly ambiguous.
For research targeting explicitly aleatoric uncertainty, we recommend training the model using `train_mixed`.
Otherwise, our `test` set will lead to both epistemic and aleatoric uncertainty.
In related literature, such 'mixed' splits are sometimes denoted as *dirty* splits.
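As a minimal sketch of how these splits and fields might be consumed (the split and feature names are the ones listed above; nothing else is assumed):

```python
from datasets import load_dataset

# Load the "dirty" training split described above (ambiguous + nominal MNIST).
ds = load_dataset("asoria/mnist_ambiguous", split="train_mixed")

example = ds[0]
print(example["label"], example["text_label"], example["is_ambiguous"])

# Keep only the ambiguous samples, e.g. to probe an uncertainty-aware classifier.
ambiguous_only = ds.filter(lambda row: row["is_ambiguous"])
print(len(ambiguous_only), "ambiguous examples")
```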
### Assessment and Validity
For a brief discussion of the strengths and weaknesses of this dataset,
including a quantitative comparison to the (only) other ambiguous datasets available in the literature, we refer to our paper.
### Paper
Pre-print here: [https://arxiv.org/abs/2207.10495](https://arxiv.org/abs/2207.10495)
Citation:
```
@misc{https://doi.org/10.48550/arxiv.2207.10495,
doi = {10.48550/ARXIV.2207.10495},
url = {https://arxiv.org/abs/2207.10495},
author = {Weiss, Michael and Gómez, André García and Tonella, Paolo},
title = {A Forgotten Danger in DNN Supervision Testing: Generating and Detecting True Ambiguity},
publisher = {arXiv},
year = {2022}
}
```
### License
As this is a derivative work of mnist, which is CC-BY-SA 3.0 licensed, our dataset is released using the same license.
|
asoria/mnist_ambiguous
|
[
"task_categories:image-classification",
"annotations_creators:machine-generated",
"size_categories:10K<n<100K",
"source_datasets:extended|mnist",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:2207.10495",
"region:us"
] |
2023-09-27T18:25:05+00:00
|
{"annotations_creators": ["machine-generated"], "language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "source_datasets": ["extended|mnist"], "task_categories": ["image-classification"], "pretty_name": "mnist_ambigous"}
|
2023-09-27T18:25:16+00:00
|
[
"2207.10495"
] |
[
"en"
] |
TAGS
#task_categories-image-classification #annotations_creators-machine-generated #size_categories-10K<n<100K #source_datasets-extended|mnist #language-English #license-cc-by-sa-3.0 #arxiv-2207.10495 #region-us
|
# Mnist-Ambiguous
This dataset contains mnist-like images, but with an unclear ground truth. For each image, there are two classes which could be considered true.
Robust and uncertainty-aware DNNs should thus detect and flag these issues.
### Features
Same as mnist, the supervised dataset has an 'image' (28x28 int array) and a 'label' (int).
Additionally, the following features are exposed for your convenience:
- 'text_label' (str): A textual representation of the probabilistic label, e.g. 'p(0)=0.54, p(5)=0.46'
- 'p_label' (list of floats): Ground-Truth probabilities for each class (two nonzero values for our ambiguous images)
- 'is_ambiguous' (bool): Flag indicating if this is one of our ambiguous images (see 'splits' below)
### Splits
We provide four splits:
- 'test': 10'000 ambiguous images
- 'train': 10'000 ambiguous images - adding ambiguous images to the training set makes sure test-time ambiguous images are in-distribution.
- 'test_mixed': 20'000 images, consisting of the (shuffled) concatenation of our ambiguous 'test' set and the nominal mnist test set by LeCun et al.
- 'train_mixed': 70'000 images, consisting of the (shuffled) concatenation of our ambiguous 'training' and the nominal training set.
Note that the ambiguous test images are highly ambiguous (i.e., the two classes have very similar ground-truth likelihoods),
whereas the training set images allow for more unbalanced ambiguity.
This is to make the training set more closely connected to the nominal data, while still keeping the test set clearly ambiguous.
For research targeting explicitly aleatoric uncertainty, we recommend training the model using 'train_mixed'.
Otherwise, our 'test' set will lead to both epistemic and aleatoric uncertainty.
In related literature, such 'mixed' splits are sometimes denoted as *dirty* splits.
### Assessment and Validity
For a brief discussion of the strengths and weaknesses of this dataset,
including a quantitative comparison to the (only) other ambiguous datasets available in the literature, we refer to our paper.
### Paper
Pre-print here: URL
Citation:
### License
As this is a derivative work of mnist, which is CC-BY-SA 3.0 licensed, our dataset is released using the same license.
|
[
"# Mnist-Ambiguous\n\nThis dataset contains mnist-like images, but with an unclear ground truth. For each image, there are two classes which could be considered true.\nRobust and uncertainty-aware DNNs should thus detect and flag these issues.",
"### Features\nSame as mnist, the supervised dataset has an 'image' (28x28 int array) and a 'label' (int).\n\nAdditionally, the following features are exposed for your convenience:\n\n- 'text_label' (str): A textual representation of the probabilistic label, e.g. 'p(0)=0.54, p(5)=0.46' \n- 'p_label' (list of floats): Ground-Truth probabilities for each class (two nonzero values for our ambiguous images)\n- 'is_ambiguous' (bool): Flag indicating if this is one of our ambiguous images (see 'splits' below)",
"### Splits\nWe provide four splits:\n\n- 'test': 10'000 ambiguous images\n- 'train': 10'000 ambiguous images - adding ambiguous images to the training set makes sure test-time ambiguous images are in-distribution.\n- 'test_mixed': 20'000 images, consisting of the (shuffled) concatenation of our ambiguous 'test' set and the nominal mnist test set by LeCun et. al.,\n- 'train_mixed': 70'000 images, consisting of the (shuffled) concatenation of our ambiguous 'training' and the nominal training set.\n\nNote that the ambiguous test images are highly ambiguous (i.e., the two classes have very similar ground truth likelihoods), \nthe training set images allow for more unbalanced ambiguity. \nThis is to make the training set more closely connected to the nominal data, while still keeping the test set clearly ambiguous.\n\nFor research targeting explicitly aleatoric uncertainty, we recommend training the model using 'train_mixed'. \nOtherwise, our 'test' set will lead to both epistemic and aleatoric uncertainty. \nIn related literature, such 'mixed' splits are sometimes denoted as *dirty* splits.",
"### Assessment and Validity\nFor a brief discussion of the strength and weaknesses of this dataset, \nincluding a quantitative comparison to the (only) other ambiguous datasets available in the literature, we refer to our paper.",
"### Paper\nPre-print here: URL\n\nCitation:",
"### License\nAs this is a derivative work of mnist, which is CC-BY-SA 3.0 licensed, our dataset is released using the same license."
] |
[
"TAGS\n#task_categories-image-classification #annotations_creators-machine-generated #size_categories-10K<n<100K #source_datasets-extended|mnist #language-English #license-cc-by-sa-3.0 #arxiv-2207.10495 #region-us \n",
"# Mnist-Ambiguous\n\nThis dataset contains mnist-like images, but with an unclear ground truth. For each image, there are two classes which could be considered true.\nRobust and uncertainty-aware DNNs should thus detect and flag these issues.",
"### Features\nSame as mnist, the supervised dataset has an 'image' (28x28 int array) and a 'label' (int).\n\nAdditionally, the following features are exposed for your convenience:\n\n- 'text_label' (str): A textual representation of the probabilistic label, e.g. 'p(0)=0.54, p(5)=0.46' \n- 'p_label' (list of floats): Ground-Truth probabilities for each class (two nonzero values for our ambiguous images)\n- 'is_ambiguous' (bool): Flag indicating if this is one of our ambiguous images (see 'splits' below)",
"### Splits\nWe provide four splits:\n\n- 'test': 10'000 ambiguous images\n- 'train': 10'000 ambiguous images - adding ambiguous images to the training set makes sure test-time ambiguous images are in-distribution.\n- 'test_mixed': 20'000 images, consisting of the (shuffled) concatenation of our ambiguous 'test' set and the nominal mnist test set by LeCun et. al.,\n- 'train_mixed': 70'000 images, consisting of the (shuffled) concatenation of our ambiguous 'training' and the nominal training set.\n\nNote that the ambiguous test images are highly ambiguous (i.e., the two classes have very similar ground truth likelihoods), \nthe training set images allow for more unbalanced ambiguity. \nThis is to make the training set more closely connected to the nominal data, while still keeping the test set clearly ambiguous.\n\nFor research targeting explicitly aleatoric uncertainty, we recommend training the model using 'train_mixed'. \nOtherwise, our 'test' set will lead to both epistemic and aleatoric uncertainty. \nIn related literature, such 'mixed' splits are sometimes denoted as *dirty* splits.",
"### Assessment and Validity\nFor a brief discussion of the strength and weaknesses of this dataset, \nincluding a quantitative comparison to the (only) other ambiguous datasets available in the literature, we refer to our paper.",
"### Paper\nPre-print here: URL\n\nCitation:",
"### License\nAs this is a derivative work of mnist, which is CC-BY-SA 3.0 licensed, our dataset is released using the same license."
] |
[
78,
62,
155,
301,
53,
12,
35
] |
[
"passage: TAGS\n#task_categories-image-classification #annotations_creators-machine-generated #size_categories-10K<n<100K #source_datasets-extended|mnist #language-English #license-cc-by-sa-3.0 #arxiv-2207.10495 #region-us \n# Mnist-Ambiguous\n\nThis dataset contains mnist-like images, but with an unclear ground truth. For each image, there are two classes which could be considered true.\nRobust and uncertainty-aware DNNs should thus detect and flag these issues.### Features\nSame as mnist, the supervised dataset has an 'image' (28x28 int array) and a 'label' (int).\n\nAdditionally, the following features are exposed for your convenience:\n\n- 'text_label' (str): A textual representation of the probabilistic label, e.g. 'p(0)=0.54, p(5)=0.46' \n- 'p_label' (list of floats): Ground-Truth probabilities for each class (two nonzero values for our ambiguous images)\n- 'is_ambiguous' (bool): Flag indicating if this is one of our ambiguous images (see 'splits' below)"
] |
bf70afa051891e255c7ea72702336b6c262e97f6
|
# Dataset Card for "bedroom_left_vs_right"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sungile/bedroom_left_vs_right
|
[
"region:us"
] |
2023-09-27T18:56:21+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19193302.0, "num_examples": 20}], "download_size": 19194928, "dataset_size": 19193302.0}}
|
2023-09-27T20:08:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bedroom_left_vs_right"
More Information needed
|
[
"# Dataset Card for \"bedroom_left_vs_right\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bedroom_left_vs_right\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bedroom_left_vs_right\"\n\nMore Information needed"
] |
0129675d859ce0d260329415d68d56ce1f9cce9c
|
# Dataset Card for "196211bd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/196211bd
|
[
"region:us"
] |
2023-09-27T19:00:15+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 217, "num_examples": 10}], "download_size": 1421, "dataset_size": 217}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T19:00:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "196211bd"
More Information needed
|
[
"# Dataset Card for \"196211bd\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"196211bd\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"196211bd\"\n\nMore Information needed"
] |
8f70325f4343645fa9162c1a0e3ea159709f0925
|
# Dataset Card for Observed Antibody Space
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
zgcarvalho/oas-test
|
[
"size_categories:10M<n<100M",
"license:cc-by-4.0",
"biology",
"protein",
"region:us"
] |
2023-09-27T19:19:04+00:00
|
{"license": "cc-by-4.0", "size_categories": "10M<n<100M", "pretty_name": "Observed Antibody Space", "config_names": ["paired", "unpaired"], "tags": ["biology", "protein"], "dataset_info": [{"config_name": "paired", "features": [{"name": "sequence_heavy", "dtype": "string"}, {"name": "sequence_light", "dtype": "string"}, {"name": "cdr1_heavy", "dtype": "string"}, {"name": "cdr2_heavy", "dtype": "string"}, {"name": "cdr3_heavy", "dtype": "string"}, {"name": "fwr1_heavy", "dtype": "string"}, {"name": "fwr2_heavy", "dtype": "string"}, {"name": "fwr3_heavy", "dtype": "string"}, {"name": "fwr4_heavy", "dtype": "string"}, {"name": "cdr1_light", "dtype": "string"}, {"name": "cdr2_light", "dtype": "string"}, {"name": "cdr3_light", "dtype": "string"}, {"name": "fwr1_light", "dtype": "string"}, {"name": "fwr2_light", "dtype": "string"}, {"name": "fwr3_light", "dtype": "string"}, {"name": "fwr4_light", "dtype": "string"}, {"name": "species", "dtype": "string"}, {"name": "vaccine", "dtype": "string"}, {"name": "disease", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 985822519, "num_examples": 1777462}], "download_size": 0, "dataset_size": 985822519}, {"config_name": "unpaired", "features": [{"name": "sequence", "dtype": "string"}, {"name": "chain", "dtype": "string"}, {"name": "cdr1", "dtype": "string"}, {"name": "cdr2", "dtype": "string"}, {"name": "cdr3", "dtype": "string"}, {"name": "fwr1", "dtype": "string"}, {"name": "fwr2", "dtype": "string"}, {"name": "fwr3", "dtype": "string"}, {"name": "fwr4", "dtype": "string"}, {"name": "species", "dtype": "string"}, {"name": "vaccine", "dtype": "string"}, {"name": "disease", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4671469078, "num_examples": 15925303}], "download_size": 0, "dataset_size": 4671469078}], "configs": [{"config_name": "paired", "data_files": [{"split": "train", "path": "paired/train-*"}]}, {"config_name": "unpaired", "data_files": [{"split": "train", "path": "unpaired/train-*"}]}]}
|
2023-09-28T18:34:40+00:00
|
[] |
[] |
TAGS
#size_categories-10M<n<100M #license-cc-by-4.0 #biology #protein #region-us
|
# Dataset Card for Observed Antibody Space
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Observed Antibody Space",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#size_categories-10M<n<100M #license-cc-by-4.0 #biology #protein #region-us \n",
"# Dataset Card for Observed Antibody Space",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
33,
11,
24,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#size_categories-10M<n<100M #license-cc-by-4.0 #biology #protein #region-us \n# Dataset Card for Observed Antibody Space## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7de4824c6865a13e82f4556f417314f15e16d4d0
|
Dataset for object detection of military aircraft
Bounding boxes are in PASCAL VOC format (xmin, ymin, xmax, ymax).
43 aircraft types:
(A-10, A-400M, AG-600, AV-8B, B-1, B-2, B-52, Be-200, C-130, C-17, C-2, C-5, E-2, E-7, EF-2000, F-117, F-14, F-15, F-16, F/A-18, F-22, F-35, F-4, J-20, JAS-39, MQ-9, Mig-31, Mirage2000, P-3(CP-140), RQ-4, Rafale, SR-71(may contain A-12), Su-34, Su-57, Tornado, Tu-160, Tu-95(Tu-142), U-2, US-2(US-1A Kai), V-22, Vulcan, XB-70, YF-23)
Please let me know if you find wrong labels or duplicated images.
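Since the annotations follow the PASCAL VOC layout, a minimal parsing sketch looks like the following; the annotation file path is hypothetical, so substitute a file from the downloaded dataset.

```python
import xml.etree.ElementTree as ET

# Minimal sketch for reading one PASCAL VOC annotation file.
tree = ET.parse("annotations/example.xml")  # hypothetical path
for obj in tree.getroot().iter("object"):
    name = obj.findtext("name")  # aircraft type, e.g. "F-16"
    box = obj.find("bndbox")
    xmin, ymin, xmax, ymax = (
        int(float(box.findtext(tag))) for tag in ("xmin", "ymin", "xmax", "ymax")
    )
    print(name, (xmin, ymin, xmax, ymax))
```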
|
Illia56/Military-Aircraft-Detection
|
[
"task_categories:object-detection",
"task_categories:zero-shot-classification",
"task_categories:zero-shot-image-classification",
"task_categories:depth-estimation",
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1M<n<10M",
"license:apache-2.0",
"Image",
"Computer Vision ",
"Military",
"Aviation",
"Engineering",
"region:us"
] |
2023-09-27T19:26:04+00:00
|
{"license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["object-detection", "zero-shot-classification", "zero-shot-image-classification", "depth-estimation", "image-classification", "image-segmentation"], "tags": ["Image", "Computer Vision ", "Military", "Aviation", "Engineering"]}
|
2023-09-28T04:40:58+00:00
|
[] |
[] |
TAGS
#task_categories-object-detection #task_categories-zero-shot-classification #task_categories-zero-shot-image-classification #task_categories-depth-estimation #task_categories-image-classification #task_categories-image-segmentation #size_categories-1M<n<10M #license-apache-2.0 #Image #Computer Vision #Military #Aviation #Engineering #region-us
|
Dataset for object detection of military aircraft
Bounding boxes are in PASCAL VOC format (xmin, ymin, xmax, ymax).
43 aircraft types:
(A-10, A-400M, AG-600, AV-8B, B-1, B-2, B-52, Be-200, C-130, C-17, C-2, C-5, E-2, E-7, EF-2000, F-117, F-14, F-15, F-16, F/A-18, F-22, F-35, F-4, J-20, JAS-39, MQ-9, Mig-31, Mirage2000, P-3(CP-140), RQ-4, Rafale, SR-71(may contain A-12), Su-34, Su-57, Tornado, Tu-160, Tu-95(Tu-142), U-2, US-2(US-1A Kai), V-22, Vulcan, XB-70, YF-23)
Please let me know if you find wrong labels or duplicated images.
|
[] |
[
"TAGS\n#task_categories-object-detection #task_categories-zero-shot-classification #task_categories-zero-shot-image-classification #task_categories-depth-estimation #task_categories-image-classification #task_categories-image-segmentation #size_categories-1M<n<10M #license-apache-2.0 #Image #Computer Vision #Military #Aviation #Engineering #region-us \n"
] |
[
117
] |
[
"passage: TAGS\n#task_categories-object-detection #task_categories-zero-shot-classification #task_categories-zero-shot-image-classification #task_categories-depth-estimation #task_categories-image-classification #task_categories-image-segmentation #size_categories-1M<n<10M #license-apache-2.0 #Image #Computer Vision #Military #Aviation #Engineering #region-us \n"
] |
f35313178caccc42551c0c7f1341136eb3fbe2e1
|
# Bangumi Image Base of Yagate Kimi Ni Naru
This is the image base of the bangumi Yagate Kimi ni Naru; we detected 17 characters and 1,763 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 597 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 9 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 49 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 46 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 451 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 18 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 76 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 52 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 17 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 16 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 20 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 44 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 82 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 39 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 129 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 7 | [Download](15/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 111 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/yagatekimininaru
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-27T19:36:25+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T11:50:36+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Yagate Kimi Ni Naru
=========================================
This is the image base of the bangumi Yagate Kimi ni Naru. We detected 17 characters and 1,763 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean and may contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% of the images).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
8580f1ff7062a79a56125521198814fcea79f642
|
# Dataset Card for "pubmed-abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gayanin/pubmed-abstracts
|
[
"region:us"
] |
2023-09-27T20:22:51+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "refs", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9419993, "num_examples": 74724}, {"name": "test", "num_bytes": 1206965, "num_examples": 9341}, {"name": "validation", "num_bytes": 1239760, "num_examples": 9341}], "download_size": 6522287, "dataset_size": 11866718}}
|
2023-09-27T21:44:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed-abstracts"
More Information needed
|
[
"# Dataset Card for \"pubmed-abstracts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed-abstracts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed-abstracts\"\n\nMore Information needed"
] |
b950a117ac0b8b4e44eff4c35e56fc32e7403c25
|
# Dataset of Miyauchi Hikage
This is the dataset of Miyauchi Hikage, containing 192 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). A short download sketch follows the table below.
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 192 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 446 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 497 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 192 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 192 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 192 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 446 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 446 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 393 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 497 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 497 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
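As referenced above, here is a minimal download sketch using `huggingface_hub`; this is an assumption-level example, with the file name taken from the table and the repo id from this card:

```python
# Minimal sketch: download and unpack the raw package listed in the table above.
import zipfile
from huggingface_hub import hf_hub_download

zip_path = hf_hub_download(
    repo_id="CyberHarem/miyauchi_hikage_nonnonbiyori",
    filename="dataset-raw.zip",   # any other package name from the table works the same way
    repo_type="dataset",
)

with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("miyauchi_hikage_raw")  # images plus their meta information
```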
|
CyberHarem/miyauchi_hikage_nonnonbiyori
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T20:31:46+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T20:37:51+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Miyauchi Hikage
==========================
This is the dataset of Miyauchi Hikage, containing 192 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
270d958ad09462a303199ca3d88576dc0f77d8de
|
# Dataset of Fujimiya Konomi
This is the dataset of Fujimiya Konomi, containing 160 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 160 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 389 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 427 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 160 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 160 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 160 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 389 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 389 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 331 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 427 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 427 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/fujimiya_konomi_nonnonbiyori
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T20:54:00+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T20:58:56+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Fujimiya Konomi
==========================
This is the dataset of Fujimiya Konomi, containing 160 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
6d2388527013c9d6ba097f1e38b37ea632164e82
|
# Dataset Card for "italian-dataset-deepl2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thomasavare/italian-dataset-deepl2
|
[
"region:us"
] |
2023-09-27T21:06:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "english", "dtype": "string"}, {"name": "italian", "dtype": "string"}, {"name": "Class", "dtype": "string"}, {"name": "Class_index", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 60782, "num_examples": 500}], "download_size": 22544, "dataset_size": 60782}}
|
2023-09-27T21:06:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "italian-dataset-deepl2"
More Information needed
|
[
"# Dataset Card for \"italian-dataset-deepl2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"italian-dataset-deepl2\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"italian-dataset-deepl2\"\n\nMore Information needed"
] |
93d4fdad0d4275415a565c97f702b2d8247f3ed9
|
# Dataset of Shinoda Akane
This is the dataset of Shinoda Akane, containing 82 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 82 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 205 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 233 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 82 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 82 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 82 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 205 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 205 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 185 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 233 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 233 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/shinoda_akane_nonnonbiyori
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T21:07:15+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T21:10:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Shinoda Akane
========================
This is the dataset of Shinoda Akane, containing 82 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
f2b2727f863616c532b5944c9db8f700449e7b39
|
# Dataset Card for "medical_dialogue-chinese-zhtw"
A Traditional Chinese medical question-answering dataset.
## Source
This dataset was converted from the [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data) GitHub repo.
## Contents
| Department | Count |
|---|---|
| **Andriatria (andrology)** | 94,596 QA pairs |
| **IM (internal medicine)** | 220,606 QA pairs |
| **OAGD (obstetrics and gynecology)** | 183,751 QA pairs |
| **Oncology** | 75,553 QA pairs |
| **Pediatric (pediatrics)** | 101,602 QA pairs |
| **Surgical (surgery)** | 115,991 QA pairs |
In total: 792,099 records.
## Example
The sample below is kept in the original Traditional Chinese; a loading sketch follows at the end of this card.
```json
{
  "instruction": "現在你是個神經腦外科醫生,請根據病人的問題給予建議:",
  "input": "癲癇病能吃德巴金嗎,錯覺,有時候感覺看到的和聽到的不太一樣。",
  "output": "巴金是廣譜抗病藥物,主要作用於中樞神經系統,對動物的藥理研究發現德巴金對各種癲癇的實驗模型(全身性和局部性)均有抗驚厥作用,對人的各種類型癲癇發作有抑製作用,作用機理可能與增加γ-氨基丁酸的濃度有關。主要是治癲癇藥物。建議在醫生的知道下,用藥,祝您身體早日康復。"
}
```
## Fields:
```
instruction: the instruction given to the model
input: the patient's question
output: the doctor's answer
```
## Usage Restrictions
This dataset is intended for research on large language models and must not be used for purposes that could harm society.
This dataset does not represent the position, interests, or views of any party and is unrelated to any kind of claim by any group. The project assumes no liability for any damage or dispute arising from the use of this dataset.
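As referenced above, a minimal loading sketch (assuming the default config and the single `train` split described in the repo metadata):

```python
# Minimal sketch: load the dataset and inspect one record.
from datasets import load_dataset

ds = load_dataset("erhwenkuo/medical_dialogue-chinese-zhtw", split="train")

example = ds[0]
print(example["instruction"])  # role-playing instruction for the model
print(example["input"])        # the patient's question
print(example["output"])       # the doctor's answer
```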
|
erhwenkuo/medical_dialogue-chinese-zhtw
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:mit",
"region:us"
] |
2023-09-27T21:27:35+00:00
|
{"language": ["zh"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 553726613, "num_examples": 799743}], "download_size": 288686981, "dataset_size": 553726613}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T22:06:14+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #license-mit #region-us
|
Dataset Card for "medical\_dialogue-chinese-zhtw"
=================================================
A Traditional Chinese medical question-answering dataset.
Source
--
This dataset was converted from the Toyhom/Chinese-medical-dialogue-data GitHub repo.
Contents
--
In total: 792,099 records.
Example
--
Fields:
---
Usage Restrictions
----
This dataset is intended for research on large language models and must not be used for purposes that could harm society.
This dataset does not represent the position, interests, or views of any party and is unrelated to any kind of claim by any group. The project assumes no liability for any damage or dispute arising from the use of this dataset.
|
[] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #license-mit #region-us \n"
] |
[
39
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #license-mit #region-us \n"
] |
3c261a8b52702ed1e1727da186e1fc9434900551
|
# Dataset of Yukimura Aoi
This is the dataset of Yukimura Aoi, containing 300 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 750 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 889 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 750 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 750 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 643 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 889 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 889 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/yukimura_aoi_encouragementofclimb
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T21:38:32+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T21:47:31+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Yukimura Aoi
=======================
This is the dataset of Yukimura Aoi, containing 300 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
91c011e7abc54bfc05ea9e3be7256bf63aff8288
|
# Dataset of Kuraue Hinata
This is the dataset of Kuraue Hinata, containing 299 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 299 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 722 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 876 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 299 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 299 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 299 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 722 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 722 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 625 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 876 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 876 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/kuraue_hinata_encouragementofclimb
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T22:15:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T22:21:20+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kuraue Hinata
========================
This is the dataset of Kuraue Hinata, containing 299 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
087b12f5be757b724ace6cb02dc13bb78a9a2973
|
# DEBATS (National Assembly and Senate)
The database contains full reports of French [debates](https://echanges.dila.gouv.fr/OPENDATA/Debats/) in the National Assembly since October 4, 2011, and in the Senate since October 2, 2011.
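A minimal loading sketch (assuming the default config with a single `train` split and the `id`/`text` fields listed in the repo metadata):

```python
# Minimal sketch: load the debate reports and preview one document.
from datasets import load_dataset

debates = load_dataset("Nicolas-BZRD/DEBATS_opendata", split="train")

doc = debates[0]
print(doc["id"])
print(doc["text"][:500])  # full session reports are long; print only a preview
```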
|
Nicolas-BZRD/DEBATS_opendata
|
[
"size_categories:1K<n<10K",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] |
2023-09-27T22:25:33+00:00
|
{"language": ["fr"], "license": "odc-by", "size_categories": ["1K<n<10K"], "pretty_name": "Debates at National Assembly and Senate", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 860286530, "num_examples": 2214}], "download_size": 438989465, "dataset_size": 860286530}, "tags": ["legal"]}
|
2023-09-28T10:00:26+00:00
|
[] |
[
"fr"
] |
TAGS
#size_categories-1K<n<10K #language-French #license-odc-by #legal #region-us
|
# DEBATS (National Assembly and Senate)
The database contains full reports of French debates in the National Assembly since October 4, 2011, and in the Senate since October 2, 2011.
|
[
"# DEBATS (National Assembly and Senate)\n\nThe database contains full reports of french debates in the National Assembly since October 4, 2011 and in the Senate since October 2, 2011."
] |
[
"TAGS\n#size_categories-1K<n<10K #language-French #license-odc-by #legal #region-us \n",
"# DEBATS (National Assembly and Senate)\n\nThe database contains full reports of french debates in the National Assembly since October 4, 2011 and in the Senate since October 2, 2011."
] |
[
34,
43
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #language-French #license-odc-by #legal #region-us \n# DEBATS (National Assembly and Senate)\n\nThe database contains full reports of french debates in the National Assembly since October 4, 2011 and in the Senate since October 2, 2011."
] |
72f2ae03c5cce1f02ec4714c40e79d669942c967
|
# Dataset of Saitō Kaede
This is the dataset of Saitō Kaede, containing 294 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 294 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 717 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 813 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 294 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 294 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 294 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 717 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 717 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 588 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 813 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 813 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/saito_kaede_encouragementofclimb
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T22:43:52+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T22:49:18+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Saitō Kaede
======================
This is the dataset of Saitō Kaede, containing 294 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
3aac46684181f38646cc90aad10651524d8a4a2f
|
# Dataset Card for "rayquaza"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kira/rayquaza
|
[
"region:us"
] |
2023-09-27T23:07:52+00:00
|
{"dataset_info": {"features": [{"name": "conversation", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "sys_message", "dtype": "string"}, {"name": "tkn_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1145395609.5420778, "num_examples": 396955}], "download_size": 559019729, "dataset_size": 1145395609.5420778}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T23:12:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rayquaza"
More Information needed
|
[
"# Dataset Card for \"rayquaza\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rayquaza\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rayquaza\"\n\nMore Information needed"
] |
1a0c52602a28cc2fddecba5a4fd763477613d06c
|
# Dataset of Aoba Kokona
This is the dataset of Aoba Kokona, containing 298 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 298 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 730 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 766 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 298 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 298 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 298 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 730 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 730 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 560 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 766 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 766 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/aoba_kokona_encouragementofclimb
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T23:12:08+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T23:18:18+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Aoba Kokona
======================
This is the dataset of Aoba Kokona, containing 298 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
c88fbc5c6afb32ed24ae40261df2e8df4c333684
|
# Dataset of Kurosaki Honoka
This is the dataset of Kurosaki Honoka, containing 82 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 82 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 196 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 228 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 82 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 82 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 82 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 196 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 196 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 172 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 228 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 228 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/kurosaki_honoka_encouragementofclimb
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T23:24:51+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T23:27:22+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kurosaki Honoka
==========================
This is the dataset of Kurosaki Honoka, containing 82 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
9e70a2c1a0999544f7de6b3b6e30c05a01d3e5ca
|
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai
### Supported Tasks
- **Language Model Finetuning**: Training through behavior cloning and knowledge instilling.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but chats in any language may appear in the dataset as users share more conversations.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, the model name, a flag for whether the chat used custom instructions (the custom instructions themselves are currently not included), and the content of each message together with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform to the ShareGPT standard.
This is an example entry in the dataset, noting that the number of turns is variable and can be many or as few as two (a short parsing sketch follows the example):
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
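As noted above, here is a minimal parsing sketch; it is an assumption-level example, presuming a single `train` split and the fields shown in the entry above:

```python
# Minimal sketch: load the chats and walk through the turns of one conversation.
from datasets import load_dataset

chats = load_dataset("CollectiveCognition/chats-data-2023-09-27", split="train")

chat = chats[0]
print(chat["title"], "|", chat["model_name"], "| custom instruction:", chat["custom_instruction"])
for turn in chat["conversations"]:
    print(f"[{turn['from']}] {turn['value'][:200]}")
```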
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email [email protected]
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT
|
CollectiveCognition/chats-data-2023-09-27
|
[
"license:mit",
"region:us"
] |
2023-09-27T23:39:17+00:00
|
{"license": "mit"}
|
2023-09-27T23:40:51+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: URL
### Supported Tasks
- Language Model Finetuning: Training through behavior cloning and knowledge instilling.
- Dialogue Understanding: Analyzing the flow and structure of conversations.
- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.
- Topic Modeling: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but chats in any language may appear in the dataset as users share more conversations.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, the model name, a flag for whether the chat used custom instructions (the custom instructions themselves are currently not included), and the content of each message together with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform to the ShareGPT standard.
This is an example entry in the dataset, noting that the number of turns is variable and can be many or as few as two:
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT
|
[
"# Dataset Card for \"Collective Cognition ChatGPT Conversations\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description",
"### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL",
"### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.",
"### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.",
"## Dataset Structure",
"### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting that turns are variable and can be many or as few as two:",
"### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).",
"## Dataset Creation",
"### Curation Rationale\nThe dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.",
"### Source Data\nThe data originates from user contributions on the \"Collective Cognition\" website.",
"### Personal and Sensitive Information\nAll chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.",
"### Discussion of Biases\nThere might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.",
"### Other Known Limitations\nThe dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.",
"## Additional Information",
"### Licensing Information\nMIT"
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for \"Collective Cognition ChatGPT Conversations\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description",
"### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL",
"### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.",
"### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.",
"## Dataset Structure",
"### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting that turns are variable and can be many or as few as two:",
"### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).",
"## Dataset Creation",
"### Curation Rationale\nThe dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.",
"### Source Data\nThe data originates from user contributions on the \"Collective Cognition\" website.",
"### Personal and Sensitive Information\nAll chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.",
"### Discussion of Biases\nThere might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.",
"### Other Known Limitations\nThe dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.",
"## Additional Information",
"### Licensing Information\nMIT"
] |
[
11,
19,
116,
4,
107,
75,
32,
6,
99,
28,
5,
38,
24,
52,
8,
40,
55,
42,
5,
7
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for \"Collective Cognition ChatGPT Conversations\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.## Dataset Structure### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting that turns are variable and can be many or as few as two:### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).## Dataset Creation"
] |
2fe1c1a99946fd3998d932d8a46fc99ddd2ab3fc
|
# Dataset of Hoto Kokoa
This is the dataset of Hoto Kokoa, containing 300 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 717 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 827 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 717 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 717 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 616 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 827 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 827 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/hoto_kokoa_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-27T23:57:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T00:03:51+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Hoto Kokoa
=====================
This is the dataset of Hoto Kokoa, containing 300 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
44b5b2faf26064b7a8f5a5039bbb76c67344e57b
|
# Dataset Card for "pubmed_subset_wiki_1p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/pubmed_subset_wiki_1p
|
[
"region:us"
] |
2023-09-28T00:20:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2876377130.508231, "num_examples": 1010142}, {"name": "test", "num_bytes": 1024229, "num_examples": 1000}], "download_size": 630832709, "dataset_size": 2877401359.508231}}
|
2023-09-28T00:21:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed_subset_wiki_1p"
More Information needed
|
[
"# Dataset Card for \"pubmed_subset_wiki_1p\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed_subset_wiki_1p\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_subset_wiki_1p\"\n\nMore Information needed"
] |
992f67985cd0caa7316b035e4b82694dadd4d7d9
|
# Dataset of Kafuu Chino
This is the dataset of Kafuu Chino, containing 292 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 292 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 680 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 765 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 292 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 292 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 292 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 680 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 680 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 582 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 765 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 765 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/kafuu_chino_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T00:30:21+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T00:36:35+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kafuu Chino
======================
This is the dataset of Kafuu Chino, containing 292 images and their tags.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
54dae5762da9e4c5d814246036ca1f32364f7bfa
|
# Dataset Card for "gpt4all-j-prompt-generations-pt"
## Dataset Description
A copy of the [gpt4all_prompt_generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations) dataset translated into Portuguese using the googletrans library.
## Translate
[translate_dataset.ipynb](translate_dataset.ipynb)
## Usage
[dataset_usage.ipynb](dataset_usage.ipynb)
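For quick experimentation, here is a minimal loading sketch with the `datasets` library (assuming the default config and the `prompt`/`response`/`source`/`id` columns listed in the dataset info):
```python
# Minimal sketch: load the Portuguese translation and inspect one example.
from datasets import load_dataset

ds = load_dataset("pablo-moreira/gpt4all-j-prompt-generations-pt", split="train")
print(ds)                # features: prompt, response, source, id
print(ds[0]["prompt"])   # translated prompt
print(ds[0]["response"]) # translated response
```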
|
pablo-moreira/gpt4all-j-prompt-generations-pt
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:pt",
"license:apache-2.0",
"region:us"
] |
2023-09-28T00:43:05+00:00
|
{"language": ["pt"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "GPT4All Prompt Generations translated into Portuguese using Google Translate.", "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1956916380, "num_examples": 808812}], "download_size": 1134108118, "dataset_size": 1956916380}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-06T15:02:12+00:00
|
[] |
[
"pt"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Portuguese #license-apache-2.0 #region-us
|
# Dataset Card for "gpt4all-j-prompt-generations-pt"
## Dataset Description
A copy of the gpt4all_prompt_generations dataset translated into Portuguese using the googletrans library.
## Translate
translate_dataset.ipynb
## Usage
dataset_usage.ipynb
|
[
"# Dataset Card for \"gpt4all-j-prompt-generations-pt\"",
"## Dataset Description\n\nCopy translated into Portuguese of the dataset gpt4all_prompt_generations using the googletrans library.",
"## Translate\n\ntranslate_dataset.ipynb",
"## Usage\n\ndataset_usage.ipynb"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Portuguese #license-apache-2.0 #region-us \n",
"# Dataset Card for \"gpt4all-j-prompt-generations-pt\"",
"## Dataset Description\n\nCopy translated into Portuguese of the dataset gpt4all_prompt_generations using the googletrans library.",
"## Translate\n\ntranslate_dataset.ipynb",
"## Usage\n\ndataset_usage.ipynb"
] |
[
43,
22,
34,
12,
11
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Portuguese #license-apache-2.0 #region-us \n# Dataset Card for \"gpt4all-j-prompt-generations-pt\"## Dataset Description\n\nCopy translated into Portuguese of the dataset gpt4all_prompt_generations using the googletrans library.## Translate\n\ntranslate_dataset.ipynb## Usage\n\ndataset_usage.ipynb"
] |
b4b9e8f90b049f2dc8557d9e5fc6106cc6ce46ca
|
# Dataset of Tedeza Rize
This is the dataset of Tedeza Rize, containing 299 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 299 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 696 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 771 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 299 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 299 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 299 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 696 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 696 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 592 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 771 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 771 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/tedeza_rize_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T01:06:26+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T01:13:32+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Tedeza Rize
======================
This is the dataset of Tedeza Rize, containing 299 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
ae0f79c2722d565b68ef40f74dd38b643b1e2cda
|
# Dataset Card for "dart"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rookshanks/dart
|
[
"region:us"
] |
2023-09-28T01:10:24+00:00
|
{"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15361709, "num_examples": 62659}, {"name": "validation", "num_bytes": 1895789, "num_examples": 6980}, {"name": "test", "num_bytes": 3429190, "num_examples": 12552}], "download_size": 1145768, "dataset_size": 20686688}}
|
2023-09-28T01:35:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dart"
More Information needed
|
[
"# Dataset Card for \"dart\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dart\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dart\"\n\nMore Information needed"
] |
fdb8c89fabd65ee3ac11442e6899e774583291d2
|
# Dataset of Ujimatsu Chiya
This is the dataset of Ujimatsu Chiya, containing 297 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 297 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 694 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 767 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 297 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 297 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 297 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 694 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 694 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 582 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 767 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 767 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/ujimatsu_chiya_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T01:41:45+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T01:48:33+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Ujimatsu Chiya
=========================
This is the dataset of Ujimatsu Chiya, containing 297 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
7a46feb0d5ebfd8485aebe6e849170e4205611aa
|
# Dataset Card for "orca_mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alpayariyak/orca_mini
|
[
"region:us"
] |
2023-09-28T02:13:02+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "system", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62321431, "num_examples": 56037}], "download_size": 30816818, "dataset_size": 62321431}}
|
2023-09-28T02:13:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "orca_mini"
More Information needed
|
[
"# Dataset Card for \"orca_mini\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"orca_mini\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"orca_mini\"\n\nMore Information needed"
] |
e507503f7292ebec8617e48a0f47d5f3c0e2d722
|
# Dataset Card for "orca_mini_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alpayariyak/orca_mini_v1
|
[
"region:us"
] |
2023-09-28T02:15:07+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "system", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62321431, "num_examples": 56037}], "download_size": 30816818, "dataset_size": 62321431}}
|
2023-09-28T02:15:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "orca_mini_v1"
More Information needed
|
[
"# Dataset Card for \"orca_mini_v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"orca_mini_v1\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"orca_mini_v1\"\n\nMore Information needed"
] |
4454a69d435db417e4fa3dc16b7a5e89c52a4e1a
|
# Dataset of Kirima Sharo
This is the dataset of Kirima Sharo, containing 295 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 295 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 658 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 784 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 295 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 295 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 295 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 658 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 658 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 553 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 784 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 784 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/kirima_sharo_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T02:17:02+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T02:27:03+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kirima Sharo
=======================
This is the dataset of Kirima Sharo, containing 295 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
5e7bcbe79b8416ce9f64062a8b6d91462d744115
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/608c79739c5a8f8ddd85c409/5OV6RfqSKycPUuI0rm34m.png" width="500">
# comma body
A dataset of indoor navigation with the comma body.
# description
This dataset consists of 69 segments (1 min chunks) of video data compressed using H.265 and sensor measurements and logs from openpilot.
# how to use
- videos: use `openpilot.tools.lib.framereader` or your favorite video decoder
- logs: use `openpilot.tools.lib.logreader` or `PlotJuggler` (see the sketch below)
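A minimal Python sketch of both readers; the segment layout, file names, and exact method signatures below are assumptions and may need adjusting to your openpilot checkout:
```python
# Sketch only: file names (fcamera.hevc, rlog.bz2) and method details are assumptions.
from openpilot.tools.lib.framereader import FrameReader
from openpilot.tools.lib.logreader import LogReader

segment = "path/to/a/1min/segment"  # hypothetical local path to one downloaded segment

# videos: decode the H.265 stream into frames
fr = FrameReader(f"{segment}/fcamera.hevc")
first_frame = fr.get(0, pix_fmt="rgb24")  # decoded pixel data for frame 0

# logs: iterate over the logged messages (sensor measurements, etc.)
for msg in LogReader(f"{segment}/rlog.bz2"):
    print(msg.which())  # the message type of each log entry
    break
```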
# have fun!
|
commaai/commabody
|
[
"license:mit",
"region:us"
] |
2023-09-28T02:21:20+00:00
|
{"license": "mit"}
|
2023-09-28T02:53:25+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
<img src="URL width="500">
# comma body
A dataset of indoor navigation with the comma body.
# description
This dataset consists of 69 segments (1 min chunks) of video data compressed using H.265 and sensor measurements and logs from openpilot.
# how to use
- videos: use 'URL.framereader' or your favorite video decoder
- logs: use 'URL.logreader' or 'PlotJuggler'
# have fun!
|
[
"# comma body \nA dataset of indoor navigation with the comma body.",
"# description \nThis dataset is consists of 69 segments (1 min chunks) of video data compressed using H.265 and sensor measurements and logs from openpilot.",
"# how to use \n- videos: use 'URL.framereader' or your favorite video decoder\n- logs: use 'URL.logreader' or 'PlotJuggler'",
"# have fun!"
] |
[
"TAGS\n#license-mit #region-us \n",
"# comma body \nA dataset of indoor navigation with the comma body.",
"# description \nThis dataset is consists of 69 segments (1 min chunks) of video data compressed using H.265 and sensor measurements and logs from openpilot.",
"# how to use \n- videos: use 'URL.framereader' or your favorite video decoder\n- logs: use 'URL.logreader' or 'PlotJuggler'",
"# have fun!"
] |
[
11,
15,
38,
42,
4
] |
[
"passage: TAGS\n#license-mit #region-us \n# comma body \nA dataset of indoor navigation with the comma body.# description \nThis dataset is consists of 69 segments (1 min chunks) of video data compressed using H.265 and sensor measurements and logs from openpilot.# how to use \n- videos: use 'URL.framereader' or your favorite video decoder\n- logs: use 'URL.logreader' or 'PlotJuggler'# have fun!"
] |
e236feecff380c4cc35e512d11dc2b4febdebf6a
|
# Dataset Card for "orca_m_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alpayariyak/orca_m_v1
|
[
"region:us"
] |
2023-09-28T02:25:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "system_prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62321431, "num_examples": 56037}], "download_size": 30817650, "dataset_size": 62321431}}
|
2023-09-28T02:25:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "orca_m_v1"
More Information needed
|
[
"# Dataset Card for \"orca_m_v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"orca_m_v1\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"orca_m_v1\"\n\nMore Information needed"
] |
3b58191aec443aca4ba870b8327b42aaac61007b
|
# Dataset Card for "combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DopeorNope/combined
|
[
"region:us"
] |
2023-09-28T02:25:58+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36438102, "num_examples": 27085}], "download_size": 19659282, "dataset_size": 36438102}}
|
2023-09-28T02:32:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "combined"
More Information needed
|
[
"# Dataset Card for \"combined\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"combined\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"combined\"\n\nMore Information needed"
] |
d3752ca5da8688e5e55f94bc9b4124eb322e9dc9
|
# Dataset of Jyōga Maya
This is the dataset of Jyōga Maya, containing 283 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 283 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 634 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 716 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 283 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 283 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 283 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 634 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 634 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 535 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 716 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 716 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/jyoga_maya_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T02:53:20+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T02:59:37+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Jyōga Maya
=====================
This is the dataset of Jyōga Maya, containing 283 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
3010fb64ca99ed01328aee5d9117b2eb9e637b3d
|
# Dataset Card for "train-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AfshanAhmed/train-data
|
[
"region:us"
] |
2023-09-28T03:16:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 215477115.0, "num_examples": 210}], "download_size": 215489237, "dataset_size": 215477115.0}}
|
2023-09-28T03:23:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train-data"
More Information needed
|
[
"# Dataset Card for \"train-data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train-data\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train-data\"\n\nMore Information needed"
] |
cfcaa761bd6906e4e42571259657861972424209
|
# Dataset of Natsu Megumi
This is the dataset of Natsu Megumi, containing 288 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 288 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 669 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 724 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 288 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 288 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 288 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 669 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 669 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 557 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 724 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 724 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/natsu_megumi_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T03:25:46+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T03:28:53+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Natsu Megumi
=======================
This is the dataset of Natsu Megumi, containing 288 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
585ebdb8d51fe7e095e7d3d0f19bfa1d47126e9a
|
# Dataset Card for "training-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AfshanAhmed/training-data
|
[
"region:us"
] |
2023-09-28T03:33:00+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 301571473.0, "num_examples": 300}], "download_size": 301565751, "dataset_size": 301571473.0}}
|
2023-09-28T05:00:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "training-data"
More Information needed
|
[
"# Dataset Card for \"training-data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"training-data\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"training-data\"\n\nMore Information needed"
] |
d8d9abce7bf8e8513503d944603814cbaa4e1a88
|
# Dataset of Aoyama Midori
This is the dataset of Aoyama Midori, containing 195 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 195 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 457 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 530 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 195 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 195 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 195 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 457 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 457 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 393 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 530 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 530 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/aoyama_midori_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T03:50:31+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T03:55:47+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Aoyama Midori
========================
This is the dataset of Aoyama Midori, containing 195 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
aa5db06b7567225e0352eaf2b3c6c6bce0d92131
|
# Dataset of Hoto Mocha
This is the dataset of Hoto Mocha, containing 68 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 68 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 167 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 187 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 68 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 68 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 68 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 167 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 167 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 143 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 187 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 187 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/hoto_mocha_istheorderarabbit
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T04:04:53+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T04:08:12+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Hoto Mocha
=====================
This is the dataset of Hoto Mocha, containing 68 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
9492855a98da25b2677fda9b0c54663193cdbf2c
|
# Dataset Card for "marinda-type-inference-debuginfo-only-O0-shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
PurCL/marinda-type-inference-debuginfo-only-O0-shuffle
|
[
"region:us"
] |
2023-09-28T04:08:20+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "metadata", "struct": [{"name": "binary_name", "dtype": "string"}, {"name": "function_addr", "dtype": "int64"}, {"name": "function_name", "dtype": "string"}, {"name": "project_name", "dtype": "string"}]}, {"name": "code_w_type", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "data_dep", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 268866704.4189582, "num_examples": 55771}, {"name": "test", "num_bytes": 29875149.581041828, "num_examples": 6197}], "download_size": 63950792, "dataset_size": 298741854.0}}
|
2023-09-28T04:08:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "marinda-type-inference-debuginfo-only-O0-shuffle"
More Information needed
|
[
"# Dataset Card for \"marinda-type-inference-debuginfo-only-O0-shuffle\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"marinda-type-inference-debuginfo-only-O0-shuffle\"\n\nMore Information needed"
] |
[
6,
32
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"marinda-type-inference-debuginfo-only-O0-shuffle\"\n\nMore Information needed"
] |
756049f95804784260572423394e70be3b004410
|
# Dataset Card for "marinda-type-inference-debuginfo-only-O2-shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
PurCL/marinda-type-inference-debuginfo-only-O2-shuffle
|
[
"region:us"
] |
2023-09-28T04:10:19+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "metadata", "struct": [{"name": "binary_name", "dtype": "string"}, {"name": "function_addr", "dtype": "int64"}, {"name": "function_name", "dtype": "string"}, {"name": "project_name", "dtype": "string"}]}, {"name": "code_w_type", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "data_dep", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 204117739.7069311, "num_examples": 29631}, {"name": "test", "num_bytes": 22684341.293068886, "num_examples": 3293}], "download_size": 56107280, "dataset_size": 226802081.0}}
|
2023-09-28T04:10:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "marinda-type-inference-debuginfo-only-O2-shuffle"
More Information needed
|
[
"# Dataset Card for \"marinda-type-inference-debuginfo-only-O2-shuffle\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"marinda-type-inference-debuginfo-only-O2-shuffle\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"marinda-type-inference-debuginfo-only-O2-shuffle\"\n\nMore Information needed"
] |
a507bf7b237d93f99ff68393c84b556f7985e719
|
# Dataset Card for "marinda-type-inference-debuginfo-only-O1-shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
PurCL/marinda-type-inference-debuginfo-only-O1-shuffle
|
[
"region:us"
] |
2023-09-28T04:10:26+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "metadata", "struct": [{"name": "binary_name", "dtype": "string"}, {"name": "function_addr", "dtype": "int64"}, {"name": "function_name", "dtype": "string"}, {"name": "project_name", "dtype": "string"}]}, {"name": "code_w_type", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "data_dep", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 201535867.70075417, "num_examples": 37113}, {"name": "test", "num_bytes": 22394684.299245823, "num_examples": 4124}], "download_size": 52386440, "dataset_size": 223930552.0}}
|
2023-09-28T04:10:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "marinda-type-inference-debuginfo-only-O1-shuffle"
More Information needed
|
[
"# Dataset Card for \"marinda-type-inference-debuginfo-only-O1-shuffle\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"marinda-type-inference-debuginfo-only-O1-shuffle\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"marinda-type-inference-debuginfo-only-O1-shuffle\"\n\nMore Information needed"
] |
f47f89880cd551ae1efa1cde46db119ff33f122e
|
# Dataset Card for "marinda-type-inference-debuginfo-only-O3-shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
PurCL/marinda-type-inference-debuginfo-only-O3-shuffle
|
[
"region:us"
] |
2023-09-28T04:10:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "metadata", "struct": [{"name": "binary_name", "dtype": "string"}, {"name": "function_addr", "dtype": "int64"}, {"name": "function_name", "dtype": "string"}, {"name": "project_name", "dtype": "string"}]}, {"name": "code_w_type", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "data_dep", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 265826924.50166753, "num_examples": 28065}, {"name": "test", "num_bytes": 29542639.498332478, "num_examples": 3119}], "download_size": 78570389, "dataset_size": 295369564.0}}
|
2023-09-28T04:10:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "marinda-type-inference-debuginfo-only-O3-shuffle"
More Information needed
|
[
"# Dataset Card for \"marinda-type-inference-debuginfo-only-O3-shuffle\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"marinda-type-inference-debuginfo-only-O3-shuffle\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"marinda-type-inference-debuginfo-only-O3-shuffle\"\n\nMore Information needed"
] |
846d104ad02ff194fc2fb84dcb8c594fb2134aa3
|
# Dataset Card for "NoiseProj_Chelsea"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jhuang14/NoiseProj_Chelsea
|
[
"region:us"
] |
2023-09-28T04:17:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "log0728_113539_1.00.png", "1": "log0728_113539_10.00.png", "2": "log0728_113539_100.00.png", "3": "log0728_113539_101.00.png", "4": "log0728_113539_102.00.png", "5": "log0728_113539_103.00.png", "6": "log0728_113539_104.00.png", "7": "log0728_113539_105.00.png", "8": "log0728_113539_11.00.png", "9": "log0728_113539_12.00.png", "10": "log0728_113539_13.00.png", "11": "log0728_113539_14.00.png", "12": "log0728_113539_15.00.png", "13": "log0728_113539_16.00.png", "14": "log0728_113539_17.00.png", "15": "log0728_113539_18.00.png", "16": "log0728_113539_19.00.png", "17": "log0728_113539_2.00.png", "18": "log0728_113539_20.00.png", "19": "log0728_113539_21.00.png", "20": "log0728_113539_22.00.png", "21": "log0728_113539_23.00.png", "22": "log0728_113539_24.00.png", "23": "log0728_113539_25.00.png", "24": "log0728_113539_26.00.png", "25": "log0728_113539_27.00.png", "26": "log0728_113539_28.00.png", "27": "log0728_113539_29.00.png", "28": "log0728_113539_3.00.png", "29": "log0728_113539_30.00.png", "30": "log0728_113539_31.00.png", "31": "log0728_113539_32.00.png", "32": "log0728_113539_33.00.png", "33": "log0728_113539_34.00.png", "34": "log0728_113539_35.00.png", "35": "log0728_113539_36.00.png", "36": "log0728_113539_37.00.png", "37": "log0728_113539_38.00.png", "38": "log0728_113539_39.00.png", "39": "log0728_113539_4.00.png", "40": "log0728_113539_40.00.png", "41": "log0728_113539_41.00.png", "42": "log0728_113539_42.00.png", "43": "log0728_113539_43.00.png", "44": "log0728_113539_44.00.png", "45": "log0728_113539_45.00.png", "46": "log0728_113539_46.00.png", "47": "log0728_113539_47.00.png", "48": "log0728_113539_48.00.png", "49": "log0728_113539_49.00.png", "50": "log0728_113539_5.00.png", "51": "log0728_113539_50.00.png", "52": "log0728_113539_51.00.png", "53": "log0728_113539_52.00.png", "54": "log0728_113539_53.00.png", "55": "log0728_113539_54.00.png", "56": "log0728_113539_55.00.png", "57": "log0728_113539_56.00.png", "58": "log0728_113539_57.00.png", "59": "log0728_113539_58.00.png", "60": "log0728_113539_59.00.png", "61": "log0728_113539_6.00.png", "62": "log0728_113539_60.00.png", "63": "log0728_113539_61.00.png", "64": "log0728_113539_62.00.png", "65": "log0728_113539_63.00.png", "66": "log0728_113539_64.00.png", "67": "log0728_113539_65.00.png", "68": "log0728_113539_66.00.png", "69": "log0728_113539_67.00.png", "70": "log0728_113539_68.00.png", "71": "log0728_113539_69.00.png", "72": "log0728_113539_7.00.png", "73": "log0728_113539_70.00.png", "74": "log0728_113539_71.00.png", "75": "log0728_113539_72.00.png", "76": "log0728_113539_73.00.png", "77": "log0728_113539_74.00.png", "78": "log0728_113539_75.00.png", "79": "log0728_113539_76.00.png", "80": "log0728_113539_77.00.png", "81": "log0728_113539_78.00.png", "82": "log0728_113539_79.00.png", "83": "log0728_113539_8.00.png", "84": "log0728_113539_80.00.png", "85": "log0728_113539_81.00.png", "86": "log0728_113539_82.00.png", "87": "log0728_113539_83.00.png", "88": "log0728_113539_84.00.png", "89": "log0728_113539_85.00.png", "90": "log0728_113539_86.00.png", "91": "log0728_113539_87.00.png", "92": "log0728_113539_88.00.png", "93": "log0728_113539_89.00.png", "94": "log0728_113539_9.00.png", "95": "log0728_113539_90.00.png", "96": "log0728_113539_91.00.png", 
"97": "log0728_113539_92.00.png", "98": "log0728_113539_93.00.png", "99": "log0728_113539_94.00.png", "100": "log0728_113539_95.00.png", "101": "log0728_113539_96.00.png", "102": "log0728_113539_97.00.png", "103": "log0728_113539_98.00.png", "104": "log0728_113539_99.00.png", "105": "log0728_151922_1.00.png", "106": "log0728_151922_10.00.png", "107": "log0728_151922_11.00.png", "108": "log0728_151922_12.00.png", "109": "log0728_151922_13.00.png", "110": "log0728_151922_14.00.png", "111": "log0728_151922_15.00.png", "112": "log0728_151922_16.00.png", "113": "log0728_151922_17.00.png", "114": "log0728_151922_18.00.png", "115": "log0728_151922_19.00.png", "116": "log0728_151922_2.00.png", "117": "log0728_151922_20.00.png", "118": "log0728_151922_21.00.png", "119": "log0728_151922_22.00.png", "120": "log0728_151922_23.00.png", "121": "log0728_151922_24.00.png", "122": "log0728_151922_25.00.png", "123": "log0728_151922_26.00.png", "124": "log0728_151922_27.00.png", "125": "log0728_151922_28.00.png", "126": "log0728_151922_29.00.png", "127": "log0728_151922_3.00.png", "128": "log0728_151922_30.00.png", "129": "log0728_151922_31.00.png", "130": "log0728_151922_32.00.png", "131": "log0728_151922_33.00.png", "132": "log0728_151922_34.00.png", "133": "log0728_151922_35.00.png", "134": "log0728_151922_36.00.png", "135": "log0728_151922_37.00.png", "136": "log0728_151922_38.00.png", "137": "log0728_151922_39.00.png", "138": "log0728_151922_4.00.png", "139": "log0728_151922_40.00.png", "140": "log0728_151922_41.00.png", "141": "log0728_151922_42.00.png", "142": "log0728_151922_43.00.png", "143": "log0728_151922_44.00.png", "144": "log0728_151922_45.00.png", "145": "log0728_151922_46.00.png", "146": "log0728_151922_47.00.png", "147": "log0728_151922_48.00.png", "148": "log0728_151922_49.00.png", "149": "log0728_151922_5.00.png", "150": "log0728_151922_50.00.png", "151": "log0728_151922_51.00.png", "152": "log0728_151922_52.00.png", "153": "log0728_151922_53.00.png", "154": "log0728_151922_54.00.png", "155": "log0728_151922_55.00.png", "156": "log0728_151922_56.00.png", "157": "log0728_151922_57.00.png", "158": "log0728_151922_58.00.png", "159": "log0728_151922_59.00.png", "160": "log0728_151922_6.00.png", "161": "log0728_151922_60.00.png", "162": "log0728_151922_61.00.png", "163": "log0728_151922_62.00.png", "164": "log0728_151922_63.00.png", "165": "log0728_151922_64.00.png", "166": "log0728_151922_65.00.png", "167": "log0728_151922_66.00.png", "168": "log0728_151922_67.00.png", "169": "log0728_151922_68.00.png", "170": "log0728_151922_69.00.png", "171": "log0728_151922_7.00.png", "172": "log0728_151922_70.00.png", "173": "log0728_151922_71.00.png", "174": "log0728_151922_72.00.png", "175": "log0728_151922_73.00.png", "176": "log0728_151922_74.00.png", "177": "log0728_151922_75.00.png", "178": "log0728_151922_76.00.png", "179": "log0728_151922_77.00.png", "180": "log0728_151922_78.00.png", "181": "log0728_151922_79.00.png", "182": "log0728_151922_8.00.png", "183": "log0728_151922_80.00.png", "184": "log0728_151922_81.00.png", "185": "log0728_151922_82.00.png", "186": "log0728_151922_83.00.png", "187": "log0728_151922_84.00.png", "188": "log0728_151922_85.00.png", "189": "log0728_151922_86.00.png", "190": "log0728_151922_87.00.png", "191": "log0728_151922_88.00.png", "192": "log0728_151922_89.00.png", "193": "log0728_151922_9.00.png", "194": "log0728_151922_90.00.png", "195": "log0728_151922_91.00.png", "196": "log0728_151922_92.00.png", "197": "log0728_190304_1.00.png", "198": 
"log0728_190304_10.00.png", "199": "log0728_190304_11.00.png", "200": "log0728_190304_12.00.png", "201": "log0728_190304_13.00.png", "202": "log0728_190304_14.00.png", "203": "log0728_190304_15.00.png", "204": "log0728_190304_16.00.png", "205": "log0728_190304_17.00.png", "206": "log0728_190304_18.00.png", "207": "log0728_190304_19.00.png", "208": "log0728_190304_2.00.png", "209": "log0728_190304_20.00.png", "210": "log0728_190304_21.00.png", "211": "log0728_190304_22.00.png", "212": "log0728_190304_23.00.png", "213": "log0728_190304_24.00.png", "214": "log0728_190304_25.00.png", "215": "log0728_190304_26.00.png", "216": "log0728_190304_27.00.png", "217": "log0728_190304_28.00.png", "218": "log0728_190304_29.00.png", "219": "log0728_190304_3.00.png", "220": "log0728_190304_30.00.png", "221": "log0728_190304_31.00.png", "222": "log0728_190304_32.00.png", "223": "log0728_190304_33.00.png", "224": "log0728_190304_34.00.png", "225": "log0728_190304_35.00.png", "226": "log0728_190304_36.00.png", "227": "log0728_190304_37.00.png", "228": "log0728_190304_38.00.png", "229": "log0728_190304_39.00.png", "230": "log0728_190304_4.00.png", "231": "log0728_190304_40.00.png", "232": "log0728_190304_41.00.png", "233": "log0728_190304_42.00.png", "234": "log0728_190304_43.00.png", "235": "log0728_190304_44.00.png", "236": "log0728_190304_45.00.png", "237": "log0728_190304_46.00.png", "238": "log0728_190304_47.00.png", "239": "log0728_190304_48.00.png", "240": "log0728_190304_5.00.png", "241": "log0728_190304_6.00.png", "242": "log0728_190304_7.00.png", "243": "log0728_190304_8.00.png", "244": "log0728_190304_9.00.png", "245": "log0728_224646_1.00.png", "246": "log0728_224646_10.00.png", "247": "log0728_224646_11.00.png", "248": "log0728_224646_12.00.png", "249": "log0728_224646_13.00.png", "250": "log0728_224646_14.00.png", "251": "log0728_224646_2.00.png", "252": "log0728_224646_3.00.png", "253": "log0728_224646_4.00.png", "254": "log0728_224646_5.00.png", "255": "log0728_224646_6.00.png", "256": "log0728_224646_7.00.png", "257": "log0728_224646_8.00.png", "258": "log0728_224646_9.00.png", "259": "log0729_023028_1.00.png", "260": "log0729_023028_10.00.png", "261": "log0729_023028_11.00.png", "262": "log0729_023028_12.00.png", "263": "log0729_023028_13.00.png", "264": "log0729_023028_14.00.png", "265": "log0729_023028_15.00.png", "266": "log0729_023028_16.00.png", "267": "log0729_023028_17.00.png", "268": "log0729_023028_18.00.png", "269": "log0729_023028_19.00.png", "270": "log0729_023028_2.00.png", "271": "log0729_023028_20.00.png", "272": "log0729_023028_21.00.png", "273": "log0729_023028_22.00.png", "274": "log0729_023028_3.00.png", "275": "log0729_023028_4.00.png", "276": "log0729_023028_5.00.png", "277": "log0729_023028_6.00.png", "278": "log0729_023028_7.00.png", "279": "log0729_023028_8.00.png", "280": "log0729_023028_9.00.png", "281": "log0729_061410_1.00.png", "282": "log0729_061410_10.00.png", "283": "log0729_061410_100.00.png", "284": "log0729_061410_101.00.png", "285": "log0729_061410_102.00.png", "286": "log0729_061410_103.00.png", "287": "log0729_061410_104.00.png", "288": "log0729_061410_105.00.png", "289": "log0729_061410_106.00.png", "290": "log0729_061410_107.00.png", "291": "log0729_061410_108.00.png", "292": "log0729_061410_109.00.png", "293": "log0729_061410_11.00.png", "294": "log0729_061410_110.00.png", "295": "log0729_061410_111.00.png", "296": "log0729_061410_112.00.png", "297": "log0729_061410_113.00.png", "298": "log0729_061410_114.00.png", "299": "log0729_061410_115.00.png", 
"300": "log0729_061410_116.00.png", "301": "log0729_061410_117.00.png", "302": "log0729_061410_118.00.png", "303": "log0729_061410_119.00.png", "304": "log0729_061410_12.00.png", "305": "log0729_061410_120.00.png", "306": "log0729_061410_121.00.png", "307": "log0729_061410_122.00.png", "308": "log0729_061410_123.00.png", "309": "log0729_061410_124.00.png", "310": "log0729_061410_125.00.png", "311": "log0729_061410_126.00.png", "312": "log0729_061410_127.00.png", "313": "log0729_061410_128.00.png", "314": "log0729_061410_129.00.png", "315": "log0729_061410_13.00.png", "316": "log0729_061410_130.00.png", "317": "log0729_061410_131.00.png", "318": "log0729_061410_132.00.png", "319": "log0729_061410_133.00.png", "320": "log0729_061410_134.00.png", "321": "log0729_061410_135.00.png", "322": "log0729_061410_136.00.png", "323": "log0729_061410_137.00.png", "324": "log0729_061410_138.00.png", "325": "log0729_061410_139.00.png", "326": "log0729_061410_14.00.png", "327": "log0729_061410_140.00.png", "328": "log0729_061410_141.00.png", "329": "log0729_061410_142.00.png", "330": "log0729_061410_143.00.png", "331": "log0729_061410_144.00.png", "332": "log0729_061410_145.00.png", "333": "log0729_061410_146.00.png", "334": "log0729_061410_147.00.png", "335": "log0729_061410_148.00.png", "336": "log0729_061410_149.00.png", "337": "log0729_061410_15.00.png", "338": "log0729_061410_150.00.png", "339": "log0729_061410_151.00.png", "340": "log0729_061410_152.00.png", "341": "log0729_061410_153.00.png", "342": "log0729_061410_154.00.png", "343": "log0729_061410_155.00.png", "344": "log0729_061410_156.00.png", "345": "log0729_061410_157.00.png", "346": "log0729_061410_158.00.png", "347": "log0729_061410_159.00.png", "348": "log0729_061410_16.00.png", "349": "log0729_061410_160.00.png", "350": "log0729_061410_161.00.png", "351": "log0729_061410_162.00.png", "352": "log0729_061410_163.00.png", "353": "log0729_061410_164.00.png", "354": "log0729_061410_165.00.png", "355": "log0729_061410_166.00.png", "356": "log0729_061410_167.00.png", "357": "log0729_061410_168.00.png", "358": "log0729_061410_169.00.png", "359": "log0729_061410_17.00.png", "360": "log0729_061410_170.00.png", "361": "log0729_061410_171.00.png", "362": "log0729_061410_172.00.png", "363": "log0729_061410_173.00.png", "364": "log0729_061410_174.00.png", "365": "log0729_061410_175.00.png", "366": "log0729_061410_176.00.png", "367": "log0729_061410_177.00.png", "368": "log0729_061410_18.00.png", "369": "log0729_061410_19.00.png", "370": "log0729_061410_2.00.png", "371": "log0729_061410_20.00.png", "372": "log0729_061410_21.00.png", "373": "log0729_061410_22.00.png", "374": "log0729_061410_23.00.png", "375": "log0729_061410_24.00.png", "376": "log0729_061410_25.00.png", "377": "log0729_061410_26.00.png", "378": "log0729_061410_27.00.png", "379": "log0729_061410_28.00.png", "380": "log0729_061410_29.00.png", "381": "log0729_061410_3.00.png", "382": "log0729_061410_30.00.png", "383": "log0729_061410_31.00.png", "384": "log0729_061410_32.00.png", "385": "log0729_061410_33.00.png", "386": "log0729_061410_34.00.png", "387": "log0729_061410_35.00.png", "388": "log0729_061410_36.00.png", "389": "log0729_061410_37.00.png", "390": "log0729_061410_38.00.png", "391": "log0729_061410_39.00.png", "392": "log0729_061410_4.00.png", "393": "log0729_061410_40.00.png", "394": "log0729_061410_41.00.png", "395": "log0729_061410_42.00.png", "396": "log0729_061410_43.00.png", "397": "log0729_061410_44.00.png", "398": "log0729_061410_45.00.png", "399": 
"log0729_061410_46.00.png", "400": "log0729_061410_47.00.png", "401": "log0729_061410_48.00.png", "402": "log0729_061410_49.00.png", "403": "log0729_061410_5.00.png", "404": "log0729_061410_50.00.png", "405": "log0729_061410_51.00.png", "406": "log0729_061410_52.00.png", "407": "log0729_061410_53.00.png", "408": "log0729_061410_54.00.png", "409": "log0729_061410_55.00.png", "410": "log0729_061410_56.00.png", "411": "log0729_061410_57.00.png", "412": "log0729_061410_58.00.png", "413": "log0729_061410_59.00.png", "414": "log0729_061410_6.00.png", "415": "log0729_061410_60.00.png", "416": "log0729_061410_61.00.png", "417": "log0729_061410_62.00.png", "418": "log0729_061410_63.00.png", "419": "log0729_061410_64.00.png", "420": "log0729_061410_65.00.png", "421": "log0729_061410_66.00.png", "422": "log0729_061410_67.00.png", "423": "log0729_061410_68.00.png", "424": "log0729_061410_69.00.png", "425": "log0729_061410_7.00.png", "426": "log0729_061410_70.00.png", "427": "log0729_061410_71.00.png", "428": "log0729_061410_72.00.png", "429": "log0729_061410_73.00.png", "430": "log0729_061410_74.00.png", "431": "log0729_061410_75.00.png", "432": "log0729_061410_76.00.png", "433": "log0729_061410_77.00.png", "434": "log0729_061410_78.00.png", "435": "log0729_061410_79.00.png", "436": "log0729_061410_8.00.png", "437": "log0729_061410_80.00.png", "438": "log0729_061410_81.00.png", "439": "log0729_061410_82.00.png", "440": "log0729_061410_83.00.png", "441": "log0729_061410_84.00.png", "442": "log0729_061410_85.00.png", "443": "log0729_061410_86.00.png", "444": "log0729_061410_87.00.png", "445": "log0729_061410_88.00.png", "446": "log0729_061410_89.00.png", "447": "log0729_061410_9.00.png", "448": "log0729_061410_90.00.png", "449": "log0729_061410_91.00.png", "450": "log0729_061410_92.00.png", "451": "log0729_061410_93.00.png", "452": "log0729_061410_94.00.png", "453": "log0729_061410_95.00.png", "454": "log0729_061410_96.00.png", "455": "log0729_061410_97.00.png", "456": "log0729_061410_98.00.png", "457": "log0729_061410_99.00.png", "458": "log0729_095753_1.00.png", "459": "log0729_095753_10.00.png", "460": "log0729_095753_11.00.png", "461": "log0729_095753_12.00.png", "462": "log0729_095753_13.00.png", "463": "log0729_095753_14.00.png", "464": "log0729_095753_15.00.png", "465": "log0729_095753_16.00.png", "466": "log0729_095753_17.00.png", "467": "log0729_095753_18.00.png", "468": "log0729_095753_19.00.png", "469": "log0729_095753_2.00.png", "470": "log0729_095753_20.00.png", "471": "log0729_095753_21.00.png", "472": "log0729_095753_22.00.png", "473": "log0729_095753_23.00.png", "474": "log0729_095753_24.00.png", "475": "log0729_095753_25.00.png", "476": "log0729_095753_26.00.png", "477": "log0729_095753_27.00.png", "478": "log0729_095753_28.00.png", "479": "log0729_095753_29.00.png", "480": "log0729_095753_3.00.png", "481": "log0729_095753_30.00.png", "482": "log0729_095753_31.00.png", "483": "log0729_095753_32.00.png", "484": "log0729_095753_33.00.png", "485": "log0729_095753_34.00.png", "486": "log0729_095753_35.00.png", "487": "log0729_095753_36.00.png", "488": "log0729_095753_37.00.png", "489": "log0729_095753_38.00.png", "490": "log0729_095753_39.00.png", "491": "log0729_095753_4.00.png", "492": "log0729_095753_40.00.png", "493": "log0729_095753_41.00.png", "494": "log0729_095753_42.00.png", "495": "log0729_095753_43.00.png", "496": "log0729_095753_44.00.png", "497": "log0729_095753_45.00.png", "498": "log0729_095753_46.00.png", "499": "log0729_095753_47.00.png", "500": "log0729_095753_48.00.png", 
"501": "log0729_095753_49.00.png", "502": "log0729_095753_5.00.png", "503": "log0729_095753_50.00.png", "504": "log0729_095753_51.00.png", "505": "log0729_095753_52.00.png", "506": "log0729_095753_53.00.png", "507": "log0729_095753_54.00.png", "508": "log0729_095753_55.00.png", "509": "log0729_095753_56.00.png", "510": "log0729_095753_57.00.png", "511": "log0729_095753_58.00.png", "512": "log0729_095753_59.00.png", "513": "log0729_095753_6.00.png", "514": "log0729_095753_60.00.png", "515": "log0729_095753_61.00.png", "516": "log0729_095753_62.00.png", "517": "log0729_095753_63.00.png", "518": "log0729_095753_64.00.png", "519": "log0729_095753_65.00.png", "520": "log0729_095753_66.00.png", "521": "log0729_095753_67.00.png", "522": "log0729_095753_68.00.png", "523": "log0729_095753_69.00.png", "524": "log0729_095753_7.00.png", "525": "log0729_095753_70.00.png", "526": "log0729_095753_71.00.png", "527": "log0729_095753_72.00.png", "528": "log0729_095753_73.00.png", "529": "log0729_095753_74.00.png", "530": "log0729_095753_75.00.png", "531": "log0729_095753_76.00.png", "532": "log0729_095753_77.00.png", "533": "log0729_095753_78.00.png", "534": "log0729_095753_79.00.png", "535": "log0729_095753_8.00.png", "536": "log0729_095753_80.00.png", "537": "log0729_095753_81.00.png", "538": "log0729_095753_9.00.png", "539": "log0729_134135_1.00.png", "540": "log0729_134135_10.00.png", "541": "log0729_134135_100.00.png", "542": "log0729_134135_101.00.png", "543": "log0729_134135_102.00.png", "544": "log0729_134135_103.00.png", "545": "log0729_134135_104.00.png", "546": "log0729_134135_105.00.png", "547": "log0729_134135_106.00.png", "548": "log0729_134135_107.00.png", "549": "log0729_134135_108.00.png", "550": "log0729_134135_109.00.png", "551": "log0729_134135_11.00.png", "552": "log0729_134135_110.00.png", "553": "log0729_134135_111.00.png", "554": "log0729_134135_112.00.png", "555": "log0729_134135_113.00.png", "556": "log0729_134135_114.00.png", "557": "log0729_134135_115.00.png", "558": "log0729_134135_116.00.png", "559": "log0729_134135_117.00.png", "560": "log0729_134135_118.00.png", "561": "log0729_134135_119.00.png", "562": "log0729_134135_12.00.png", "563": "log0729_134135_120.00.png", "564": "log0729_134135_121.00.png", "565": "log0729_134135_122.00.png", "566": "log0729_134135_123.00.png", "567": "log0729_134135_124.00.png", "568": "log0729_134135_125.00.png", "569": "log0729_134135_126.00.png", "570": "log0729_134135_127.00.png", "571": "log0729_134135_128.00.png", "572": "log0729_134135_129.00.png", "573": "log0729_134135_13.00.png", "574": "log0729_134135_130.00.png", "575": "log0729_134135_131.00.png", "576": "log0729_134135_132.00.png", "577": "log0729_134135_133.00.png", "578": "log0729_134135_134.00.png", "579": "log0729_134135_135.00.png", "580": "log0729_134135_136.00.png", "581": "log0729_134135_137.00.png", "582": "log0729_134135_138.00.png", "583": "log0729_134135_139.00.png", "584": "log0729_134135_14.00.png", "585": "log0729_134135_140.00.png", "586": "log0729_134135_141.00.png", "587": "log0729_134135_142.00.png", "588": "log0729_134135_143.00.png", "589": "log0729_134135_144.00.png", "590": "log0729_134135_145.00.png", "591": "log0729_134135_146.00.png", "592": "log0729_134135_147.00.png", "593": "log0729_134135_148.00.png", "594": "log0729_134135_149.00.png", "595": "log0729_134135_15.00.png", "596": "log0729_134135_150.00.png", "597": "log0729_134135_151.00.png", "598": "log0729_134135_152.00.png", "599": "log0729_134135_153.00.png", "600": "log0729_134135_154.00.png", 
"601": "log0729_134135_155.00.png", "602": "log0729_134135_156.00.png", "603": "log0729_134135_157.00.png", "604": "log0729_134135_158.00.png", "605": "log0729_134135_159.00.png", "606": "log0729_134135_16.00.png", "607": "log0729_134135_160.00.png", "608": "log0729_134135_161.00.png", "609": "log0729_134135_162.00.png", "610": "log0729_134135_163.00.png", "611": "log0729_134135_164.00.png", "612": "log0729_134135_165.00.png", "613": "log0729_134135_166.00.png", "614": "log0729_134135_167.00.png", "615": "log0729_134135_168.00.png", "616": "log0729_134135_169.00.png", "617": "log0729_134135_17.00.png", "618": "log0729_134135_170.00.png", "619": "log0729_134135_171.00.png", "620": "log0729_134135_172.00.png", "621": "log0729_134135_173.00.png", "622": "log0729_134135_174.00.png", "623": "log0729_134135_175.00.png", "624": "log0729_134135_176.00.png", "625": "log0729_134135_177.00.png", "626": "log0729_134135_178.00.png", "627": "log0729_134135_179.00.png", "628": "log0729_134135_18.00.png", "629": "log0729_134135_180.00.png", "630": "log0729_134135_181.00.png", "631": "log0729_134135_182.00.png", "632": "log0729_134135_183.00.png", "633": "log0729_134135_184.00.png", "634": "log0729_134135_185.00.png", "635": "log0729_134135_186.00.png", "636": "log0729_134135_187.00.png", "637": "log0729_134135_188.00.png", "638": "log0729_134135_189.00.png", "639": "log0729_134135_19.00.png", "640": "log0729_134135_190.00.png", "641": "log0729_134135_191.00.png", "642": "log0729_134135_192.00.png", "643": "log0729_134135_193.00.png", "644": "log0729_134135_194.00.png", "645": "log0729_134135_195.00.png", "646": "log0729_134135_196.00.png", "647": "log0729_134135_197.00.png", "648": "log0729_134135_198.00.png", "649": "log0729_134135_199.00.png", "650": "log0729_134135_2.00.png", "651": "log0729_134135_20.00.png", "652": "log0729_134135_200.00.png", "653": "log0729_134135_201.00.png", "654": "log0729_134135_202.00.png", "655": "log0729_134135_203.00.png", "656": "log0729_134135_204.00.png", "657": "log0729_134135_205.00.png", "658": "log0729_134135_206.00.png", "659": "log0729_134135_207.00.png", "660": "log0729_134135_208.00.png", "661": "log0729_134135_209.00.png", "662": "log0729_134135_21.00.png", "663": "log0729_134135_210.00.png", "664": "log0729_134135_211.00.png", "665": "log0729_134135_212.00.png", "666": "log0729_134135_213.00.png", "667": "log0729_134135_214.00.png", "668": "log0729_134135_215.00.png", "669": "log0729_134135_216.00.png", "670": "log0729_134135_217.00.png", "671": "log0729_134135_218.00.png", "672": "log0729_134135_219.00.png", "673": "log0729_134135_22.00.png", "674": "log0729_134135_220.00.png", "675": "log0729_134135_221.00.png", "676": "log0729_134135_222.00.png", "677": "log0729_134135_223.00.png", "678": "log0729_134135_224.00.png", "679": "log0729_134135_225.00.png", "680": "log0729_134135_226.00.png", "681": "log0729_134135_227.00.png", "682": "log0729_134135_228.00.png", "683": "log0729_134135_229.00.png", "684": "log0729_134135_23.00.png", "685": "log0729_134135_230.00.png", "686": "log0729_134135_231.00.png", "687": "log0729_134135_232.00.png", "688": "log0729_134135_233.00.png", "689": "log0729_134135_234.00.png", "690": "log0729_134135_235.00.png", "691": "log0729_134135_236.00.png", "692": "log0729_134135_237.00.png", "693": "log0729_134135_238.00.png", "694": "log0729_134135_239.00.png", "695": "log0729_134135_24.00.png", "696": "log0729_134135_240.00.png", "697": "log0729_134135_241.00.png", "698": "log0729_134135_242.00.png", "699": "log0729_134135_243.00.png", 
"700": "log0729_134135_244.00.png", "701": "log0729_134135_245.00.png", "702": "log0729_134135_246.00.png", "703": "log0729_134135_247.00.png", "704": "log0729_134135_248.00.png", "705": "log0729_134135_249.00.png", "706": "log0729_134135_25.00.png", "707": "log0729_134135_250.00.png", "708": "log0729_134135_251.00.png", "709": "log0729_134135_252.00.png", "710": "log0729_134135_253.00.png", "711": "log0729_134135_254.00.png", "712": "log0729_134135_255.00.png", "713": "log0729_134135_256.00.png", "714": "log0729_134135_257.00.png", "715": "log0729_134135_258.00.png", "716": "log0729_134135_259.00.png", "717": "log0729_134135_26.00.png", "718": "log0729_134135_27.00.png", "719": "log0729_134135_28.00.png", "720": "log0729_134135_29.00.png", "721": "log0729_134135_3.00.png", "722": "log0729_134135_30.00.png", "723": "log0729_134135_31.00.png", "724": "log0729_134135_32.00.png", "725": "log0729_134135_33.00.png", "726": "log0729_134135_34.00.png", "727": "log0729_134135_35.00.png", "728": "log0729_134135_36.00.png", "729": "log0729_134135_37.00.png", "730": "log0729_134135_38.00.png", "731": "log0729_134135_39.00.png", "732": "log0729_134135_4.00.png", "733": "log0729_134135_40.00.png", "734": "log0729_134135_41.00.png", "735": "log0729_134135_42.00.png", "736": "log0729_134135_43.00.png", "737": "log0729_134135_44.00.png", "738": "log0729_134135_45.00.png", "739": "log0729_134135_46.00.png", "740": "log0729_134135_47.00.png", "741": "log0729_134135_48.00.png", "742": "log0729_134135_49.00.png", "743": "log0729_134135_5.00.png", "744": "log0729_134135_50.00.png", "745": "log0729_134135_51.00.png", "746": "log0729_134135_52.00.png", "747": "log0729_134135_53.00.png", "748": "log0729_134135_54.00.png", "749": "log0729_134135_55.00.png", "750": "log0729_134135_56.00.png", "751": "log0729_134135_57.00.png", "752": "log0729_134135_58.00.png", "753": "log0729_134135_59.00.png", "754": "log0729_134135_6.00.png", "755": "log0729_134135_60.00.png", "756": "log0729_134135_61.00.png", "757": "log0729_134135_62.00.png", "758": "log0729_134135_63.00.png", "759": "log0729_134135_64.00.png", "760": "log0729_134135_65.00.png", "761": "log0729_134135_66.00.png", "762": "log0729_134135_67.00.png", "763": "log0729_134135_68.00.png", "764": "log0729_134135_69.00.png", "765": "log0729_134135_7.00.png", "766": "log0729_134135_70.00.png", "767": "log0729_134135_71.00.png", "768": "log0729_134135_72.00.png", "769": "log0729_134135_73.00.png", "770": "log0729_134135_74.00.png", "771": "log0729_134135_75.00.png", "772": "log0729_134135_76.00.png", "773": "log0729_134135_77.00.png", "774": "log0729_134135_78.00.png", "775": "log0729_134135_79.00.png", "776": "log0729_134135_8.00.png", "777": "log0729_134135_80.00.png", "778": "log0729_134135_81.00.png", "779": "log0729_134135_82.00.png", "780": "log0729_134135_83.00.png", "781": "log0729_134135_84.00.png", "782": "log0729_134135_85.00.png", "783": "log0729_134135_86.00.png", "784": "log0729_134135_87.00.png", "785": "log0729_134135_88.00.png", "786": "log0729_134135_89.00.png", "787": "log0729_134135_9.00.png", "788": "log0729_134135_90.00.png", "789": "log0729_134135_91.00.png", "790": "log0729_134135_92.00.png", "791": "log0729_134135_93.00.png", "792": "log0729_134135_94.00.png", "793": "log0729_134135_95.00.png", "794": "log0729_134135_96.00.png", "795": "log0729_134135_97.00.png", "796": "log0729_134135_98.00.png", "797": "log0729_134135_99.00.png", "798": "log0729_172517_1.00.png", "799": "log0729_172517_10.00.png", "800": "log0729_172517_11.00.png", "801": 
"log0729_172517_12.00.png", "802": "log0729_172517_13.00.png", "803": "log0729_172517_14.00.png", "804": "log0729_172517_15.00.png", "805": "log0729_172517_16.00.png", "806": "log0729_172517_17.00.png", "807": "log0729_172517_18.00.png", "808": "log0729_172517_19.00.png", "809": "log0729_172517_2.00.png", "810": "log0729_172517_20.00.png", "811": "log0729_172517_21.00.png", "812": "log0729_172517_22.00.png", "813": "log0729_172517_23.00.png", "814": "log0729_172517_24.00.png", "815": "log0729_172517_25.00.png", "816": "log0729_172517_26.00.png", "817": "log0729_172517_27.00.png", "818": "log0729_172517_28.00.png", "819": "log0729_172517_29.00.png", "820": "log0729_172517_3.00.png", "821": "log0729_172517_30.00.png", "822": "log0729_172517_31.00.png", "823": "log0729_172517_32.00.png", "824": "log0729_172517_33.00.png", "825": "log0729_172517_34.00.png", "826": "log0729_172517_35.00.png", "827": "log0729_172517_36.00.png", "828": "log0729_172517_37.00.png", "829": "log0729_172517_38.00.png", "830": "log0729_172517_39.00.png", "831": "log0729_172517_4.00.png", "832": "log0729_172517_40.00.png", "833": "log0729_172517_41.00.png", "834": "log0729_172517_42.00.png", "835": "log0729_172517_43.00.png", "836": "log0729_172517_44.00.png", "837": "log0729_172517_45.00.png", "838": "log0729_172517_46.00.png", "839": "log0729_172517_47.00.png", "840": "log0729_172517_48.00.png", "841": "log0729_172517_49.00.png", "842": "log0729_172517_5.00.png", "843": "log0729_172517_50.00.png", "844": "log0729_172517_51.00.png", "845": "log0729_172517_52.00.png", "846": "log0729_172517_53.00.png", "847": "log0729_172517_54.00.png", "848": "log0729_172517_55.00.png", "849": "log0729_172517_56.00.png", "850": "log0729_172517_57.00.png", "851": "log0729_172517_58.00.png", "852": "log0729_172517_59.00.png", "853": "log0729_172517_6.00.png", "854": "log0729_172517_60.00.png", "855": "log0729_172517_61.00.png", "856": "log0729_172517_62.00.png", "857": "log0729_172517_63.00.png", "858": "log0729_172517_64.00.png", "859": "log0729_172517_65.00.png", "860": "log0729_172517_66.00.png", "861": "log0729_172517_67.00.png", "862": "log0729_172517_68.00.png", "863": "log0729_172517_69.00.png", "864": "log0729_172517_7.00.png", "865": "log0729_172517_70.00.png", "866": "log0729_172517_71.00.png", "867": "log0729_172517_72.00.png", "868": "log0729_172517_73.00.png", "869": "log0729_172517_74.00.png", "870": "log0729_172517_75.00.png", "871": "log0729_172517_76.00.png", "872": "log0729_172517_77.00.png", "873": "log0729_172517_78.00.png", "874": "log0729_172517_79.00.png", "875": "log0729_172517_8.00.png", "876": "log0729_172517_80.00.png", "877": "log0729_172517_81.00.png", "878": "log0729_172517_82.00.png", "879": "log0729_172517_83.00.png", "880": "log0729_172517_84.00.png", "881": "log0729_172517_85.00.png", "882": "log0729_172517_86.00.png", "883": "log0729_172517_87.00.png", "884": "log0729_172517_88.00.png", "885": "log0729_172517_89.00.png", "886": "log0729_172517_9.00.png", "887": "log0729_172517_90.00.png", "888": "log0729_210900_1.00.png", "889": "log0729_210900_10.00.png", "890": "log0729_210900_11.00.png", "891": "log0729_210900_12.00.png", "892": "log0729_210900_13.00.png", "893": "log0729_210900_14.00.png", "894": "log0729_210900_15.00.png", "895": "log0729_210900_16.00.png", "896": "log0729_210900_17.00.png", "897": "log0729_210900_18.00.png", "898": "log0729_210900_19.00.png", "899": "log0729_210900_2.00.png", "900": "log0729_210900_20.00.png", "901": "log0729_210900_21.00.png", "902": "log0729_210900_22.00.png", 
"903": "log0729_210900_23.00.png", "904": "log0729_210900_24.00.png", "905": "log0729_210900_25.00.png", "906": "log0729_210900_26.00.png", "907": "log0729_210900_27.00.png", "908": "log0729_210900_28.00.png", "909": "log0729_210900_29.00.png", "910": "log0729_210900_3.00.png", "911": "log0729_210900_30.00.png", "912": "log0729_210900_31.00.png", "913": "log0729_210900_32.00.png", "914": "log0729_210900_33.00.png", "915": "log0729_210900_34.00.png", "916": "log0729_210900_35.00.png", "917": "log0729_210900_36.00.png", "918": "log0729_210900_4.00.png", "919": "log0729_210900_5.00.png", "920": "log0729_210900_6.00.png", "921": "log0729_210900_7.00.png", "922": "log0729_210900_8.00.png", "923": "log0729_210900_9.00.png", "924": "log0730_005242_1.00.png", "925": "log0730_005242_2.00.png", "926": "log0730_043624_1.00.png", "927": "log0730_043624_2.00.png", "928": "log0730_043624_3.00.png", "929": "log0730_043624_4.00.png", "930": "log0730_043624_5.00.png", "931": "log0730_082007_1.00.png", "932": "log0730_082007_10.00.png", "933": "log0730_082007_11.00.png", "934": "log0730_082007_12.00.png", "935": "log0730_082007_13.00.png", "936": "log0730_082007_14.00.png", "937": "log0730_082007_15.00.png", "938": "log0730_082007_16.00.png", "939": "log0730_082007_17.00.png", "940": "log0730_082007_18.00.png", "941": "log0730_082007_19.00.png", "942": "log0730_082007_2.00.png", "943": "log0730_082007_20.00.png", "944": "log0730_082007_21.00.png", "945": "log0730_082007_22.00.png", "946": "log0730_082007_23.00.png", "947": "log0730_082007_24.00.png", "948": "log0730_082007_25.00.png", "949": "log0730_082007_26.00.png", "950": "log0730_082007_27.00.png", "951": "log0730_082007_28.00.png", "952": "log0730_082007_29.00.png", "953": "log0730_082007_3.00.png", "954": "log0730_082007_30.00.png", "955": "log0730_082007_31.00.png", "956": "log0730_082007_32.00.png", "957": "log0730_082007_33.00.png", "958": "log0730_082007_34.00.png", "959": "log0730_082007_35.00.png", "960": "log0730_082007_36.00.png", "961": "log0730_082007_37.00.png", "962": "log0730_082007_38.00.png", "963": "log0730_082007_39.00.png", "964": "log0730_082007_4.00.png", "965": "log0730_082007_40.00.png", "966": "log0730_082007_41.00.png", "967": "log0730_082007_42.00.png", "968": "log0730_082007_43.00.png", "969": "log0730_082007_44.00.png", "970": "log0730_082007_45.00.png", "971": "log0730_082007_46.00.png", "972": "log0730_082007_47.00.png", "973": "log0730_082007_48.00.png", "974": "log0730_082007_49.00.png", "975": "log0730_082007_5.00.png", "976": "log0730_082007_50.00.png", "977": "log0730_082007_51.00.png", "978": "log0730_082007_52.00.png", "979": "log0730_082007_53.00.png", "980": "log0730_082007_54.00.png", "981": "log0730_082007_55.00.png", "982": "log0730_082007_6.00.png", "983": "log0730_082007_7.00.png", "984": "log0730_082007_8.00.png", "985": "log0730_082007_9.00.png", "986": "log0730_120349_1.00.png", "987": "log0730_120349_10.00.png", "988": "log0730_120349_100.00.png", "989": "log0730_120349_101.00.png", "990": "log0730_120349_102.00.png", "991": "log0730_120349_103.00.png", "992": "log0730_120349_104.00.png", "993": "log0730_120349_105.00.png", "994": "log0730_120349_106.00.png", "995": "log0730_120349_107.00.png", "996": "log0730_120349_108.00.png", "997": "log0730_120349_109.00.png", "998": "log0730_120349_11.00.png", "999": "log0730_120349_110.00.png", "1000": "log0730_120349_111.00.png", "1001": "log0730_120349_112.00.png", "1002": "log0730_120349_113.00.png", "1003": "log0730_120349_114.00.png", "1004": 
"log0730_120349_115.00.png", "1005": "log0730_120349_116.00.png", "1006": "log0730_120349_117.00.png", "1007": "log0730_120349_118.00.png", "1008": "log0730_120349_119.00.png", "1009": "log0730_120349_12.00.png", "1010": "log0730_120349_120.00.png", "1011": "log0730_120349_121.00.png", "1012": "log0730_120349_122.00.png", "1013": "log0730_120349_123.00.png", "1014": "log0730_120349_124.00.png", "1015": "log0730_120349_125.00.png", "1016": "log0730_120349_126.00.png", "1017": "log0730_120349_127.00.png", "1018": "log0730_120349_128.00.png", "1019": "log0730_120349_129.00.png", "1020": "log0730_120349_13.00.png", "1021": "log0730_120349_130.00.png", "1022": "log0730_120349_131.00.png", "1023": "log0730_120349_132.00.png", "1024": "log0730_120349_133.00.png", "1025": "log0730_120349_134.00.png", "1026": "log0730_120349_135.00.png", "1027": "log0730_120349_136.00.png", "1028": "log0730_120349_137.00.png", "1029": "log0730_120349_138.00.png", "1030": "log0730_120349_139.00.png", "1031": "log0730_120349_14.00.png", "1032": "log0730_120349_140.00.png", "1033": "log0730_120349_15.00.png", "1034": "log0730_120349_16.00.png", "1035": "log0730_120349_17.00.png", "1036": "log0730_120349_18.00.png", "1037": "log0730_120349_19.00.png", "1038": "log0730_120349_2.00.png", "1039": "log0730_120349_20.00.png", "1040": "log0730_120349_21.00.png", "1041": "log0730_120349_22.00.png", "1042": "log0730_120349_23.00.png", "1043": "log0730_120349_24.00.png", "1044": "log0730_120349_25.00.png", "1045": "log0730_120349_26.00.png", "1046": "log0730_120349_27.00.png", "1047": "log0730_120349_28.00.png", "1048": "log0730_120349_29.00.png", "1049": "log0730_120349_3.00.png", "1050": "log0730_120349_30.00.png", "1051": "log0730_120349_31.00.png", "1052": "log0730_120349_32.00.png", "1053": "log0730_120349_33.00.png", "1054": "log0730_120349_34.00.png", "1055": "log0730_120349_35.00.png", "1056": "log0730_120349_36.00.png", "1057": "log0730_120349_37.00.png", "1058": "log0730_120349_38.00.png", "1059": "log0730_120349_39.00.png", "1060": "log0730_120349_4.00.png", "1061": "log0730_120349_40.00.png", "1062": "log0730_120349_41.00.png", "1063": "log0730_120349_42.00.png", "1064": "log0730_120349_43.00.png", "1065": "log0730_120349_44.00.png", "1066": "log0730_120349_45.00.png", "1067": "log0730_120349_46.00.png", "1068": "log0730_120349_47.00.png", "1069": "log0730_120349_48.00.png", "1070": "log0730_120349_49.00.png", "1071": "log0730_120349_5.00.png", "1072": "log0730_120349_50.00.png", "1073": "log0730_120349_51.00.png", "1074": "log0730_120349_52.00.png", "1075": "log0730_120349_53.00.png", "1076": "log0730_120349_54.00.png", "1077": "log0730_120349_55.00.png", "1078": "log0730_120349_56.00.png", "1079": "log0730_120349_57.00.png", "1080": "log0730_120349_58.00.png", "1081": "log0730_120349_59.00.png", "1082": "log0730_120349_6.00.png", "1083": "log0730_120349_60.00.png", "1084": "log0730_120349_61.00.png", "1085": "log0730_120349_62.00.png", "1086": "log0730_120349_63.00.png", "1087": "log0730_120349_64.00.png", "1088": "log0730_120349_65.00.png", "1089": "log0730_120349_66.00.png", "1090": "log0730_120349_67.00.png", "1091": "log0730_120349_68.00.png", "1092": "log0730_120349_69.00.png", "1093": "log0730_120349_7.00.png", "1094": "log0730_120349_70.00.png", "1095": "log0730_120349_71.00.png", "1096": "log0730_120349_72.00.png", "1097": "log0730_120349_73.00.png", "1098": "log0730_120349_74.00.png", "1099": "log0730_120349_75.00.png", "1100": "log0730_120349_76.00.png", "1101": "log0730_120349_77.00.png", "1102": 
"log0730_120349_78.00.png", "1103": "log0730_120349_79.00.png", "1104": "log0730_120349_8.00.png", "1105": "log0730_120349_80.00.png", "1106": "log0730_120349_81.00.png", "1107": "log0730_120349_82.00.png", "1108": "log0730_120349_83.00.png", "1109": "log0730_120349_84.00.png", "1110": "log0730_120349_85.00.png", "1111": "log0730_120349_86.00.png", "1112": "log0730_120349_87.00.png", "1113": "log0730_120349_88.00.png", "1114": "log0730_120349_89.00.png", "1115": "log0730_120349_9.00.png", "1116": "log0730_120349_90.00.png", "1117": "log0730_120349_91.00.png", "1118": "log0730_120349_92.00.png", "1119": "log0730_120349_93.00.png", "1120": "log0730_120349_94.00.png", "1121": "log0730_120349_95.00.png", "1122": "log0730_120349_96.00.png", "1123": "log0730_120349_97.00.png", "1124": "log0730_120349_98.00.png", "1125": "log0730_120349_99.00.png", "1126": "log0730_154731_1.00.png", "1127": "log0730_154731_10.00.png", "1128": "log0730_154731_100.00.png", "1129": "log0730_154731_101.00.png", "1130": "log0730_154731_102.00.png", "1131": "log0730_154731_103.00.png", "1132": "log0730_154731_104.00.png", "1133": "log0730_154731_105.00.png", "1134": "log0730_154731_106.00.png", "1135": "log0730_154731_107.00.png", "1136": "log0730_154731_108.00.png", "1137": "log0730_154731_109.00.png", "1138": "log0730_154731_11.00.png", "1139": "log0730_154731_110.00.png", "1140": "log0730_154731_111.00.png", "1141": "log0730_154731_112.00.png", "1142": "log0730_154731_113.00.png", "1143": "log0730_154731_114.00.png", "1144": "log0730_154731_115.00.png", "1145": "log0730_154731_116.00.png", "1146": "log0730_154731_117.00.png", "1147": "log0730_154731_118.00.png", "1148": "log0730_154731_119.00.png", "1149": "log0730_154731_12.00.png", "1150": "log0730_154731_120.00.png", "1151": "log0730_154731_121.00.png", "1152": "log0730_154731_122.00.png", "1153": "log0730_154731_123.00.png", "1154": "log0730_154731_124.00.png", "1155": "log0730_154731_125.00.png", "1156": "log0730_154731_126.00.png", "1157": "log0730_154731_127.00.png", "1158": "log0730_154731_13.00.png", "1159": "log0730_154731_14.00.png", "1160": "log0730_154731_15.00.png", "1161": "log0730_154731_16.00.png", "1162": "log0730_154731_17.00.png", "1163": "log0730_154731_18.00.png", "1164": "log0730_154731_19.00.png", "1165": "log0730_154731_2.00.png", "1166": "log0730_154731_20.00.png", "1167": "log0730_154731_21.00.png", "1168": "log0730_154731_22.00.png", "1169": "log0730_154731_23.00.png", "1170": "log0730_154731_24.00.png", "1171": "log0730_154731_25.00.png", "1172": "log0730_154731_26.00.png", "1173": "log0730_154731_27.00.png", "1174": "log0730_154731_28.00.png", "1175": "log0730_154731_29.00.png", "1176": "log0730_154731_3.00.png", "1177": "log0730_154731_30.00.png", "1178": "log0730_154731_31.00.png", "1179": "log0730_154731_32.00.png", "1180": "log0730_154731_33.00.png", "1181": "log0730_154731_34.00.png", "1182": "log0730_154731_35.00.png", "1183": "log0730_154731_36.00.png", "1184": "log0730_154731_37.00.png", "1185": "log0730_154731_38.00.png", "1186": "log0730_154731_39.00.png", "1187": "log0730_154731_4.00.png", "1188": "log0730_154731_40.00.png", "1189": "log0730_154731_41.00.png", "1190": "log0730_154731_42.00.png", "1191": "log0730_154731_43.00.png", "1192": "log0730_154731_44.00.png", "1193": "log0730_154731_45.00.png", "1194": "log0730_154731_46.00.png", "1195": "log0730_154731_47.00.png", "1196": "log0730_154731_48.00.png", "1197": "log0730_154731_49.00.png", "1198": "log0730_154731_5.00.png", "1199": "log0730_154731_50.00.png", "1200": 
"log0730_154731_51.00.png", "1201": "log0730_154731_52.00.png", "1202": "log0730_154731_53.00.png", "1203": "log0730_154731_54.00.png", "1204": "log0730_154731_55.00.png", "1205": "log0730_154731_56.00.png", "1206": "log0730_154731_57.00.png", "1207": "log0730_154731_58.00.png", "1208": "log0730_154731_59.00.png", "1209": "log0730_154731_6.00.png", "1210": "log0730_154731_60.00.png", "1211": "log0730_154731_61.00.png", "1212": "log0730_154731_62.00.png", "1213": "log0730_154731_63.00.png", "1214": "log0730_154731_64.00.png", "1215": "log0730_154731_65.00.png", "1216": "log0730_154731_66.00.png", "1217": "log0730_154731_67.00.png", "1218": "log0730_154731_68.00.png", "1219": "log0730_154731_69.00.png", "1220": "log0730_154731_7.00.png", "1221": "log0730_154731_70.00.png", "1222": "log0730_154731_71.00.png", "1223": "log0730_154731_72.00.png", "1224": "log0730_154731_73.00.png", "1225": "log0730_154731_74.00.png", "1226": "log0730_154731_75.00.png", "1227": "log0730_154731_76.00.png", "1228": "log0730_154731_77.00.png", "1229": "log0730_154731_78.00.png", "1230": "log0730_154731_79.00.png", "1231": "log0730_154731_8.00.png", "1232": "log0730_154731_80.00.png", "1233": "log0730_154731_81.00.png", "1234": "log0730_154731_82.00.png", "1235": "log0730_154731_83.00.png", "1236": "log0730_154731_84.00.png", "1237": "log0730_154731_85.00.png", "1238": "log0730_154731_86.00.png", "1239": "log0730_154731_87.00.png", "1240": "log0730_154731_88.00.png", "1241": "log0730_154731_89.00.png", "1242": "log0730_154731_9.00.png", "1243": "log0730_154731_90.00.png", "1244": "log0730_154731_91.00.png", "1245": "log0730_154731_92.00.png", "1246": "log0730_154731_93.00.png", "1247": "log0730_154731_94.00.png", "1248": "log0730_154731_95.00.png", "1249": "log0730_154731_96.00.png", "1250": "log0730_154731_97.00.png", "1251": "log0730_154731_98.00.png", "1252": "log0730_154731_99.00.png", "1253": "log0730_193113_1.00.png", "1254": "log0730_193113_10.00.png", "1255": "log0730_193113_11.00.png", "1256": "log0730_193113_12.00.png", "1257": "log0730_193113_13.00.png", "1258": "log0730_193113_14.00.png", "1259": "log0730_193113_15.00.png", "1260": "log0730_193113_16.00.png", "1261": "log0730_193113_17.00.png", "1262": "log0730_193113_18.00.png", "1263": "log0730_193113_19.00.png", "1264": "log0730_193113_2.00.png", "1265": "log0730_193113_20.00.png", "1266": "log0730_193113_21.00.png", "1267": "log0730_193113_22.00.png", "1268": "log0730_193113_23.00.png", "1269": "log0730_193113_3.00.png", "1270": "log0730_193113_4.00.png", "1271": "log0730_193113_5.00.png", "1272": "log0730_193113_6.00.png", "1273": "log0730_193113_7.00.png", "1274": "log0730_193113_8.00.png", "1275": "log0730_193113_9.00.png", "1276": "log0730_231455_1.00.png", "1277": "log0730_231455_2.00.png", "1278": "log0730_231455_3.00.png", "1279": "log0730_231455_4.00.png", "1280": "log0730_231455_5.00.png", "1281": "log0730_231455_6.00.png", "1282": "log0730_231455_7.00.png", "1283": "log0730_231455_8.00.png", "1284": "log0730_231455_9.00.png", "1285": "log0731_025838_1.00.png", "1286": "log0731_025838_2.00.png", "1287": "log0731_064220_1.00.png", "1288": "log0731_064220_10.00.png", "1289": "log0731_064220_11.00.png", "1290": "log0731_064220_12.00.png", "1291": "log0731_064220_13.00.png", "1292": "log0731_064220_14.00.png", "1293": "log0731_064220_15.00.png", "1294": "log0731_064220_16.00.png", "1295": "log0731_064220_17.00.png", "1296": "log0731_064220_18.00.png", "1297": "log0731_064220_19.00.png", "1298": "log0731_064220_2.00.png", "1299": 
"log0731_064220_20.00.png", "1300": "log0731_064220_21.00.png", "1301": "log0731_064220_22.00.png", "1302": "log0731_064220_23.00.png", "1303": "log0731_064220_24.00.png", "1304": "log0731_064220_25.00.png", "1305": "log0731_064220_26.00.png", "1306": "log0731_064220_27.00.png", "1307": "log0731_064220_28.00.png", "1308": "log0731_064220_3.00.png", "1309": "log0731_064220_4.00.png", "1310": "log0731_064220_5.00.png", "1311": "log0731_064220_6.00.png", "1312": "log0731_064220_7.00.png", "1313": "log0731_064220_8.00.png", "1314": "log0731_064220_9.00.png", "1315": "log0731_102602_1.00.png", "1316": "log0731_102602_10.00.png", "1317": "log0731_102602_11.00.png", "1318": "log0731_102602_12.00.png", "1319": "log0731_102602_13.00.png", "1320": "log0731_102602_14.00.png", "1321": "log0731_102602_15.00.png", "1322": "log0731_102602_16.00.png", "1323": "log0731_102602_17.00.png", "1324": "log0731_102602_18.00.png", "1325": "log0731_102602_19.00.png", "1326": "log0731_102602_2.00.png", "1327": "log0731_102602_20.00.png", "1328": "log0731_102602_21.00.png", "1329": "log0731_102602_22.00.png", "1330": "log0731_102602_23.00.png", "1331": "log0731_102602_24.00.png", "1332": "log0731_102602_25.00.png", "1333": "log0731_102602_26.00.png", "1334": "log0731_102602_27.00.png", "1335": "log0731_102602_28.00.png", "1336": "log0731_102602_29.00.png", "1337": "log0731_102602_3.00.png", "1338": "log0731_102602_30.00.png", "1339": "log0731_102602_31.00.png", "1340": "log0731_102602_32.00.png", "1341": "log0731_102602_33.00.png", "1342": "log0731_102602_34.00.png", "1343": "log0731_102602_35.00.png", "1344": "log0731_102602_36.00.png", "1345": "log0731_102602_37.00.png", "1346": "log0731_102602_38.00.png", "1347": "log0731_102602_39.00.png", "1348": "log0731_102602_4.00.png", "1349": "log0731_102602_40.00.png", "1350": "log0731_102602_41.00.png", "1351": "log0731_102602_42.00.png", "1352": "log0731_102602_43.00.png", "1353": "log0731_102602_44.00.png", "1354": "log0731_102602_45.00.png", "1355": "log0731_102602_46.00.png", "1356": "log0731_102602_47.00.png", "1357": "log0731_102602_48.00.png", "1358": "log0731_102602_49.00.png", "1359": "log0731_102602_5.00.png", "1360": "log0731_102602_50.00.png", "1361": "log0731_102602_51.00.png", "1362": "log0731_102602_6.00.png", "1363": "log0731_102602_7.00.png", "1364": "log0731_102602_8.00.png", "1365": "log0731_102602_9.00.png", "1366": "log0731_140945_1.00.png", "1367": "log0731_140945_10.00.png", "1368": "log0731_140945_11.00.png", "1369": "log0731_140945_12.00.png", "1370": "log0731_140945_13.00.png", "1371": "log0731_140945_14.00.png", "1372": "log0731_140945_15.00.png", "1373": "log0731_140945_16.00.png", "1374": "log0731_140945_17.00.png", "1375": "log0731_140945_18.00.png", "1376": "log0731_140945_19.00.png", "1377": "log0731_140945_2.00.png", "1378": "log0731_140945_20.00.png", "1379": "log0731_140945_21.00.png", "1380": "log0731_140945_22.00.png", "1381": "log0731_140945_23.00.png", "1382": "log0731_140945_24.00.png", "1383": "log0731_140945_25.00.png", "1384": "log0731_140945_26.00.png", "1385": "log0731_140945_27.00.png", "1386": "log0731_140945_28.00.png", "1387": "log0731_140945_29.00.png", "1388": "log0731_140945_3.00.png", "1389": "log0731_140945_30.00.png", "1390": "log0731_140945_31.00.png", "1391": "log0731_140945_32.00.png", "1392": "log0731_140945_33.00.png", "1393": "log0731_140945_34.00.png", "1394": "log0731_140945_35.00.png", "1395": "log0731_140945_36.00.png", "1396": "log0731_140945_37.00.png", "1397": "log0731_140945_38.00.png", "1398": 
"log0731_140945_39.00.png", "1399": "log0731_140945_4.00.png", "1400": "log0731_140945_40.00.png", "1401": "log0731_140945_41.00.png", "1402": "log0731_140945_42.00.png", "1403": "log0731_140945_43.00.png", "1404": "log0731_140945_44.00.png", "1405": "log0731_140945_45.00.png", "1406": "log0731_140945_46.00.png", "1407": "log0731_140945_47.00.png", "1408": "log0731_140945_48.00.png", "1409": "log0731_140945_49.00.png", "1410": "log0731_140945_5.00.png", "1411": "log0731_140945_50.00.png", "1412": "log0731_140945_51.00.png", "1413": "log0731_140945_52.00.png", "1414": "log0731_140945_53.00.png", "1415": "log0731_140945_54.00.png", "1416": "log0731_140945_55.00.png", "1417": "log0731_140945_56.00.png", "1418": "log0731_140945_57.00.png", "1419": "log0731_140945_58.00.png", "1420": "log0731_140945_59.00.png", "1421": "log0731_140945_6.00.png", "1422": "log0731_140945_60.00.png", "1423": "log0731_140945_61.00.png", "1424": "log0731_140945_62.00.png", "1425": "log0731_140945_63.00.png", "1426": "log0731_140945_64.00.png", "1427": "log0731_140945_65.00.png", "1428": "log0731_140945_66.00.png", "1429": "log0731_140945_67.00.png", "1430": "log0731_140945_68.00.png", "1431": "log0731_140945_69.00.png", "1432": "log0731_140945_7.00.png", "1433": "log0731_140945_70.00.png", "1434": "log0731_140945_71.00.png", "1435": "log0731_140945_72.00.png", "1436": "log0731_140945_8.00.png", "1437": "log0731_140945_9.00.png", "1438": "log0731_175327_1.00.png", "1439": "log0731_175327_10.00.png", "1440": "log0731_175327_11.00.png", "1441": "log0731_175327_12.00.png", "1442": "log0731_175327_13.00.png", "1443": "log0731_175327_14.00.png", "1444": "log0731_175327_15.00.png", "1445": "log0731_175327_16.00.png", "1446": "log0731_175327_17.00.png", "1447": "log0731_175327_18.00.png", "1448": "log0731_175327_19.00.png", "1449": "log0731_175327_2.00.png", "1450": "log0731_175327_20.00.png", "1451": "log0731_175327_21.00.png", "1452": "log0731_175327_22.00.png", "1453": "log0731_175327_23.00.png", "1454": "log0731_175327_24.00.png", "1455": "log0731_175327_25.00.png", "1456": "log0731_175327_26.00.png", "1457": "log0731_175327_27.00.png", "1458": "log0731_175327_28.00.png", "1459": "log0731_175327_29.00.png", "1460": "log0731_175327_3.00.png", "1461": "log0731_175327_30.00.png", "1462": "log0731_175327_31.00.png", "1463": "log0731_175327_32.00.png", "1464": "log0731_175327_33.00.png", "1465": "log0731_175327_34.00.png", "1466": "log0731_175327_35.00.png", "1467": "log0731_175327_36.00.png", "1468": "log0731_175327_37.00.png", "1469": "log0731_175327_38.00.png", "1470": "log0731_175327_39.00.png", "1471": "log0731_175327_4.00.png", "1472": "log0731_175327_40.00.png", "1473": "log0731_175327_41.00.png", "1474": "log0731_175327_42.00.png", "1475": "log0731_175327_43.00.png", "1476": "log0731_175327_44.00.png", "1477": "log0731_175327_45.00.png", "1478": "log0731_175327_46.00.png", "1479": "log0731_175327_5.00.png", "1480": "log0731_175327_6.00.png", "1481": "log0731_175327_7.00.png", "1482": "log0731_175327_8.00.png", "1483": "log0731_175327_9.00.png", "1484": "log0731_213709_1.00.png", "1485": "log0731_213709_10.00.png", "1486": "log0731_213709_11.00.png", "1487": "log0731_213709_12.00.png", "1488": "log0731_213709_13.00.png", "1489": "log0731_213709_14.00.png", "1490": "log0731_213709_15.00.png", "1491": "log0731_213709_2.00.png", "1492": "log0731_213709_3.00.png", "1493": "log0731_213709_4.00.png", "1494": "log0731_213709_5.00.png", "1495": "log0731_213709_6.00.png", "1496": "log0731_213709_7.00.png", "1497": 
"log0731_213709_8.00.png", "1498": "log0731_213709_9.00.png", "1499": "log0801_012051_1.00.png", "1500": "log0801_012051_2.00.png", "1501": "log0801_012051_3.00.png", "1502": "log0801_050434_1.00.png", "1503": "log0801_050434_10.00.png", "1504": "log0801_050434_11.00.png", "1505": "log0801_050434_12.00.png", "1506": "log0801_050434_13.00.png", "1507": "log0801_050434_14.00.png", "1508": "log0801_050434_15.00.png", "1509": "log0801_050434_16.00.png", "1510": "log0801_050434_17.00.png", "1511": "log0801_050434_18.00.png", "1512": "log0801_050434_19.00.png", "1513": "log0801_050434_2.00.png", "1514": "log0801_050434_20.00.png", "1515": "log0801_050434_21.00.png", "1516": "log0801_050434_22.00.png", "1517": "log0801_050434_23.00.png", "1518": "log0801_050434_24.00.png", "1519": "log0801_050434_25.00.png", "1520": "log0801_050434_26.00.png", "1521": "log0801_050434_27.00.png", "1522": "log0801_050434_28.00.png", "1523": "log0801_050434_29.00.png", "1524": "log0801_050434_3.00.png", "1525": "log0801_050434_30.00.png", "1526": "log0801_050434_31.00.png", "1527": "log0801_050434_32.00.png", "1528": "log0801_050434_33.00.png", "1529": "log0801_050434_34.00.png", "1530": "log0801_050434_35.00.png", "1531": "log0801_050434_36.00.png", "1532": "log0801_050434_37.00.png", "1533": "log0801_050434_38.00.png", "1534": "log0801_050434_39.00.png", "1535": "log0801_050434_4.00.png", "1536": "log0801_050434_40.00.png", "1537": "log0801_050434_41.00.png", "1538": "log0801_050434_42.00.png", "1539": "log0801_050434_43.00.png", "1540": "log0801_050434_44.00.png", "1541": "log0801_050434_45.00.png", "1542": "log0801_050434_46.00.png", "1543": "log0801_050434_47.00.png", "1544": "log0801_050434_48.00.png", "1545": "log0801_050434_49.00.png", "1546": "log0801_050434_5.00.png", "1547": "log0801_050434_50.00.png", "1548": "log0801_050434_51.00.png", "1549": "log0801_050434_52.00.png", "1550": "log0801_050434_53.00.png", "1551": "log0801_050434_54.00.png", "1552": "log0801_050434_55.00.png", "1553": "log0801_050434_56.00.png", "1554": "log0801_050434_57.00.png", "1555": "log0801_050434_58.00.png", "1556": "log0801_050434_59.00.png", "1557": "log0801_050434_6.00.png", "1558": "log0801_050434_60.00.png", "1559": "log0801_050434_61.00.png", "1560": "log0801_050434_62.00.png", "1561": "log0801_050434_63.00.png", "1562": "log0801_050434_64.00.png", "1563": "log0801_050434_65.00.png", "1564": "log0801_050434_66.00.png", "1565": "log0801_050434_67.00.png", "1566": "log0801_050434_68.00.png", "1567": "log0801_050434_69.00.png", "1568": "log0801_050434_7.00.png", "1569": "log0801_050434_70.00.png", "1570": "log0801_050434_71.00.png", "1571": "log0801_050434_72.00.png", "1572": "log0801_050434_73.00.png", "1573": "log0801_050434_74.00.png", "1574": "log0801_050434_75.00.png", "1575": "log0801_050434_76.00.png", "1576": "log0801_050434_77.00.png", "1577": "log0801_050434_8.00.png", "1578": "log0801_050434_9.00.png", "1579": "log0801_084816_1.00.png", "1580": "log0801_084816_10.00.png", "1581": "log0801_084816_11.00.png", "1582": "log0801_084816_12.00.png", "1583": "log0801_084816_13.00.png", "1584": "log0801_084816_14.00.png", "1585": "log0801_084816_15.00.png", "1586": "log0801_084816_16.00.png", "1587": "log0801_084816_17.00.png", "1588": "log0801_084816_18.00.png", "1589": "log0801_084816_19.00.png", "1590": "log0801_084816_2.00.png", "1591": "log0801_084816_20.00.png", "1592": "log0801_084816_21.00.png", "1593": "log0801_084816_22.00.png", "1594": "log0801_084816_23.00.png", "1595": "log0801_084816_24.00.png", "1596": 
"log0801_084816_25.00.png", "1597": "log0801_084816_26.00.png", "1598": "log0801_084816_27.00.png", "1599": "log0801_084816_28.00.png", "1600": "log0801_084816_29.00.png", "1601": "log0801_084816_3.00.png", "1602": "log0801_084816_30.00.png", "1603": "log0801_084816_31.00.png", "1604": "log0801_084816_32.00.png", "1605": "log0801_084816_33.00.png", "1606": "log0801_084816_34.00.png", "1607": "log0801_084816_35.00.png", "1608": "log0801_084816_36.00.png", "1609": "log0801_084816_37.00.png", "1610": "log0801_084816_38.00.png", "1611": "log0801_084816_39.00.png", "1612": "log0801_084816_4.00.png", "1613": "log0801_084816_40.00.png", "1614": "log0801_084816_41.00.png", "1615": "log0801_084816_42.00.png", "1616": "log0801_084816_43.00.png", "1617": "log0801_084816_44.00.png", "1618": "log0801_084816_45.00.png", "1619": "log0801_084816_46.00.png", "1620": "log0801_084816_47.00.png", "1621": "log0801_084816_48.00.png", "1622": "log0801_084816_49.00.png", "1623": "log0801_084816_5.00.png", "1624": "log0801_084816_50.00.png", "1625": "log0801_084816_51.00.png", "1626": "log0801_084816_52.00.png", "1627": "log0801_084816_53.00.png", "1628": "log0801_084816_54.00.png", "1629": "log0801_084816_55.00.png", "1630": "log0801_084816_56.00.png", "1631": "log0801_084816_57.00.png", "1632": "log0801_084816_58.00.png", "1633": "log0801_084816_59.00.png", "1634": "log0801_084816_6.00.png", "1635": "log0801_084816_60.00.png", "1636": "log0801_084816_61.00.png", "1637": "log0801_084816_62.00.png", "1638": "log0801_084816_63.00.png", "1639": "log0801_084816_64.00.png", "1640": "log0801_084816_65.00.png", "1641": "log0801_084816_66.00.png", "1642": "log0801_084816_67.00.png", "1643": "log0801_084816_68.00.png", "1644": "log0801_084816_69.00.png", "1645": "log0801_084816_7.00.png", "1646": "log0801_084816_70.00.png", "1647": "log0801_084816_71.00.png", "1648": "log0801_084816_72.00.png", "1649": "log0801_084816_73.00.png", "1650": "log0801_084816_74.00.png", "1651": "log0801_084816_75.00.png", "1652": "log0801_084816_8.00.png", "1653": "log0801_084816_9.00.png", "1654": "log0801_123158_1.00.png", "1655": "log0801_123158_10.00.png", "1656": "log0801_123158_100.00.png", "1657": "log0801_123158_101.00.png", "1658": "log0801_123158_102.00.png", "1659": "log0801_123158_11.00.png", "1660": "log0801_123158_12.00.png", "1661": "log0801_123158_13.00.png", "1662": "log0801_123158_14.00.png", "1663": "log0801_123158_15.00.png", "1664": "log0801_123158_16.00.png", "1665": "log0801_123158_17.00.png", "1666": "log0801_123158_18.00.png", "1667": "log0801_123158_19.00.png", "1668": "log0801_123158_2.00.png", "1669": "log0801_123158_20.00.png", "1670": "log0801_123158_21.00.png", "1671": "log0801_123158_22.00.png", "1672": "log0801_123158_23.00.png", "1673": "log0801_123158_24.00.png", "1674": "log0801_123158_25.00.png", "1675": "log0801_123158_26.00.png", "1676": "log0801_123158_27.00.png", "1677": "log0801_123158_28.00.png", "1678": "log0801_123158_29.00.png", "1679": "log0801_123158_3.00.png", "1680": "log0801_123158_30.00.png", "1681": "log0801_123158_31.00.png", "1682": "log0801_123158_32.00.png", "1683": "log0801_123158_33.00.png", "1684": "log0801_123158_34.00.png", "1685": "log0801_123158_35.00.png", "1686": "log0801_123158_36.00.png", "1687": "log0801_123158_37.00.png", "1688": "log0801_123158_38.00.png", "1689": "log0801_123158_39.00.png", "1690": "log0801_123158_4.00.png", "1691": "log0801_123158_40.00.png", "1692": "log0801_123158_41.00.png", "1693": "log0801_123158_42.00.png", "1694": "log0801_123158_43.00.png", 
"1695": "log0801_123158_44.00.png", "1696": "log0801_123158_45.00.png", "1697": "log0801_123158_46.00.png", "1698": "log0801_123158_47.00.png", "1699": "log0801_123158_48.00.png", "1700": "log0801_123158_49.00.png", "1701": "log0801_123158_5.00.png", "1702": "log0801_123158_50.00.png", "1703": "log0801_123158_51.00.png", "1704": "log0801_123158_52.00.png", "1705": "log0801_123158_53.00.png", "1706": "log0801_123158_54.00.png", "1707": "log0801_123158_55.00.png", "1708": "log0801_123158_56.00.png", "1709": "log0801_123158_57.00.png", "1710": "log0801_123158_58.00.png", "1711": "log0801_123158_59.00.png", "1712": "log0801_123158_6.00.png", "1713": "log0801_123158_60.00.png", "1714": "log0801_123158_61.00.png", "1715": "log0801_123158_62.00.png", "1716": "log0801_123158_63.00.png", "1717": "log0801_123158_64.00.png", "1718": "log0801_123158_65.00.png", "1719": "log0801_123158_66.00.png", "1720": "log0801_123158_67.00.png", "1721": "log0801_123158_68.00.png", "1722": "log0801_123158_69.00.png", "1723": "log0801_123158_7.00.png", "1724": "log0801_123158_70.00.png", "1725": "log0801_123158_71.00.png", "1726": "log0801_123158_72.00.png", "1727": "log0801_123158_73.00.png", "1728": "log0801_123158_74.00.png", "1729": "log0801_123158_75.00.png", "1730": "log0801_123158_76.00.png", "1731": "log0801_123158_77.00.png", "1732": "log0801_123158_78.00.png", "1733": "log0801_123158_79.00.png", "1734": "log0801_123158_8.00.png", "1735": "log0801_123158_80.00.png", "1736": "log0801_123158_81.00.png", "1737": "log0801_123158_82.00.png", "1738": "log0801_123158_83.00.png", "1739": "log0801_123158_84.00.png", "1740": "log0801_123158_85.00.png", "1741": "log0801_123158_86.00.png", "1742": "log0801_123158_87.00.png", "1743": "log0801_123158_88.00.png", "1744": "log0801_123158_89.00.png", "1745": "log0801_123158_9.00.png", "1746": "log0801_123158_90.00.png", "1747": "log0801_123158_91.00.png", "1748": "log0801_123158_92.00.png", "1749": "log0801_123158_93.00.png", "1750": "log0801_123158_94.00.png", "1751": "log0801_123158_95.00.png", "1752": "log0801_123158_96.00.png", "1753": "log0801_123158_97.00.png", "1754": "log0801_123158_98.00.png", "1755": "log0801_123158_99.00.png", "1756": "log0801_161540_1.00.png", "1757": "log0801_161540_10.00.png", "1758": "log0801_161540_11.00.png", "1759": "log0801_161540_12.00.png", "1760": "log0801_161540_13.00.png", "1761": "log0801_161540_14.00.png", "1762": "log0801_161540_15.00.png", "1763": "log0801_161540_16.00.png", "1764": "log0801_161540_17.00.png", "1765": "log0801_161540_18.00.png", "1766": "log0801_161540_19.00.png", "1767": "log0801_161540_2.00.png", "1768": "log0801_161540_20.00.png", "1769": "log0801_161540_21.00.png", "1770": "log0801_161540_22.00.png", "1771": "log0801_161540_23.00.png", "1772": "log0801_161540_24.00.png", "1773": "log0801_161540_25.00.png", "1774": "log0801_161540_26.00.png", "1775": "log0801_161540_27.00.png", "1776": "log0801_161540_28.00.png", "1777": "log0801_161540_29.00.png", "1778": "log0801_161540_3.00.png", "1779": "log0801_161540_30.00.png", "1780": "log0801_161540_31.00.png", "1781": "log0801_161540_32.00.png", "1782": "log0801_161540_33.00.png", "1783": "log0801_161540_34.00.png", "1784": "log0801_161540_35.00.png", "1785": "log0801_161540_36.00.png", "1786": "log0801_161540_37.00.png", "1787": "log0801_161540_38.00.png", "1788": "log0801_161540_39.00.png", "1789": "log0801_161540_4.00.png", "1790": "log0801_161540_40.00.png", "1791": "log0801_161540_41.00.png", "1792": "log0801_161540_42.00.png", "1793": "log0801_161540_43.00.png", 
"1794": "log0801_161540_44.00.png", "1795": "log0801_161540_45.00.png", "1796": "log0801_161540_46.00.png", "1797": "log0801_161540_47.00.png", "1798": "log0801_161540_48.00.png", "1799": "log0801_161540_49.00.png", "1800": "log0801_161540_5.00.png", "1801": "log0801_161540_50.00.png", "1802": "log0801_161540_51.00.png", "1803": "log0801_161540_52.00.png", "1804": "log0801_161540_53.00.png", "1805": "log0801_161540_54.00.png", "1806": "log0801_161540_55.00.png", "1807": "log0801_161540_56.00.png", "1808": "log0801_161540_57.00.png", "1809": "log0801_161540_58.00.png", "1810": "log0801_161540_59.00.png", "1811": "log0801_161540_6.00.png", "1812": "log0801_161540_60.00.png", "1813": "log0801_161540_61.00.png", "1814": "log0801_161540_62.00.png", "1815": "log0801_161540_63.00.png", "1816": "log0801_161540_64.00.png", "1817": "log0801_161540_65.00.png", "1818": "log0801_161540_66.00.png", "1819": "log0801_161540_67.00.png", "1820": "log0801_161540_68.00.png", "1821": "log0801_161540_69.00.png", "1822": "log0801_161540_7.00.png", "1823": "log0801_161540_70.00.png", "1824": "log0801_161540_71.00.png", "1825": "log0801_161540_72.00.png", "1826": "log0801_161540_73.00.png", "1827": "log0801_161540_74.00.png", "1828": "log0801_161540_75.00.png", "1829": "log0801_161540_76.00.png", "1830": "log0801_161540_77.00.png", "1831": "log0801_161540_78.00.png", "1832": "log0801_161540_79.00.png", "1833": "log0801_161540_8.00.png", "1834": "log0801_161540_80.00.png", "1835": "log0801_161540_81.00.png", "1836": "log0801_161540_82.00.png", "1837": "log0801_161540_83.00.png", "1838": "log0801_161540_84.00.png", "1839": "log0801_161540_85.00.png", "1840": "log0801_161540_86.00.png", "1841": "log0801_161540_87.00.png", "1842": "log0801_161540_88.00.png", "1843": "log0801_161540_89.00.png", "1844": "log0801_161540_9.00.png", "1845": "log0801_161540_90.00.png", "1846": "log0801_161540_91.00.png", "1847": "log0801_161540_92.00.png", "1848": "log0801_195923_1.00.png", "1849": "log0801_195923_10.00.png", "1850": "log0801_195923_11.00.png", "1851": "log0801_195923_12.00.png", "1852": "log0801_195923_13.00.png", "1853": "log0801_195923_14.00.png", "1854": "log0801_195923_15.00.png", "1855": "log0801_195923_16.00.png", "1856": "log0801_195923_17.00.png", "1857": "log0801_195923_18.00.png", "1858": "log0801_195923_19.00.png", "1859": "log0801_195923_2.00.png", "1860": "log0801_195923_20.00.png", "1861": "log0801_195923_21.00.png", "1862": "log0801_195923_22.00.png", "1863": "log0801_195923_23.00.png", "1864": "log0801_195923_24.00.png", "1865": "log0801_195923_25.00.png", "1866": "log0801_195923_26.00.png", "1867": "log0801_195923_27.00.png", "1868": "log0801_195923_28.00.png", "1869": "log0801_195923_29.00.png", "1870": "log0801_195923_3.00.png", "1871": "log0801_195923_30.00.png", "1872": "log0801_195923_31.00.png", "1873": "log0801_195923_32.00.png", "1874": "log0801_195923_33.00.png", "1875": "log0801_195923_4.00.png", "1876": "log0801_195923_5.00.png", "1877": "log0801_195923_6.00.png", "1878": "log0801_195923_7.00.png", "1879": "log0801_195923_8.00.png", "1880": "log0801_195923_9.00.png", "1881": "log0801_234305_1.00.png", "1882": "log0801_234305_10.00.png", "1883": "log0801_234305_2.00.png", "1884": "log0801_234305_3.00.png", "1885": "log0801_234305_4.00.png", "1886": "log0801_234305_5.00.png", "1887": "log0801_234305_6.00.png", "1888": "log0801_234305_7.00.png", "1889": "log0801_234305_8.00.png", "1890": "log0801_234305_9.00.png", "1891": "log0802_032647_1.00.png", "1892": "log0802_032647_10.00.png", "1893": 
"log0802_032647_11.00.png", "1894": "log0802_032647_12.00.png", "1895": "log0802_032647_13.00.png", "1896": "log0802_032647_14.00.png", "1897": "log0802_032647_15.00.png", "1898": "log0802_032647_16.00.png", "1899": "log0802_032647_17.00.png", "1900": "log0802_032647_18.00.png", "1901": "log0802_032647_19.00.png", "1902": "log0802_032647_2.00.png", "1903": "log0802_032647_20.00.png", "1904": "log0802_032647_21.00.png", "1905": "log0802_032647_22.00.png", "1906": "log0802_032647_23.00.png", "1907": "log0802_032647_24.00.png", "1908": "log0802_032647_25.00.png", "1909": "log0802_032647_26.00.png", "1910": "log0802_032647_27.00.png", "1911": "log0802_032647_28.00.png", "1912": "log0802_032647_3.00.png", "1913": "log0802_032647_4.00.png", "1914": "log0802_032647_5.00.png", "1915": "log0802_032647_6.00.png", "1916": "log0802_032647_7.00.png", "1917": "log0802_032647_8.00.png", "1918": "log0802_032647_9.00.png", "1919": "log0802_071030_1.00.png", "1920": "log0802_071030_10.00.png", "1921": "log0802_071030_11.00.png", "1922": "log0802_071030_12.00.png", "1923": "log0802_071030_13.00.png", "1924": "log0802_071030_14.00.png", "1925": "log0802_071030_15.00.png", "1926": "log0802_071030_16.00.png", "1927": "log0802_071030_17.00.png", "1928": "log0802_071030_18.00.png", "1929": "log0802_071030_19.00.png", "1930": "log0802_071030_20.00.png", "1931": "log0802_071030_21.00.png", "1932": "log0802_071030_22.00.png", "1933": "log0802_071030_23.00.png", "1934": "log0802_071030_24.00.png", "1935": "log0802_071030_25.00.png", "1936": "log0802_071030_26.00.png", "1937": "log0802_071030_27.00.png", "1938": "log0802_071030_28.00.png", "1939": "log0802_071030_29.00.png", "1940": "log0802_071030_30.00.png", "1941": "log0802_071030_31.00.png", "1942": "log0802_071030_32.00.png", "1943": "log0802_071030_33.00.png", "1944": "log0802_071030_34.00.png", "1945": "log0802_071030_35.00.png", "1946": "log0802_071030_36.00.png", "1947": "log0802_071030_37.00.png", "1948": "log0802_071030_38.00.png", "1949": "log0802_071030_39.00.png", "1950": "log0802_071030_40.00.png", "1951": "log0802_071030_41.00.png", "1952": "log0802_071030_42.00.png", "1953": "log0802_071030_43.00.png", "1954": "log0802_071030_44.00.png", "1955": "log0802_071030_45.00.png", "1956": "log0802_071030_46.00.png", "1957": "log0802_071030_47.00.png", "1958": "log0802_071030_48.00.png", "1959": "log0802_071030_49.00.png", "1960": "log0802_071030_50.00.png", "1961": "log0802_071030_51.00.png", "1962": "log0802_071030_52.00.png", "1963": "log0802_071030_53.00.png", "1964": "log0802_071030_54.00.png", "1965": "log0802_071030_55.00.png", "1966": "log0802_071030_56.00.png", "1967": "log0802_071030_57.00.png", "1968": "log0802_071030_58.00.png", "1969": "log0802_071030_59.00.png", "1970": "log0802_071030_60.00.png", "1971": "log0802_071030_61.00.png", "1972": "log0802_071030_62.00.png", "1973": "log0802_071030_63.00.png", "1974": "log0802_071030_64.00.png", "1975": "log0802_071030_65.00.png", "1976": "log0802_071030_66.00.png", "1977": "log0802_071030_67.00.png", "1978": "log0802_071030_68.00.png", "1979": "log0802_071030_69.00.png", "1980": "log0802_071030_70.00.png", "1981": "log0802_071030_71.00.png", "1982": "log0802_071030_72.00.png", "1983": "log0802_071030_73.00.png", "1984": "log0802_071030_74.00.png", "1985": "log0802_105412_1.00.png", "1986": "log0802_105412_10.00.png", "1987": "log0802_105412_11.00.png", "1988": "log0802_105412_12.00.png", "1989": "log0802_105412_13.00.png", "1990": "log0802_105412_14.00.png", "1991": "log0802_105412_15.00.png", "1992": 
"log0802_105412_16.00.png", "1993": "log0802_105412_17.00.png", "1994": "log0802_105412_18.00.png", "1995": "log0802_105412_19.00.png", "1996": "log0802_105412_2.00.png", "1997": "log0802_105412_20.00.png", "1998": "log0802_105412_21.00.png", "1999": "log0802_105412_22.00.png", "2000": "log0802_105412_23.00.png", "2001": "log0802_105412_24.00.png", "2002": "log0802_105412_25.00.png", "2003": "log0802_105412_26.00.png", "2004": "log0802_105412_27.00.png", "2005": "log0802_105412_28.00.png", "2006": "log0802_105412_29.00.png", "2007": "log0802_105412_3.00.png", "2008": "log0802_105412_30.00.png", "2009": "log0802_105412_31.00.png", "2010": "log0802_105412_32.00.png", "2011": "log0802_105412_33.00.png", "2012": "log0802_105412_34.00.png", "2013": "log0802_105412_35.00.png", "2014": "log0802_105412_36.00.png", "2015": "log0802_105412_37.00.png", "2016": "log0802_105412_38.00.png", "2017": "log0802_105412_39.00.png", "2018": "log0802_105412_4.00.png", "2019": "log0802_105412_40.00.png", "2020": "log0802_105412_41.00.png", "2021": "log0802_105412_42.00.png", "2022": "log0802_105412_43.00.png", "2023": "log0802_105412_44.00.png", "2024": "log0802_105412_45.00.png", "2025": "log0802_105412_46.00.png", "2026": "log0802_105412_47.00.png", "2027": "log0802_105412_48.00.png", "2028": "log0802_105412_49.00.png", "2029": "log0802_105412_5.00.png", "2030": "log0802_105412_50.00.png", "2031": "log0802_105412_51.00.png", "2032": "log0802_105412_52.00.png", "2033": "log0802_105412_53.00.png", "2034": "log0802_105412_54.00.png", "2035": "log0802_105412_55.00.png", "2036": "log0802_105412_56.00.png", "2037": "log0802_105412_57.00.png", "2038": "log0802_105412_58.00.png", "2039": "log0802_105412_59.00.png", "2040": "log0802_105412_6.00.png", "2041": "log0802_105412_60.00.png", "2042": "log0802_105412_61.00.png", "2043": "log0802_105412_62.00.png", "2044": "log0802_105412_63.00.png", "2045": "log0802_105412_64.00.png", "2046": "log0802_105412_65.00.png", "2047": "log0802_105412_66.00.png", "2048": "log0802_105412_7.00.png", "2049": "log0802_105412_8.00.png", "2050": "log0802_105412_9.00.png", "2051": "log0802_143754_1.00.png", "2052": "log0802_143754_10.00.png", "2053": "log0802_143754_100.00.png", "2054": "log0802_143754_101.00.png", "2055": "log0802_143754_102.00.png", "2056": "log0802_143754_103.00.png", "2057": "log0802_143754_104.00.png", "2058": "log0802_143754_105.00.png", "2059": "log0802_143754_106.00.png", "2060": "log0802_143754_107.00.png", "2061": "log0802_143754_108.00.png", "2062": "log0802_143754_109.00.png", "2063": "log0802_143754_11.00.png", "2064": "log0802_143754_110.00.png", "2065": "log0802_143754_111.00.png", "2066": "log0802_143754_112.00.png", "2067": "log0802_143754_113.00.png", "2068": "log0802_143754_114.00.png", "2069": "log0802_143754_115.00.png", "2070": "log0802_143754_116.00.png", "2071": "log0802_143754_117.00.png", "2072": "log0802_143754_118.00.png", "2073": "log0802_143754_119.00.png", "2074": "log0802_143754_12.00.png", "2075": "log0802_143754_120.00.png", "2076": "log0802_143754_121.00.png", "2077": "log0802_143754_122.00.png", "2078": "log0802_143754_123.00.png", "2079": "log0802_143754_124.00.png", "2080": "log0802_143754_125.00.png", "2081": "log0802_143754_126.00.png", "2082": "log0802_143754_127.00.png", "2083": "log0802_143754_128.00.png", "2084": "log0802_143754_129.00.png", "2085": "log0802_143754_13.00.png", "2086": "log0802_143754_130.00.png", "2087": "log0802_143754_131.00.png", "2088": "log0802_143754_132.00.png", "2089": "log0802_143754_133.00.png", "2090": 
"log0802_143754_134.00.png", "2091": "log0802_143754_135.00.png", "2092": "log0802_143754_136.00.png", "2093": "log0802_143754_137.00.png", "2094": "log0802_143754_138.00.png", "2095": "log0802_143754_139.00.png", "2096": "log0802_143754_14.00.png", "2097": "log0802_143754_140.00.png", "2098": "log0802_143754_141.00.png", "2099": "log0802_143754_142.00.png", "2100": "log0802_143754_143.00.png", "2101": "log0802_143754_144.00.png", "2102": "log0802_143754_145.00.png", "2103": "log0802_143754_146.00.png", "2104": "log0802_143754_147.00.png", "2105": "log0802_143754_148.00.png", "2106": "log0802_143754_149.00.png", "2107": "log0802_143754_15.00.png", "2108": "log0802_143754_150.00.png", "2109": "log0802_143754_151.00.png", "2110": "log0802_143754_152.00.png", "2111": "log0802_143754_153.00.png", "2112": "log0802_143754_154.00.png", "2113": "log0802_143754_155.00.png", "2114": "log0802_143754_156.00.png", "2115": "log0802_143754_157.00.png", "2116": "log0802_143754_158.00.png", "2117": "log0802_143754_159.00.png", "2118": "log0802_143754_16.00.png", "2119": "log0802_143754_17.00.png", "2120": "log0802_143754_18.00.png", "2121": "log0802_143754_19.00.png", "2122": "log0802_143754_2.00.png", "2123": "log0802_143754_20.00.png", "2124": "log0802_143754_21.00.png", "2125": "log0802_143754_22.00.png", "2126": "log0802_143754_23.00.png", "2127": "log0802_143754_24.00.png", "2128": "log0802_143754_25.00.png", "2129": "log0802_143754_26.00.png", "2130": "log0802_143754_27.00.png", "2131": "log0802_143754_28.00.png", "2132": "log0802_143754_29.00.png", "2133": "log0802_143754_3.00.png", "2134": "log0802_143754_30.00.png", "2135": "log0802_143754_31.00.png", "2136": "log0802_143754_32.00.png", "2137": "log0802_143754_33.00.png", "2138": "log0802_143754_34.00.png", "2139": "log0802_143754_35.00.png", "2140": "log0802_143754_36.00.png", "2141": "log0802_143754_37.00.png", "2142": "log0802_143754_38.00.png", "2143": "log0802_143754_39.00.png", "2144": "log0802_143754_4.00.png", "2145": "log0802_143754_40.00.png", "2146": "log0802_143754_41.00.png", "2147": "log0802_143754_42.00.png", "2148": "log0802_143754_43.00.png", "2149": "log0802_143754_44.00.png", "2150": "log0802_143754_45.00.png", "2151": "log0802_143754_46.00.png", "2152": "log0802_143754_47.00.png", "2153": "log0802_143754_48.00.png", "2154": "log0802_143754_49.00.png", "2155": "log0802_143754_5.00.png", "2156": "log0802_143754_50.00.png", "2157": "log0802_143754_51.00.png", "2158": "log0802_143754_52.00.png", "2159": "log0802_143754_53.00.png", "2160": "log0802_143754_54.00.png", "2161": "log0802_143754_55.00.png", "2162": "log0802_143754_56.00.png", "2163": "log0802_143754_57.00.png", "2164": "log0802_143754_58.00.png", "2165": "log0802_143754_59.00.png", "2166": "log0802_143754_6.00.png", "2167": "log0802_143754_60.00.png", "2168": "log0802_143754_61.00.png", "2169": "log0802_143754_62.00.png", "2170": "log0802_143754_63.00.png", "2171": "log0802_143754_64.00.png", "2172": "log0802_143754_65.00.png", "2173": "log0802_143754_66.00.png", "2174": "log0802_143754_67.00.png", "2175": "log0802_143754_68.00.png", "2176": "log0802_143754_69.00.png", "2177": "log0802_143754_7.00.png", "2178": "log0802_143754_70.00.png", "2179": "log0802_143754_71.00.png", "2180": "log0802_143754_72.00.png", "2181": "log0802_143754_73.00.png", "2182": "log0802_143754_74.00.png", "2183": "log0802_143754_75.00.png", "2184": "log0802_143754_76.00.png", "2185": "log0802_143754_77.00.png", "2186": "log0802_143754_78.00.png", "2187": "log0802_143754_79.00.png", "2188": 
"log0802_143754_8.00.png", "2189": "log0802_143754_80.00.png", "2190": "log0802_143754_81.00.png", "2191": "log0802_143754_82.00.png", "2192": "log0802_143754_83.00.png", "2193": "log0802_143754_84.00.png", "2194": "log0802_143754_85.00.png", "2195": "log0802_143754_86.00.png", "2196": "log0802_143754_87.00.png", "2197": "log0802_143754_88.00.png", "2198": "log0802_143754_89.00.png", "2199": "log0802_143754_9.00.png", "2200": "log0802_143754_90.00.png", "2201": "log0802_143754_91.00.png", "2202": "log0802_143754_92.00.png", "2203": "log0802_143754_93.00.png", "2204": "log0802_143754_94.00.png", "2205": "log0802_143754_95.00.png", "2206": "log0802_143754_96.00.png", "2207": "log0802_143754_97.00.png", "2208": "log0802_143754_98.00.png", "2209": "log0802_143754_99.00.png", "2210": "log0802_182137_1.00.png", "2211": "log0802_182137_10.00.png", "2212": "log0802_182137_11.00.png", "2213": "log0802_182137_12.00.png", "2214": "log0802_182137_13.00.png", "2215": "log0802_182137_14.00.png", "2216": "log0802_182137_15.00.png", "2217": "log0802_182137_16.00.png", "2218": "log0802_182137_17.00.png", "2219": "log0802_182137_18.00.png", "2220": "log0802_182137_19.00.png", "2221": "log0802_182137_2.00.png", "2222": "log0802_182137_20.00.png", "2223": "log0802_182137_21.00.png", "2224": "log0802_182137_22.00.png", "2225": "log0802_182137_23.00.png", "2226": "log0802_182137_24.00.png", "2227": "log0802_182137_25.00.png", "2228": "log0802_182137_26.00.png", "2229": "log0802_182137_27.00.png", "2230": "log0802_182137_28.00.png", "2231": "log0802_182137_29.00.png", "2232": "log0802_182137_3.00.png", "2233": "log0802_182137_30.00.png", "2234": "log0802_182137_31.00.png", "2235": "log0802_182137_32.00.png", "2236": "log0802_182137_33.00.png", "2237": "log0802_182137_34.00.png", "2238": "log0802_182137_35.00.png", "2239": "log0802_182137_36.00.png", "2240": "log0802_182137_37.00.png", "2241": "log0802_182137_38.00.png", "2242": "log0802_182137_39.00.png", "2243": "log0802_182137_4.00.png", "2244": "log0802_182137_40.00.png", "2245": "log0802_182137_41.00.png", "2246": "log0802_182137_42.00.png", "2247": "log0802_182137_43.00.png", "2248": "log0802_182137_44.00.png", "2249": "log0802_182137_45.00.png", "2250": "log0802_182137_46.00.png", "2251": "log0802_182137_47.00.png", "2252": "log0802_182137_48.00.png", "2253": "log0802_182137_49.00.png", "2254": "log0802_182137_5.00.png", "2255": "log0802_182137_50.00.png", "2256": "log0802_182137_51.00.png", "2257": "log0802_182137_52.00.png", "2258": "log0802_182137_53.00.png", "2259": "log0802_182137_54.00.png", "2260": "log0802_182137_55.00.png", "2261": "log0802_182137_56.00.png", "2262": "log0802_182137_57.00.png", "2263": "log0802_182137_58.00.png", "2264": "log0802_182137_59.00.png", "2265": "log0802_182137_6.00.png", "2266": "log0802_182137_60.00.png", "2267": "log0802_182137_61.00.png", "2268": "log0802_182137_62.00.png", "2269": "log0802_182137_63.00.png", "2270": "log0802_182137_64.00.png", "2271": "log0802_182137_65.00.png", "2272": "log0802_182137_66.00.png", "2273": "log0802_182137_67.00.png", "2274": "log0802_182137_68.00.png", "2275": "log0802_182137_69.00.png", "2276": "log0802_182137_7.00.png", "2277": "log0802_182137_70.00.png", "2278": "log0802_182137_71.00.png", "2279": "log0802_182137_72.00.png", "2280": "log0802_182137_73.00.png", "2281": "log0802_182137_74.00.png", "2282": "log0802_182137_75.00.png", "2283": "log0802_182137_76.00.png", "2284": "log0802_182137_77.00.png", "2285": "log0802_182137_78.00.png", "2286": "log0802_182137_79.00.png", "2287": 
"log0802_182137_8.00.png", "2288": "log0802_182137_80.00.png", "2289": "log0802_182137_9.00.png", "2290": "log0802_220519_1.00.png", "2291": "log0802_220519_10.00.png", "2292": "log0802_220519_11.00.png", "2293": "log0802_220519_12.00.png", "2294": "log0802_220519_13.00.png", "2295": "log0802_220519_14.00.png", "2296": "log0802_220519_15.00.png", "2297": "log0802_220519_16.00.png", "2298": "log0802_220519_17.00.png", "2299": "log0802_220519_18.00.png", "2300": "log0802_220519_19.00.png", "2301": "log0802_220519_2.00.png", "2302": "log0802_220519_20.00.png", "2303": "log0802_220519_3.00.png", "2304": "log0802_220519_4.00.png", "2305": "log0802_220519_5.00.png", "2306": "log0802_220519_6.00.png", "2307": "log0802_220519_7.00.png", "2308": "log0802_220519_8.00.png", "2309": "log0802_220519_9.00.png", "2310": "log0803_014901_1.00.png", "2311": "log0803_014901_2.00.png", "2312": "log0803_014901_3.00.png", "2313": "log0803_053244_1.00.png", "2314": "log0803_053244_10.00.png", "2315": "log0803_053244_11.00.png", "2316": "log0803_053244_12.00.png", "2317": "log0803_053244_13.00.png", "2318": "log0803_053244_14.00.png", "2319": "log0803_053244_15.00.png", "2320": "log0803_053244_16.00.png", "2321": "log0803_053244_17.00.png", "2322": "log0803_053244_18.00.png", "2323": "log0803_053244_19.00.png", "2324": "log0803_053244_2.00.png", "2325": "log0803_053244_20.00.png", "2326": "log0803_053244_21.00.png", "2327": "log0803_053244_22.00.png", "2328": "log0803_053244_23.00.png", "2329": "log0803_053244_24.00.png", "2330": "log0803_053244_25.00.png", "2331": "log0803_053244_26.00.png", "2332": "log0803_053244_27.00.png", "2333": "log0803_053244_28.00.png", "2334": "log0803_053244_29.00.png", "2335": "log0803_053244_3.00.png", "2336": "log0803_053244_30.00.png", "2337": "log0803_053244_31.00.png", "2338": "log0803_053244_32.00.png", "2339": "log0803_053244_33.00.png", "2340": "log0803_053244_34.00.png", "2341": "log0803_053244_35.00.png", "2342": "log0803_053244_36.00.png", "2343": "log0803_053244_37.00.png", "2344": "log0803_053244_38.00.png", "2345": "log0803_053244_39.00.png", "2346": "log0803_053244_4.00.png", "2347": "log0803_053244_40.00.png", "2348": "log0803_053244_41.00.png", "2349": "log0803_053244_42.00.png", "2350": "log0803_053244_43.00.png", "2351": "log0803_053244_44.00.png", "2352": "log0803_053244_45.00.png", "2353": "log0803_053244_46.00.png", "2354": "log0803_053244_47.00.png", "2355": "log0803_053244_48.00.png", "2356": "log0803_053244_49.00.png", "2357": "log0803_053244_5.00.png", "2358": "log0803_053244_50.00.png", "2359": "log0803_053244_51.00.png", "2360": "log0803_053244_52.00.png", "2361": "log0803_053244_53.00.png", "2362": "log0803_053244_54.00.png", "2363": "log0803_053244_55.00.png", "2364": "log0803_053244_56.00.png", "2365": "log0803_053244_57.00.png", "2366": "log0803_053244_58.00.png", "2367": "log0803_053244_59.00.png", "2368": "log0803_053244_6.00.png", "2369": "log0803_053244_60.00.png", "2370": "log0803_053244_61.00.png", "2371": "log0803_053244_62.00.png", "2372": "log0803_053244_63.00.png", "2373": "log0803_053244_64.00.png", "2374": "log0803_053244_65.00.png", "2375": "log0803_053244_66.00.png", "2376": "log0803_053244_67.00.png", "2377": "log0803_053244_68.00.png", "2378": "log0803_053244_69.00.png", "2379": "log0803_053244_7.00.png", "2380": "log0803_053244_70.00.png", "2381": "log0803_053244_71.00.png", "2382": "log0803_053244_72.00.png", "2383": "log0803_053244_73.00.png", "2384": "log0803_053244_74.00.png", "2385": "log0803_053244_75.00.png", "2386": 
"log0803_053244_76.00.png", "2387": "log0803_053244_77.00.png", "2388": "log0803_053244_78.00.png", "2389": "log0803_053244_8.00.png", "2390": "log0803_053244_9.00.png", "2391": "log0803_091626_1.00.png", "2392": "log0803_091626_10.00.png", "2393": "log0803_091626_11.00.png", "2394": "log0803_091626_12.00.png", "2395": "log0803_091626_13.00.png", "2396": "log0803_091626_14.00.png", "2397": "log0803_091626_15.00.png", "2398": "log0803_091626_16.00.png", "2399": "log0803_091626_17.00.png", "2400": "log0803_091626_18.00.png", "2401": "log0803_091626_19.00.png", "2402": "log0803_091626_2.00.png", "2403": "log0803_091626_20.00.png", "2404": "log0803_091626_21.00.png", "2405": "log0803_091626_22.00.png", "2406": "log0803_091626_23.00.png", "2407": "log0803_091626_24.00.png", "2408": "log0803_091626_25.00.png", "2409": "log0803_091626_26.00.png", "2410": "log0803_091626_27.00.png", "2411": "log0803_091626_28.00.png", "2412": "log0803_091626_29.00.png", "2413": "log0803_091626_3.00.png", "2414": "log0803_091626_30.00.png", "2415": "log0803_091626_31.00.png", "2416": "log0803_091626_32.00.png", "2417": "log0803_091626_33.00.png", "2418": "log0803_091626_34.00.png", "2419": "log0803_091626_35.00.png", "2420": "log0803_091626_36.00.png", "2421": "log0803_091626_37.00.png", "2422": "log0803_091626_38.00.png", "2423": "log0803_091626_39.00.png", "2424": "log0803_091626_4.00.png", "2425": "log0803_091626_40.00.png", "2426": "log0803_091626_41.00.png", "2427": "log0803_091626_42.00.png", "2428": "log0803_091626_43.00.png", "2429": "log0803_091626_44.00.png", "2430": "log0803_091626_45.00.png", "2431": "log0803_091626_46.00.png", "2432": "log0803_091626_5.00.png", "2433": "log0803_091626_6.00.png", "2434": "log0803_091626_7.00.png", "2435": "log0803_091626_8.00.png", "2436": "log0803_091626_9.00.png", "2437": "log0803_130008_1.00.png", "2438": "log0803_130008_10.00.png", "2439": "log0803_130008_11.00.png", "2440": "log0803_130008_12.00.png", "2441": "log0803_130008_13.00.png", "2442": "log0803_130008_14.00.png", "2443": "log0803_130008_15.00.png", "2444": "log0803_130008_16.00.png", "2445": "log0803_130008_17.00.png", "2446": "log0803_130008_18.00.png", "2447": "log0803_130008_19.00.png", "2448": "log0803_130008_2.00.png", "2449": "log0803_130008_20.00.png", "2450": "log0803_130008_21.00.png", "2451": "log0803_130008_22.00.png", "2452": "log0803_130008_23.00.png", "2453": "log0803_130008_24.00.png", "2454": "log0803_130008_25.00.png", "2455": "log0803_130008_26.00.png", "2456": "log0803_130008_27.00.png", "2457": "log0803_130008_28.00.png", "2458": "log0803_130008_29.00.png", "2459": "log0803_130008_3.00.png", "2460": "log0803_130008_30.00.png", "2461": "log0803_130008_31.00.png", "2462": "log0803_130008_32.00.png", "2463": "log0803_130008_33.00.png", "2464": "log0803_130008_34.00.png", "2465": "log0803_130008_35.00.png", "2466": "log0803_130008_36.00.png", "2467": "log0803_130008_37.00.png", "2468": "log0803_130008_38.00.png", "2469": "log0803_130008_39.00.png", "2470": "log0803_130008_4.00.png", "2471": "log0803_130008_40.00.png", "2472": "log0803_130008_41.00.png", "2473": "log0803_130008_42.00.png", "2474": "log0803_130008_43.00.png", "2475": "log0803_130008_44.00.png", "2476": "log0803_130008_45.00.png", "2477": "log0803_130008_46.00.png", "2478": "log0803_130008_47.00.png", "2479": "log0803_130008_48.00.png", "2480": "log0803_130008_49.00.png", "2481": "log0803_130008_5.00.png", "2482": "log0803_130008_50.00.png", "2483": "log0803_130008_51.00.png", "2484": "log0803_130008_52.00.png", "2485": 
"log0803_130008_53.00.png", "2486": "log0803_130008_54.00.png", "2487": "log0803_130008_55.00.png", "2488": "log0803_130008_56.00.png", "2489": "log0803_130008_57.00.png", "2490": "log0803_130008_6.00.png", "2491": "log0803_130008_7.00.png", "2492": "log0803_130008_8.00.png", "2493": "log0803_130008_9.00.png", "2494": "log0803_164350_1.00.png", "2495": "log0803_164350_10.00.png", "2496": "log0803_164350_11.00.png", "2497": "log0803_164350_12.00.png", "2498": "log0803_164350_13.00.png", "2499": "log0803_164350_14.00.png", "2500": "log0803_164350_15.00.png", "2501": "log0803_164350_16.00.png", "2502": "log0803_164350_17.00.png", "2503": "log0803_164350_18.00.png", "2504": "log0803_164350_19.00.png", "2505": "log0803_164350_2.00.png", "2506": "log0803_164350_20.00.png", "2507": "log0803_164350_21.00.png", "2508": "log0803_164350_22.00.png", "2509": "log0803_164350_23.00.png", "2510": "log0803_164350_24.00.png", "2511": "log0803_164350_25.00.png", "2512": "log0803_164350_26.00.png", "2513": "log0803_164350_27.00.png", "2514": "log0803_164350_28.00.png", "2515": "log0803_164350_29.00.png", "2516": "log0803_164350_3.00.png", "2517": "log0803_164350_30.00.png", "2518": "log0803_164350_31.00.png", "2519": "log0803_164350_32.00.png", "2520": "log0803_164350_33.00.png", "2521": "log0803_164350_34.00.png", "2522": "log0803_164350_35.00.png", "2523": "log0803_164350_36.00.png", "2524": "log0803_164350_37.00.png", "2525": "log0803_164350_38.00.png", "2526": "log0803_164350_39.00.png", "2527": "log0803_164350_4.00.png", "2528": "log0803_164350_40.00.png", "2529": "log0803_164350_41.00.png", "2530": "log0803_164350_42.00.png", "2531": "log0803_164350_43.00.png", "2532": "log0803_164350_44.00.png", "2533": "log0803_164350_45.00.png", "2534": "log0803_164350_46.00.png", "2535": "log0803_164350_47.00.png", "2536": "log0803_164350_48.00.png", "2537": "log0803_164350_49.00.png", "2538": "log0803_164350_5.00.png", "2539": "log0803_164350_50.00.png", "2540": "log0803_164350_51.00.png", "2541": "log0803_164350_52.00.png", "2542": "log0803_164350_53.00.png", "2543": "log0803_164350_54.00.png", "2544": "log0803_164350_55.00.png", "2545": "log0803_164350_56.00.png", "2546": "log0803_164350_57.00.png", "2547": "log0803_164350_58.00.png", "2548": "log0803_164350_59.00.png", "2549": "log0803_164350_6.00.png", "2550": "log0803_164350_60.00.png", "2551": "log0803_164350_61.00.png", "2552": "log0803_164350_62.00.png", "2553": "log0803_164350_63.00.png", "2554": "log0803_164350_64.00.png", "2555": "log0803_164350_65.00.png", "2556": "log0803_164350_66.00.png", "2557": "log0803_164350_67.00.png", "2558": "log0803_164350_68.00.png", "2559": "log0803_164350_69.00.png", "2560": "log0803_164350_7.00.png", "2561": "log0803_164350_70.00.png", "2562": "log0803_164350_71.00.png", "2563": "log0803_164350_8.00.png", "2564": "log0803_164350_9.00.png", "2565": "log0803_202732_1.00.png", "2566": "log0803_202732_10.00.png", "2567": "log0803_202732_100.00.png", "2568": "log0803_202732_101.00.png", "2569": "log0803_202732_102.00.png", "2570": "log0803_202732_103.00.png", "2571": "log0803_202732_104.00.png", "2572": "log0803_202732_11.00.png", "2573": "log0803_202732_12.00.png", "2574": "log0803_202732_13.00.png", "2575": "log0803_202732_14.00.png", "2576": "log0803_202732_15.00.png", "2577": "log0803_202732_16.00.png", "2578": "log0803_202732_17.00.png", "2579": "log0803_202732_18.00.png", "2580": "log0803_202732_19.00.png", "2581": "log0803_202732_2.00.png", "2582": "log0803_202732_20.00.png", "2583": "log0803_202732_21.00.png", "2584": 
"log0803_202732_22.00.png", "2585": "log0803_202732_23.00.png", "2586": "log0803_202732_24.00.png", "2587": "log0803_202732_25.00.png", "2588": "log0803_202732_26.00.png", "2589": "log0803_202732_27.00.png", "2590": "log0803_202732_28.00.png", "2591": "log0803_202732_29.00.png", "2592": "log0803_202732_3.00.png", "2593": "log0803_202732_30.00.png", "2594": "log0803_202732_31.00.png", "2595": "log0803_202732_32.00.png", "2596": "log0803_202732_33.00.png", "2597": "log0803_202732_34.00.png", "2598": "log0803_202732_35.00.png", "2599": "log0803_202732_36.00.png", "2600": "log0803_202732_37.00.png", "2601": "log0803_202732_38.00.png", "2602": "log0803_202732_39.00.png", "2603": "log0803_202732_4.00.png", "2604": "log0803_202732_40.00.png", "2605": "log0803_202732_41.00.png", "2606": "log0803_202732_42.00.png", "2607": "log0803_202732_43.00.png", "2608": "log0803_202732_44.00.png", "2609": "log0803_202732_45.00.png", "2610": "log0803_202732_46.00.png", "2611": "log0803_202732_47.00.png", "2612": "log0803_202732_48.00.png", "2613": "log0803_202732_49.00.png", "2614": "log0803_202732_5.00.png", "2615": "log0803_202732_50.00.png", "2616": "log0803_202732_51.00.png", "2617": "log0803_202732_52.00.png", "2618": "log0803_202732_53.00.png", "2619": "log0803_202732_54.00.png", "2620": "log0803_202732_55.00.png", "2621": "log0803_202732_56.00.png", "2622": "log0803_202732_57.00.png", "2623": "log0803_202732_58.00.png", "2624": "log0803_202732_59.00.png", "2625": "log0803_202732_6.00.png", "2626": "log0803_202732_60.00.png", "2627": "log0803_202732_61.00.png", "2628": "log0803_202732_62.00.png", "2629": "log0803_202732_63.00.png", "2630": "log0803_202732_64.00.png", "2631": "log0803_202732_65.00.png", "2632": "log0803_202732_66.00.png", "2633": "log0803_202732_67.00.png", "2634": "log0803_202732_68.00.png", "2635": "log0803_202732_69.00.png", "2636": "log0803_202732_7.00.png", "2637": "log0803_202732_70.00.png", "2638": "log0803_202732_71.00.png", "2639": "log0803_202732_72.00.png", "2640": "log0803_202732_73.00.png", "2641": "log0803_202732_74.00.png", "2642": "log0803_202732_75.00.png", "2643": "log0803_202732_76.00.png", "2644": "log0803_202732_77.00.png", "2645": "log0803_202732_78.00.png", "2646": "log0803_202732_79.00.png", "2647": "log0803_202732_8.00.png", "2648": "log0803_202732_80.00.png", "2649": "log0803_202732_81.00.png", "2650": "log0803_202732_82.00.png", "2651": "log0803_202732_83.00.png", "2652": "log0803_202732_84.00.png", "2653": "log0803_202732_85.00.png", "2654": "log0803_202732_86.00.png", "2655": "log0803_202732_87.00.png", "2656": "log0803_202732_88.00.png", "2657": "log0803_202732_89.00.png", "2658": "log0803_202732_9.00.png", "2659": "log0803_202732_90.00.png", "2660": "log0803_202732_91.00.png", "2661": "log0803_202732_92.00.png", "2662": "log0803_202732_93.00.png", "2663": "log0803_202732_94.00.png", "2664": "log0803_202732_95.00.png", "2665": "log0803_202732_96.00.png", "2666": "log0803_202732_97.00.png", "2667": "log0803_202732_98.00.png", "2668": "log0803_202732_99.00.png", "2669": "log0804_001115_1.00.png", "2670": "log0804_001115_2.00.png", "2671": "log0804_001115_3.00.png", "2672": "log0804_001115_4.00.png", "2673": "log0804_001115_5.00.png", "2674": "log0804_001115_6.00.png", "2675": "log0804_001115_7.00.png", "2676": "log0804_001115_8.00.png", "2677": "log0804_035457_1.00.png", "2678": "log0804_035457_10.00.png", "2679": "log0804_035457_11.00.png", "2680": "log0804_035457_12.00.png", "2681": "log0804_035457_13.00.png", "2682": "log0804_035457_14.00.png", "2683": 
"log0804_035457_15.00.png", "2684": "log0804_035457_16.00.png", "2685": "log0804_035457_17.00.png", "2686": "log0804_035457_18.00.png", "2687": "log0804_035457_19.00.png", "2688": "log0804_035457_2.00.png", "2689": "log0804_035457_20.00.png", "2690": "log0804_035457_21.00.png", "2691": "log0804_035457_22.00.png", "2692": "log0804_035457_23.00.png", "2693": "log0804_035457_24.00.png", "2694": "log0804_035457_25.00.png", "2695": "log0804_035457_26.00.png", "2696": "log0804_035457_27.00.png", "2697": "log0804_035457_28.00.png", "2698": "log0804_035457_29.00.png", "2699": "log0804_035457_3.00.png", "2700": "log0804_035457_30.00.png", "2701": "log0804_035457_31.00.png", "2702": "log0804_035457_32.00.png", "2703": "log0804_035457_33.00.png", "2704": "log0804_035457_4.00.png", "2705": "log0804_035457_5.00.png", "2706": "log0804_035457_6.00.png", "2707": "log0804_035457_7.00.png", "2708": "log0804_035457_8.00.png", "2709": "log0804_035457_9.00.png", "2710": "log0804_073839_1.00.png", "2711": "log0804_073839_10.00.png", "2712": "log0804_073839_11.00.png", "2713": "log0804_073839_12.00.png", "2714": "log0804_073839_13.00.png", "2715": "log0804_073839_14.00.png", "2716": "log0804_073839_15.00.png", "2717": "log0804_073839_16.00.png", "2718": "log0804_073839_17.00.png", "2719": "log0804_073839_18.00.png", "2720": "log0804_073839_19.00.png", "2721": "log0804_073839_2.00.png", "2722": "log0804_073839_20.00.png", "2723": "log0804_073839_21.00.png", "2724": "log0804_073839_22.00.png", "2725": "log0804_073839_23.00.png", "2726": "log0804_073839_24.00.png", "2727": "log0804_073839_25.00.png", "2728": "log0804_073839_26.00.png", "2729": "log0804_073839_27.00.png", "2730": "log0804_073839_28.00.png", "2731": "log0804_073839_29.00.png", "2732": "log0804_073839_3.00.png", "2733": "log0804_073839_30.00.png", "2734": "log0804_073839_31.00.png", "2735": "log0804_073839_32.00.png", "2736": "log0804_073839_33.00.png", "2737": "log0804_073839_34.00.png", "2738": "log0804_073839_35.00.png", "2739": "log0804_073839_36.00.png", "2740": "log0804_073839_37.00.png", "2741": "log0804_073839_38.00.png", "2742": "log0804_073839_39.00.png", "2743": "log0804_073839_4.00.png", "2744": "log0804_073839_40.00.png", "2745": "log0804_073839_41.00.png", "2746": "log0804_073839_42.00.png", "2747": "log0804_073839_43.00.png", "2748": "log0804_073839_44.00.png", "2749": "log0804_073839_45.00.png", "2750": "log0804_073839_46.00.png", "2751": "log0804_073839_47.00.png", "2752": "log0804_073839_48.00.png", "2753": "log0804_073839_49.00.png", "2754": "log0804_073839_5.00.png", "2755": "log0804_073839_50.00.png", "2756": "log0804_073839_51.00.png", "2757": "log0804_073839_52.00.png", "2758": "log0804_073839_53.00.png", "2759": "log0804_073839_54.00.png", "2760": "log0804_073839_55.00.png", "2761": "log0804_073839_56.00.png", "2762": "log0804_073839_57.00.png", "2763": "log0804_073839_58.00.png", "2764": "log0804_073839_59.00.png", "2765": "log0804_073839_6.00.png", "2766": "log0804_073839_60.00.png", "2767": "log0804_073839_61.00.png", "2768": "log0804_073839_62.00.png", "2769": "log0804_073839_63.00.png", "2770": "log0804_073839_64.00.png", "2771": "log0804_073839_65.00.png", "2772": "log0804_073839_66.00.png", "2773": "log0804_073839_67.00.png", "2774": "log0804_073839_68.00.png", "2775": "log0804_073839_69.00.png", "2776": "log0804_073839_7.00.png", "2777": "log0804_073839_8.00.png", "2778": "log0804_073839_9.00.png", "2779": "log0804_112222_1.00.png", "2780": "log0804_112222_10.00.png", "2781": "log0804_112222_100.00.png", "2782": 
"log0804_112222_101.00.png", "2783": "log0804_112222_102.00.png", "2784": "log0804_112222_103.00.png", "2785": "log0804_112222_104.00.png", "2786": "log0804_112222_105.00.png", "2787": "log0804_112222_106.00.png", "2788": "log0804_112222_107.00.png", "2789": "log0804_112222_108.00.png", "2790": "log0804_112222_109.00.png", "2791": "log0804_112222_11.00.png", "2792": "log0804_112222_110.00.png", "2793": "log0804_112222_111.00.png", "2794": "log0804_112222_112.00.png", "2795": "log0804_112222_113.00.png", "2796": "log0804_112222_114.00.png", "2797": "log0804_112222_115.00.png", "2798": "log0804_112222_116.00.png", "2799": "log0804_112222_117.00.png", "2800": "log0804_112222_118.00.png", "2801": "log0804_112222_119.00.png", "2802": "log0804_112222_12.00.png", "2803": "log0804_112222_120.00.png", "2804": "log0804_112222_121.00.png", "2805": "log0804_112222_122.00.png", "2806": "log0804_112222_123.00.png", "2807": "log0804_112222_124.00.png", "2808": "log0804_112222_125.00.png", "2809": "log0804_112222_126.00.png", "2810": "log0804_112222_127.00.png", "2811": "log0804_112222_128.00.png", "2812": "log0804_112222_129.00.png", "2813": "log0804_112222_13.00.png", "2814": "log0804_112222_130.00.png", "2815": "log0804_112222_131.00.png", "2816": "log0804_112222_132.00.png", "2817": "log0804_112222_133.00.png", "2818": "log0804_112222_134.00.png", "2819": "log0804_112222_135.00.png", "2820": "log0804_112222_136.00.png", "2821": "log0804_112222_137.00.png", "2822": "log0804_112222_138.00.png", "2823": "log0804_112222_139.00.png", "2824": "log0804_112222_14.00.png", "2825": "log0804_112222_140.00.png", "2826": "log0804_112222_141.00.png", "2827": "log0804_112222_142.00.png", "2828": "log0804_112222_143.00.png", "2829": "log0804_112222_144.00.png", "2830": "log0804_112222_145.00.png", "2831": "log0804_112222_146.00.png", "2832": "log0804_112222_147.00.png", "2833": "log0804_112222_148.00.png", "2834": "log0804_112222_149.00.png", "2835": "log0804_112222_15.00.png", "2836": "log0804_112222_150.00.png", "2837": "log0804_112222_151.00.png", "2838": "log0804_112222_152.00.png", "2839": "log0804_112222_153.00.png", "2840": "log0804_112222_154.00.png", "2841": "log0804_112222_155.00.png", "2842": "log0804_112222_156.00.png", "2843": "log0804_112222_157.00.png", "2844": "log0804_112222_158.00.png", "2845": "log0804_112222_159.00.png", "2846": "log0804_112222_16.00.png", "2847": "log0804_112222_160.00.png", "2848": "log0804_112222_161.00.png", "2849": "log0804_112222_162.00.png", "2850": "log0804_112222_163.00.png", "2851": "log0804_112222_164.00.png", "2852": "log0804_112222_165.00.png", "2853": "log0804_112222_166.00.png", "2854": "log0804_112222_167.00.png", "2855": "log0804_112222_168.00.png", "2856": "log0804_112222_169.00.png", "2857": "log0804_112222_17.00.png", "2858": "log0804_112222_170.00.png", "2859": "log0804_112222_171.00.png", "2860": "log0804_112222_172.00.png", "2861": "log0804_112222_173.00.png", "2862": "log0804_112222_174.00.png", "2863": "log0804_112222_175.00.png", "2864": "log0804_112222_176.00.png", "2865": "log0804_112222_18.00.png", "2866": "log0804_112222_19.00.png", "2867": "log0804_112222_2.00.png", "2868": "log0804_112222_20.00.png", "2869": "log0804_112222_21.00.png", "2870": "log0804_112222_22.00.png", "2871": "log0804_112222_23.00.png", "2872": "log0804_112222_24.00.png", "2873": "log0804_112222_25.00.png", "2874": "log0804_112222_26.00.png", "2875": "log0804_112222_27.00.png", "2876": "log0804_112222_28.00.png", "2877": "log0804_112222_29.00.png", "2878": 
"log0804_112222_3.00.png", "2879": "log0804_112222_30.00.png", "2880": "log0804_112222_31.00.png", "2881": "log0804_112222_32.00.png", "2882": "log0804_112222_33.00.png", "2883": "log0804_112222_34.00.png", "2884": "log0804_112222_35.00.png", "2885": "log0804_112222_36.00.png", "2886": "log0804_112222_37.00.png", "2887": "log0804_112222_38.00.png", "2888": "log0804_112222_39.00.png", "2889": "log0804_112222_4.00.png", "2890": "log0804_112222_40.00.png", "2891": "log0804_112222_41.00.png", "2892": "log0804_112222_42.00.png", "2893": "log0804_112222_43.00.png", "2894": "log0804_112222_44.00.png", "2895": "log0804_112222_45.00.png", "2896": "log0804_112222_46.00.png", "2897": "log0804_112222_47.00.png", "2898": "log0804_112222_48.00.png", "2899": "log0804_112222_49.00.png", "2900": "log0804_112222_5.00.png", "2901": "log0804_112222_50.00.png", "2902": "log0804_112222_51.00.png", "2903": "log0804_112222_52.00.png", "2904": "log0804_112222_53.00.png", "2905": "log0804_112222_54.00.png", "2906": "log0804_112222_55.00.png", "2907": "log0804_112222_56.00.png", "2908": "log0804_112222_57.00.png", "2909": "log0804_112222_58.00.png", "2910": "log0804_112222_59.00.png", "2911": "log0804_112222_6.00.png", "2912": "log0804_112222_60.00.png", "2913": "log0804_112222_61.00.png", "2914": "log0804_112222_62.00.png", "2915": "log0804_112222_63.00.png", "2916": "log0804_112222_64.00.png", "2917": "log0804_112222_65.00.png", "2918": "log0804_112222_66.00.png", "2919": "log0804_112222_67.00.png", "2920": "log0804_112222_68.00.png", "2921": "log0804_112222_69.00.png", "2922": "log0804_112222_7.00.png", "2923": "log0804_112222_70.00.png", "2924": "log0804_112222_71.00.png", "2925": "log0804_112222_72.00.png", "2926": "log0804_112222_73.00.png", "2927": "log0804_112222_74.00.png", "2928": "log0804_112222_75.00.png", "2929": "log0804_112222_76.00.png", "2930": "log0804_112222_77.00.png", "2931": "log0804_112222_78.00.png", "2932": "log0804_112222_79.00.png", "2933": "log0804_112222_8.00.png", "2934": "log0804_112222_80.00.png", "2935": "log0804_112222_81.00.png", "2936": "log0804_112222_82.00.png", "2937": "log0804_112222_83.00.png", "2938": "log0804_112222_84.00.png", "2939": "log0804_112222_85.00.png", "2940": "log0804_112222_86.00.png", "2941": "log0804_112222_87.00.png", "2942": "log0804_112222_88.00.png", "2943": "log0804_112222_89.00.png", "2944": "log0804_112222_9.00.png", "2945": "log0804_112222_90.00.png", "2946": "log0804_112222_91.00.png", "2947": "log0804_112222_92.00.png", "2948": "log0804_112222_93.00.png", "2949": "log0804_112222_94.00.png", "2950": "log0804_112222_95.00.png", "2951": "log0804_112222_96.00.png", "2952": "log0804_112222_97.00.png", "2953": "log0804_112222_98.00.png", "2954": "log0804_112222_99.00.png", "2955": "log0804_150604_1.00.png", "2956": "log0804_150604_10.00.png", "2957": "log0804_150604_100.00.png", "2958": "log0804_150604_101.00.png", "2959": "log0804_150604_102.00.png", "2960": "log0804_150604_103.00.png", "2961": "log0804_150604_104.00.png", "2962": "log0804_150604_105.00.png", "2963": "log0804_150604_106.00.png", "2964": "log0804_150604_107.00.png", "2965": "log0804_150604_108.00.png", "2966": "log0804_150604_109.00.png", "2967": "log0804_150604_11.00.png", "2968": "log0804_150604_110.00.png", "2969": "log0804_150604_111.00.png", "2970": "log0804_150604_112.00.png", "2971": "log0804_150604_113.00.png", "2972": "log0804_150604_114.00.png", "2973": "log0804_150604_115.00.png", "2974": "log0804_150604_116.00.png", "2975": "log0804_150604_117.00.png", "2976": 
"log0804_150604_118.00.png", "2977": "log0804_150604_119.00.png", "2978": "log0804_150604_12.00.png", "2979": "log0804_150604_120.00.png", "2980": "log0804_150604_121.00.png", "2981": "log0804_150604_122.00.png", "2982": "log0804_150604_123.00.png", "2983": "log0804_150604_124.00.png", "2984": "log0804_150604_125.00.png", "2985": "log0804_150604_126.00.png", "2986": "log0804_150604_127.00.png", "2987": "log0804_150604_128.00.png", "2988": "log0804_150604_129.00.png", "2989": "log0804_150604_13.00.png", "2990": "log0804_150604_130.00.png", "2991": "log0804_150604_131.00.png", "2992": "log0804_150604_132.00.png", "2993": "log0804_150604_133.00.png", "2994": "log0804_150604_134.00.png", "2995": "log0804_150604_135.00.png", "2996": "log0804_150604_136.00.png", "2997": "log0804_150604_137.00.png", "2998": "log0804_150604_138.00.png", "2999": "log0804_150604_139.00.png", "3000": "log0804_150604_14.00.png", "3001": "log0804_150604_140.00.png", "3002": "log0804_150604_141.00.png", "3003": "log0804_150604_142.00.png", "3004": "log0804_150604_143.00.png", "3005": "log0804_150604_144.00.png", "3006": "log0804_150604_145.00.png", "3007": "log0804_150604_146.00.png", "3008": "log0804_150604_147.00.png", "3009": "log0804_150604_148.00.png", "3010": "log0804_150604_149.00.png", "3011": "log0804_150604_15.00.png", "3012": "log0804_150604_150.00.png", "3013": "log0804_150604_151.00.png", "3014": "log0804_150604_152.00.png", "3015": "log0804_150604_153.00.png", "3016": "log0804_150604_154.00.png", "3017": "log0804_150604_155.00.png", "3018": "log0804_150604_156.00.png", "3019": "log0804_150604_157.00.png", "3020": "log0804_150604_158.00.png", "3021": "log0804_150604_159.00.png", "3022": "log0804_150604_16.00.png", "3023": "log0804_150604_160.00.png", "3024": "log0804_150604_161.00.png", "3025": "log0804_150604_162.00.png", "3026": "log0804_150604_163.00.png", "3027": "log0804_150604_164.00.png", "3028": "log0804_150604_165.00.png", "3029": "log0804_150604_166.00.png", "3030": "log0804_150604_167.00.png", "3031": "log0804_150604_168.00.png", "3032": "log0804_150604_169.00.png", "3033": "log0804_150604_17.00.png", "3034": "log0804_150604_170.00.png", "3035": "log0804_150604_171.00.png", "3036": "log0804_150604_172.00.png", "3037": "log0804_150604_173.00.png", "3038": "log0804_150604_174.00.png", "3039": "log0804_150604_175.00.png", "3040": "log0804_150604_176.00.png", "3041": "log0804_150604_177.00.png", "3042": "log0804_150604_178.00.png", "3043": "log0804_150604_179.00.png", "3044": "log0804_150604_18.00.png", "3045": "log0804_150604_180.00.png", "3046": "log0804_150604_181.00.png", "3047": "log0804_150604_182.00.png", "3048": "log0804_150604_183.00.png", "3049": "log0804_150604_19.00.png", "3050": "log0804_150604_2.00.png", "3051": "log0804_150604_20.00.png", "3052": "log0804_150604_21.00.png", "3053": "log0804_150604_22.00.png", "3054": "log0804_150604_23.00.png", "3055": "log0804_150604_24.00.png", "3056": "log0804_150604_25.00.png", "3057": "log0804_150604_26.00.png", "3058": "log0804_150604_27.00.png", "3059": "log0804_150604_28.00.png", "3060": "log0804_150604_29.00.png", "3061": "log0804_150604_3.00.png", "3062": "log0804_150604_30.00.png", "3063": "log0804_150604_31.00.png", "3064": "log0804_150604_32.00.png", "3065": "log0804_150604_33.00.png", "3066": "log0804_150604_34.00.png", "3067": "log0804_150604_35.00.png", "3068": "log0804_150604_36.00.png", "3069": "log0804_150604_37.00.png", "3070": "log0804_150604_38.00.png", "3071": "log0804_150604_39.00.png", "3072": "log0804_150604_4.00.png", "3073": 
"log0804_150604_40.00.png", "3074": "log0804_150604_41.00.png", "3075": "log0804_150604_42.00.png", "3076": "log0804_150604_43.00.png", "3077": "log0804_150604_44.00.png", "3078": "log0804_150604_45.00.png", "3079": "log0804_150604_46.00.png", "3080": "log0804_150604_47.00.png", "3081": "log0804_150604_48.00.png", "3082": "log0804_150604_49.00.png", "3083": "log0804_150604_5.00.png", "3084": "log0804_150604_50.00.png", "3085": "log0804_150604_51.00.png", "3086": "log0804_150604_52.00.png", "3087": "log0804_150604_53.00.png", "3088": "log0804_150604_54.00.png", "3089": "log0804_150604_55.00.png", "3090": "log0804_150604_56.00.png", "3091": "log0804_150604_57.00.png", "3092": "log0804_150604_58.00.png", "3093": "log0804_150604_59.00.png", "3094": "log0804_150604_6.00.png", "3095": "log0804_150604_60.00.png", "3096": "log0804_150604_61.00.png", "3097": "log0804_150604_62.00.png", "3098": "log0804_150604_63.00.png", "3099": "log0804_150604_64.00.png", "3100": "log0804_150604_65.00.png", "3101": "log0804_150604_66.00.png", "3102": "log0804_150604_67.00.png", "3103": "log0804_150604_68.00.png", "3104": "log0804_150604_69.00.png", "3105": "log0804_150604_7.00.png", "3106": "log0804_150604_70.00.png", "3107": "log0804_150604_71.00.png", "3108": "log0804_150604_72.00.png", "3109": "log0804_150604_73.00.png", "3110": "log0804_150604_74.00.png", "3111": "log0804_150604_75.00.png", "3112": "log0804_150604_76.00.png", "3113": "log0804_150604_77.00.png", "3114": "log0804_150604_78.00.png", "3115": "log0804_150604_79.00.png", "3116": "log0804_150604_8.00.png", "3117": "log0804_150604_80.00.png", "3118": "log0804_150604_81.00.png", "3119": "log0804_150604_82.00.png", "3120": "log0804_150604_83.00.png", "3121": "log0804_150604_84.00.png", "3122": "log0804_150604_85.00.png", "3123": "log0804_150604_86.00.png", "3124": "log0804_150604_87.00.png", "3125": "log0804_150604_88.00.png", "3126": "log0804_150604_89.00.png", "3127": "log0804_150604_9.00.png", "3128": "log0804_150604_90.00.png", "3129": "log0804_150604_91.00.png", "3130": "log0804_150604_92.00.png", "3131": "log0804_150604_93.00.png", "3132": "log0804_150604_94.00.png", "3133": "log0804_150604_96.00.png", "3134": "log0804_150604_97.00.png", "3135": "log0804_150604_98.00.png", "3136": "log0804_150604_99.00.png", "3137": "log0804_184946_1.00.png", "3138": "log0804_184946_10.00.png", "3139": "log0804_184946_11.00.png", "3140": "log0804_184946_12.00.png", "3141": "log0804_184946_13.00.png", "3142": "log0804_184946_14.00.png", "3143": "log0804_184946_15.00.png", "3144": "log0804_184946_16.00.png", "3145": "log0804_184946_17.00.png", "3146": "log0804_184946_18.00.png", "3147": "log0804_184946_19.00.png", "3148": "log0804_184946_2.00.png", "3149": "log0804_184946_20.00.png", "3150": "log0804_184946_21.00.png", "3151": "log0804_184946_22.00.png", "3152": "log0804_184946_23.00.png", "3153": "log0804_184946_24.00.png", "3154": "log0804_184946_25.00.png", "3155": "log0804_184946_26.00.png", "3156": "log0804_184946_27.00.png", "3157": "log0804_184946_28.00.png", "3158": "log0804_184946_29.00.png", "3159": "log0804_184946_3.00.png", "3160": "log0804_184946_30.00.png", "3161": "log0804_184946_31.00.png", "3162": "log0804_184946_32.00.png", "3163": "log0804_184946_33.00.png", "3164": "log0804_184946_34.00.png", "3165": "log0804_184946_35.00.png", "3166": "log0804_184946_36.00.png", "3167": "log0804_184946_37.00.png", "3168": "log0804_184946_38.00.png", "3169": "log0804_184946_39.00.png", "3170": "log0804_184946_4.00.png", "3171": "log0804_184946_40.00.png", "3172": 
"log0804_184946_41.00.png", "3173": "log0804_184946_42.00.png", "3174": "log0804_184946_43.00.png", "3175": "log0804_184946_44.00.png", "3176": "log0804_184946_45.00.png", "3177": "log0804_184946_46.00.png", "3178": "log0804_184946_47.00.png", "3179": "log0804_184946_48.00.png", "3180": "log0804_184946_49.00.png", "3181": "log0804_184946_5.00.png", "3182": "log0804_184946_50.00.png", "3183": "log0804_184946_51.00.png", "3184": "log0804_184946_52.00.png", "3185": "log0804_184946_53.00.png", "3186": "log0804_184946_54.00.png", "3187": "log0804_184946_55.00.png", "3188": "log0804_184946_56.00.png", "3189": "log0804_184946_57.00.png", "3190": "log0804_184946_58.00.png", "3191": "log0804_184946_59.00.png", "3192": "log0804_184946_6.00.png", "3193": "log0804_184946_60.00.png", "3194": "log0804_184946_61.00.png", "3195": "log0804_184946_62.00.png", "3196": "log0804_184946_63.00.png", "3197": "log0804_184946_64.00.png", "3198": "log0804_184946_65.00.png", "3199": "log0804_184946_66.00.png", "3200": "log0804_184946_67.00.png", "3201": "log0804_184946_68.00.png", "3202": "log0804_184946_69.00.png", "3203": "log0804_184946_7.00.png", "3204": "log0804_184946_70.00.png", "3205": "log0804_184946_71.00.png", "3206": "log0804_184946_72.00.png", "3207": "log0804_184946_73.00.png", "3208": "log0804_184946_74.00.png", "3209": "log0804_184946_75.00.png", "3210": "log0804_184946_76.00.png", "3211": "log0804_184946_77.00.png", "3212": "log0804_184946_78.00.png", "3213": "log0804_184946_79.00.png", "3214": "log0804_184946_8.00.png", "3215": "log0804_184946_80.00.png", "3216": "log0804_184946_81.00.png", "3217": "log0804_184946_82.00.png", "3218": "log0804_184946_83.00.png", "3219": "log0804_184946_84.00.png", "3220": "log0804_184946_85.00.png", "3221": "log0804_184946_86.00.png", "3222": "log0804_184946_87.00.png", "3223": "log0804_184946_88.00.png", "3224": "log0804_184946_89.00.png", "3225": "log0804_184946_9.00.png", "3226": "log0804_184946_90.00.png", "3227": "log0804_184946_91.00.png", "3228": "log0804_184946_92.00.png", "3229": "log0804_184946_93.00.png", "3230": "log0804_184946_94.00.png", "3231": "log0804_184946_95.00.png", "3232": "log0804_184946_96.00.png", "3233": "log0804_184946_97.00.png", "3234": "log0804_184946_98.00.png", "3235": "log0804_184946_99.00.png", "3236": "log0804_223328_1.00.png", "3237": "log0804_223328_10.00.png", "3238": "log0804_223328_11.00.png", "3239": "log0804_223328_12.00.png", "3240": "log0804_223328_13.00.png", "3241": "log0804_223328_14.00.png", "3242": "log0804_223328_15.00.png", "3243": "log0804_223328_16.00.png", "3244": "log0804_223328_17.00.png", "3245": "log0804_223328_18.00.png", "3246": "log0804_223328_19.00.png", "3247": "log0804_223328_2.00.png", "3248": "log0804_223328_20.00.png", "3249": "log0804_223328_21.00.png", "3250": "log0804_223328_22.00.png", "3251": "log0804_223328_23.00.png", "3252": "log0804_223328_24.00.png", "3253": "log0804_223328_25.00.png", "3254": "log0804_223328_26.00.png", "3255": "log0804_223328_27.00.png", "3256": "log0804_223328_28.00.png", "3257": "log0804_223328_29.00.png", "3258": "log0804_223328_3.00.png", "3259": "log0804_223328_30.00.png", "3260": "log0804_223328_31.00.png", "3261": "log0804_223328_32.00.png", "3262": "log0804_223328_33.00.png", "3263": "log0804_223328_34.00.png", "3264": "log0804_223328_35.00.png", "3265": "log0804_223328_36.00.png", "3266": "log0804_223328_37.00.png", "3267": "log0804_223328_38.00.png", "3268": "log0804_223328_39.00.png", "3269": "log0804_223328_4.00.png", "3270": "log0804_223328_40.00.png", "3271": 
"log0804_223328_41.00.png", "3272": "log0804_223328_42.00.png", "3273": "log0804_223328_43.00.png", "3274": "log0804_223328_44.00.png", "3275": "log0804_223328_45.00.png", "3276": "log0804_223328_46.00.png", "3277": "log0804_223328_47.00.png", "3278": "log0804_223328_48.00.png", "3279": "log0804_223328_49.00.png", "3280": "log0804_223328_5.00.png", "3281": "log0804_223328_50.00.png", "3282": "log0804_223328_51.00.png", "3283": "log0804_223328_52.00.png", "3284": "log0804_223328_53.00.png", "3285": "log0804_223328_54.00.png", "3286": "log0804_223328_55.00.png", "3287": "log0804_223328_56.00.png", "3288": "log0804_223328_57.00.png", "3289": "log0804_223328_58.00.png", "3290": "log0804_223328_59.00.png", "3291": "log0804_223328_6.00.png", "3292": "log0804_223328_60.00.png", "3293": "log0804_223328_61.00.png", "3294": "log0804_223328_62.00.png", "3295": "log0804_223328_63.00.png", "3296": "log0804_223328_64.00.png", "3297": "log0804_223328_65.00.png", "3298": "log0804_223328_66.00.png", "3299": "log0804_223328_67.00.png", "3300": "log0804_223328_68.00.png", "3301": "log0804_223328_69.00.png", "3302": "log0804_223328_7.00.png", "3303": "log0804_223328_70.00.png", "3304": "log0804_223328_71.00.png", "3305": "log0804_223328_72.00.png", "3306": "log0804_223328_73.00.png", "3307": "log0804_223328_74.00.png", "3308": "log0804_223328_75.00.png", "3309": "log0804_223328_76.00.png", "3310": "log0804_223328_77.00.png", "3311": "log0804_223328_78.00.png", "3312": "log0804_223328_79.00.png", "3313": "log0804_223328_8.00.png", "3314": "log0804_223328_80.00.png", "3315": "log0804_223328_81.00.png", "3316": "log0804_223328_82.00.png", "3317": "log0804_223328_83.00.png", "3318": "log0804_223328_84.00.png", "3319": "log0804_223328_85.00.png", "3320": "log0804_223328_86.00.png", "3321": "log0804_223328_87.00.png", "3322": "log0804_223328_88.00.png", "3323": "log0804_223328_89.00.png", "3324": "log0804_223328_9.00.png", "3325": "log0804_223328_90.00.png", "3326": "log0804_223328_91.00.png", "3327": "log0804_223328_92.00.png", "3328": "log0804_223328_93.00.png", "3329": "log0804_223328_94.00.png", "3330": "log0804_223328_95.00.png", "3331": "log0805_021710_1.00.png", "3332": "log0805_021710_2.00.png", "3333": "log0805_021710_3.00.png", "3334": "log0805_021710_4.00.png", "3335": "log0805_021710_5.00.png", "3336": "log0805_060053_1.00.png", "3337": "log0805_060053_10.00.png", "3338": "log0805_060053_11.00.png", "3339": "log0805_060053_12.00.png", "3340": "log0805_060053_13.00.png", "3341": "log0805_060053_14.00.png", "3342": "log0805_060053_15.00.png", "3343": "log0805_060053_16.00.png", "3344": "log0805_060053_17.00.png", "3345": "log0805_060053_18.00.png", "3346": "log0805_060053_19.00.png", "3347": "log0805_060053_2.00.png", "3348": "log0805_060053_20.00.png", "3349": "log0805_060053_21.00.png", "3350": "log0805_060053_22.00.png", "3351": "log0805_060053_23.00.png", "3352": "log0805_060053_24.00.png", "3353": "log0805_060053_25.00.png", "3354": "log0805_060053_26.00.png", "3355": "log0805_060053_27.00.png", "3356": "log0805_060053_28.00.png", "3357": "log0805_060053_29.00.png", "3358": "log0805_060053_3.00.png", "3359": "log0805_060053_30.00.png", "3360": "log0805_060053_31.00.png", "3361": "log0805_060053_32.00.png", "3362": "log0805_060053_33.00.png", "3363": "log0805_060053_34.00.png", "3364": "log0805_060053_35.00.png", "3365": "log0805_060053_36.00.png", "3366": "log0805_060053_37.00.png", "3367": "log0805_060053_38.00.png", "3368": "log0805_060053_39.00.png", "3369": "log0805_060053_4.00.png", "3370": 
"log0805_060053_40.00.png", "3371": "log0805_060053_41.00.png", "3372": "log0805_060053_42.00.png", "3373": "log0805_060053_43.00.png", "3374": "log0805_060053_44.00.png", "3375": "log0805_060053_45.00.png", "3376": "log0805_060053_46.00.png", "3377": "log0805_060053_47.00.png", "3378": "log0805_060053_48.00.png", "3379": "log0805_060053_49.00.png", "3380": "log0805_060053_5.00.png", "3381": "log0805_060053_50.00.png", "3382": "log0805_060053_51.00.png", "3383": "log0805_060053_52.00.png", "3384": "log0805_060053_53.00.png", "3385": "log0805_060053_54.00.png", "3386": "log0805_060053_55.00.png", "3387": "log0805_060053_56.00.png", "3388": "log0805_060053_57.00.png", "3389": "log0805_060053_58.00.png", "3390": "log0805_060053_59.00.png", "3391": "log0805_060053_6.00.png", "3392": "log0805_060053_60.00.png", "3393": "log0805_060053_61.00.png", "3394": "log0805_060053_62.00.png", "3395": "log0805_060053_63.00.png", "3396": "log0805_060053_64.00.png", "3397": "log0805_060053_65.00.png", "3398": "log0805_060053_66.00.png", "3399": "log0805_060053_67.00.png", "3400": "log0805_060053_68.00.png", "3401": "log0805_060053_69.00.png", "3402": "log0805_060053_7.00.png", "3403": "log0805_060053_70.00.png", "3404": "log0805_060053_71.00.png", "3405": "log0805_060053_72.00.png", "3406": "log0805_060053_73.00.png", "3407": "log0805_060053_74.00.png", "3408": "log0805_060053_75.00.png", "3409": "log0805_060053_76.00.png", "3410": "log0805_060053_77.00.png", "3411": "log0805_060053_78.00.png", "3412": "log0805_060053_79.00.png", "3413": "log0805_060053_8.00.png", "3414": "log0805_060053_80.00.png", "3415": "log0805_060053_81.00.png", "3416": "log0805_060053_82.00.png", "3417": "log0805_060053_83.00.png", "3418": "log0805_060053_84.00.png", "3419": "log0805_060053_85.00.png", "3420": "log0805_060053_86.00.png", "3421": "log0805_060053_87.00.png", "3422": "log0805_060053_88.00.png", "3423": "log0805_060053_9.00.png", "3424": "log0805_094435_1.00.png", "3425": "log0805_094435_10.00.png", "3426": "log0805_094435_11.00.png", "3427": "log0805_094435_12.00.png", "3428": "log0805_094435_13.00.png", "3429": "log0805_094435_14.00.png", "3430": "log0805_094435_15.00.png", "3431": "log0805_094435_16.00.png", "3432": "log0805_094435_17.00.png", "3433": "log0805_094435_18.00.png", "3434": "log0805_094435_19.00.png", "3435": "log0805_094435_2.00.png", "3436": "log0805_094435_20.00.png", "3437": "log0805_094435_21.00.png", "3438": "log0805_094435_22.00.png", "3439": "log0805_094435_23.00.png", "3440": "log0805_094435_24.00.png", "3441": "log0805_094435_25.00.png", "3442": "log0805_094435_26.00.png", "3443": "log0805_094435_27.00.png", "3444": "log0805_094435_28.00.png", "3445": "log0805_094435_29.00.png", "3446": "log0805_094435_3.00.png", "3447": "log0805_094435_30.00.png", "3448": "log0805_094435_31.00.png", "3449": "log0805_094435_32.00.png", "3450": "log0805_094435_33.00.png", "3451": "log0805_094435_34.00.png", "3452": "log0805_094435_35.00.png", "3453": "log0805_094435_36.00.png", "3454": "log0805_094435_37.00.png", "3455": "log0805_094435_38.00.png", "3456": "log0805_094435_39.00.png", "3457": "log0805_094435_4.00.png", "3458": "log0805_094435_40.00.png", "3459": "log0805_094435_41.00.png", "3460": "log0805_094435_42.00.png", "3461": "log0805_094435_43.00.png", "3462": "log0805_094435_44.00.png", "3463": "log0805_094435_45.00.png", "3464": "log0805_094435_46.00.png", "3465": "log0805_094435_47.00.png", "3466": "log0805_094435_48.00.png", "3467": "log0805_094435_49.00.png", "3468": "log0805_094435_5.00.png", "3469": 
"log0805_094435_50.00.png", "3470": "log0805_094435_51.00.png", "3471": "log0805_094435_52.00.png", "3472": "log0805_094435_53.00.png", "3473": "log0805_094435_54.00.png", "3474": "log0805_094435_55.00.png", "3475": "log0805_094435_56.00.png", "3476": "log0805_094435_57.00.png", "3477": "log0805_094435_58.00.png", "3478": "log0805_094435_59.00.png", "3479": "log0805_094435_6.00.png", "3480": "log0805_094435_60.00.png", "3481": "log0805_094435_61.00.png", "3482": "log0805_094435_62.00.png", "3483": "log0805_094435_63.00.png", "3484": "log0805_094435_64.00.png", "3485": "log0805_094435_65.00.png", "3486": "log0805_094435_66.00.png", "3487": "log0805_094435_67.00.png", "3488": "log0805_094435_68.00.png", "3489": "log0805_094435_69.00.png", "3490": "log0805_094435_7.00.png", "3491": "log0805_094435_70.00.png", "3492": "log0805_094435_71.00.png", "3493": "log0805_094435_72.00.png", "3494": "log0805_094435_73.00.png", "3495": "log0805_094435_74.00.png", "3496": "log0805_094435_75.00.png", "3497": "log0805_094435_76.00.png", "3498": "log0805_094435_77.00.png", "3499": "log0805_094435_78.00.png", "3500": "log0805_094435_79.00.png", "3501": "log0805_094435_8.00.png", "3502": "log0805_094435_80.00.png", "3503": "log0805_094435_81.00.png", "3504": "log0805_094435_82.00.png", "3505": "log0805_094435_83.00.png", "3506": "log0805_094435_84.00.png", "3507": "log0805_094435_85.00.png", "3508": "log0805_094435_86.00.png", "3509": "log0805_094435_87.00.png", "3510": "log0805_094435_88.00.png", "3511": "log0805_094435_89.00.png", "3512": "log0805_094435_9.00.png", "3513": "log0805_094435_90.00.png", "3514": "log0805_094435_91.00.png", "3515": "log0805_094435_92.00.png", "3516": "log0805_094435_93.00.png", "3517": "log0805_094435_94.00.png", "3518": "log0805_094435_95.00.png", "3519": "log0805_132817_1.00.png", "3520": "log0805_132817_10.00.png", "3521": "log0805_132817_100.00.png", "3522": "log0805_132817_101.00.png", "3523": "log0805_132817_102.00.png", "3524": "log0805_132817_103.00.png", "3525": "log0805_132817_104.00.png", "3526": "log0805_132817_105.00.png", "3527": "log0805_132817_106.00.png", "3528": "log0805_132817_107.00.png", "3529": "log0805_132817_108.00.png", "3530": "log0805_132817_109.00.png", "3531": "log0805_132817_11.00.png", "3532": "log0805_132817_110.00.png", "3533": "log0805_132817_111.00.png", "3534": "log0805_132817_112.00.png", "3535": "log0805_132817_113.00.png", "3536": "log0805_132817_114.00.png", "3537": "log0805_132817_115.00.png", "3538": "log0805_132817_116.00.png", "3539": "log0805_132817_117.00.png", "3540": "log0805_132817_118.00.png", "3541": "log0805_132817_119.00.png", "3542": "log0805_132817_12.00.png", "3543": "log0805_132817_120.00.png", "3544": "log0805_132817_121.00.png", "3545": "log0805_132817_122.00.png", "3546": "log0805_132817_123.00.png", "3547": "log0805_132817_124.00.png", "3548": "log0805_132817_125.00.png", "3549": "log0805_132817_126.00.png", "3550": "log0805_132817_127.00.png", "3551": "log0805_132817_128.00.png", "3552": "log0805_132817_129.00.png", "3553": "log0805_132817_13.00.png", "3554": "log0805_132817_130.00.png", "3555": "log0805_132817_131.00.png", "3556": "log0805_132817_132.00.png", "3557": "log0805_132817_133.00.png", "3558": "log0805_132817_134.00.png", "3559": "log0805_132817_135.00.png", "3560": "log0805_132817_136.00.png", "3561": "log0805_132817_137.00.png", "3562": "log0805_132817_138.00.png", "3563": "log0805_132817_139.00.png", "3564": "log0805_132817_14.00.png", "3565": "log0805_132817_140.00.png", "3566": 
"log0805_132817_141.00.png", "3567": "log0805_132817_142.00.png", "3568": "log0805_132817_143.00.png", "3569": "log0805_132817_144.00.png", "3570": "log0805_132817_145.00.png", "3571": "log0805_132817_146.00.png", "3572": "log0805_132817_147.00.png", "3573": "log0805_132817_148.00.png", "3574": "log0805_132817_149.00.png", "3575": "log0805_132817_15.00.png", "3576": "log0805_132817_150.00.png", "3577": "log0805_132817_151.00.png", "3578": "log0805_132817_152.00.png", "3579": "log0805_132817_153.00.png", "3580": "log0805_132817_154.00.png", "3581": "log0805_132817_155.00.png", "3582": "log0805_132817_156.00.png", "3583": "log0805_132817_157.00.png", "3584": "log0805_132817_158.00.png", "3585": "log0805_132817_159.00.png", "3586": "log0805_132817_16.00.png", "3587": "log0805_132817_160.00.png", "3588": "log0805_132817_161.00.png", "3589": "log0805_132817_162.00.png", "3590": "log0805_132817_163.00.png", "3591": "log0805_132817_164.00.png", "3592": "log0805_132817_165.00.png", "3593": "log0805_132817_166.00.png", "3594": "log0805_132817_167.00.png", "3595": "log0805_132817_168.00.png", "3596": "log0805_132817_169.00.png", "3597": "log0805_132817_17.00.png", "3598": "log0805_132817_170.00.png", "3599": "log0805_132817_171.00.png", "3600": "log0805_132817_172.00.png", "3601": "log0805_132817_173.00.png", "3602": "log0805_132817_174.00.png", "3603": "log0805_132817_175.00.png", "3604": "log0805_132817_176.00.png", "3605": "log0805_132817_177.00.png", "3606": "log0805_132817_18.00.png", "3607": "log0805_132817_19.00.png", "3608": "log0805_132817_2.00.png", "3609": "log0805_132817_20.00.png", "3610": "log0805_132817_21.00.png", "3611": "log0805_132817_22.00.png", "3612": "log0805_132817_23.00.png", "3613": "log0805_132817_24.00.png", "3614": "log0805_132817_25.00.png", "3615": "log0805_132817_26.00.png", "3616": "log0805_132817_27.00.png", "3617": "log0805_132817_28.00.png", "3618": "log0805_132817_29.00.png", "3619": "log0805_132817_3.00.png", "3620": "log0805_132817_30.00.png", "3621": "log0805_132817_31.00.png", "3622": "log0805_132817_32.00.png", "3623": "log0805_132817_33.00.png", "3624": "log0805_132817_34.00.png", "3625": "log0805_132817_35.00.png", "3626": "log0805_132817_36.00.png", "3627": "log0805_132817_37.00.png", "3628": "log0805_132817_38.00.png", "3629": "log0805_132817_39.00.png", "3630": "log0805_132817_4.00.png", "3631": "log0805_132817_40.00.png", "3632": "log0805_132817_41.00.png", "3633": "log0805_132817_42.00.png", "3634": "log0805_132817_43.00.png", "3635": "log0805_132817_44.00.png", "3636": "log0805_132817_45.00.png", "3637": "log0805_132817_46.00.png", "3638": "log0805_132817_47.00.png", "3639": "log0805_132817_48.00.png", "3640": "log0805_132817_49.00.png", "3641": "log0805_132817_5.00.png", "3642": "log0805_132817_50.00.png", "3643": "log0805_132817_51.00.png", "3644": "log0805_132817_52.00.png", "3645": "log0805_132817_53.00.png", "3646": "log0805_132817_54.00.png", "3647": "log0805_132817_55.00.png", "3648": "log0805_132817_56.00.png", "3649": "log0805_132817_57.00.png", "3650": "log0805_132817_58.00.png", "3651": "log0805_132817_59.00.png", "3652": "log0805_132817_6.00.png", "3653": "log0805_132817_60.00.png", "3654": "log0805_132817_61.00.png", "3655": "log0805_132817_62.00.png", "3656": "log0805_132817_63.00.png", "3657": "log0805_132817_64.00.png", "3658": "log0805_132817_65.00.png", "3659": "log0805_132817_66.00.png", "3660": "log0805_132817_67.00.png", "3661": "log0805_132817_68.00.png", "3662": "log0805_132817_69.00.png", "3663": "log0805_132817_7.00.png", 
"3664": "log0805_132817_70.00.png", "3665": "log0805_132817_71.00.png", "3666": "log0805_132817_72.00.png", "3667": "log0805_132817_73.00.png", "3668": "log0805_132817_74.00.png", "3669": "log0805_132817_75.00.png", "3670": "log0805_132817_76.00.png", "3671": "log0805_132817_77.00.png", "3672": "log0805_132817_78.00.png", "3673": "log0805_132817_79.00.png", "3674": "log0805_132817_8.00.png", "3675": "log0805_132817_80.00.png", "3676": "log0805_132817_81.00.png", "3677": "log0805_132817_82.00.png", "3678": "log0805_132817_83.00.png", "3679": "log0805_132817_84.00.png", "3680": "log0805_132817_85.00.png", "3681": "log0805_132817_86.00.png", "3682": "log0805_132817_87.00.png", "3683": "log0805_132817_88.00.png", "3684": "log0805_132817_89.00.png", "3685": "log0805_132817_9.00.png", "3686": "log0805_132817_90.00.png", "3687": "log0805_132817_91.00.png", "3688": "log0805_132817_92.00.png", "3689": "log0805_132817_93.00.png", "3690": "log0805_132817_94.00.png", "3691": "log0805_132817_95.00.png", "3692": "log0805_132817_96.00.png", "3693": "log0805_132817_97.00.png", "3694": "log0805_132817_98.00.png", "3695": "log0805_132817_99.00.png", "3696": "log0805_171159_1.00.png", "3697": "log0805_171159_10.00.png", "3698": "log0805_171159_100.00.png", "3699": "log0805_171159_101.00.png", "3700": "log0805_171159_102.00.png", "3701": "log0805_171159_103.00.png", "3702": "log0805_171159_104.00.png", "3703": "log0805_171159_105.00.png", "3704": "log0805_171159_106.00.png", "3705": "log0805_171159_107.00.png", "3706": "log0805_171159_108.00.png", "3707": "log0805_171159_109.00.png", "3708": "log0805_171159_11.00.png", "3709": "log0805_171159_110.00.png", "3710": "log0805_171159_111.00.png", "3711": "log0805_171159_112.00.png", "3712": "log0805_171159_113.00.png", "3713": "log0805_171159_114.00.png", "3714": "log0805_171159_115.00.png", "3715": "log0805_171159_116.00.png", "3716": "log0805_171159_117.00.png", "3717": "log0805_171159_118.00.png", "3718": "log0805_171159_119.00.png", "3719": "log0805_171159_12.00.png", "3720": "log0805_171159_120.00.png", "3721": "log0805_171159_121.00.png", "3722": "log0805_171159_122.00.png", "3723": "log0805_171159_123.00.png", "3724": "log0805_171159_124.00.png", "3725": "log0805_171159_125.00.png", "3726": "log0805_171159_126.00.png", "3727": "log0805_171159_127.00.png", "3728": "log0805_171159_128.00.png", "3729": "log0805_171159_129.00.png", "3730": "log0805_171159_13.00.png", "3731": "log0805_171159_130.00.png", "3732": "log0805_171159_131.00.png", "3733": "log0805_171159_132.00.png", "3734": "log0805_171159_133.00.png", "3735": "log0805_171159_134.00.png", "3736": "log0805_171159_135.00.png", "3737": "log0805_171159_136.00.png", "3738": "log0805_171159_137.00.png", "3739": "log0805_171159_138.00.png", "3740": "log0805_171159_139.00.png", "3741": "log0805_171159_14.00.png", "3742": "log0805_171159_140.00.png", "3743": "log0805_171159_141.00.png", "3744": "log0805_171159_142.00.png", "3745": "log0805_171159_143.00.png", "3746": "log0805_171159_144.00.png", "3747": "log0805_171159_145.00.png", "3748": "log0805_171159_146.00.png", "3749": "log0805_171159_147.00.png", "3750": "log0805_171159_148.00.png", "3751": "log0805_171159_149.00.png", "3752": "log0805_171159_15.00.png", "3753": "log0805_171159_150.00.png", "3754": "log0805_171159_151.00.png", "3755": "log0805_171159_152.00.png", "3756": "log0805_171159_153.00.png", "3757": "log0805_171159_154.00.png", "3758": "log0805_171159_155.00.png", "3759": "log0805_171159_156.00.png", "3760": "log0805_171159_157.00.png", "3761": 
"log0805_171159_158.00.png", "3762": "log0805_171159_159.00.png", "3763": "log0805_171159_16.00.png", "3764": "log0805_171159_160.00.png", "3765": "log0805_171159_161.00.png", "3766": "log0805_171159_162.00.png", "3767": "log0805_171159_163.00.png", "3768": "log0805_171159_164.00.png", "3769": "log0805_171159_165.00.png", "3770": "log0805_171159_166.00.png", "3771": "log0805_171159_167.00.png", "3772": "log0805_171159_168.00.png", "3773": "log0805_171159_169.00.png", "3774": "log0805_171159_17.00.png", "3775": "log0805_171159_170.00.png", "3776": "log0805_171159_171.00.png", "3777": "log0805_171159_172.00.png", "3778": "log0805_171159_173.00.png", "3779": "log0805_171159_174.00.png", "3780": "log0805_171159_175.00.png", "3781": "log0805_171159_176.00.png", "3782": "log0805_171159_177.00.png", "3783": "log0805_171159_178.00.png", "3784": "log0805_171159_179.00.png", "3785": "log0805_171159_18.00.png", "3786": "log0805_171159_180.00.png", "3787": "log0805_171159_181.00.png", "3788": "log0805_171159_182.00.png", "3789": "log0805_171159_183.00.png", "3790": "log0805_171159_184.00.png", "3791": "log0805_171159_185.00.png", "3792": "log0805_171159_186.00.png", "3793": "log0805_171159_187.00.png", "3794": "log0805_171159_188.00.png", "3795": "log0805_171159_189.00.png", "3796": "log0805_171159_19.00.png", "3797": "log0805_171159_190.00.png", "3798": "log0805_171159_191.00.png", "3799": "log0805_171159_192.00.png", "3800": "log0805_171159_193.00.png", "3801": "log0805_171159_194.00.png", "3802": "log0805_171159_195.00.png", "3803": "log0805_171159_196.00.png", "3804": "log0805_171159_197.00.png", "3805": "log0805_171159_198.00.png", "3806": "log0805_171159_199.00.png", "3807": "log0805_171159_2.00.png", "3808": "log0805_171159_20.00.png", "3809": "log0805_171159_200.00.png", "3810": "log0805_171159_201.00.png", "3811": "log0805_171159_202.00.png", "3812": "log0805_171159_203.00.png", "3813": "log0805_171159_204.00.png", "3814": "log0805_171159_205.00.png", "3815": "log0805_171159_206.00.png", "3816": "log0805_171159_207.00.png", "3817": "log0805_171159_208.00.png", "3818": "log0805_171159_209.00.png", "3819": "log0805_171159_21.00.png", "3820": "log0805_171159_210.00.png", "3821": "log0805_171159_211.00.png", "3822": "log0805_171159_212.00.png", "3823": "log0805_171159_213.00.png", "3824": "log0805_171159_214.00.png", "3825": "log0805_171159_215.00.png", "3826": "log0805_171159_216.00.png", "3827": "log0805_171159_217.00.png", "3828": "log0805_171159_218.00.png", "3829": "log0805_171159_219.00.png", "3830": "log0805_171159_22.00.png", "3831": "log0805_171159_220.00.png", "3832": "log0805_171159_221.00.png", "3833": "log0805_171159_222.00.png", "3834": "log0805_171159_223.00.png", "3835": "log0805_171159_224.00.png", "3836": "log0805_171159_225.00.png", "3837": "log0805_171159_226.00.png", "3838": "log0805_171159_227.00.png", "3839": "log0805_171159_228.00.png", "3840": "log0805_171159_229.00.png", "3841": "log0805_171159_23.00.png", "3842": "log0805_171159_230.00.png", "3843": "log0805_171159_231.00.png", "3844": "log0805_171159_232.00.png", "3845": "log0805_171159_233.00.png", "3846": "log0805_171159_234.00.png", "3847": "log0805_171159_235.00.png", "3848": "log0805_171159_237.00.png", "3849": "log0805_171159_238.00.png", "3850": "log0805_171159_239.00.png", "3851": "log0805_171159_240.00.png", "3852": "log0805_171159_241.00.png", "3853": "log0805_171159_242.00.png", "3854": "log0805_171159_243.00.png", "3855": "log0805_171159_244.00.png", "3856": "log0805_171159_245.00.png", "3857": 
"log0805_171159_246.00.png", "3858": "log0805_171159_247.00.png", "3859": "log0805_171159_248.00.png", "3860": "log0805_171159_249.00.png", "3861": "log0805_171159_250.00.png", "3862": "log0805_171159_251.00.png", "3863": "log0805_171159_252.00.png", "3864": "log0805_171159_253.00.png", "3865": "log0805_171159_254.00.png", "3866": "log0805_171159_255.00.png", "3867": "log0805_171159_256.00.png", "3868": "log0805_171159_257.00.png", "3869": "log0805_171159_258.00.png", "3870": "log0805_171159_259.00.png", "3871": "log0805_171159_260.00.png", "3872": "log0805_171159_261.00.png", "3873": "log0805_171159_262.00.png", "3874": "log0805_171159_263.00.png", "3875": "log0805_171159_264.00.png", "3876": "log0805_171159_265.00.png", "3877": "log0805_171159_266.00.png", "3878": "log0805_171159_267.00.png", "3879": "log0805_171159_268.00.png", "3880": "log0805_171159_269.00.png", "3881": "log0805_171159_270.00.png", "3882": "log0805_171159_271.00.png", "3883": "log0805_171159_272.00.png", "3884": "log0805_171159_273.00.png", "3885": "log0805_171159_274.00.png", "3886": "log0805_171159_275.00.png", "3887": "log0805_171159_276.00.png", "3888": "log0805_171159_277.00.png", "3889": "log0805_171159_278.00.png", "3890": "log0805_171159_279.00.png", "3891": "log0805_171159_280.00.png", "3892": "log0805_171159_281.00.png", "3893": "log0805_171159_282.00.png", "3894": "log0805_171159_283.00.png", "3895": "log0805_171159_284.00.png", "3896": "log0805_171159_285.00.png", "3897": "log0805_171159_286.00.png", "3898": "log0805_171159_287.00.png", "3899": "log0805_171159_288.00.png", "3900": "log0805_171159_289.00.png", "3901": "log0805_171159_290.00.png", "3902": "log0805_171159_291.00.png", "3903": "log0805_171159_292.00.png", "3904": "log0805_171159_293.00.png", "3905": "log0805_171159_294.00.png", "3906": "log0805_171159_295.00.png", "3907": "log0805_171159_296.00.png", "3908": "log0805_171159_297.00.png", "3909": "log0805_171159_298.00.png", "3910": "log0805_171159_299.00.png", "3911": "log0805_171159_300.00.png", "3912": "log0805_171159_301.00.png", "3913": "log0805_171159_302.00.png", "3914": "log0805_171159_303.00.png", "3915": "log0805_171159_304.00.png", "3916": "log0805_171159_305.00.png", "3917": "log0805_171159_306.00.png", "3918": "log0805_171159_307.00.png", "3919": "log0805_171159_308.00.png", "3920": "log0805_171159_309.00.png", "3921": "log0805_171159_310.00.png", "3922": "log0805_171159_311.00.png", "3923": "log0805_171159_312.00.png", "3924": "log0805_171159_313.00.png", "3925": "log0805_171159_314.00.png", "3926": "log0805_171159_315.00.png", "3927": "log0805_171159_316.00.png", "3928": "log0805_171159_317.00.png", "3929": "log0805_171159_318.00.png", "3930": "log0805_171159_319.00.png", "3931": "log0805_171159_320.00.png", "3932": "log0805_171159_321.00.png", "3933": "log0805_171159_322.00.png", "3934": "log0805_171159_323.00.png", "3935": "log0805_171159_324.00.png", "3936": "log0805_171159_325.00.png", "3937": "log0805_171159_326.00.png", "3938": "log0805_171159_327.00.png", "3939": "log0805_171159_328.00.png", "3940": "log0805_171159_329.00.png", "3941": "log0805_171159_330.00.png", "3942": "log0805_171159_331.00.png", "3943": "log0805_171159_332.00.png", "3944": "log0805_171159_333.00.png", "3945": "log0805_171159_334.00.png", "3946": "log0805_171159_335.00.png", "3947": "log0805_171159_336.00.png", "3948": "log0805_171159_337.00.png", "3949": "log0805_171159_338.00.png", "3950": "log0805_171159_339.00.png", "3951": "log0805_171159_340.00.png", "3952": "log0805_171159_341.00.png", "3953": 
"log0805_171159_342.00.png", "3954": "log0805_171159_343.00.png", "3955": "log0805_171159_344.00.png", "3956": "log0805_171159_345.00.png", "3957": "log0805_171159_346.00.png", "3958": "log0805_171159_347.00.png", "3959": "log0805_171159_348.00.png", "3960": "log0805_171159_349.00.png", "3961": "log0805_171159_350.00.png", "3962": "log0805_171159_351.00.png", "3963": "log0805_171159_352.00.png", "3964": "log0805_171159_353.00.png", "3965": "log0805_171159_354.00.png", "3966": "log0805_171159_355.00.png", "3967": "log0805_171159_356.00.png", "3968": "log0805_171159_357.00.png", "3969": "log0805_171159_358.00.png", "3970": "log0805_171159_359.00.png", "3971": "log0805_171159_360.00.png", "3972": "log0805_171159_361.00.png", "3973": "log0805_171159_362.00.png", "3974": "log0805_171159_363.00.png", "3975": "log0805_171159_364.00.png", "3976": "log0805_171159_365.00.png", "3977": "log0805_171159_366.00.png", "3978": "log0805_171159_367.00.png", "3979": "log0805_171159_368.00.png", "3980": "log0805_171159_369.00.png", "3981": "log0805_171159_370.00.png", "3982": "log0805_171159_371.00.png", "3983": "log0805_171159_372.00.png", "3984": "log0805_171159_373.00.png", "3985": "log0805_171159_374.00.png", "3986": "log0805_171159_375.00.png", "3987": "log0805_171159_376.00.png", "3988": "log0805_171159_377.00.png", "3989": "log0805_171159_378.00.png", "3990": "log0805_171159_379.00.png", "3991": "log0805_171159_380.00.png", "3992": "log0805_171159_381.00.png", "3993": "log0805_171159_382.00.png", "3994": "log0805_171159_383.00.png", "3995": "log0805_171159_384.00.png", "3996": "log0805_171159_385.00.png", "3997": "log0805_171159_386.00.png", "3998": "log0805_171159_387.00.png", "3999": "log0805_171159_388.00.png", "4000": "log0805_171159_389.00.png", "4001": "log0805_171159_390.00.png", "4002": "log0805_171159_391.00.png", "4003": "log0805_171159_392.00.png", "4004": "log0805_171159_393.00.png", "4005": "log0805_171159_394.00.png", "4006": "log0805_171159_395.00.png", "4007": "log0805_171159_396.00.png", "4008": "log0805_171159_397.00.png", "4009": "log0805_171159_398.00.png", "4010": "log0805_171159_399.00.png", "4011": "log0805_171159_400.00.png", "4012": "log0805_171159_401.00.png", "4013": "log0805_171159_402.00.png", "4014": "log0805_171159_403.00.png", "4015": "log0805_171159_404.00.png", "4016": "log0805_171159_405.00.png", "4017": "log0805_171159_406.00.png", "4018": "log0805_171159_407.00.png", "4019": "log0805_171159_408.00.png", "4020": "log0805_171159_409.00.png", "4021": "log0805_171159_410.00.png", "4022": "log0805_171159_411.00.png", "4023": "log0805_171159_412.00.png", "4024": "log0805_171159_413.00.png", "4025": "log0805_171159_414.00.png", "4026": "log0805_171159_415.00.png", "4027": "log0805_171159_416.00.png", "4028": "log0805_171159_417.00.png", "4029": "log0805_171159_418.00.png", "4030": "log0805_171159_419.00.png", "4031": "log0805_171159_420.00.png", "4032": "log0805_205541_1.00.png", "4033": "log0805_205541_10.00.png", "4034": "log0805_205541_100.00.png", "4035": "log0805_205541_101.00.png", "4036": "log0805_205541_102.00.png", "4037": "log0805_205541_103.00.png", "4038": "log0805_205541_104.00.png", "4039": "log0805_205541_105.00.png", "4040": "log0805_205541_106.00.png", "4041": "log0805_205541_107.00.png", "4042": "log0805_205541_108.00.png", "4043": "log0805_205541_109.00.png", "4044": "log0805_205541_11.00.png", "4045": "log0805_205541_110.00.png", "4046": "log0805_205541_111.00.png", "4047": "log0805_205541_112.00.png", "4048": "log0805_205541_113.00.png", "4049": 
"log0805_205541_114.00.png", "4050": "log0805_205541_115.00.png", "4051": "log0805_205541_116.00.png", "4052": "log0805_205541_117.00.png", "4053": "log0805_205541_118.00.png", "4054": "log0805_205541_119.00.png", "4055": "log0805_205541_12.00.png", "4056": "log0805_205541_120.00.png", "4057": "log0805_205541_121.00.png", "4058": "log0805_205541_122.00.png", "4059": "log0805_205541_123.00.png", "4060": "log0805_205541_124.00.png", "4061": "log0805_205541_125.00.png", "4062": "log0805_205541_126.00.png", "4063": "log0805_205541_127.00.png", "4064": "log0805_205541_128.00.png", "4065": "log0805_205541_129.00.png", "4066": "log0805_205541_13.00.png", "4067": "log0805_205541_130.00.png", "4068": "log0805_205541_131.00.png", "4069": "log0805_205541_132.00.png", "4070": "log0805_205541_133.00.png", "4071": "log0805_205541_134.00.png", "4072": "log0805_205541_135.00.png", "4073": "log0805_205541_136.00.png", "4074": "log0805_205541_137.00.png", "4075": "log0805_205541_138.00.png", "4076": "log0805_205541_139.00.png", "4077": "log0805_205541_14.00.png", "4078": "log0805_205541_140.00.png", "4079": "log0805_205541_141.00.png", "4080": "log0805_205541_142.00.png", "4081": "log0805_205541_143.00.png", "4082": "log0805_205541_144.00.png", "4083": "log0805_205541_145.00.png", "4084": "log0805_205541_146.00.png", "4085": "log0805_205541_147.00.png", "4086": "log0805_205541_148.00.png", "4087": "log0805_205541_149.00.png", "4088": "log0805_205541_15.00.png", "4089": "log0805_205541_150.00.png", "4090": "log0805_205541_151.00.png", "4091": "log0805_205541_152.00.png", "4092": "log0805_205541_153.00.png", "4093": "log0805_205541_154.00.png", "4094": "log0805_205541_155.00.png", "4095": "log0805_205541_156.00.png", "4096": "log0805_205541_157.00.png", "4097": "log0805_205541_158.00.png", "4098": "log0805_205541_159.00.png", "4099": "log0805_205541_16.00.png", "4100": "log0805_205541_160.00.png", "4101": "log0805_205541_161.00.png", "4102": "log0805_205541_162.00.png", "4103": "log0805_205541_163.00.png", "4104": "log0805_205541_164.00.png", "4105": "log0805_205541_165.00.png", "4106": "log0805_205541_166.00.png", "4107": "log0805_205541_167.00.png", "4108": "log0805_205541_168.00.png", "4109": "log0805_205541_169.00.png", "4110": "log0805_205541_17.00.png", "4111": "log0805_205541_170.00.png", "4112": "log0805_205541_171.00.png", "4113": "log0805_205541_172.00.png", "4114": "log0805_205541_173.00.png", "4115": "log0805_205541_174.00.png", "4116": "log0805_205541_175.00.png", "4117": "log0805_205541_176.00.png", "4118": "log0805_205541_177.00.png", "4119": "log0805_205541_178.00.png", "4120": "log0805_205541_179.00.png", "4121": "log0805_205541_18.00.png", "4122": "log0805_205541_180.00.png", "4123": "log0805_205541_181.00.png", "4124": "log0805_205541_182.00.png", "4125": "log0805_205541_183.00.png", "4126": "log0805_205541_184.00.png", "4127": "log0805_205541_185.00.png", "4128": "log0805_205541_186.00.png", "4129": "log0805_205541_187.00.png", "4130": "log0805_205541_188.00.png", "4131": "log0805_205541_189.00.png", "4132": "log0805_205541_19.00.png", "4133": "log0805_205541_190.00.png", "4134": "log0805_205541_191.00.png", "4135": "log0805_205541_192.00.png", "4136": "log0805_205541_193.00.png", "4137": "log0805_205541_194.00.png", "4138": "log0805_205541_195.00.png", "4139": "log0805_205541_196.00.png", "4140": "log0805_205541_197.00.png", "4141": "log0805_205541_198.00.png", "4142": "log0805_205541_199.00.png", "4143": "log0805_205541_2.00.png", "4144": "log0805_205541_20.00.png", "4145": 
"log0805_205541_200.00.png", "4146": "log0805_205541_201.00.png", "4147": "log0805_205541_202.00.png", "4148": "log0805_205541_203.00.png", "4149": "log0805_205541_204.00.png", "4150": "log0805_205541_205.00.png", "4151": "log0805_205541_206.00.png", "4152": "log0805_205541_207.00.png", "4153": "log0805_205541_208.00.png", "4154": "log0805_205541_209.00.png", "4155": "log0805_205541_21.00.png", "4156": "log0805_205541_210.00.png", "4157": "log0805_205541_211.00.png", "4158": "log0805_205541_212.00.png", "4159": "log0805_205541_213.00.png", "4160": "log0805_205541_214.00.png", "4161": "log0805_205541_215.00.png", "4162": "log0805_205541_216.00.png", "4163": "log0805_205541_217.00.png", "4164": "log0805_205541_218.00.png", "4165": "log0805_205541_219.00.png", "4166": "log0805_205541_22.00.png", "4167": "log0805_205541_220.00.png", "4168": "log0805_205541_221.00.png", "4169": "log0805_205541_222.00.png", "4170": "log0805_205541_223.00.png", "4171": "log0805_205541_224.00.png", "4172": "log0805_205541_225.00.png", "4173": "log0805_205541_226.00.png", "4174": "log0805_205541_227.00.png", "4175": "log0805_205541_228.00.png", "4176": "log0805_205541_229.00.png", "4177": "log0805_205541_23.00.png", "4178": "log0805_205541_230.00.png", "4179": "log0805_205541_24.00.png", "4180": "log0805_205541_25.00.png", "4181": "log0805_205541_26.00.png", "4182": "log0805_205541_27.00.png", "4183": "log0805_205541_28.00.png", "4184": "log0805_205541_29.00.png", "4185": "log0805_205541_3.00.png", "4186": "log0805_205541_30.00.png", "4187": "log0805_205541_31.00.png", "4188": "log0805_205541_32.00.png", "4189": "log0805_205541_33.00.png", "4190": "log0805_205541_34.00.png", "4191": "log0805_205541_35.00.png", "4192": "log0805_205541_36.00.png", "4193": "log0805_205541_37.00.png", "4194": "log0805_205541_38.00.png", "4195": "log0805_205541_39.00.png", "4196": "log0805_205541_4.00.png", "4197": "log0805_205541_40.00.png", "4198": "log0805_205541_41.00.png", "4199": "log0805_205541_42.00.png", "4200": "log0805_205541_43.00.png", "4201": "log0805_205541_44.00.png", "4202": "log0805_205541_45.00.png", "4203": "log0805_205541_46.00.png", "4204": "log0805_205541_47.00.png", "4205": "log0805_205541_48.00.png", "4206": "log0805_205541_49.00.png", "4207": "log0805_205541_5.00.png", "4208": "log0805_205541_50.00.png", "4209": "log0805_205541_51.00.png", "4210": "log0805_205541_52.00.png", "4211": "log0805_205541_53.00.png", "4212": "log0805_205541_54.00.png", "4213": "log0805_205541_55.00.png", "4214": "log0805_205541_56.00.png", "4215": "log0805_205541_57.00.png", "4216": "log0805_205541_58.00.png", "4217": "log0805_205541_59.00.png", "4218": "log0805_205541_6.00.png", "4219": "log0805_205541_60.00.png", "4220": "log0805_205541_61.00.png", "4221": "log0805_205541_62.00.png", "4222": "log0805_205541_63.00.png", "4223": "log0805_205541_64.00.png", "4224": "log0805_205541_65.00.png", "4225": "log0805_205541_66.00.png", "4226": "log0805_205541_67.00.png", "4227": "log0805_205541_68.00.png", "4228": "log0805_205541_69.00.png", "4229": "log0805_205541_7.00.png", "4230": "log0805_205541_70.00.png", "4231": "log0805_205541_71.00.png", "4232": "log0805_205541_72.00.png", "4233": "log0805_205541_73.00.png", "4234": "log0805_205541_74.00.png", "4235": "log0805_205541_75.00.png", "4236": "log0805_205541_76.00.png", "4237": "log0805_205541_77.00.png", "4238": "log0805_205541_78.00.png", "4239": "log0805_205541_79.00.png", "4240": "log0805_205541_8.00.png", "4241": "log0805_205541_80.00.png", "4242": "log0805_205541_81.00.png", "4243": 
"log0805_205541_82.00.png", "4244": "log0805_205541_83.00.png", "4245": "log0805_205541_84.00.png", "4246": "log0805_205541_85.00.png", "4247": "log0805_205541_86.00.png", "4248": "log0805_205541_87.00.png", "4249": "log0805_205541_88.00.png", "4250": "log0805_205541_89.00.png", "4251": "log0805_205541_9.00.png", "4252": "log0805_205541_90.00.png", "4253": "log0805_205541_91.00.png", "4254": "log0805_205541_92.00.png", "4255": "log0805_205541_93.00.png", "4256": "log0805_205541_94.00.png", "4257": "log0805_205541_95.00.png", "4258": "log0805_205541_96.00.png", "4259": "log0805_205541_97.00.png", "4260": "log0805_205541_98.00.png", "4261": "log0805_205541_99.00.png", "4262": "log0806_003924_1.00.png", "4263": "log0806_003924_10.00.png", "4264": "log0806_003924_11.00.png", "4265": "log0806_003924_12.00.png", "4266": "log0806_003924_13.00.png", "4267": "log0806_003924_14.00.png", "4268": "log0806_003924_15.00.png", "4269": "log0806_003924_16.00.png", "4270": "log0806_003924_17.00.png", "4271": "log0806_003924_18.00.png", "4272": "log0806_003924_19.00.png", "4273": "log0806_003924_2.00.png", "4274": "log0806_003924_20.00.png", "4275": "log0806_003924_21.00.png", "4276": "log0806_003924_22.00.png", "4277": "log0806_003924_23.00.png", "4278": "log0806_003924_24.00.png", "4279": "log0806_003924_25.00.png", "4280": "log0806_003924_26.00.png", "4281": "log0806_003924_27.00.png", "4282": "log0806_003924_28.00.png", "4283": "log0806_003924_29.00.png", "4284": "log0806_003924_3.00.png", "4285": "log0806_003924_30.00.png", "4286": "log0806_003924_31.00.png", "4287": "log0806_003924_32.00.png", "4288": "log0806_003924_33.00.png", "4289": "log0806_003924_34.00.png", "4290": "log0806_003924_35.00.png", "4291": "log0806_003924_36.00.png", "4292": "log0806_003924_37.00.png", "4293": "log0806_003924_38.00.png", "4294": "log0806_003924_39.00.png", "4295": "log0806_003924_4.00.png", "4296": "log0806_003924_40.00.png", "4297": "log0806_003924_41.00.png", "4298": "log0806_003924_42.00.png", "4299": "log0806_003924_43.00.png", "4300": "log0806_003924_44.00.png", "4301": "log0806_003924_45.00.png", "4302": "log0806_003924_46.00.png", "4303": "log0806_003924_47.00.png", "4304": "log0806_003924_48.00.png", "4305": "log0806_003924_49.00.png", "4306": "log0806_003924_5.00.png", "4307": "log0806_003924_50.00.png", "4308": "log0806_003924_51.00.png", "4309": "log0806_003924_52.00.png", "4310": "log0806_003924_53.00.png", "4311": "log0806_003924_54.00.png", "4312": "log0806_003924_55.00.png", "4313": "log0806_003924_56.00.png", "4314": "log0806_003924_57.00.png", "4315": "log0806_003924_58.00.png", "4316": "log0806_003924_59.00.png", "4317": "log0806_003924_6.00.png", "4318": "log0806_003924_60.00.png", "4319": "log0806_003924_61.00.png", "4320": "log0806_003924_62.00.png", "4321": "log0806_003924_63.00.png", "4322": "log0806_003924_64.00.png", "4323": "log0806_003924_65.00.png", "4324": "log0806_003924_66.00.png", "4325": "log0806_003924_67.00.png", "4326": "log0806_003924_68.00.png", "4327": "log0806_003924_69.00.png", "4328": "log0806_003924_7.00.png", "4329": "log0806_003924_70.00.png", "4330": "log0806_003924_71.00.png", "4331": "log0806_003924_72.00.png", "4332": "log0806_003924_8.00.png", "4333": "log0806_003924_9.00.png", "4334": "log0806_042306_1.00.png", "4335": "log0806_042306_10.00.png", "4336": "log0806_042306_11.00.png", "4337": "log0806_042306_12.00.png", "4338": "log0806_042306_13.00.png", "4339": "log0806_042306_14.00.png", "4340": "log0806_042306_15.00.png", "4341": "log0806_042306_16.00.png", "4342": 
"log0806_042306_17.00.png", "4343": "log0806_042306_18.00.png", "4344": "log0806_042306_19.00.png", "4345": "log0806_042306_2.00.png", "4346": "log0806_042306_20.00.png", "4347": "log0806_042306_21.00.png", "4348": "log0806_042306_22.00.png", "4349": "log0806_042306_23.00.png", "4350": "log0806_042306_24.00.png", "4351": "log0806_042306_25.00.png", "4352": "log0806_042306_26.00.png", "4353": "log0806_042306_27.00.png", "4354": "log0806_042306_28.00.png", "4355": "log0806_042306_29.00.png", "4356": "log0806_042306_3.00.png", "4357": "log0806_042306_30.00.png", "4358": "log0806_042306_31.00.png", "4359": "log0806_042306_32.00.png", "4360": "log0806_042306_33.00.png", "4361": "log0806_042306_34.00.png", "4362": "log0806_042306_35.00.png", "4363": "log0806_042306_36.00.png", "4364": "log0806_042306_37.00.png", "4365": "log0806_042306_38.00.png", "4366": "log0806_042306_39.00.png", "4367": "log0806_042306_4.00.png", "4368": "log0806_042306_40.00.png", "4369": "log0806_042306_41.00.png", "4370": "log0806_042306_42.00.png", "4371": "log0806_042306_43.00.png", "4372": "log0806_042306_44.00.png", "4373": "log0806_042306_45.00.png", "4374": "log0806_042306_46.00.png", "4375": "log0806_042306_47.00.png", "4376": "log0806_042306_48.00.png", "4377": "log0806_042306_49.00.png", "4378": "log0806_042306_5.00.png", "4379": "log0806_042306_50.00.png", "4380": "log0806_042306_51.00.png", "4381": "log0806_042306_52.00.png", "4382": "log0806_042306_53.00.png", "4383": "log0806_042306_54.00.png", "4384": "log0806_042306_55.00.png", "4385": "log0806_042306_56.00.png", "4386": "log0806_042306_57.00.png", "4387": "log0806_042306_58.00.png", "4388": "log0806_042306_59.00.png", "4389": "log0806_042306_6.00.png", "4390": "log0806_042306_60.00.png", "4391": "log0806_042306_61.00.png", "4392": "log0806_042306_62.00.png", "4393": "log0806_042306_63.00.png", "4394": "log0806_042306_64.00.png", "4395": "log0806_042306_65.00.png", "4396": "log0806_042306_66.00.png", "4397": "log0806_042306_67.00.png", "4398": "log0806_042306_68.00.png", "4399": "log0806_042306_69.00.png", "4400": "log0806_042306_7.00.png", "4401": "log0806_042306_70.00.png", "4402": "log0806_042306_71.00.png", "4403": "log0806_042306_72.00.png", "4404": "log0806_042306_73.00.png", "4405": "log0806_042306_74.00.png", "4406": "log0806_042306_75.00.png", "4407": "log0806_042306_76.00.png", "4408": "log0806_042306_77.00.png", "4409": "log0806_042306_78.00.png", "4410": "log0806_042306_79.00.png", "4411": "log0806_042306_8.00.png", "4412": "log0806_042306_80.00.png", "4413": "log0806_042306_81.00.png", "4414": "log0806_042306_82.00.png", "4415": "log0806_042306_9.00.png", "4416": "log0806_080648_1.00.png", "4417": "log0806_080648_10.00.png", "4418": "log0806_080648_100.00.png", "4419": "log0806_080648_101.00.png", "4420": "log0806_080648_11.00.png", "4421": "log0806_080648_12.00.png", "4422": "log0806_080648_13.00.png", "4423": "log0806_080648_14.00.png", "4424": "log0806_080648_15.00.png", "4425": "log0806_080648_16.00.png", "4426": "log0806_080648_17.00.png", "4427": "log0806_080648_18.00.png", "4428": "log0806_080648_19.00.png", "4429": "log0806_080648_2.00.png", "4430": "log0806_080648_20.00.png", "4431": "log0806_080648_21.00.png", "4432": "log0806_080648_22.00.png", "4433": "log0806_080648_23.00.png", "4434": "log0806_080648_24.00.png", "4435": "log0806_080648_25.00.png", "4436": "log0806_080648_26.00.png", "4437": "log0806_080648_27.00.png", "4438": "log0806_080648_28.00.png", "4439": "log0806_080648_29.00.png", "4440": "log0806_080648_3.00.png", "4441": 
"log0806_080648_30.00.png", "4442": "log0806_080648_31.00.png", "4443": "log0806_080648_32.00.png", "4444": "log0806_080648_33.00.png", "4445": "log0806_080648_34.00.png", "4446": "log0806_080648_35.00.png", "4447": "log0806_080648_36.00.png", "4448": "log0806_080648_37.00.png", "4449": "log0806_080648_38.00.png", "4450": "log0806_080648_39.00.png", "4451": "log0806_080648_4.00.png", "4452": "log0806_080648_40.00.png", "4453": "log0806_080648_41.00.png", "4454": "log0806_080648_42.00.png", "4455": "log0806_080648_43.00.png", "4456": "log0806_080648_44.00.png", "4457": "log0806_080648_45.00.png", "4458": "log0806_080648_46.00.png", "4459": "log0806_080648_47.00.png", "4460": "log0806_080648_48.00.png", "4461": "log0806_080648_49.00.png", "4462": "log0806_080648_5.00.png", "4463": "log0806_080648_50.00.png", "4464": "log0806_080648_51.00.png", "4465": "log0806_080648_52.00.png", "4466": "log0806_080648_53.00.png", "4467": "log0806_080648_54.00.png", "4468": "log0806_080648_55.00.png", "4469": "log0806_080648_56.00.png", "4470": "log0806_080648_57.00.png", "4471": "log0806_080648_58.00.png", "4472": "log0806_080648_59.00.png", "4473": "log0806_080648_6.00.png", "4474": "log0806_080648_60.00.png", "4475": "log0806_080648_61.00.png", "4476": "log0806_080648_62.00.png", "4477": "log0806_080648_63.00.png", "4478": "log0806_080648_64.00.png", "4479": "log0806_080648_65.00.png", "4480": "log0806_080648_66.00.png", "4481": "log0806_080648_67.00.png", "4482": "log0806_080648_68.00.png", "4483": "log0806_080648_69.00.png", "4484": "log0806_080648_7.00.png", "4485": "log0806_080648_70.00.png", "4486": "log0806_080648_71.00.png", "4487": "log0806_080648_72.00.png", "4488": "log0806_080648_73.00.png", "4489": "log0806_080648_74.00.png", "4490": "log0806_080648_75.00.png", "4491": "log0806_080648_76.00.png", "4492": "log0806_080648_77.00.png", "4493": "log0806_080648_78.00.png", "4494": "log0806_080648_79.00.png", "4495": "log0806_080648_8.00.png", "4496": "log0806_080648_80.00.png", "4497": "log0806_080648_81.00.png", "4498": "log0806_080648_82.00.png", "4499": "log0806_080648_83.00.png", "4500": "log0806_080648_84.00.png", "4501": "log0806_080648_85.00.png", "4502": "log0806_080648_86.00.png", "4503": "log0806_080648_87.00.png", "4504": "log0806_080648_88.00.png", "4505": "log0806_080648_89.00.png", "4506": "log0806_080648_9.00.png", "4507": "log0806_080648_90.00.png", "4508": "log0806_080648_91.00.png", "4509": "log0806_080648_92.00.png", "4510": "log0806_080648_93.00.png", "4511": "log0806_080648_94.00.png", "4512": "log0806_080648_95.00.png", "4513": "log0806_080648_96.00.png", "4514": "log0806_080648_97.00.png", "4515": "log0806_080648_98.00.png", "4516": "log0806_080648_99.00.png", "4517": "log0806_115031_1.00.png", "4518": "log0806_115031_10.00.png", "4519": "log0806_115031_100.00.png", "4520": "log0806_115031_101.00.png", "4521": "log0806_115031_102.00.png", "4522": "log0806_115031_103.00.png", "4523": "log0806_115031_104.00.png", "4524": "log0806_115031_105.00.png", "4525": "log0806_115031_106.00.png", "4526": "log0806_115031_107.00.png", "4527": "log0806_115031_108.00.png", "4528": "log0806_115031_109.00.png", "4529": "log0806_115031_11.00.png", "4530": "log0806_115031_110.00.png", "4531": "log0806_115031_111.00.png", "4532": "log0806_115031_112.00.png", "4533": "log0806_115031_113.00.png", "4534": "log0806_115031_114.00.png", "4535": "log0806_115031_115.00.png", "4536": "log0806_115031_116.00.png", "4537": "log0806_115031_117.00.png", "4538": "log0806_115031_118.00.png", "4539": 
"log0806_115031_119.00.png", "4540": "log0806_115031_12.00.png", "4541": "log0806_115031_13.00.png", "4542": "log0806_115031_14.00.png", "4543": "log0806_115031_15.00.png", "4544": "log0806_115031_16.00.png", "4545": "log0806_115031_17.00.png", "4546": "log0806_115031_18.00.png", "4547": "log0806_115031_19.00.png", "4548": "log0806_115031_2.00.png", "4549": "log0806_115031_20.00.png", "4550": "log0806_115031_21.00.png", "4551": "log0806_115031_22.00.png", "4552": "log0806_115031_23.00.png", "4553": "log0806_115031_24.00.png", "4554": "log0806_115031_25.00.png", "4555": "log0806_115031_26.00.png", "4556": "log0806_115031_27.00.png", "4557": "log0806_115031_28.00.png", "4558": "log0806_115031_29.00.png", "4559": "log0806_115031_3.00.png", "4560": "log0806_115031_30.00.png", "4561": "log0806_115031_31.00.png", "4562": "log0806_115031_32.00.png", "4563": "log0806_115031_33.00.png", "4564": "log0806_115031_34.00.png", "4565": "log0806_115031_35.00.png", "4566": "log0806_115031_36.00.png", "4567": "log0806_115031_37.00.png", "4568": "log0806_115031_38.00.png", "4569": "log0806_115031_39.00.png", "4570": "log0806_115031_4.00.png", "4571": "log0806_115031_40.00.png", "4572": "log0806_115031_41.00.png", "4573": "log0806_115031_42.00.png", "4574": "log0806_115031_43.00.png", "4575": "log0806_115031_44.00.png", "4576": "log0806_115031_45.00.png", "4577": "log0806_115031_46.00.png", "4578": "log0806_115031_47.00.png", "4579": "log0806_115031_48.00.png", "4580": "log0806_115031_49.00.png", "4581": "log0806_115031_5.00.png", "4582": "log0806_115031_50.00.png", "4583": "log0806_115031_51.00.png", "4584": "log0806_115031_52.00.png", "4585": "log0806_115031_53.00.png", "4586": "log0806_115031_54.00.png", "4587": "log0806_115031_55.00.png", "4588": "log0806_115031_56.00.png", "4589": "log0806_115031_57.00.png", "4590": "log0806_115031_58.00.png", "4591": "log0806_115031_59.00.png", "4592": "log0806_115031_6.00.png", "4593": "log0806_115031_60.00.png", "4594": "log0806_115031_61.00.png", "4595": "log0806_115031_62.00.png", "4596": "log0806_115031_63.00.png", "4597": "log0806_115031_64.00.png", "4598": "log0806_115031_65.00.png", "4599": "log0806_115031_66.00.png", "4600": "log0806_115031_67.00.png", "4601": "log0806_115031_68.00.png", "4602": "log0806_115031_69.00.png", "4603": "log0806_115031_7.00.png", "4604": "log0806_115031_70.00.png", "4605": "log0806_115031_71.00.png", "4606": "log0806_115031_72.00.png", "4607": "log0806_115031_73.00.png", "4608": "log0806_115031_74.00.png", "4609": "log0806_115031_75.00.png", "4610": "log0806_115031_76.00.png", "4611": "log0806_115031_77.00.png", "4612": "log0806_115031_78.00.png", "4613": "log0806_115031_79.00.png", "4614": "log0806_115031_8.00.png", "4615": "log0806_115031_80.00.png", "4616": "log0806_115031_81.00.png", "4617": "log0806_115031_82.00.png", "4618": "log0806_115031_83.00.png", "4619": "log0806_115031_84.00.png", "4620": "log0806_115031_85.00.png", "4621": "log0806_115031_86.00.png", "4622": "log0806_115031_87.00.png", "4623": "log0806_115031_88.00.png", "4624": "log0806_115031_89.00.png", "4625": "log0806_115031_9.00.png", "4626": "log0806_115031_90.00.png", "4627": "log0806_115031_91.00.png", "4628": "log0806_115031_92.00.png", "4629": "log0806_115031_93.00.png", "4630": "log0806_115031_94.00.png", "4631": "log0806_115031_95.00.png", "4632": "log0806_115031_96.00.png", "4633": "log0806_115031_97.00.png", "4634": "log0806_115031_98.00.png", "4635": "log0806_115031_99.00.png", "4636": "log0806_153413_1.00.png", "4637": "log0806_153413_10.00.png", 
"4638": "log0806_153413_11.00.png", "4639": "log0806_153413_12.00.png", "4640": "log0806_153413_13.00.png", "4641": "log0806_153413_14.00.png", "4642": "log0806_153413_15.00.png", "4643": "log0806_153413_16.00.png", "4644": "log0806_153413_17.00.png", "4645": "log0806_153413_18.00.png", "4646": "log0806_153413_19.00.png", "4647": "log0806_153413_2.00.png", "4648": "log0806_153413_20.00.png", "4649": "log0806_153413_21.00.png", "4650": "log0806_153413_22.00.png", "4651": "log0806_153413_23.00.png", "4652": "log0806_153413_24.00.png", "4653": "log0806_153413_25.00.png", "4654": "log0806_153413_26.00.png", "4655": "log0806_153413_27.00.png", "4656": "log0806_153413_28.00.png", "4657": "log0806_153413_29.00.png", "4658": "log0806_153413_3.00.png", "4659": "log0806_153413_30.00.png", "4660": "log0806_153413_31.00.png", "4661": "log0806_153413_32.00.png", "4662": "log0806_153413_33.00.png", "4663": "log0806_153413_34.00.png", "4664": "log0806_153413_35.00.png", "4665": "log0806_153413_36.00.png", "4666": "log0806_153413_37.00.png", "4667": "log0806_153413_38.00.png", "4668": "log0806_153413_39.00.png", "4669": "log0806_153413_4.00.png", "4670": "log0806_153413_40.00.png", "4671": "log0806_153413_41.00.png", "4672": "log0806_153413_42.00.png", "4673": "log0806_153413_43.00.png", "4674": "log0806_153413_44.00.png", "4675": "log0806_153413_45.00.png", "4676": "log0806_153413_46.00.png", "4677": "log0806_153413_47.00.png", "4678": "log0806_153413_48.00.png", "4679": "log0806_153413_49.00.png", "4680": "log0806_153413_5.00.png", "4681": "log0806_153413_50.00.png", "4682": "log0806_153413_51.00.png", "4683": "log0806_153413_52.00.png", "4684": "log0806_153413_53.00.png", "4685": "log0806_153413_54.00.png", "4686": "log0806_153413_55.00.png", "4687": "log0806_153413_56.00.png", "4688": "log0806_153413_57.00.png", "4689": "log0806_153413_58.00.png", "4690": "log0806_153413_59.00.png", "4691": "log0806_153413_6.00.png", "4692": "log0806_153413_60.00.png", "4693": "log0806_153413_7.00.png", "4694": "log0806_153413_8.00.png", "4695": "log0806_153413_9.00.png"}}}}], "splits": [{"name": "train", "num_bytes": 77203879.37159285, "num_examples": 3287}, {"name": "test", "num_bytes": 33094087.628407154, "num_examples": 1409}], "download_size": 110950866, "dataset_size": 110297967.0}}
|
2023-09-28T04:29:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "NoiseProj_Chelsea"
More Information needed
|
[
"# Dataset Card for \"NoiseProj_Chelsea\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"NoiseProj_Chelsea\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"NoiseProj_Chelsea\"\n\nMore Information needed"
] |
ae4e099dce407c62462a4e41ad96bf292ff9788a
|
# Dataset of Koito Yuu
This is the dataset of Koito Yuu, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 672 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 798 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 672 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 672 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 559 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 798 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 798 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
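The packages above are plain zip archives stored in this dataset repository, so they can also be fetched programmatically. The snippet below is a minimal sketch using the `huggingface_hub` client; the repository id and archive name are taken from this card, while the extraction directory is an arbitrary choice.

```python
from huggingface_hub import hf_hub_download
import zipfile

# Minimal sketch: download one of the archives listed above and unpack it.
# The repo_id and filename come from this card; the output folder is arbitrary.
archive = hf_hub_download(
    repo_id="CyberHarem/koito_yuu_yagatekimininaru",
    filename="dataset-384x512.zip",
    repo_type="dataset",
)

with zipfile.ZipFile(archive) as zf:
    zf.extractall("koito_yuu_384x512")
```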
|
CyberHarem/koito_yuu_yagatekimininaru
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T04:19:14+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T04:24:56+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Koito Yuu
====================
This is the dataset of Koito Yuu, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
8ed874af20738230b571108eff97a1b0972a973c
|
# Dataset Card for "Drone_Doppler"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
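Since the card is still a stub, a quick way to inspect the data is to load it directly. The sketch below assumes the `image`, `label`, and `type` columns and the train/test splits declared in this repository's metadata; the nested float sequences in `image` convert naturally to a 2-D NumPy array.

```python
import numpy as np
from datasets import load_dataset

# Minimal sketch, assuming the image/label/type columns and the train/test
# splits listed in the repository metadata for this dataset.
ds = load_dataset("Goorm-AI-04/Drone_Doppler", split="train")

sample = ds[0]
doppler = np.asarray(sample["image"], dtype=np.float64)  # nested sequences -> 2-D array
print(doppler.shape, sample["label"], sample["type"])
```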
|
Goorm-AI-04/Drone_Doppler
|
[
"region:us"
] |
2023-09-28T05:10:15+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "sequence": {"sequence": "float64"}}, {"name": "label", "dtype": "int64"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75993012, "num_examples": 13988}, {"name": "test", "num_bytes": 18998253, "num_examples": 3497}], "download_size": 96723379, "dataset_size": 94991265}}
|
2023-09-28T05:21:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Drone_Doppler"
More Information needed
|
[
"# Dataset Card for \"Drone_Doppler\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Drone_Doppler\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Drone_Doppler\"\n\nMore Information needed"
] |
31fafff2f71dc23f9bfacb982fff2e923b9896a0
|
# Dataset of Nanami Touko
This is the dataset of Nanami Touko, containing 298 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 298 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 683 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 826 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 298 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 298 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 298 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 683 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 683 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 595 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 826 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 826 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/nanami_touko_yagatekimininaru
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T05:11:11+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T05:19:43+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Nanami Touko
=======================
This is the dataset of Nanami Touko, containing 298 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
0587c2b6499fbc68a7623439c2af2b24748968dc
|
# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0
## Dataset Description
- **Repository: [Clarin.si repo](http://hdl.handle.net/11356/1868)**
- **Paper: https://arxiv.org/abs/2309.09783**
### Dataset Summary
This dataset was created and used for sentiment analysis experiments.
The dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test.
Each test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into "train", "dev", and "test" portions for performing language-specific experiments.
The 6-level annotation schema, used by annotators, is the following:
- Positive for sentences that are entirely or predominantly positive
- Negative for sentences that are entirely or predominantly negative
- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment
- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment
- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment
- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment
Dataset is described in detail in our [paper](https://arxiv.org/abs/2309.09783).
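For reference, the relation between this 6-level schema and the 3-level `label` attribute described below can be written out explicitly. The mapping in the sketch is an assumption inferred from the level names; the card only states that the 3-level label is derived from the reconciliation (or, for the test sets, the annotator1) label.

```python
# Assumed collapse of the 6-level annotation schema into the 3-level label.
# Inferred from the level names above; not stated verbatim in this card.
SIX_TO_THREE = {
    "Positive": "positive",
    "M_Positive": "positive",
    "Negative": "negative",
    "M_Negative": "negative",
    "P_Neutral": "neutral",
    "N_Neutral": "neutral",
}

def collapse(six_level_label: str) -> str:
    return SIX_TO_THREE[six_level_label]
```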
### Data Attributes
The attributes in training data are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first annotator's annotation
- annotator2 - second annotator's annotation
- reconciliation - the final label agreed upon after reconciliation
- label - three level (positive, negative, neutral) label based on the reconciliation label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- split - whether the sentence is to be used as a training, development, or testing instance in case evaluation is done on the training portion of the dataset
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech
The attributes in the test data (_test.jsonl files) are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first (only) annotator's annotation, used as a final annotation
- label - three level (positive, negative, neutral) label based on the annotator1 label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech
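The attributes above can be used directly after loading one of the configurations with the `datasets` library. The snippet below is a minimal sketch; the configuration names (e.g. "EN", "BCS") come from this repository's metadata, and the `split` attribute is used as described above to select the training portion.

```python
from datasets import load_dataset

# Minimal sketch: load one training configuration and use the documented columns.
# Config names such as "EN" or "BCS" come from the repository metadata.
ds = load_dataset("classla/ParlaSent", "EN", split="train")

# Keep only the rows marked as training instances via the `split` attribute.
train_rows = ds.filter(lambda row: row["split"] == "train")

for row in train_rows.select(range(3)):
    print(row["country"], row["date"], row["label"], row["sentence"][:80])
```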
### Citation information
Please cite the following paper:
```
@article{
Mochtak_Rupnik_Ljubešić_2023,
title={The ParlaSent multilingual training dataset for sentiment identification in parliamentary proceedings},
rights={All rights reserved},
url={http://arxiv.org/abs/2309.09783},
abstractNote={Sentiments inherently drive politics. How we receive and process information plays an essential role in political decision-making, shaping our judgment with strategic consequences both on the level of legislators and the masses. If sentiment plays such an important role in politics, how can we study and measure it systematically? The paper presents a new dataset of sentiment-annotated sentences, which are used in a series of experiments focused on training a robust sentiment classifier for parliamentary proceedings. The paper also introduces the first domain-specific LLM for political science applications additionally pre-trained on 1.72 billion domain-specific words from proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training of LLM on parliamentary data can significantly improve the model downstream performance on the domain-specific tasks, in our case, sentiment detection in parliamentary proceedings. We further show that multilingual models perform very well on unseen languages and that additional data from other languages significantly improves the target parliament’s results. The paper makes an important contribution to multiple domains of social sciences and bridges them with computer science and computational linguistics. Lastly, it sets up a more robust approach to sentiment analysis of political texts in general, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.},
note={arXiv:2309.09783 [cs]},
number={arXiv:2309.09783},
publisher={arXiv},
author={Mochtak, Michal and Rupnik, Peter and Ljubešić, Nikola},
year={2023},
month={Sep},
language={en}
}
```
|
classla/ParlaSent
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:sl",
"language:en",
"language:cs",
"language:bs",
"language:hr",
"language:sr",
"language:sk",
"license:cc-by-sa-4.0",
"sentiment",
"classification",
"parliament",
"parlament",
"arxiv:2309.09783",
"region:us"
] |
2023-09-28T05:20:28+00:00
|
{"language": ["sl", "en", "cs", "bs", "hr", "sr", "sk"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "ParlaSent", "tags": ["sentiment", "classification", "parliament", "parlament"], "configs": [{"config_name": "EN", "data_files": "ParlaSent_EN.jsonl"}, {"config_name": "BCS", "data_files": "ParlaSent_BCS.jsonl"}, {"config_name": "CZ", "data_files": "ParlaSent_CZ.jsonl"}, {"config_name": "SK", "data_files": "ParlaSent_SK.jsonl"}, {"config_name": "SL", "data_files": "ParlaSent_SL.jsonl"}, {"config_name": "EN_additional_test", "data_files": "ParlaSent_EN_test.jsonl"}, {"config_name": "BCS_additional_test", "data_files": "ParlaSent_BCS_test.jsonl"}]}
|
2023-09-28T12:52:55+00:00
|
[
"2309.09783"
] |
[
"sl",
"en",
"cs",
"bs",
"hr",
"sr",
"sk"
] |
TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-Slovenian #language-English #language-Czech #language-Bosnian #language-Croatian #language-Serbian #language-Slovak #license-cc-by-sa-4.0 #sentiment #classification #parliament #parlament #arxiv-2309.09783 #region-us
|
# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0
## Dataset Description
- Repository: URL repo
- Paper: URL
### Dataset Summary
This dataset was created and used for sentiment analysis experiments.
The dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test.
Each test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into "train", "dev", and "test" portions for performing language-specific experiments.
The 6-level annotation schema, used by annotators, is the following:
- Positive for sentences that are entirely or predominantly positive
- Negative for sentences that are entirely or predominantly negative
- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment
- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment
- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment
- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment
Dataset is described in detail in our paper.
### Data Attributes
The attributes in training data are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first annotator's annotation
- annotator2 - second annotator's annotation
- reconciliation - the final label agreed upon after reconciliation
- label - three level (positive, negative, neutral) label based on the reconciliation label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- split - whether the sentence is to be used as a training, development, or testing instance in case evaluation is done on the training portion of the dataset
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech
The attributes in the test data (_test.jsonl files) are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first (only) annotator's annotation, used as a final annotation
- label - three level (positive, negative, neutral) label based on the annotator1 label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech
### Citation information
Please cite the following paper:
|
[
"# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0",
"## Dataset Description\n\n- Repository: URL repo \n- Paper: URL",
"### Dataset Summary\n\nThis dataset was created and used for sentiment analysis experiments.\n\nThe dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test.\n\nEach test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into \"train\", \"dev\" and \"test\" portions\" for performing language-specific experiments.\n\nThe 6-level annotation schema, used by annotators, is the following: \n- Positive for sentences that are entirely or predominantly positive\n- Negative for sentences that are entirely or predominantly negative\n- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment\n- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment\n- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment\n- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment\n\nDataset is described in detail in our paper.",
"### Data Attributes\nThe attributes in training data are the following:\n- sentence - the sentence labeled for sentiment\n- country - the country of the parliament the sentence comes form\n- annotator1 - first annotator's annotation\n- annotator2 - second annotator's annotation\n- reconciliation - the final label agreed upon after reconciliation\n- label - three level (positive, negative, neutral) label based on the reconciliation label\n- document_id - internal identifier of the document the sentence comes form\n- sentence_id - internal identifier of the sentence inside the document\n- term - the term of the parliament the sentence comes from\n- date - the date the sentence was uttered as part of a speech in the parliament\n- name - name of the MP giving the speech\n- party - the party of the MP\n- gender - binary gender of the MP\n- birth year - year of birth of the MP\n- split - whether the sentence is to be used as a training, development or testing instance in case evaluation is done of the training portion of the dataset\n- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech\n\nThe attributes in the test data (_test.jsonl files) are the following:\n- sentence - the sentence labeled for sentiment\n- country - the country of the parliament the sentence comes form\n- annotator1 - first (only) annotator's annotation, used as a final annotation\n- label - three level (positive, negative, neutral) label based on the annotator1 label\n- document_id - internal identifier of the document the sentence comes form\n- sentence_id - internal identifier of the sentence inside the document\n- term - the term of the parliament the sentence comes from\n- date - the date the sentence was uttered as part of a speech in the parliament\n- name - name of the MP giving the speech\n- party - the party of the MP\n- gender - binary gender of the MP\n- birth year - year of birth of the MP\n- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech\n\ninformation\n\nPlease quote the following paper:"
] |
[
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-Slovenian #language-English #language-Czech #language-Bosnian #language-Croatian #language-Serbian #language-Slovak #license-cc-by-sa-4.0 #sentiment #classification #parliament #parlament #arxiv-2309.09783 #region-us \n",
"# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0",
"## Dataset Description\n\n- Repository: URL repo \n- Paper: URL",
"### Dataset Summary\n\nThis dataset was created and used for sentiment analysis experiments.\n\nThe dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test.\n\nEach test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into \"train\", \"dev\" and \"test\" portions\" for performing language-specific experiments.\n\nThe 6-level annotation schema, used by annotators, is the following: \n- Positive for sentences that are entirely or predominantly positive\n- Negative for sentences that are entirely or predominantly negative\n- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment\n- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment\n- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment\n- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment\n\nDataset is described in detail in our paper.",
"### Data Attributes\nThe attributes in training data are the following:\n- sentence - the sentence labeled for sentiment\n- country - the country of the parliament the sentence comes form\n- annotator1 - first annotator's annotation\n- annotator2 - second annotator's annotation\n- reconciliation - the final label agreed upon after reconciliation\n- label - three level (positive, negative, neutral) label based on the reconciliation label\n- document_id - internal identifier of the document the sentence comes form\n- sentence_id - internal identifier of the sentence inside the document\n- term - the term of the parliament the sentence comes from\n- date - the date the sentence was uttered as part of a speech in the parliament\n- name - name of the MP giving the speech\n- party - the party of the MP\n- gender - binary gender of the MP\n- birth year - year of birth of the MP\n- split - whether the sentence is to be used as a training, development or testing instance in case evaluation is done of the training portion of the dataset\n- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech\n\nThe attributes in the test data (_test.jsonl files) are the following:\n- sentence - the sentence labeled for sentiment\n- country - the country of the parliament the sentence comes form\n- annotator1 - first (only) annotator's annotation, used as a final annotation\n- label - three level (positive, negative, neutral) label based on the annotator1 label\n- document_id - internal identifier of the document the sentence comes form\n- sentence_id - internal identifier of the sentence inside the document\n- term - the term of the parliament the sentence comes from\n- date - the date the sentence was uttered as part of a speech in the parliament\n- name - name of the MP giving the speech\n- party - the party of the MP\n- gender - binary gender of the MP\n- birth year - year of birth of the MP\n- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech\n\ninformation\n\nPlease quote the following paper:"
] |
[
98,
20,
15,
299,
479
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-Slovenian #language-English #language-Czech #language-Bosnian #language-Croatian #language-Serbian #language-Slovak #license-cc-by-sa-4.0 #sentiment #classification #parliament #parlament #arxiv-2309.09783 #region-us \n# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0## Dataset Description\n\n- Repository: URL repo \n- Paper: URL### Dataset Summary\n\nThis dataset was created and used for sentiment analysis experiments.\n\nThe dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test.\n\nEach test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into \"train\", \"dev\" and \"test\" portions\" for performing language-specific experiments.\n\nThe 6-level annotation schema, used by annotators, is the following: \n- Positive for sentences that are entirely or predominantly positive\n- Negative for sentences that are entirely or predominantly negative\n- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment\n- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment\n- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment\n- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment\n\nDataset is described in detail in our paper."
] |
5ea3620795737a079a6b09f52ccc4cce53e7620f
|
# Dataset Card for "viet-llama-ft-smaller"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
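The repository metadata lists three string columns (`instruction`, `input`, `output`), the usual instruction-tuning layout. The sketch below shows one way to assemble them into a training prompt; the template itself is an illustrative assumption, not something specified by this card.

```python
from datasets import load_dataset

# Minimal sketch, assuming the instruction/input/output columns from the
# repository metadata; the prompt template is an illustrative choice.
ds = load_dataset("infCapital/viet-llama-ft-smaller", split="train")

def to_prompt(row: dict) -> str:
    parts = [f"### Instruction:\n{row['instruction']}"]
    if row["input"]:
        parts.append(f"### Input:\n{row['input']}")
    parts.append(f"### Response:\n{row['output']}")
    return "\n\n".join(parts)

print(to_prompt(ds[0])[:300])
```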
|
infCapital/viet-llama-ft-smaller
|
[
"region:us"
] |
2023-09-28T05:23:48+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 737179685, "num_examples": 888489}], "download_size": 323108187, "dataset_size": 737179685}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-28T05:28:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "viet-llama-ft-smaller"
More Information needed
|
[
"# Dataset Card for \"viet-llama-ft-smaller\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"viet-llama-ft-smaller\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"viet-llama-ft-smaller\"\n\nMore Information needed"
] |
20fcff61146ddb5db91ee3688c076646607dbc3a
|
# Dataset of Saeki Sayaka
This is the dataset of Saeki Sayaka, containing 129 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 129 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 304 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 349 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 129 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 129 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 129 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 304 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 304 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 248 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 349 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 349 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/saeki_sayaka_yagatekimininaru
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T05:36:55+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T05:39:22+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Saeki Sayaka
=======================
This is the dataset of Saeki Sayaka, containing 129 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
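The per-character cards above link each packaging variant to a zip archive in the corresponding dataset repository. A minimal sketch of fetching and unpacking one of them with `huggingface_hub`, assuming the archives sit at the repository root under the filenames shown in the table (the local output folder name is arbitrary):

```python
import zipfile
from huggingface_hub import hf_hub_download

# Fetch the 384x512 aligned package listed in the card's table.
archive = hf_hub_download(
    repo_id="CyberHarem/saeki_sayaka_yagatekimininaru",
    filename="dataset-384x512.zip",
    repo_type="dataset",
)

# Unpack locally; the target folder name is an arbitrary choice.
with zipfile.ZipFile(archive) as zf:
    zf.extractall("saeki_sayaka_384x512")
```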
3d21adac503d10bd8529f01e53ab67c442562fa0
|
# Dataset of Kanou Koyomi
This is the dataset of Kanou Koyomi, containing 82 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 82 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 181 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 199 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 82 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 82 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 82 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 181 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 181 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 126 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 199 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 199 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/kanou_koyomi_yagatekimininaru
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T05:48:37+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T05:49:49+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kanou Koyomi
=======================
This is the dataset of Kanou Koyomi, containing 82 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
a5b177bfa04a2b054f5e08e3c218d1725764e5fd
|
# Dataset Card for "463b7b19"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/463b7b19
|
[
"region:us"
] |
2023-09-28T05:54:07+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 161, "num_examples": 10}], "download_size": 1299, "dataset_size": 161}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-28T05:54:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "463b7b19"
More Information needed
|
[
"# Dataset Card for \"463b7b19\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"463b7b19\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"463b7b19\"\n\nMore Information needed"
] |
725ce8fbe6b0ee511fddae343ba512982405675f
|
# Dataset of Hagozaki Riko
This is the dataset of Hagozaki Riko, containing 48 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 48 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 113 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 127 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 48 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 48 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 48 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 113 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 113 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 80 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 127 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 127 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/hagozaki_riko_yagatekimininaru
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T05:55:20+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T06:01:59+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Hagozaki Riko
========================
This is the dataset of Hagozaki Riko, containing 48 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
dbd72905550fd4c80b93db35afd8372c40160a5f
|
# Dataset Card for "email_subject_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chats-bug/email_subject_gen
|
[
"region:us"
] |
2023-09-28T05:56:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "subject_line", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33264969.9304227, "num_examples": 59489}, {"name": "test", "num_bytes": 1751347.0695772984, "num_examples": 3132}], "download_size": 10335744, "dataset_size": 35016317.0}}
|
2023-10-05T10:52:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "email_subject_dataset"
More Information needed
|
[
"# Dataset Card for \"email_subject_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"email_subject_dataset\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"email_subject_dataset\"\n\nMore Information needed"
] |
0feddc21379e6f6ba6c5a10a2eaedf57b592bef7
|
# Dataset of Kodama Miyako
This is the dataset of Kodama Miyako, containing 36 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 36 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 92 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 101 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 36 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 36 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 36 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 92 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 92 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 67 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 101 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 101 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/kodama_miyako_yagatekimininaru
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T06:06:52+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T06:13:27+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kodama Miyako
========================
This is the dataset of Kodama Miyako, containing 36 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
f444a94421262ec1fa777e23b0d4cfe2e06d7ded
|
Explain-tuned WizardLM dataset (~55K examples), created using approaches from the Orca research paper.
We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps student models like orca_mini_13b learn the thought process of the teacher model, ChatGPT (the gpt-3.5-turbo version).
|
infCapital/WizardLM_Orca_vi
|
[
"license:mit",
"region:us"
] |
2023-09-28T06:08:07+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 140945974, "num_examples": 52507}], "download_size": 58938956, "dataset_size": 140945974}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-28T06:09:56+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
Explain-tuned WizardLM dataset (~55K examples), created using approaches from the Orca research paper.
We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps student models like orca_mini_13b learn the thought process of the teacher model, ChatGPT (the gpt-3.5-turbo version).
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
b5e3e325ba902eef7f79e94f97f63dd5575170c0
|
# Dataset of Hyuuga Akari
This is the dataset of Hyuuga Akari, containing 39 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 39 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 95 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 104 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 39 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 39 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 39 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 95 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 95 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 80 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 104 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 104 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/hyuuga_akari_yagatekimininaru
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T06:18:29+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T06:23:21+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Hyuuga Akari
=======================
This is the dataset of Hyuuga Akari, containing 39 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
c63783f35b00dd6e4be8cf86ac9aead7ff419890
|
# Dataset Card for "rag_oasst"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Tianduo/rag_oasst
|
[
"region:us"
] |
2023-09-28T06:33:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35555913, "num_examples": 12947}], "download_size": 20853725, "dataset_size": 35555913}}
|
2023-09-28T17:21:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rag_oasst"
More Information needed
|
[
"# Dataset Card for \"rag_oasst\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rag_oasst\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rag_oasst\"\n\nMore Information needed"
] |
6b3c53f655ac741a35e9fc8ef47ec00eaf8b3714
|
# Dataset Card for "donut_vqa_ISynHMP_all_labels_modified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
warshakhan/donut_vqa_ISynHMP_all_labels_modified
|
[
"region:us"
] |
2023-09-28T06:48:17+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 583333339.0, "num_examples": 2800}, {"name": "valid", "num_bytes": 85997587.0, "num_examples": 400}, {"name": "test", "num_bytes": 173591889.0, "num_examples": 800}], "download_size": 165381311, "dataset_size": 842922815.0}}
|
2023-09-28T07:29:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "donut_vqa_ISynHMP_all_labels_modified"
More Information needed
|
[
"# Dataset Card for \"donut_vqa_ISynHMP_all_labels_modified\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"donut_vqa_ISynHMP_all_labels_modified\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"donut_vqa_ISynHMP_all_labels_modified\"\n\nMore Information needed"
] |
cf0fe9ef85401b18e23bf88d4d443dc596ef1e92
|
# Dataset Card for "fonts_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yuanmei424/fonts_en
|
[
"region:us"
] |
2023-09-28T06:55:51+00:00
|
{"dataset_info": {"features": [{"name": "edit_prompt", "dtype": "string"}, {"name": "input_image", "dtype": "image"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 84494062422.25, "num_examples": 19837823}], "download_size": 1463236645, "dataset_size": 84494062422.25}}
|
2023-09-30T12:43:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fonts_en"
More Information needed
|
[
"# Dataset Card for \"fonts_en\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fonts_en\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fonts_en\"\n\nMore Information needed"
] |
85c7c326fcfb9c6587daabd9582e7cfaddcc65bd
|
# NexusRaven API Evaluation dataset
Please see [blog post](http://nexusflow.ai/blog) or [NexusRaven Github repo](https://github.com/nexusflowai/NexusRaven) for more information.
## License
The evaluation data in this repository consists primarily of our own curated evaluation data that only uses open source commercializable models. However, we include general domain data from the ToolLLM and ToolAlpaca papers. Since the data in the ToolLLM and ToolAlpaca works use OpenAI's GPT models for the generated content, the data is not commercially licensable, even if our own data is. As a result, the evaluation data used here is strictly non-commercial under [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/). Thank you for understanding!
## References
We thank the following authors and entities for their evaluation data, which we leveraged to produce the results contained in this repository. Their citations can be found below
1. ToolAlpaca team
2. ToolLLM team
```
@misc{tang2023toolalpaca,
title={ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases},
author={Qiaoyu Tang and Ziliang Deng and Hongyu Lin and Xianpei Han and Qiao Liang and Boxi Cao and Le Sun},
year={2023},
eprint={2306.05301},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{qin2023toolllm,
title={ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs},
author={Yujia Qin and Shihao Liang and Yining Ye and Kunlun Zhu and Lan Yan and Yaxi Lu and Yankai Lin and Xin Cong and Xiangru Tang and Bill Qian and Sihan Zhao and Runchu Tian and Ruobing Xie and Jie Zhou and Mark Gerstein and Dahai Li and Zhiyuan Liu and Maosong Sun},
year={2023},
eprint={2307.16789},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Citation
```
@misc{nexusraven,
title={NexusRaven: Surpassing the state-of-the-art in open-source function calling LLMs},
author={Nexusflow.ai team},
year={2023},
url={http://nexusflow.ai/blog}
}
```
## Contact
Please reach out to [email protected] for any questions!
|
Nexusflow/NexusRaven_API_evaluation
|
[
"arxiv:2306.05301",
"arxiv:2307.16789",
"region:us"
] |
2023-09-28T06:58:02+00:00
|
{"dataset_info": [{"config_name": "outputs_in_toolllm_format", "features": [{"name": "response", "list": [{"name": "function_call", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "task_id", "dtype": "int64"}, {"name": "timestamp", "dtype": "float64"}]}], "splits": [{"name": "train", "num_bytes": 303376, "num_examples": 348}], "download_size": 83053, "dataset_size": 303376}, {"config_name": "raw_api_list", "features": [{"name": "dataset", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "args_dicts", "list": [{"name": "default", "dtype": "null"}, {"name": "description", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "required", "dtype": "bool"}, {"name": "type", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 22276, "num_examples": 2}], "download_size": 10949, "dataset_size": 22276}, {"config_name": "raw_queries", "features": [{"name": "dataset", "dtype": "string"}, {"name": "query_dict", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 466227, "num_examples": 339}], "download_size": 98527, "dataset_size": 466227}, {"config_name": "standardized_api_list", "features": [{"name": "dataset", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "args_dicts", "list": [{"name": "default", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "required", "dtype": "bool"}, {"name": "type", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 47776, "num_examples": 65}], "download_size": 27751, "dataset_size": 47776}, {"config_name": "standardized_queries", "features": [{"name": "dataset", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "python_function_name", "dtype": "string"}, {"name": "python_args_dict", "dtype": "string"}, {"name": "context_functions", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 153860, "num_examples": 318}], "download_size": 36721, "dataset_size": 153860}], "configs": [{"config_name": "outputs_in_toolllm_format", "data_files": [{"split": "train", "path": "outputs_in_toolllm_format/train-*"}]}, {"config_name": "raw_queries", "data_files": [{"split": "train", "path": "raw_queries/train-*"}]}, {"config_name": "standardized_api_list", "data_files": [{"split": "train", "path": "standardized_api_list/train-*"}]}, {"config_name": "standardized_queries", "data_files": [{"split": "train", "path": "standardized_queries/train-*"}]}]}
|
2023-09-29T04:19:42+00:00
|
[
"2306.05301",
"2307.16789"
] |
[] |
TAGS
#arxiv-2306.05301 #arxiv-2307.16789 #region-us
|
# NexusRaven API Evaluation dataset
Please see blog post or NexusRaven Github repo for more information.
## License
The evaluation data in this repository consists primarily of our own curated evaluation data that only uses open source commercializable models. However, we include general domain data from the ToolLLM and ToolAlpaca papers. Since the data in the ToolLLM and ToolAlpaca works use OpenAI's GPT models for the generated content, the data is not commercially licensable, even if our own data is. As a result, the evaluation data used here is strictly non-commercial under CC-BY-NC-4.0. Thank you for understanding!
## References
We thank the following authors and entities for their evaluation data, which we leveraged to produce the results contained in this repository. Their citations can be found below
1. ToolAlpaca team
2. ToolLLM team
## Contact
Please reach out to info@URL for any questions!
|
[
"# NexusRaven API Evaluation dataset\nPlease see blog post or NexusRaven Github repo for more information.",
"## License\nThe evaluation data in this repository consists primarily of our own curated evaluation data that only uses open source commercializable models. However, we include general domain data from the ToolLLM and ToolAlpaca papers. Since the data in the ToolLLM and ToolAlpaca works use OpenAI's GPT models for the generated content, the data is not commercially licensable, even if our own data is. As a result, the evaluation data used here is strictly non-commercial under CC-BY-NC-4.0. Thank you for understanding!",
"## References\nWe thank the following authors and entities for their evaluation data, which we leveraged to produce the results contained in this repository. Their citations can be found below\n\n1. ToolAlpaca team\n2. ToolLLM team",
"## Contact\nPlease reach out to info@URL for any questions!"
] |
[
"TAGS\n#arxiv-2306.05301 #arxiv-2307.16789 #region-us \n",
"# NexusRaven API Evaluation dataset\nPlease see blog post or NexusRaven Github repo for more information.",
"## License\nThe evaluation data in this repository consists primarily of our own curated evaluation data that only uses open source commercializable models. However, we include general domain data from the ToolLLM and ToolAlpaca papers. Since the data in the ToolLLM and ToolAlpaca works use OpenAI's GPT models for the generated content, the data is not commercially licensable, even if our own data is. As a result, the evaluation data used here is strictly non-commercial under CC-BY-NC-4.0. Thank you for understanding!",
"## References\nWe thank the following authors and entities for their evaluation data, which we leveraged to produce the results contained in this repository. Their citations can be found below\n\n1. ToolAlpaca team\n2. ToolLLM team",
"## Contact\nPlease reach out to info@URL for any questions!"
] |
[
22,
25,
128,
53,
13
] |
[
"passage: TAGS\n#arxiv-2306.05301 #arxiv-2307.16789 #region-us \n# NexusRaven API Evaluation dataset\nPlease see blog post or NexusRaven Github repo for more information.## License\nThe evaluation data in this repository consists primarily of our own curated evaluation data that only uses open source commercializable models. However, we include general domain data from the ToolLLM and ToolAlpaca papers. Since the data in the ToolLLM and ToolAlpaca works use OpenAI's GPT models for the generated content, the data is not commercially licensable, even if our own data is. As a result, the evaluation data used here is strictly non-commercial under CC-BY-NC-4.0. Thank you for understanding!## References\nWe thank the following authors and entities for their evaluation data, which we leveraged to produce the results contained in this repository. Their citations can be found below\n\n1. ToolAlpaca team\n2. ToolLLM team## Contact\nPlease reach out to info@URL for any questions!"
] |
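The `standardized_queries` config in the NexusRaven metadata above pairs each prompt with the reference function name, argument dict, and candidate functions. A minimal sketch of loading that config and scoring a predicted call follows; the exact-match logic and the `literal_eval` parsing of `python_args_dict` are illustrative assumptions, not the repository's official evaluator.

```python
import ast
from datasets import load_dataset

# Config and column names are taken from the dataset_info block above.
queries = load_dataset(
    "Nexusflow/NexusRaven_API_evaluation",
    "standardized_queries",
    split="train",
)

def matches_reference(example, predicted_name, predicted_args):
    """Check a predicted function call against the annotated reference call."""
    # python_args_dict is stored as a string; parsing it with literal_eval is an assumption.
    reference_args = ast.literal_eval(example["python_args_dict"])
    return (
        predicted_name == example["python_function_name"]
        and predicted_args == reference_args
    )
```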
9c9c450324a8f30006ea154f3af85dbcaf0feb24
|
# Dataset of Nishikigi Chisato
This is the dataset of Nishikigi Chisato, containing 298 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 298 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 688 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 802 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 298 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 298 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 298 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 688 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 688 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 601 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 802 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 802 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/nishikigi_chisato_lycorisrecoil
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T07:07:41+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T07:18:42+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Nishikigi Chisato
============================
This is the dataset of Nishikigi Chisato, containing 298 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
97ff1096510e660dd238ddf3798b197ee2aaf1be
|
# Dataset Card for "Labeled_Data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jhuang14/Labeled_Data
|
[
"region:us"
] |
2023-09-28T07:32:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "airplane", "1": "bustruck", "2": "other", "3": "rail"}}}}], "splits": [{"name": "train", "num_bytes": 1652124.1515151516, "num_examples": 92}, {"name": "test", "num_bytes": 718314.8484848485, "num_examples": 40}], "download_size": 2372957, "dataset_size": 2370439.0}}
|
2023-09-28T07:32:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Labeled_Data"
More Information needed
|
[
"# Dataset Card for \"Labeled_Data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Labeled_Data\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Labeled_Data\"\n\nMore Information needed"
] |
f45e12814f0040989987946ec4942eb33c55fa58
|
# Dataset Card for "nafkhan_ft_dataset_with_id_amr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
abdiharyadi/nafkhan_ft_dataset_with_id_amr
|
[
"region:us"
] |
2023-09-28T07:44:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "en_amr", "dtype": "string"}, {"name": "id_amr", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 106147279, "num_examples": 92867}, {"name": "validation", "num_bytes": 2278476, "num_examples": 1722}, {"name": "test", "num_bytes": 1866019, "num_examples": 1371}], "download_size": 41233166, "dataset_size": 110291774}}
|
2023-10-12T23:19:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "nafkhan_ft_dataset_with_id_amr"
More Information needed
|
[
"# Dataset Card for \"nafkhan_ft_dataset_with_id_amr\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"nafkhan_ft_dataset_with_id_amr\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"nafkhan_ft_dataset_with_id_amr\"\n\nMore Information needed"
] |
08641fbcbabfd0f8a2bc46922d2c08a4ffc6ffe0
|
# Dataset of Inoue Takina
This is the dataset of Inoue Takina, containing 286 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 286 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 612 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 686 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 286 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 286 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 286 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 612 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 612 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 521 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 686 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 686 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/inoue_takina_lycorisrecoil
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T07:54:04+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T07:58:24+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Inoue Takina
=======================
This is the dataset of Inoue Takina, containing 286 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
c2dd8e3876851f01f09df520329b5e35f16545c5
|
# Dataset of Nakahara Mizuki
This is the dataset of Nakahara Mizuki, containing 116 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 116 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 278 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 316 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 116 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 116 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 116 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 278 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 278 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 208 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 316 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 316 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/nakahara_mizuki_lycorisrecoil
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T08:10:52+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T08:12:31+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Nakahara Mizuki
==========================
This is the dataset of Nakahara Mizuki, containing 116 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
8ebc51887b465d2359b8d2463065c215b82cea28
|
# Dataset of Kurumi
This is the dataset of Kurumi, containing 99 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 99 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 226 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 246 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 99 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 99 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 99 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 226 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 226 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 175 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 246 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 246 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/kurumi_lycorisrecoil
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T08:24:10+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T08:25:52+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kurumi
=================
This is the dataset of Kurumi, containing 99 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
84b5c5ea4e02ee9f11cf8d9d5cf82ed382843063
|
# Dataset of Harukawa Fuki
This is the dataset of Harukawa Fuki, containing 64 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 64 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 127 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 151 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 64 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 64 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 64 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 127 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 127 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 105 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 151 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 151 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/harukawa_fuki_lycorisrecoil
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-28T08:32:25+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-28T08:37:37+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Harukawa Fuki
========================
This is the dataset of Harukawa Fuki, containing 64 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
f22d344fa944145e69e10ee7b91723a59c2f013f
|
# Dataset Card for "Persian-MultiChoice-QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SeyedAli/Persian-MultiChoice-QA
|
[
"region:us"
] |
2023-09-28T08:34:28+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "candidates", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 923468, "num_examples": 2808}, {"name": "test", "num_bytes": 235093, "num_examples": 702}], "download_size": 452806, "dataset_size": 1158561}}
|
2023-09-28T09:46:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Persian-MultiChoice-QA"
More Information needed
|
[
"# Dataset Card for \"Persian-MultiChoice-QA\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Persian-MultiChoice-QA\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Persian-MultiChoice-QA\"\n\nMore Information needed"
] |