| Column | Type | Min length | Max length |
| --- | --- | --- | --- |
| sha | string | 40 | 40 |
| text | string | 0 | 13.4M |
| id | string | 2 | 117 |
| tags | list | | |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 31.7M |
| last_modified | string | 25 | 25 |
8594ad93c7a05b0ecc54c29cd02b420488bf43be
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Aiyshwariya/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@bestuh](https://huggingface.co/bestuh) for evaluating this model.
autoevaluate/autoeval-eval-squad-plain_text-fee91a-2282172274
[ "autotrain", "evaluation", "region:us" ]
2022-11-30T17:11:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Aiyshwariya/bert-finetuned-squad", "metrics": ["squad", "squad_v2"], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-11-30T17:14:25+00:00
47893bb4d8be15289e58d75665af1afad1dde61d
# Dataset Card for clearance ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://moleculenet.org/** - **Repository: https://github.com/deepchem/deepchem/tree/master** - **Paper: https://arxiv.org/abs/1703.00564** ### Dataset Summary `clearance` is a dataset included in [Chemberta-2 benchmarking](https://arxiv.org/pdf/2209.01712.pdf). ## Dataset Structure ### Data Fields Each split contains * `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule * `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule * `target`: ### Data Splits The dataset is split into an 80/10/10 train/valid/test split using scaffold split. 
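The split proportions can be sketched in plain Python. Note this is only an illustration of the 80/10/10 partition: a real scaffold split first groups molecules by scaffold (e.g., via RDKit) so that no scaffold appears in more than one split.

```python
# Sketch of an 80/10/10 partition by position. A true scaffold split assigns
# whole scaffold groups to splits; here we only illustrate the proportions.
def split_80_10_10(items):
    n = len(items)
    n_train = int(n * 0.8)
    n_valid = int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_valid],
            items[n_train + n_valid:])

train, valid, test = split_80_10_10(list(range(100)))
```
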
### Source Data #### Initial Data Collection and Normalization Data was originally generated by the Pande Group at Stanford ### Licensing Information This dataset was originally released under an MIT license ### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.1703.00564, doi = {10.48550/ARXIV.1703.00564}, url = {https://arxiv.org/abs/1703.00564}, author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay}, keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences}, title = {MoleculeNet: A Benchmark for Molecular Machine Learning}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ### Contributions Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
zpn/clearance
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:n<1K", "license:mit", "bio", "bio-chem", "molnet", "molecule-net", "biophysics", "arxiv:1703.00564", "arxiv:2209.01712", "region:us" ]
2022-11-30T17:13:08+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "clearance", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"]}
2022-11-30T17:20:47+00:00
7c185766c22ed4bc33f33e9831376e1985c9a2df
# Dataset Card for lipo ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://moleculenet.org/** - **Repository: https://github.com/deepchem/deepchem/tree/master** - **Paper: https://arxiv.org/abs/1703.00564** ### Dataset Summary `lipo` is a dataset included in [MoleculeNet](https://moleculenet.org/). It contains experimental measurements of the octanol/water distribution coefficient (logD at pH 7.4). ## Dataset Structure ### Data Fields Each split contains * `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule * `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule * `target`: octanol/water distribution coefficient (logD at pH 7.4) ### Data Splits The dataset is split into an 80/10/10 train/valid/test split using scaffold split. 
### Source Data #### Initial Data Collection and Normalization Data was originally generated by the Pande Group at Stanford ### Licensing Information This dataset was originally released under an MIT license ### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.1703.00564, doi = {10.48550/ARXIV.1703.00564}, url = {https://arxiv.org/abs/1703.00564}, author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay}, keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences}, title = {MoleculeNet: A Benchmark for Molecular Machine Learning}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ### Contributions Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
zpn/lipo
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "license:mit", "bio", "bio-chem", "molnet", "molecule-net", "biophysics", "arxiv:1703.00564", "region:us" ]
2022-11-30T17:23:53+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "lipo", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"]}
2022-11-30T17:25:11+00:00
788e8b5132cc61fc19a71a4f9fb54a15b4d10b29
# Dataset Card for bbbp ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://moleculenet.org/** - **Repository: https://github.com/deepchem/deepchem/tree/master** - **Paper: https://arxiv.org/abs/1703.00564** ### Dataset Summary `bbbp` is a dataset included in [MoleculeNet](https://moleculenet.org/). This dataset has binary labels of blood-brain barrier penetration (permeability). ## Dataset Structure ### Data Fields Each split contains * `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule * `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule * `target`: blood-brain barrier penetration (permeability) ### Data Splits The dataset is split into an 80/10/10 train/valid/test split using scaffold split. 
### Source Data #### Initial Data Collection and Normalization Data was originally generated by the Pande Group at Stanford ### Licensing Information This dataset was originally released under an MIT license ### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.1703.00564, doi = {10.48550/ARXIV.1703.00564}, url = {https://arxiv.org/abs/1703.00564}, author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay}, keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences}, title = {MoleculeNet: A Benchmark for Molecular Machine Learning}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ### Contributions Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
zpn/bbbp
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "license:mit", "bio", "bio-chem", "molnet", "molecule-net", "biophysics", "arxiv:1703.00564", "region:us" ]
2022-11-30T17:27:29+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "bbbp", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"]}
2022-12-09T20:33:42+00:00
01c891497eccf050072a24aca0d66613e10d3310
# Dataset Card for fcc-comments ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository: https://github.com/slnader/fcc-comments ** - **Paper: https://doi.org/10.1002/poi3.327 ** ### Dataset Summary Online comment floods during public consultations have posed unique governance challenges for regulatory bodies seeking relevant information on proposed regulations. How should regulatory bodies separate spam and fake comments from genuine submissions by the public, especially when fake comments are designed to imitate ordinary citizens? How can regulatory bodies achieve both breadth and depth in their citations to the comment corpus? What is the best way to select comments that represent the average submission and comments that supply highly specialized information? `fcc-comments` is an annotated version of the comment corpus from the Federal Communications Commission's (FCC) 2017 "Restoring Internet Freedom" proceeding. 
The source data were downloaded directly from the FCC's Electronic Comment Filing System (ECFS) between January and February of 2019 and include raw comment text and metadata on comment submissions. The comment data were processed to be in a consistent format (machine-readable pdf or plain text), and annotated with three types of information: whether the comment was cited in the agency's final order, the type of commenter (individual, interest group, business group), and whether the comment was associated with an in-person meeting. The release also includes query-term and document-term matrices to facilitate keyword searches on the comment corpus. An example of how these can be used with the bm25 algorithm can be found [here](https://github.com/slnader/fcc-comments/blob/main/process_comments/1_score_comments.py). ## Dataset Structure FCC relational database (fcc.pgsql): The core components of the database include a table for submission metadata, a table for attachment metadata, a table for filer metadata, and a table that contains comment text if submitted in express format. In addition to these core tables, there are several derived tables specific to the analyses in the paper, including which submissions and attachments were cited in the final order, which submissions were associated with in-person meetings, and which submissions were associated with interest groups. Full documentation of the tables can be found in fcc_database.md. Attachments (attachments.tar.gz): Attachments to submissions that could be converted to text via OCR and saved in machine-readable pdf format. The filenames are formatted as [submission_id]_[document_id].pdf, where submission_id and document_id are keys in the relational database. Search datasets (search.tar.gz): Objects to facilitate prototyping of search algorithms on the comment corpus. 
Contains the following elements:

| Filename | Description |
| ----------- | ----------- |
| query_dtm.pickle | Query-term matrix (79x3986) in sparse csr format (rows are queries, columns are bigram keyword counts). |
| query_text.pickle | Dictionary keyed by the paragraph number in the FCC’s Notice of Proposed Rulemaking. Values are the text of the query containing a call for comments. |
| search_dtms_express.pickle | Document-term matrix for express comments (3800691x3986) in sparse csr format (rows are comment pages, columns are bigram keyword counts). |
| search_index_express.pickle | Pandas dataframe containing unique id and total term length for express comments. |
| search_dtms.pickle | Document-term matrix for standard comment attachments (44655x3986) in sparse csr format (rows are comment pages, columns are bigram keyword counts). |
| search_index.pickle | Pandas dataframe containing unique id and total term length for standard comment attachments. |

### Data Fields

The following tables are available in fcc.pgsql:

- comments: plain text comments associated with submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| comment_id | character varying(64) | unique id for plain text comment |
| comment_text | text | raw text of plain text comment |
| row_id | integer | row sequence for plain text comments |

- submissions: metadata for submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |
| submission_type | character varying(100) | type of submission (e.g., comment, reply, statement) |
| express_comment | numeric | 1 if express comment |
| date_received | date | date submission was received |
| contact_email | character varying(255) | submitter email address |
| city | character varying(255) | submitter city |
| address_line_1 | character varying(255) | submitter address line 1 |
| address_line_2 | character varying(255) | submitter address line 2 |
| state | character varying(255) | submitter state |
| zip_code | character varying(50) | submitter zip |
| comment_id | character varying(64) | unique id for plain text comment |

- filers: names of filers associated with submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |
| filer_name | character varying(250) | name of filer associated with submission |

- documents: attachments associated with submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |
| document_name | text | filename of attachment |
| download_status | numeric | status of attachment download |
| document_id | character varying(64) | unique id for attachment |
| file_extension | character varying(4) | file extension for attachment |

- filers_cited: citations from final order

| column | type | description |
| ----------- | ----------- | ----------- |
| point | numeric | paragraph number in final order |
| filer_name | character varying(250) | name of cited filer |
| submission_type | character varying(12) | type of submission as indicated in final order |
| page_numbers | text[] | cited page numbers |
| cite_id | integer | unique id for citation |
| filer_id | character varying(250) | id for cited filer |

- docs_cited: attachments associated with cited submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| cite_id | numeric | unique id for citation |
| submission_id | character varying(20) | unique id for submission |
| document_id | character varying(64) | unique id for attachment |

- near_duplicates: lookup table for comment near-duplicates

| column | type | description |
| ----------- | ----------- | ----------- |
| target_document_id | | unique id for target document |
| duplicate_document_id | | unique id for duplicate of target document |

- exact_duplicates: lookup table for comment exact duplicates

| column | type | description |
| ----------- | ----------- | ----------- |
| target_document_id | character varying(100) | unique id for target document |
| duplicate_document_id | character varying(100) | unique id for duplicate of target document |

- in_person_exparte: submissions associated with ex parte meeting

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |

- interest_groups: submissions associated with interest groups

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |
| business | numeric | 1 if business group, 0 otherwise |

## Dataset Creation

### Curation Rationale

The data were curated to perform information retrieval and summarization tasks as documented in https://doi.org/10.1002/poi3.327.

### Source Data

#### Initial Data Collection and Normalization

The data for this study come from the FCC's Electronic Comment Filing System (ECFS), accessed between January and February of 2019. I converted the API responses into a normalized, relational database containing information on 23,951,967 submissions. 23,938,686 "express" submissions contained a single plain text comment submitted directly through the comment form. 13,821 "standard" submissions contained one or more comment documents submitted as attachments in various file formats. While the FCC permitted any file format for attachments, I only consider documents attached in pdf, plain text, rich text, and Microsoft Word file formats, and I drop submitted documents that were simply copies of the FCC’s official documents (e.g., the NPRM itself). Using standard OCR software, I attempted to convert all attachments into plain text and saved them as machine-readable pdfs.

#### Who are the source language producers?

All submitters of public comments during the public comment period (but see note on fake comments in considerations). 
### Annotations #### Annotation process - Citations: I consider citations from the main text of the FCC's final rule. I did not include citations to supporting documents not available through ECFS (e.g., court decisions), nor did I include citations to submissions from prior FCC proceedings. The direct citations to filed submissions are included in a series of 1,186 footnotes. The FCC’s citation format typically followed a relatively standard pattern: the name of the filer (e.g., Verizon), a description of the document (e.g., Comment), and at times a page number. I extracted citations from the text using regular expressions. Based on a random sample of paragraphs from the final order, the regular expressions identified 98% of eligible citations, while successfully excluding all non-citation text. In total, this produced 1,886 unique citations. I then identified which of the comments were cited. First, I identified all documents from the cited filer that had enough pages to contain the page number cited (if provided), and, where applicable, whose filename contained the moniker from the FCC’s citation (e.g., "Reply"). The majority of citations matched to only one possible comment submitted, and I identified the remaining cited comments through manual review of the citations. In this way, I was able to tag documents associated with all but three citations. When the same cited document was submitted under multiple separate submissions, I tagged all versions of the document as being cited. - Commenter type: Comments are labeled as mass comments if 10 or more duplicate or near-duplicate copies were submitted by individual commenters. Near-duplicates were defined as comments with non-zero identical information scores. To identify the type of commenter for non-mass comments, I take advantage of the fact that the vast majority of organized groups preferred standard submissions over express submissions. 
Any non-mass comment submitted as an express comment was coded as coming from an individual. To distinguish between individuals and organizations that used standard submissions, I use a first name and surname database from the names dataset Python package to characterize filer names as belonging to individuals or organizations. I also use the domain of the submitter’s email address to re-categorize comments as coming from organizations if they were submitted on behalf of organizations by an individual. Government officials were identified by their .gov email addresses. I manually review this procedure for mischaracterizations. After obtaining a list of organization names, I manually code each one as belonging to a business group or a non-business group. Government officials writing in their official capacity were categorized as a non-business group. - In-person meetings: To identify which commenters held in-person meetings with the agency, I collect all comments labeled as an ex-parte submission in the ECFS. I manually review these submissions for mention of an in-person meeting. I label a commenter as having held an in-person meeting if they submitted at least one ex-parte document that mentioned an in-person meeting. #### Who are the annotators? Annotations are a combination of automated and manual review done by the author. ### Personal and Sensitive Information This dataset may contain personal and sensitive information, as there were no restrictions on what commenters could submit to the agency. This dataset also contains numerous examples of profanity and spam. These comments represent what the FCC decided was appropriate to share publicly on their own website. 
## Considerations for Using the Data ### Discussion of Biases This proceeding was famous for the large number of "fake" comments (comments impersonating ordinary citizens) submitted to the agency (see [this report](https://ag.ny.gov/sites/default/files/oag-fakecommentsreport.pdf) by the NY AG for more information). As such, this comment corpus contains a mix of computer-generated and natural language, and there is currently no way to reliably separate mass comments submitted with the approval of the commenter from those submitted on behalf of the commenter without their knowledge. ## Additional Information ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International. ### Citation Information ``` @article{handan2022, title={Do fake online comments pose a threat to regulatory policymaking? Evidence from Internet regulation in the United States}, author={Handan-Nader, Cassandra}, journal={Policy \& Internet}, year={2022} } ```
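As a toy illustration of the kind of keyword scoring the released query-term and document-term matrices support, here is a generic from-scratch BM25 sketch over a few example strings (this is not the repository's scoring script, which operates on the released pickles):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query terms with BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # document frequency of each query term across the corpus
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [doc.split() for doc in [
    "net neutrality rules protect consumers",
    "restoring internet freedom order",
    "comment on net neutrality",
]]
scores = bm25_scores("net neutrality".split(), docs)
```

Documents containing none of the query terms score zero; among matching documents, shorter ones score higher for the same term counts.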
slnader/fcc-comments
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "notice and comment", "regulation", "government", "region:us" ]
2022-11-30T17:38:32+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "fcc-comments", "tags": ["notice and comment", "regulation", "government"]}
2022-11-30T19:05:23+00:00
2b8093c6599dd3999aa44a7a14af75c52df9362d
# Yellow Module embedding ## Samples <img alt="Samples" src="https://huggingface.co/datasets/DJSoft/yellow_module/resolve/main/samples.jpg" style="max-height: 80vh"/> ## About Use this Stable Diffusion embedding to achieve the Project Diva Yellow outfit ## Usage To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt, add __yellow_module-*__ Add **( :1.0)** around it to modify its weight ## Additional info In order to improve some details you can use the following words: **Positive prompt:** blue eyes, white sport shorts, yellow see-through babydoll, unzipped, yellow legwear **Negative prompt:** white babydoll, skirt, black shorts, yellow shorts ## Included Files - 15000 steps Usage: **yellow_module-15000** ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
DJSoft/yellow_module
[ "license:creativeml-openrail-m", "region:us" ]
2022-11-30T17:58:12+00:00
{"license": "creativeml-openrail-m"}
2022-11-30T18:11:04+00:00
c14b14f8b23079d25070d66b7db892b73d819632
# Dataset Card for "google" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
steciuk/google
[ "region:us" ]
2022-11-30T18:33:07+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1640375, "num_examples": 10504}], "download_size": 971872, "dataset_size": 1640375}}
2022-11-30T18:33:20+00:00
f45d740c271d390ae25c4ec4b83bb1a1386d6419
# Dataset Card for "imdb" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
steciuk/imdb
[ "region:us" ]
2022-11-30T18:33:22+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 52901123, "num_examples": 40000}], "download_size": 34391296, "dataset_size": 52901123}}
2022-11-30T18:33:38+00:00
a1f2c76cfc30ad95d57d94bffa221aa423c63203
KirbyShrine/plainbagbean
[ "license:cc-by-nc-nd-4.0", "region:us" ]
2022-11-30T18:49:20+00:00
{"license": "cc-by-nc-nd-4.0"}
2022-11-30T18:49:49+00:00
e65b73b9d65fdcd430b4574eca0ef9ba6c346e4e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary-C-r2 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-66d70e-2296872703
[ "autotrain", "evaluation", "region:us" ]
2022-11-30T19:05:45+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary-C-r2", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2022-12-01T01:21:36+00:00
045259d09f4357fc85422d0bf6a7f6d0da41a4fa
# Caltech-101 Webdataset (Test set only) Original paper: [One-shot learning of object categories](https://ieeexplore.ieee.org/document/1597116) Homepage: https://data.caltech.edu/records/mzrjq-6wc02 Bibtex: ``` @misc{li_andreeto_ranzato_perona_2022, title={Caltech 101}, DOI={10.22002/D1.20086}, publisher={CaltechDATA}, author={Li, Fei-Fei and Andreeto, Marco and Ranzato, Marc'Aurelio and Perona, Pietro}, year={2022}, month={Apr} } ```
djghosh/wds_vtab-caltech101_test
[ "region:us" ]
2022-11-30T21:35:43+00:00
{}
2022-12-12T20:37:10+00:00
2510fd71df6bdc8a87a62a5bbb342d1e6dc30dcb
# CIFAR-100 Webdataset (Test set only) Original paper: [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) Homepage: https://www.cs.toronto.edu/~kriz/cifar.html Bibtex: ``` @TECHREPORT{Krizhevsky09learningmultiple, author = {Alex Krizhevsky}, title = {Learning multiple layers of features from tiny images}, institution = {}, year = {2009} } ```
djghosh/wds_vtab-cifar100_test
[ "region:us" ]
2022-11-30T21:41:23+00:00
{}
2022-12-12T20:25:53+00:00
196667de6472be9447f917288b05750024f2002b
# CLEVR Count All Webdataset (Test set only) Original paper: [CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning](https://arxiv.org/pdf/1612.06890.pdf) Homepage: https://cs.stanford.edu/people/jcjohns/clevr/ Bibtex: ``` @article{DBLP:journals/corr/JohnsonHMFZG16, author = {Justin Johnson and Bharath Hariharan and Laurens van der Maaten and Li Fei{-}Fei and C. Lawrence Zitnick and Ross B. Girshick}, title = {{CLEVR:} {A} Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning}, journal = {CoRR}, volume = {abs/1612.06890}, year = {2016}, url = {http://arxiv.org/abs/1612.06890}, eprinttype = {arXiv}, eprint = {1612.06890}, timestamp = {Sat, 19 Oct 2019 16:30:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/JohnsonHMFZG16.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
djghosh/wds_vtab-clevr_count_all_test
[ "arxiv:1612.06890", "region:us" ]
2022-11-30T21:41:46+00:00
{}
2022-12-12T20:40:49+00:00
960ed30e5ff2a205c80befe503984946fad59106
# CLEVR Closest Object Distance Webdataset (Test set only) Original paper: [CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning](https://arxiv.org/pdf/1612.06890.pdf) Homepage: https://cs.stanford.edu/people/jcjohns/clevr/ Bibtex: ``` @article{DBLP:journals/corr/JohnsonHMFZG16, author = {Justin Johnson and Bharath Hariharan and Laurens van der Maaten and Li Fei{-}Fei and C. Lawrence Zitnick and Ross B. Girshick}, title = {{CLEVR:} {A} Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning}, journal = {CoRR}, volume = {abs/1612.06890}, year = {2016}, url = {http://arxiv.org/abs/1612.06890}, eprinttype = {arXiv}, eprint = {1612.06890}, timestamp = {Sat, 19 Oct 2019 16:30:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/JohnsonHMFZG16.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
djghosh/wds_vtab-clevr_closest_object_distance_test
[ "arxiv:1612.06890", "region:us" ]
2022-11-30T21:42:58+00:00
{}
2022-12-12T20:40:20+00:00
154bf10ce86beabe53eff464079fe9d6186f13ac
# Country-211 (Test set only) Original paper: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) Homepage: https://github.com/openai/CLIP/blob/main/data/country211.md Derived from YFCC100M: https://multimediacommons.wordpress.com/yfcc100m-core-dataset/ Bibtex: ``` @article{DBLP:journals/corr/abs-2103-00020, author = {Alec Radford and Jong Wook Kim and Chris Hallacy and Aditya Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, title = {Learning Transferable Visual Models From Natural Language Supervision}, journal = {CoRR}, volume = {abs/2103.00020}, year = {2021}, url = {https://arxiv.org/abs/2103.00020}, eprinttype = {arXiv}, eprint = {2103.00020}, timestamp = {Thu, 04 Mar 2021 17:00:40 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-00020.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
djghosh/wds_country211_test
[ "arxiv:2103.00020", "region:us" ]
2022-11-30T21:44:34+00:00
{}
2022-12-12T20:44:50+00:00
276b0059cb0cc956aaf588225d5197c13924adb3
# Describable Textures Dataset (Test set only) Original paper: [Describing Textures in the Wild](http://www.robots.ox.ac.uk/~vgg/publications/2014/Cimpoi14/cimpoi14.pdf) Homepage: https://www.robots.ox.ac.uk/~vgg/data/dtd/ Bibtex: ``` @InProceedings{cimpoi14describing, Author = {M. Cimpoi and S. Maji and I. Kokkinos and S. Mohamed and and A. Vedaldi}, Title = {Describing Textures in the Wild}, Booktitle = {Proceedings of the {IEEE} Conf. on Computer Vision and Pattern Recognition ({CVPR})}, Year = {2014}} ```
djghosh/wds_vtab-dtd_test
[ "region:us" ]
2022-11-30T21:46:14+00:00
{}
2022-12-12T20:46:48+00:00
d96ddaa13dabad7da66e04c48141560f21cbf7b5
# DMLab Frames (Test set only) Original paper: [The Visual Task Adaptation Benchmark](https://arxiv.org/abs/1910.04867) Homepage: https://github.com/google-research/task_adaptation Bibtex: ``` @article{zhai2019visual, title={The Visual Task Adaptation Benchmark}, author={Xiaohua Zhai and Joan Puigcerver and Alexander Kolesnikov and Pierre Ruyssen and Carlos Riquelme and Mario Lucic and Josip Djolonga and Andre Susano Pinto and Maxim Neumann and Alexey Dosovitskiy and Lucas Beyer and Olivier Bachem and Michael Tschannen and Marcin Michalski and Olivier Bousquet and Sylvain Gelly and Neil Houlsby}, year={2019}, eprint={1910.04867}, archivePrefix={arXiv}, primaryClass={cs.CV}, url = {https://arxiv.org/abs/1910.04867} } ```
djghosh/wds_vtab-dmlab_test
[ "arxiv:1910.04867", "region:us" ]
2022-11-30T21:46:50+00:00
{}
2022-12-12T20:49:10+00:00
4d54915cdf934af23a2d4f3b5f895a4fbb7402d3
# dSprites Orientation (Test set only) Original paper: [beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework](https://openreview.net/forum?id=Sy2fzU9gl) Homepage: https://github.com/deepmind/dsprites-dataset Bibtex: ``` @misc{dsprites17, author = {Loic Matthey and Irina Higgins and Demis Hassabis and Alexander Lerchner}, title = {dSprites: Disentanglement testing Sprites dataset}, howpublished= {https://github.com/deepmind/dsprites-dataset/}, year = "2017", } ```
djghosh/wds_vtab-dsprites_label_orientation_test
[ "region:us" ]
2022-11-30T21:47:55+00:00
{}
2022-12-12T20:51:57+00:00
8ded762a7b18bc4b75985cb39cb5355cecf58d27
# dSprites X Position (Test set only) Original paper: [beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework](https://openreview.net/forum?id=Sy2fzU9gl) Homepage: https://github.com/deepmind/dsprites-dataset Bibtex: ``` @misc{dsprites17, author = {Loic Matthey and Irina Higgins and Demis Hassabis and Alexander Lerchner}, title = {dSprites: Disentanglement testing Sprites dataset}, howpublished= {https://github.com/deepmind/dsprites-dataset/}, year = "2017", } ```
djghosh/wds_vtab-dsprites_label_x_position_test
[ "region:us" ]
2022-11-30T21:48:20+00:00
{}
2022-12-12T20:52:19+00:00
e1c187043efbc540f37db8b07af5d949920718fa
# dSprites Y Position (Test set only) Original paper: [beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework](https://openreview.net/forum?id=Sy2fzU9gl) Homepage: https://github.com/deepmind/dsprites-dataset Bibtex: ``` @misc{dsprites17, author = {Loic Matthey and Irina Higgins and Demis Hassabis and Alexander Lerchner}, title = {dSprites: Disentanglement testing Sprites dataset}, howpublished= {https://github.com/deepmind/dsprites-dataset/}, year = "2017", } ```
djghosh/wds_vtab-dsprites_label_y_position_test
[ "region:us" ]
2022-11-30T21:48:45+00:00
{}
2022-12-12T20:52:35+00:00
7d107212549825bf1fc46d322d0839e9cc8a3ff1
# EuroSAT (Test set only) Original paper: [EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification](https://arxiv.org/abs/1709.00029) Homepage: https://github.com/phelber/EuroSAT Bibtex: ``` @article{helber2019eurosat, title={Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification}, author={Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian}, journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing}, year={2019}, publisher={IEEE} } @inproceedings{helber2018introducing, title={Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification}, author={Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian}, booktitle={IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium}, pages={204--207}, year={2018}, organization={IEEE} } ```
djghosh/wds_vtab-eurosat_test
[ "arxiv:1709.00029", "region:us" ]
2022-11-30T21:49:11+00:00
{}
2022-12-12T20:54:32+00:00
6b5e12a741d27b63718dd08365b866a9a8e5e7a5
# FGVC-Aircraft (Test set only) Original paper: [Fine-Grained Visual Classification of Aircraft](https://arxiv.org/abs/1306.5151) Homepage: https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/ Bibtex: ``` @techreport{maji13fine-grained, title = {Fine-Grained Visual Classification of Aircraft}, author = {S. Maji and J. Kannala and E. Rahtu and M. Blaschko and A. Vedaldi}, year = {2013}, archivePrefix = {arXiv}, eprint = {1306.5151}, primaryClass = "cs-cv", } ```
djghosh/wds_fgvc_aircraft_test
[ "arxiv:1306.5151", "region:us" ]
2022-11-30T21:49:32+00:00
{}
2022-12-12T20:57:36+00:00
056ba0c1c51e384f0d64b54b4bd02bea6e3e3382
# Food-101 (Test set only) Original paper: [Food-101 – Mining Discriminative Components with Random Forests](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf) Homepage: https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/ Bibtex: ``` @inproceedings{bossard14, title = {Food-101 -- Mining Discriminative Components with Random Forests}, author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc}, booktitle = {European Conference on Computer Vision}, year = {2014} } ```
djghosh/wds_food101_test
[ "region:us" ]
2022-11-30T21:50:50+00:00
{}
2022-12-12T20:57:21+00:00
7cc5c0dc48e76d0a8a76649234db09145582dc5a
# GTSRB (Test set only) Original paper: [Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition](https://doi.org/10.1016/j.neunet.2012.02.016) Homepage: https://benchmark.ini.rub.de/gtsrb_news.html Bibtex: ``` @article{Stallkamp2012, title = "Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition", journal = "Neural Networks", volume = "", number = "0", pages = " - ", year = "2012", note = "", issn = "0893-6080", doi = "10.1016/j.neunet.2012.02.016", url = "http://www.sciencedirect.com/science/article/pii/S0893608012000457", author = "J. Stallkamp and M. Schlipsing and J. Salmen and C. Igel", keywords = "Traffic sign recognition", keywords = "Machine learning", keywords = "Convolutional neural networks", keywords = "Benchmarking" } ```
djghosh/wds_gtsrb_test
[ "region:us" ]
2022-11-30T21:54:33+00:00
{}
2022-12-12T20:59:36+00:00
ab0b1cc7ac87b9688de640151800cc5c1f8a831d
# Dataset Card for "laion-2b-en-very-unsafe" A version of the laion5b dataset (en subset) containing only strictly `unsafe` images. The dataset was filtered to retain only examples where `punsafe` is present and > 0.9. However, due to the way the NSFW detector was trained, there is a significant number of false positives. There are likely more false positives than genuinely unsafe images.
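The filtering criterion described above — keep only rows where `punsafe` is present and greater than 0.9 — can be sketched in plain Python. The sample records below are invented for illustration; real rows come from the laion2B-en metadata:

```python
# Illustrative sketch of the punsafe filter described above.
# The sample rows are made up; only the URL and punsafe fields are shown.
rows = [
    {"URL": "https://example.com/a.jpg", "punsafe": 0.95},
    {"URL": "https://example.com/b.jpg", "punsafe": 0.10},
    {"URL": "https://example.com/c.jpg", "punsafe": None},  # missing score
]

def is_very_unsafe(row):
    """Keep only rows with a punsafe score present and above 0.9."""
    p = row.get("punsafe")
    return p is not None and p > 0.9

unsafe_rows = [r for r in rows if is_very_unsafe(r)]
print(len(unsafe_rows))  # 1
```

Note that rows with a missing `punsafe` value are dropped, not treated as safe or unsafe.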
Rexhaif/laion-2b-en-very-unsafe
[ "region:us" ]
2022-11-30T22:04:39+00:00
{"dataset_info": {"features": [{"name": "URL", "dtype": "string"}, {"name": "TEXT", "dtype": "string"}, {"name": "WIDTH", "dtype": "int32"}, {"name": "HEIGHT", "dtype": "int32"}, {"name": "similarity", "dtype": "float64"}, {"name": "hash", "dtype": "int64"}, {"name": "punsafe", "dtype": "float32"}, {"name": "pwatermark", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 6799407448, "num_examples": 34607134}], "download_size": 5322013902, "dataset_size": 6799407448}}
2022-11-30T23:18:49+00:00
1d7fbb7e1879b05f9db3ab0775ac68435b8ba385
# ImageNet-Sketch (Test set only) Original paper: [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549) Homepage: https://github.com/HaohanWang/ImageNet-Sketch Bibtex: ``` @inproceedings{wang2019learning, title={Learning Robust Global Representations by Penalizing Local Predictive Power}, author={Wang, Haohan and Ge, Songwei and Lipton, Zachary and Xing, Eric P}, booktitle={Advances in Neural Information Processing Systems}, pages={10506--10518}, year={2019} } ```
djghosh/wds_imagenet_sketch_test
[ "arxiv:1905.13549", "region:us" ]
2022-11-30T23:23:24+00:00
{}
2022-12-12T21:03:25+00:00
4d1d20ffc1718d1db6a1f47bdf5e4da57d671d81
# Dataset Card for [best-selling-video-games] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@arjunpatel7](https://github.com/arjunpatel7) for adding this dataset.
arjunpatel/best-selling-video-games
[ "region:us" ]
2022-11-30T23:25:57+00:00
{}
2022-12-02T23:46:24+00:00
622aa95311079dde7d64d541c411f7c8b796e6b5
# ImageNetV2 Matched Frequency (Test set only) Original paper: [Do ImageNet Classifiers Generalize to ImageNet?](https://arxiv.org/abs/1902.10811) Homepage: https://github.com/modestyachts/ImageNetV2 Bibtex: ``` @article{DBLP:journals/corr/abs-1902-10811, author = {Benjamin Recht and Rebecca Roelofs and Ludwig Schmidt and Vaishaal Shankar}, title = {Do ImageNet Classifiers Generalize to ImageNet?}, journal = {CoRR}, volume = {abs/1902.10811}, year = {2019}, url = {http://arxiv.org/abs/1902.10811}, eprinttype = {arXiv}, eprint = {1902.10811}, timestamp = {Tue, 21 May 2019 18:03:38 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1902-10811.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
djghosh/wds_imagenetv2_test
[ "arxiv:1902.10811", "region:us" ]
2022-11-30T23:26:19+00:00
{}
2022-12-12T21:06:43+00:00
370e992788929ebec1a633652d4fb4330b4a621a
# ImageNet-A (Test set only) Original paper: [Natural Adversarial Examples](https://arxiv.org/abs/1907.07174) Homepage: https://github.com/hendrycks/natural-adv-examples Bibtex: ``` @article{hendrycks2021nae, title={Natural Adversarial Examples}, author={Dan Hendrycks and Kevin Zhao and Steven Basart and Jacob Steinhardt and Dawn Song}, journal={CVPR}, year={2021} } ```
djghosh/wds_imagenet-a_test
[ "arxiv:1907.07174", "region:us" ]
2022-11-30T23:27:38+00:00
{}
2022-12-12T21:08:01+00:00
48413c5561bdd0b4a0df490d46cf8dabee7f6db3
# ImageNet-O (Test set only) Original paper: [Natural Adversarial Examples](https://arxiv.org/abs/1907.07174) Homepage: https://github.com/hendrycks/natural-adv-examples Bibtex: ``` @article{hendrycks2021nae, title={Natural Adversarial Examples}, author={Dan Hendrycks and Kevin Zhao and Steven Basart and Jacob Steinhardt and Dawn Song}, journal={CVPR}, year={2021} } ```
djghosh/wds_imagenet-o_test
[ "arxiv:1907.07174", "region:us" ]
2022-11-30T23:30:09+00:00
{}
2022-12-12T21:08:16+00:00
582a158b4ff1de74de428fc34df09e8cf76fe484
# ImageNet-R (Test set only) Original paper: [The Many Faces of Robustness](https://arxiv.org/abs/2006.16241) Homepage: https://github.com/hendrycks/imagenet-r Bibtex: ``` @article{hendrycks2021many, title={The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization}, author={Dan Hendrycks and Steven Basart and Norman Mu and Saurav Kadavath and Frank Wang and Evan Dorundo and Rahul Desai and Tyler Zhu and Samyak Parajuli and Mike Guo and Dawn Song and Jacob Steinhardt and Justin Gilmer}, journal={ICCV}, year={2021} } ```
djghosh/wds_imagenet-r_test
[ "arxiv:2006.16241", "region:us" ]
2022-11-30T23:30:37+00:00
{}
2022-12-12T21:09:12+00:00
b7565ccd1eabd787806222a9c75e8a9f56c6e835
# KITTI Closest Vehicle Distance (Test set only) Original paper: [Vision meets Robotics: The KITTI Dataset](https://www.cvlibs.net/publications/Geiger2013IJRR.pdf) Homepage: https://www.cvlibs.net/datasets/kitti/ Bibtex: ``` @ARTICLE{Geiger2013IJRR, author = {Andreas Geiger and Philip Lenz and Christoph Stiller and Raquel Urtasun}, title = {Vision meets Robotics: The KITTI Dataset}, journal = {International Journal of Robotics Research (IJRR)}, year = {2013} } ```
djghosh/wds_vtab-kitti_closest_vehicle_distance_test
[ "region:us" ]
2022-11-30T23:32:50+00:00
{}
2022-12-12T21:11:59+00:00
668da094a3e9a48be4bca62f14a569a8386ee6cb
# MNIST (Test set only) Original paper: [Gradient-based learning applied to document recognition](https://ieeexplore.ieee.org/document/726791) Homepage (broken link): http://yann.lecun.com/exdb/mnist Bibtex: ``` @ARTICLE{726791, author={Lecun, Y. and Bottou, L. and Bengio, Y. and Haffner, P.}, journal={Proceedings of the IEEE}, title={Gradient-based learning applied to document recognition}, year={1998}, volume={86}, number={11}, pages={2278-2324}, doi={10.1109/5.726791} } ```
djghosh/wds_mnist_test
[ "region:us" ]
2022-11-30T23:34:14+00:00
{}
2022-12-12T22:24:17+00:00
e433f2df2999e77401a627de654bddbeadf6f8e9
# Oxford Flowers-102 (Test set only) Original paper: [Automated flower classification over a large number of classes](https://www.robots.ox.ac.uk/~vgg/publications/2008/Nilsback08/nilsback08.pdf) Homepage: https://www.robots.ox.ac.uk/~vgg/data/flowers/102/ Bibtex: ``` @InProceedings{Nilsback08, author = "Maria-Elena Nilsback and Andrew Zisserman", title = "Automated Flower Classification over a Large Number of Classes", booktitle = "Indian Conference on Computer Vision, Graphics and Image Processing", month = "Dec", year = "2008", } ```
djghosh/wds_vtab-flowers_test
[ "region:us" ]
2022-11-30T23:34:43+00:00
{}
2022-12-12T21:20:43+00:00
8e094aae70d884c4ccbc9fffe51d1903d897e83d
# Oxford-IIIT Pets (Test set only) Original paper: [Cats and Dogs](https://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) Homepage: https://www.robots.ox.ac.uk/~vgg/data/pets/ Bibtex: ``` @InProceedings{parkhi12a, author = "Omkar M. Parkhi and Andrea Vedaldi and Andrew Zisserman and C. V. Jawahar", title = "Cats and Dogs", booktitle = "IEEE Conference on Computer Vision and Pattern Recognition", year = "2012", } ```
djghosh/wds_vtab-pets_test
[ "region:us" ]
2022-11-30T23:36:13+00:00
{}
2022-12-12T21:22:46+00:00
c19b3db699f7fab639d70250f74bb70b5002141e
# Pascal VOC2007 (Test set only) Original paper: [The PASCAL Visual Object Classes Challenge: A Retrospective](http://host.robots.ox.ac.uk/pascal/VOC/pubs/everingham15.pdf) Homepage: http://host.robots.ox.ac.uk/pascal/VOC/voc2007/ Bibtex: ``` @misc{pascal-voc-2007, author = "Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.", title = "The {PASCAL} {V}isual {O}bject {C}lasses {C}hallenge 2007 {(VOC2007)} {R}esults", howpublished = "http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html"} ```
djghosh/wds_voc2007_test
[ "region:us" ]
2022-11-30T23:38:38+00:00
{}
2022-12-12T21:25:32+00:00
b1bcefc6b04ded211ca4201cb849156d9a98ef1e
# PatchCamelyon (Test set only) Original paper: [Rotation Equivariant CNNs for Digital Pathology](http://arxiv.org/abs/1806.03962) Homepage: https://github.com/basveeling/pcam Bibtex: ``` @ARTICLE{Veeling2018-qh, title = "Rotation Equivariant {CNNs} for Digital Pathology", author = "Veeling, Bastiaan S and Linmans, Jasper and Winkens, Jim and Cohen, Taco and Welling, Max", month = jun, year = 2018, archivePrefix = "arXiv", primaryClass = "cs.CV", eprint = "1806.03962" } ```
djghosh/wds_vtab-pcam_test
[ "arxiv:1806.03962", "region:us" ]
2022-11-30T23:39:37+00:00
{}
2022-12-12T21:26:52+00:00
1c85f91db604f5f4919cd16603c5898b2e088bf8
![INESC-ID](https://www.inesc-id.pt/wp-content/uploads/2019/06/INESC-ID-logo_01.png) ![A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/_static/logo.png) Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/). Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/) # Portuguese Legal Sentences A collection of legal sentence pairs from the Portuguese Supreme Court of Justice. This dataset is intended to be used for Semantic Textual Similarity: - Values from 0-1: random sentences across documents - Values from 2-4: sentences from the same summary (implying some level of entailment) - Values from 4-5: sentence pairs generated through OpenAI's text-davinci-003 ("Escreve por outras palavras:\n\Entrada:\n"+originalQuery + "\Saída: \n") ### Contributions [@rufimelo99](https://github.com/rufimelo99) If you use this work, please cite: ```bibtex @inproceedings{MeloSemantic, author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~ a}o}, title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a}, } ```
stjiris/IRIS_sts
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "annotations_creators:automated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K>n", "source_datasets:original", "language:pt", "license:mit", "region:us" ]
2022-11-30T23:51:04+00:00
{"annotations_creators": ["automated"], "language_creators": ["found"], "language": ["pt"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K>n"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "semantic-similarity-scoring"], "pretty_name": "IRIS Legal Dataset"}
2023-01-08T02:54:33+00:00
cf370843eae0cac330f1ed47534c1f29a74da542
# laion-translated-to-en-korean-subset ## Dataset Description - **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/) - **Download Size** 1.40 GiB - **Generated Size** 3.49 GiB - **Total Size** 4.89 GiB ## About dataset A subset of [laion/laion2B-multi-joined-translated-to-en](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en) and [laion/laion1B-nolang-joined-translated-to-en](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en), including only Korean examples ### License CC-BY-4.0 ## Data Structure ### Data Instance ```py >>> from datasets import load_dataset >>> dataset = load_dataset("Bingsu/laion-translated-to-en-korean-subset") >>> dataset DatasetDict({ train: Dataset({ features: ['hash', 'URL', 'TEXT', 'ENG TEXT', 'WIDTH', 'HEIGHT', 'LANGUAGE', 'similarity', 'pwatermark', 'punsafe', 'AESTHETIC_SCORE'], num_rows: 12769693 }) }) ``` ```py >>> dataset["train"].features {'hash': Value(dtype='int64', id=None), 'URL': Value(dtype='large_string', id=None), 'TEXT': Value(dtype='large_string', id=None), 'ENG TEXT': Value(dtype='large_string', id=None), 'WIDTH': Value(dtype='int32', id=None), 'HEIGHT': Value(dtype='int32', id=None), 'LANGUAGE': Value(dtype='large_string', id=None), 'similarity': Value(dtype='float32', id=None), 'pwatermark': Value(dtype='float32', id=None), 'punsafe': Value(dtype='float32', id=None), 'AESTHETIC_SCORE': Value(dtype='float32', id=None)} ``` ### Data Size download: 1.40 GiB<br> generated: 3.49 GiB<br> total: 4.89 GiB ### Data Fields - 'hash': `int` - 'URL': `string` - 'TEXT': `string` - 'ENG TEXT': `string`, null data are dropped - 'WIDTH': `int`, null data are filled with 0 - 'HEIGHT': `int`, null data are filled with 0 - 'LICENSE': `string` - 'LANGUAGE': `string` - 'similarity': `float32`, CLIP similarity score, null data are filled with 0.0 - 'pwatermark': `float32`, Probability of containing a watermark, null data are filled with 0.0 - 'punsafe': `float32`, Probability of nsfw image, 
null data are filled with 0.0 - 'AESTHETIC_SCORE': `float32`, null data are filled with 0.0 ### Data Splits | | train | | --------- | -------- | | # of data | 12769693 | ### polars ```sh pip install polars[fsspec] ``` ```py import polars as pl from huggingface_hub import hf_hub_url url = hf_hub_url("Bingsu/laion-translated-to-en-korean-subset", filename="train.parquet", repo_type="dataset") # url = "https://huggingface.co/datasets/Bingsu/laion-translated-to-en-korean-subset/resolve/main/train.parquet" df = pl.read_parquet(url) ``` pandas broke my colab session.
Bingsu/laion-translated-to-en-korean-subset
[ "task_categories:feature-extraction", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10M<n<100M", "language:ko", "language:en", "license:cc-by-4.0", "region:us" ]
2022-12-01T01:58:31+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ko", "en"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "task_categories": ["feature-extraction"], "pretty_name": "laion-translated-to-en-korean-subset"}
2023-02-01T01:15:43+00:00
d201c488dc7024623d1ecbcc987b3f132c4c2e12
# The Hateful Memes Challenge README The Hateful Memes Challenge is a dataset and benchmark created by Facebook AI to drive and measure progress on multimodal reasoning and understanding. The task focuses on detecting hate speech in multimodal memes. Please see the paper for further details: [The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes D. Kiela, H. Firooz, A. Mohan, V. Goswami, A. Singh, P. Ringshia, D. Testuggine]( https://arxiv.org/abs/2005.04790) For more details, see also the website: https://hatefulmemeschallenge.com # Dataset details The files for this folder are arranged as follows: img/ - the PNG images train.jsonl - the training set dev_seen.jsonl - the "seen" development set test_seen.jsonl - the "seen" test set dev_unseen.jsonl - the "unseen" development set test_unseen.jsonl - the "unseen" test set The "seen" dataset was presented in the NeurIPS paper; the “unseen” dev and test set were released as a part of the NeurIPS 2020 competition. The .jsonl format contains one JSON-encoded example per line, each of which has the following fields: ‘text’ - the text occurring in the meme ‘img’ - the path to the image in the img/ directory ‘label’ - the label for the meme (0=not-hateful, 1=hateful), provided for train and dev The metric to use is AUROC. You may also report accuracy in addition, since this is arguably more interpretable. To compute these metrics, we recommend the roc_auc_score and accuracy_score methods in sklearn.metrics, with default settings. # Getting started To get started working on this dataset, there's an easy-to-use "starter kit" available in MMF: https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes. # Note on Annotator Accuracy As is to be expected with a dataset of this size and nature, some of the examples in the training set have been misclassified. We are not claiming that our dataset labels are completely accurate, or even that all annotators would agree on a particular label. 
Misclassifications, although possible, should be very rare in the dev and seen test set, however, and we will take extra care with the unseen test set. As a reminder, the annotations collected for this dataset were not collected using Facebook annotators and we did not employ Facebook’s hate speech policy. As such, the dataset labels do not in any way reflect Facebook’s official stance on this matter. # License The dataset is licensed under the terms in the `LICENSE.txt` file. # Image Attribution If you wish to display example memes in your paper, please provide the following attribution: *Image is a compilation of assets, including ©Getty Image.* # Citations If you wish to cite this work, please use the following BiBTeX: ``` @inproceedings{Kiela:2020hatefulmemes, author = {Kiela, Douwe and Firooz, Hamed and Mohan, Aravind and Goswami, Vedanuj and Singh, Amanpreet and Ringshia, Pratik and Testuggine, Davide}, booktitle = {Advances in Neural Information Processing Systems}, editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin}, pages = {2611--2624}, publisher = {Curran Associates, Inc.}, title = {The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes}, url = {https://proceedings.neurips.cc/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Paper.pdf}, volume = {33}, year = {2020} } ``` # Contact If you have any questions or comments on the dataset, please contact [email protected] or one of the authors.
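The evaluation recommended in the card above — AUROC (with accuracy as an optional, more interpretable addition) computed via `sklearn.metrics` with default settings — looks roughly like this. The labels and scores below are toy values, not real model output:

```python
from sklearn.metrics import roc_auc_score, accuracy_score

# Toy predictions for illustration: 0 = not-hateful, 1 = hateful.
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]            # model probabilities for the hateful class
y_pred = [int(s >= 0.5) for s in y_score]  # threshold at 0.5 for accuracy

print(roc_auc_score(y_true, y_score))   # AUROC: 0.75
print(accuracy_score(y_true, y_pred))   # accuracy: 0.75
```

AUROC is computed from the raw scores, while accuracy requires thresholded predictions; the 0.5 threshold here is a common default, not part of the challenge specification.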
neuralcatcher/hateful_memes
[ "arxiv:2005.04790", "region:us" ]
2022-12-01T03:49:06+00:00
{}
2022-12-01T07:08:59+00:00
2c82a8a0d42f58c4e146198f172126ab093a4c7e
# Dataset Card for "whisper-transcripts-mlst" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Whispering-GPT/whisper-transcripts-ml-street-talk
[ "task_categories:automatic-speech-recognition", "whisper", "whispering", "medium", "region:us" ]
2022-12-01T05:24:10+00:00
{"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21764632, "num_examples": 83}], "download_size": 10320580, "dataset_size": 21764632}, "tags": ["whisper", "whispering", "medium"]}
2022-12-02T07:19:28+00:00
41d483350941802434028184a9c762cafe79f3e6
iosilvar/ivanord
[ "license:artistic-2.0", "region:us" ]
2022-12-01T05:28:12+00:00
{"license": "artistic-2.0"}
2022-12-01T06:02:17+00:00
f0d6b8b883146d87eae9a8b20e668eb2e734c096
Lazylaziness/Diona
[ "license:other", "region:us" ]
2022-12-01T06:18:18+00:00
{"license": "other"}
2022-12-01T06:20:00+00:00
d21e7540cb904cf9143b5433c064cefe63e223a0
# spanish-tweets-small ## A smaller version of spanish-tweets ## A corpus of tweets for pretraining embeddings and language models ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage**: https://github.com/pysentimiento/robertuito - **Paper**: [RoBERTuito: a pre-trained language model for social media text in Spanish](https://aclanthology.org/2022.lrec-1.785/) - **Point of Contact:** jmperez (at) dc.uba.ar ### Dataset Summary A big dataset of (mostly) Spanish tweets for pre-training language models (or other representations). ### Supported Tasks and Leaderboards Language Modeling ### Languages Mostly Spanish, but some Portuguese, English, and other languages. ## Dataset Structure ### Data Fields - *tweet_id*: id of the tweet - *user_id*: id of the user - *text*: text from the tweet ## Dataset Creation The full process of data collection is described in the paper. Here we roughly outline the main points: - A Spritzer collection uploaded to Archive.org dating from May 2019 was downloaded - From this, we only kept tweets with language metadata equal to Spanish, and marked the users who posted these messages. - Then, the timeline of each of these marked users was downloaded. This corpus consists of ~30M tweets. 
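The language-filtering and user-marking steps outlined above can be sketched in plain Python. The field names loosely mirror Twitter's JSON metadata, and the sample tweet objects are invented for illustration:

```python
# Minimal sketch of the collection steps described above: keep tweets whose
# language metadata is Spanish, then mark their authors for timeline download.
# Sample tweets are invented; field names are illustrative.
tweets = [
    {"id": 1, "lang": "es", "user": {"id": 10}, "text": "hola"},
    {"id": 2, "lang": "en", "user": {"id": 11}, "text": "hello"},
    {"id": 3, "lang": "es", "user": {"id": 12}, "text": "buen día"},
]

spanish = [t for t in tweets if t.get("lang") == "es"]
marked_users = {t["user"]["id"] for t in spanish}  # users whose timelines get downloaded

print(len(spanish), sorted(marked_users))  # 2 [10, 12]
```

Because the final corpus contains the full timelines of the marked users, tweets in other languages re-enter at the timeline stage, which is why the dataset is only mostly Spanish.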
Please note that we did not filter tweets from other languages, so you might find English, Portuguese, Catalan and other languages in the dataset (around 7/8% of the tweets are not in Spanish) ### Citation Information ``` @inproceedings{perez-etal-2022-robertuito, title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish", author = "P{\'e}rez, Juan Manuel and Furman, Dami{\'a}n Ariel and Alonso Alemany, Laura and Luque, Franco M.", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.785", pages = "7235--7243", abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.", } ```
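The language-filtering step outlined above can be sketched in plain Python. The rows below are made-up stand-ins, not real tweets, and the real pipeline of course ran over the full Spritzer collection:

```python
# Sketch of the collection step described above: keep tweets whose
# language metadata is Spanish, and mark the users who posted them.
tweets = [
    {"tweet_id": "1", "user_id": "u1", "lang": "es", "text": "hola mundo"},
    {"tweet_id": "2", "user_id": "u2", "lang": "en", "text": "hello world"},
    {"tweet_id": "3", "user_id": "u1", "lang": "es", "text": "buenas"},
]

spanish = [t for t in tweets if t["lang"] == "es"]
marked_users = sorted({t["user_id"] for t in spanish})
print(marked_users)  # ['u1']
```

The marked users are the ones whose timelines were then downloaded to build the corpus.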
pysentimiento/spanish-tweets-small
[ "region:us" ]
2022-12-01T11:52:09+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "tweet_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82649695458, "num_examples": 597433111}, {"name": "test", "num_bytes": 892219251, "num_examples": 6224733}], "download_size": 51737237106, "dataset_size": 83541914709}}
2022-12-01T13:50:26+00:00
eb1cd7531f3ac9b64607af664d4bd7febd14f0e5
truezhichu/52425
[ "license:bigscience-openrail-m", "region:us" ]
2022-12-01T12:20:27+00:00
{"license": "bigscience-openrail-m"}
2022-12-01T12:20:27+00:00
41d9f3ab512e89b28ecdd91751a292a47ec006e6
FranciscoAlderan/AlderanArchitecture
[ "region:us" ]
2022-12-01T12:37:38+00:00
{}
2022-12-01T12:43:01+00:00
3356674bed861df59691a0ce30da5dd6918ec9ca
lyzylyzy/PN
[ "license:mit", "region:us" ]
2022-12-01T13:35:01+00:00
{"license": "mit"}
2022-12-01T14:27:26+00:00
4323d2431f4471758222493595e0de72041e114b
# Dataset Card for "turkishKuran" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
erkanxyzalaca/turkishKuran
[ "region:us" ]
2022-12-01T13:45:28+00:00
{"dataset_info": {"features": [{"name": "Ayet", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 255726.9, "num_examples": 738}, {"name": "validation", "num_bytes": 28414.1, "num_examples": 82}], "download_size": 0, "dataset_size": 284141.0}}
2022-12-02T14:01:58+00:00
f3b5886387630db60cc82dcb0b16adaed26f72a1
ristow/test1
[ "license:afl-3.0", "region:us" ]
2022-12-01T14:12:53+00:00
{"license": "afl-3.0"}
2022-12-03T15:27:34+00:00
d7ca8ddb4327dfa6994e2b44ae6c89b4ea82053e
# Dataset Card for "my_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Mizurodp/my_dataset
[ "region:us" ]
2022-12-01T16:09:51+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11800901344.896, "num_examples": 219008}, {"name": "test", "num_bytes": 473579114.0, "num_examples": 9230}], "download_size": 10947568917, "dataset_size": 12274480458.896}}
2022-12-02T08:30:40+00:00
192c83c7f843e7eba0c522a11cad137e6da5ab26
# Dataset Card for "gov_report" ## GOV_REPORT A dataset "consisting of about 19.5k U.S. government reports with expert-written abstractive summaries. GOVREPORT has two important features: (1) It contains significantly longer documents (9.4k words) and summaries (553 words) than existing datasets, such as PubMed and arXiv (Cohan et al., 2018); (2) Salient content is spread throughout the documents, as opposed to cases where summary-worthy words are more heavily concentrated in specific parts of the document." These properties make GOVREPORT an important benchmark for producing long document summaries with multiple paragraphs. ## Links - [Paper](https://aclanthology.org/2021.naacl-main.112.pdf) - [GitHub repo](https://github.com/luyang-huang96/LongDocSum) - [GDrive Folder](https://drive.google.com/drive/folders/128KyqPTwZ0Si9RV_IX-md2dcHeRTUHkr) ## Citation ``` @article{kryscinski2021booksum, title={BookSum: A Collection of Datasets for Long-form Narrative Summarization}, author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev}, year={2021}, eprint={2105.08209}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License Licensing info TBD. [Issue raised](https://github.com/luyang-huang96/LongDocSum/issues/7) in main repo to get info on the license from the original authors.
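The length statistics quoted above (average document and summary word counts) can be reproduced over the `source`/`target` fields with a simple word-count pass. The field names follow this dataset's schema, but the rows here are tiny made-up stand-ins:

```python
# Toy computation of the average document/summary word counts
# mentioned above, over made-up `source`/`target` pairs.
rows = [
    {"source": "the quick brown fox", "target": "fox"},
    {"source": "jumps over the lazy dog today", "target": "dog jumps"},
]

avg_doc = sum(len(r["source"].split()) for r in rows) / len(rows)
avg_sum = sum(len(r["target"].split()) for r in rows) / len(rows)
print(avg_doc, avg_sum)  # 5.0 1.5
```

On the real splits the same loop should land near the 9.4k-word documents and 553-word summaries reported in the paper.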
kmfoda/gov_report
[ "arxiv:2105.08209", "region:us" ]
2022-12-01T16:18:16+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 55710125, "num_examples": 972}, {"name": "train", "num_bytes": 976584268, "num_examples": 17519}, {"name": "validation", "num_bytes": 57315603, "num_examples": 973}], "download_size": 528419980, "dataset_size": 1089609996}}
2022-12-01T16:52:30+00:00
fea8970db0ee25164db34bfce0ff661e5c5ae2d1
# Indonesian Dataset Apache Access
EgilKarlsen/ApacheAccessLabeled
[ "region:us" ]
2022-12-01T16:21:21+00:00
{}
2022-12-01T16:22:08+00:00
d42ae7545e6c548a566abed310cf61f85fe895ad
# NLU Evaluation Data - English and German A labeled English **and German** language multi-domain dataset (21 domains) with 25K user utterances for human-robot interaction. This dataset is collected and annotated for evaluating NLU services and platforms. The detailed paper on this dataset can be found at arXiv.org: [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/abs/1903.05566) The dataset builds on the annotated data of the [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data) repository. We have added an additional column (`answer_de`) by translating the texts in column `answer` into German. The translation was made with [DeepL](https://www.deepl.com/translator). ## Creator This data set was compiled and open sourced by [Philip May](https://may.la/) of [Deutsche Telekom](https://www.telekom.de/). ## Labels The columns `scenario` and `intent` can be used for classification tasks. However, we recommend to use even more fine-grained labels. For this purpose, a new label can be derived by concatenating `scenario` and `intent`. For example, this would turn "alarm" and "set" into "alarm_set". ## Dataset Quirks The original dataset contains some `NaN` values in the `answer` column. This means that there are also `NaN` values in the translations (`answer_de` column). These rows should be filtered. The dataset also contains duplicate values. ## Copyright Copyright (c) the authors of [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)\ Copyright (c) 2022 [Philip May](https://may.la/) All data is released under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
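The label derivation and NaN filtering recommended above can be sketched as follows. The rows are made up for illustration; real rows carry additional columns:

```python
# Derive fine-grained labels by concatenating `scenario` and `intent`,
# after dropping rows whose `answer` is NaN (see "Dataset Quirks").
rows = [
    {"scenario": "alarm", "intent": "set", "answer": "wake me up at 7", "answer_de": "weck mich um 7"},
    {"scenario": "alarm", "intent": "remove", "answer": float("nan"), "answer_de": float("nan")},
]

clean = [r for r in rows if isinstance(r["answer"], str)]
labels = [f"{r['scenario']}_{r['intent']}" for r in clean]
print(labels)  # ['alarm_set']
```

Deduplication can then be applied on the `answer` column before training a classifier on the derived labels.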
deutsche-telekom/NLU-Evaluation-Data-en-de
[ "task_categories:text-classification", "task_ids:intent-classification", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:extended|nlu_evaluation_data", "language:en", "language:de", "license:cc-by-4.0", "arxiv:1903.05566", "region:us" ]
2022-12-01T16:54:19+00:00
{"language": ["en", "de"], "license": "cc-by-4.0", "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|nlu_evaluation_data"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"]}
2023-12-17T17:42:48+00:00
15b0ca33d20ba15984ee7617eda35019051f8949
## Arabic news data with 9 categories in CSV format Original data link: https://www.kaggle.com/datasets/muhammedfathi/arabic-news-texts-corpus Data preparation and summary link: https://www.kaggle.com/code/abdalrahmanshahrour/arabic-text-summarization
abdalrahmanshahrour/ArabicTextSummarization
[ "region:us" ]
2022-12-01T17:14:40+00:00
{}
2022-12-01T17:16:50+00:00
bafa0c15e8d5780859c0e0e56eee1e919ced0115
# AutoTrain Dataset for project: shahroursummarizer ## Dataset Description This dataset has been automatically processed by AutoTrain for project shahroursummarizer. ### Languages The BCP-47 code for the dataset's language is ar. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "\n\u064a\u0648\u0627\u062c\u0647 \u0627\u0644\u0627\u0633\u0628\u0627\u0646\u064a \u0631\u0641\u0627\u0626\u064a\u0644 \u0646\u0627\u062f\u0627\u0644 \u064a\u0648\u0645 \u063a\u062f \u0627\u0644\u0627\u062d\u062f \u0627\u0646\u0637\u0644\u0627\u0642\u0627 \u0645\u0646 \u0627\u0644\u0633\u0627\u0639\u0629 21:00 \u0645\u0646\u0627\u0641\u0633\u0647 \u0627\u0644\u0633\u0648\u064a\u0633\u0631\u064a \u0631\u0648\u062c\u0631 \u0641\u064a\u062f\u0631\u0631 \u0641\u064a \u0646\u0647\u0627\u0626\u064a \u0628\u0637\u0648\u0644\u0629 \u0645\u064a\u0627\u0645\u064a \u0627\u0644\u0645\u0641\u062a\u0648\u062d\u0629 \u0644\u0644\u062a\u0646\u0633 .\n\u0648 \u064a\u0633\u0639\u0649 \u0641\u064a\u062f\u0631\u0631 \u0644\u062a\u062d\u0642\u064a\u0642 \u062b\u0627\u0644\u062b \u0627\u0644\u0642\u0627\u0628\u0647 \u0647\u0630\u0627 \u0627\u0644\u0645\u0648\u0633\u0645 \u0628\u0639\u062f \u062a\u062a\u0648\u064a\u062c\u0647 \u0628\u0644\u0642\u0628 \u0628\u0637\u0648\u0644\u0629 \u0627\u0633\u062a\u0631\u0627\u0644\u064a\u0627 \u0641\u064a \u062c\u0627\u0646\u0641\u064a \u0627\u0644\u0645\u0627\u0636\u064a \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0646\u0627\u062f\u0627\u0644 \u0648 \u062a\u062a\u0648\u064a\u062c\u0647\u00a0\u0628\u0628\u0637\u0648\u0644\u0629 \u0625\u0646\u062f\u064a\u0627\u0646 \u0648\u064a\u0644\u0632 \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0645\u0648\u0627\u0637\u0646\u0647 \u0641\u0627\u0641\u0631\u064a\u0646\u0643\u0627\u00a0\u00a0.\n", "target": "\u0648 \u064a\u0633\u0639\u0649 \u0641\u064a\u062f\u0631\u0631 \u0644\u062a\u062d\u0642\u064a\u0642 \u062b\u0627\u0644\u062b \u0627\u0644\u0642\u0627\u0628\u0647 
\u0647\u0630\u0627 \u0627\u0644\u0645\u0648\u0633\u0645 \u0628\u0639\u062f \u062a\u062a\u0648\u064a\u062c\u0647 \u0628\u0644\u0642\u0628 \u0628\u0637\u0648\u0644\u0629 \u0627\u0633\u062a\u0631\u0627\u0644\u064a\u0627 \u0641\u064a \u062c\u0627\u0646\u0641\u064a \u0627\u0644\u0645\u0627\u0636\u064a \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0646\u0627\u062f\u0627\u0644 \u0648 \u062a\u062a\u0648\u064a\u062c\u0647\u00a0\u0628\u0628\u0637\u0648\u0644\u0629 \u0625\u0646\u062f\u064a\u0627\u0646 \u0648\u064a\u0644\u0632 \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0645\u0648\u0627\u0637\u0646\u0647 \u0641\u0627\u0641\u0631\u064a\u0646\u0643\u0627\u00a0\u00a0. \n\u064a\u0648\u0627\u062c\u0647 \u0627\u0644\u0627\u0633\u0628\u0627\u0646\u064a \u0631\u0641\u0627\u0626\u064a\u0644 \u0646\u0627\u062f\u0627\u0644 \u064a\u0648\u0645 \u063a\u062f \u0627\u0644\u0627\u062d\u062f \u0627\u0646\u0637\u0644\u0627\u0642\u0627 \u0645\u0646 \u0627\u0644\u0633\u0627\u0639\u0629 21:00 \u0645\u0646\u0627\u0641\u0633\u0647 \u0627\u0644\u0633\u0648\u064a\u0633\u0631\u064a \u0631\u0648\u062c\u0631 \u0641\u064a\u062f\u0631\u0631 \u0641\u064a \u0646\u0647\u0627\u0626\u064a \u0628\u0637\u0648\u0644\u0629 \u0645\u064a\u0627\u0645\u064a \u0627\u0644\u0645\u0641\u062a\u0648\u062d\u0629 \u0644\u0644\u062a\u0646\u0633 ." 
}, { "text": "\n\u0642\u0627\u0644\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0627\u0644\u0623\u0645\u064a\u0631\u0643\u064a\u0629 \u0625\u0646 \u0631\u062c\u0644\u0627 \u0645\u0646 \u0648\u0644\u0627\u064a\u0629 \u0628\u0646\u0633\u0644\u0641\u0627\u0646\u064a\u0627 \u0627\u062a\u0635\u0644 \u0645\u0631\u0627\u0631\u0627 \u0628\u062e\u062f\u0645\u0629 \u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0628\u064a\u0646\u0645\u0627 \u0643\u0627\u0646 \u062a\u062d\u062a \u062a\u0623\u062b\u064a\u0631 \u0627\u0644\u0643\u062d\u0648\u0644 \u0648\u0628\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0634\u062e\u0635 \u064a\u062a\u062d\u062f\u062b \u0645\u0639\u0647.\n\u0648\u0630\u0643\u0631\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0646 \u0644\u0627\u0631\u0649 \u0643\u064a\u0632\u0631 \u0627\u0644\u0628\u0627\u0644\u063a 51 \u0639\u0627\u0645\u0627 \u0627\u062a\u0635\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0644\u0623\u0648\u0644 \u0645\u0631\u0629 \u062d\u0648\u0627\u0644\u064a \u0627\u0644\u0639\u0627\u0634\u0631\u0629 \u0648\u0627\u0644\u0646\u0635\u0641 \u0645\u0646 \u0645\u0633\u0627\u0621 \u0627\u0644\u062c\u0645\u0639\u0629\u060c \u0637\u0627\u0644\u0628\u0627 \u0627\u0644\u062a\u062d\u062f\u062b \u0645\u0639 \u0636\u0627\u0628\u0637 \u0634\u0631\u0637\u0629\u060c \u0645\u0646 \u062f\u0648\u0646 \u0627\u0644\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626. 
\u0648\u0648\u0641\u0642\u0627 \u0644\u0640\"\u0623\u0633\u0648\u0634\u064a\u062a\u062f\u0628\u0631\u0633\" \u0641\u0642\u062f \u0639\u0627\u0648\u062f \u0643\u064a\u0632\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 5 \u0645\u0631\u0627\u062a \u0623\u062e\u0631\u0649\u060c \u0627\u0644\u0623\u0645\u0631 \u0627\u0644\u0630\u064a \u0627\u0633\u062a\u062f\u0639\u0649 \u0642\u0648\u0629 \u0645\u0646 \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0644\u0649 \u0645\u0646\u0632\u0644\u0647 \u0641\u064a \u0628\u0644\u062f\u0629 \u0648\u0627\u064a\u062a\u0647\u0648\u0644 \u0627\u0644\u0634\u0645\u0627\u0644\u064a\u0629 \u0628\u0639\u062f \u0645\u0646\u062a\u0635\u0641 \u0627\u0644\u0644\u064a\u0644. \u0648\u0642\u0627\u0644 \u0627\u0644\u0631\u062c\u0644 \u0644\u0644\u0636\u0628\u0627\u0637 \u0625\u0646\u0647 \u0634\u0631\u0628 \u0627\u0644\u0639\u062f\u064a\u062f \u0645\u0646 \u0627\u0644\u062c\u0639\u0629 \u0644\u0623\u0646\u0647 \u0643\u0627\u0646 \u0645\u0646\u0632\u0639\u062c\u0627 \u0645\u0646 \u0645\u0634\u0627\u0643\u0644\u0647 \u0627\u0644\u0639\u0627\u0626\u0644\u064a\u0629\u060c \u0648\u062a\u0639\u0647\u062f \u0643\u064a\u0632\u0631 \u0628\u0639\u062f\u0645 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0645\u062c\u062f\u062f\u0627 \u0628\u0627\u0644\u0634\u0631\u0637\u0629\u060c \u0625\u0644\u0627 \u0625\u0630\u0627 \u0643\u0627\u0646 \u0647\u0646\u0627\u0643 \u062d\u0627\u0644\u0629 \u0637\u0648\u0627\u0631\u0626 \u062d\u0642\u064a\u0642\u0629 \u062a\u0633\u062a\u062f\u0639\u064a\u0647\u0645. \u0644\u0643\u0646 \u0627\u0644\u0631\u062c\u0644 \u0643\u0631\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0628\u0639\u062f \u062f\u0642\u064a\u0642\u0629 \u0648\u0627\u062d\u062f\u0629 \u0641\u0642\u0637 \u0645\u0646 \u0645\u063a\u0627\u062f\u0631\u0629 \u0627\u0644\u0636\u0628\u0627\u0637. 
\u0648\u0642\u062f \u0623\u0644\u0642\u064a \u0627\u0644\u0642\u0628\u0636 \u0639\u0644\u0649 \u0643\u064a\u0632\u0631\u060c \u0644\u064a\u0648\u0627\u062c\u0647 \u0627\u062a\u0647\u0627\u0645\u0627\u062a \u0628\u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0639\u0645\u062f\u0627 \u0644\u0623\u0633\u0628\u0627\u0628 \u063a\u064a\u0631 \u0637\u0627\u0631\u0626\u0629.\n", "target": "\u0648\u0648\u0641\u0642\u0627 \u0644\u0640\"\u0623\u0633\u0648\u0634\u064a\u062a\u062f\u0628\u0631\u0633\" \u0641\u0642\u062f \u0639\u0627\u0648\u062f \u0643\u064a\u0632\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 5 \u0645\u0631\u0627\u062a \u0623\u062e\u0631\u0649\u060c \u0627\u0644\u0623\u0645\u0631 \u0627\u0644\u0630\u064a \u0627\u0633\u062a\u062f\u0639\u0649 \u0642\u0648\u0629 \u0645\u0646 \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0644\u0649 \u0645\u0646\u0632\u0644\u0647 \u0641\u064a \u0628\u0644\u062f\u0629 \u0648\u0627\u064a\u062a\u0647\u0648\u0644 \u0627\u0644\u0634\u0645\u0627\u0644\u064a\u0629 \u0628\u0639\u062f \u0645\u0646\u062a\u0635\u0641 \u0627\u0644\u0644\u064a\u0644. \u0648\u0630\u0643\u0631\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0646 \u0644\u0627\u0631\u0649 \u0643\u064a\u0632\u0631 \u0627\u0644\u0628\u0627\u0644\u063a 51 \u0639\u0627\u0645\u0627 \u0627\u062a\u0635\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0644\u0623\u0648\u0644 \u0645\u0631\u0629 \u062d\u0648\u0627\u0644\u064a \u0627\u0644\u0639\u0627\u0634\u0631\u0629 \u0648\u0627\u0644\u0646\u0635\u0641 \u0645\u0646 \u0645\u0633\u0627\u0621 \u0627\u0644\u062c\u0645\u0639\u0629\u060c \u0637\u0627\u0644\u0628\u0627 \u0627\u0644\u062a\u062d\u062f\u062b \u0645\u0639 \u0636\u0627\u0628\u0637 \u0634\u0631\u0637\u0629\u060c \u0645\u0646 \u062f\u0648\u0646 \u0627\u0644\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626. 
\u0648\u0642\u0627\u0644 \u0627\u0644\u0631\u062c\u0644 \u0644\u0644\u0636\u0628\u0627\u0637 \u0625\u0646\u0647 \u0634\u0631\u0628 \u0627\u0644\u0639\u062f\u064a\u062f \u0645\u0646 \u0627\u0644\u062c\u0639\u0629 \u0644\u0623\u0646\u0647 \u0643\u0627\u0646 \u0645\u0646\u0632\u0639\u062c\u0627 \u0645\u0646 \u0645\u0634\u0627\u0643\u0644\u0647 \u0627\u0644\u0639\u0627\u0626\u0644\u064a\u0629\u060c \u0648\u062a\u0639\u0647\u062f \u0643\u064a\u0632\u0631 \u0628\u0639\u062f\u0645 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0645\u062c\u062f\u062f\u0627 \u0628\u0627\u0644\u0634\u0631\u0637\u0629\u060c \u0625\u0644\u0627 \u0625\u0630\u0627 \u0643\u0627\u0646 \u0647\u0646\u0627\u0643 \u062d\u0627\u0644\u0629 \u0637\u0648\u0627\u0631\u0626 \u062d\u0642\u064a\u0642\u0629 \u062a\u0633\u062a\u062f\u0639\u064a\u0647\u0645." } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 2400 | | valid | 600 |
abdalrahmanshahrour/shahroursummarizerdataset
[ "language:ar", "region:us" ]
2022-12-01T17:19:19+00:00
{"language": ["ar"], "task_categories": ["conditional-text-generation"]}
2022-12-01T18:22:47+00:00
b25b448d147a15ad828879546650fd569e70106b
dattatreya303/covid-qa-synthetic
[ "license:mit", "region:us" ]
2022-12-01T19:16:01+00:00
{"license": "mit"}
2022-12-02T20:48:11+00:00
e39ea9713ca47689c8bdd635b94c2d26ea8390f7
The xfund dataset with annotations at the word level. The original XFUND dataset see more detail at [this](https://github.com/doc-analysis/XFUND) #### Citation Information ``` latex @inproceedings{xu-etal-2022-xfund, title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding", author = "Xu, Yiheng and Lv, Tengchao and Cui, Lei and Wang, Guoxin and Lu, Yijuan and Florencio, Dinei and Zhang, Cha and Wei, Furu", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.253", doi = "10.18653/v1/2022.findings-acl.253", pages = "3214--3224", abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.", } ```
cooleel/xfund_de
[ "license:mit", "region:us" ]
2022-12-01T19:42:05+00:00
{"license": "mit"}
2022-12-02T03:12:40+00:00
7509909b272be4338642a3162ae09f4c4d259262
# CIFAR-10 Webdataset (Test set only) Original paper: [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) Homepage: https://www.cs.toronto.edu/~kriz/cifar.html Bibtex: ``` @TECHREPORT{Krizhevsky09learningmultiple, author = {Alex Krizhevsky}, title = {Learning multiple layers of features from tiny images}, institution = {}, year = {2009} } ```
djghosh/wds_cifar10_test
[ "region:us" ]
2022-12-01T19:47:51+00:00
{}
2023-01-24T00:39:41+00:00
180eb530bf9bc2808c605557971f62db03715b3e
--- license: MIT --- Updated embedding for SD v2.1 - ddedv2.pt This is a Day of the Dead embedding for SD 2. Use dded to initiate the embedding built on 2.0. <br/> Use ddedv2.pt to initiate the embedding built on 2.1. (High quality Professional Photo:1.3) of (Realistic:1) assassin creed character, style dded,HD,4K,8K,highly detailed,Sharp,Photo-realism,Professional photograph,Masterpiece,((Agfacolor)),(close portrait:1.3),(Feminine:1.4),(beautiful:1.4),(attractive:1.3),handsome,calendar pose,perfectly detailed eyes,studio lighting,thematic background ![assassin creed dod](https://huggingface.co/datasets/Rocinante2000/Day-of-the-Dead/resolve/main/sample1.png) (High quality Professional Photo:1.3) of (Realistic:1) a photo of a beautiful woman with a skull face paint and flowers in her hair and makeup make up for a day of the dead costume, dded style,HD,4K,8K,highly detailed,Sharp,Photo-realism,Professional photograph,Masterpiece,(close portrait:1.3),(Feminine:1.4),(beautiful:1.4),(attractive:1.3),handsome,calendar pose,perfectly detailed eyes,studio lighting,thematic background ![DOD beautiful face](https://huggingface.co/datasets/Rocinante2000/Day-of-the-Dead/resolve/main/sample2.png) Enjoy everyone.
Rocinante2000/Day-of-the-Dead
[ "region:us" ]
2022-12-01T19:53:02+00:00
{}
2022-12-09T11:30:56+00:00
dc347c6e19e40550545475507ca8638d584f4c9c
# lex_fridman_podcast ## Data This dataset provides transcribed Lex Fridman Podcast episodes that were shared on [Andrej Karpathy's site](https://karpathy.ai/lexicap). Last update was on 01.12.2022. ## Goal The goal of this dataset is to use it in [niph - needle in podcast haystack](https://github.com/lambdaofgod/niph) library to make searching it easier. There exist other similar datasets, but they lack timestamp information that makes finding stuff on Karpathy's page easier. ### Schema ``` # Column Non-Null Count Dtype --- ------ -------------- ----- 0 episode 802299 non-null object 1 text 802299 non-null object 2 timestamp_link 802299 non-null object ``` ## Acknowledgements LONG LIVE ANDREJ KARPATHY! GLORY TO OpenAI WHISPER! Thank you Lex Fridman for the podcast!
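The search use case described above can be sketched as a scan over transcript rows that returns the matching timestamp links. The column names follow the schema above; the rows themselves are made up:

```python
# Toy sketch of searching transcript rows and returning timestamp links.
rows = [
    {"episode": "ep1", "text": "we talked about neural networks", "timestamp_link": "https://karpathy.ai/lexicap#t1"},
    {"episode": "ep2", "text": "the history of rockets", "timestamp_link": "https://karpathy.ai/lexicap#t2"},
]

def search(rows, query):
    """Return timestamp links of rows whose text contains the query."""
    return [r["timestamp_link"] for r in rows if query.lower() in r["text"].lower()]

print(search(rows, "neural"))  # ['https://karpathy.ai/lexicap#t1']
```

The niph library replaces this naive substring scan with semantic search, but the input/output shape is the same.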
lambdaofgod/lex_fridman_podcast
[ "region:us" ]
2022-12-01T20:11:41+00:00
{}
2022-12-01T20:22:12+00:00
ed47abbab3085f0e4e90b799326fd2932b96f0bd
# Individuality Of Handwriting (CEDAR) https://pubmed.ncbi.nlm.nih.gov/12136998/ \ https://cedar.buffalo.edu/NIJ/projectinfo.html ## Abstract Motivated by several rulings in United States courts concerning expert testimony in general, and handwriting testimony in particular, we undertook a study to objectively validate the hypothesis that handwriting is individual. Handwriting samples of 1,500 individuals, representative of the U.S. population with respect to gender, age, ethnic groups, etc., were obtained. Analyzing differences in handwriting was done by using computer algorithms for extracting features from scanned images of handwriting. Attributes characteristic of the handwriting were obtained, e.g., line separation, slant, character shapes, etc. These attributes, which are a subset of attributes used by forensic document examiners (FDEs), were used to quantitatively establish individuality by using machine learning approaches. Using global attributes of handwriting and very few characters in the writing, the ability to determine the writer with a high degree of confidence was established. The work is a step towards providing scientific support for admitting handwriting evidence in court. The mathematical approach and the resulting software also have the promise of aiding the FDE. Srihari SN, Cha SH, Arora H, Lee S. Individuality of handwriting. J Forensic Sci. 2002 Jul;47(4):856-72. PMID: 12136998.
1aurent/individuality-of-handwriting
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:unknown", "legal", "signatures", "CEDAR", "region:us" ]
2022-12-01T20:42:04+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "Individuality Of Handwriting (CEDAR)", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "original", "1": "forgeries"}}}}, {"name": "individual", "dtype": "uint8"}, {"name": "figure", "dtype": "uint8"}], "splits": [{"name": "train", "num_bytes": 195780898.8, "num_examples": 2640}], "download_size": 252337526, "dataset_size": 195780898.8}, "tags": ["legal", "signatures", "CEDAR"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-01T14:15:30+00:00
abc33d83fb6c9a78639723ad6fc4fc014036e078
airaspberry/sweater-cads
[ "license:openrail", "region:us" ]
2022-12-01T20:43:05+00:00
{"license": "openrail"}
2022-12-05T19:27:24+00:00
0ab77186d4608c21d9b0f8a6b9df199cb93231fa
# ICDAR 2011 Signature Verification Competition (SigComp2011) http://iapr-tc11.org/mediawiki/index.php/ICDAR_2011_Signature_Verification_Competition_(SigComp2011) The collection contains simultaneously acquired online and offline signature samples. The offline dataset comprises PNG images, scanned at 400 dpi, RGB color. The online dataset comprises ASCII files with the format: X, Y, Z (per line). Marcus Liwicki, Michael Blumenstein, Elisa van den Heuvel, Charles E.H. Berger, Reinoud D. Stoel, Bryan Found, Xiaohong Chen, Muhammad Imran Malik. "SigComp11: Signature Verification Competition for On- and Offline Skilled Forgeries", Proc. 11th Int. Conference on Document Analysis and Recognition, 2011
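A minimal parser for the online-signature ASCII format described above, assuming one comma-separated "X, Y, Z" triple per line (the sample content is made up):

```python
# Parse online-signature ASCII data: one "X, Y, Z" triple per line.
sample = "10, 20, 1\n11, 21, 1\n12, 22, 0\n"

points = [
    tuple(int(v) for v in line.split(","))  # int() tolerates the spaces
    for line in sample.strip().splitlines()
]
print(points[0])  # (10, 20, 1)
```

Each parsed triple gives the pen position (X, Y) plus the Z channel recorded by the acquisition device.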
1aurent/ICDAR-2011
[ "size_categories:1K<n<10K", "license:unknown", "online handwriting", "offline handwriting", "signature", "verification", "region:us" ]
2022-12-01T21:08:23+00:00
{"license": "unknown", "size_categories": ["1K<n<10K"], "tags": ["online handwriting", "offline handwriting", "signature", "verification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "genuine", "1": "forgeries"}}}}, {"name": "forger", "dtype": "int32"}, {"name": "writer", "dtype": "uint32"}, {"name": "attempt", "dtype": "uint32"}], "splits": [{"name": "train", "num_bytes": 240159596.0, "num_examples": 937}, {"name": "test", "num_bytes": 466376280.094, "num_examples": 2534}], "download_size": 793149429, "dataset_size": 706535876.094}}
2023-09-23T17:58:09+00:00
4c1878fda32583a1f585c32d06f18c4157b1c157
Dataset of 10000 [Bored Ape](https://opensea.io/collection/boredapeyachtclub) images.
daspartho/bored-ape
[ "region:us" ]
2022-12-01T22:04:13+00:00
{}
2022-12-03T17:47:13+00:00
5f571837da18cb1fc390072ad57d6fb89e7b1f71
# Stanford Cars (Test set only) Original paper: [3D Object Representations for Fine-Grained Categorization](http://ai.stanford.edu/~jkrause/papers/3drr13.pdf) Homepage: http://ai.stanford.edu/~jkrause/cars/car_dataset.html Bibtex: ``` @inproceedings{KrauseStarkDengFei-Fei_3DRR2013, title = {3D Object Representations for Fine-Grained Categorization}, booktitle = {4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13)}, year = {2013}, address = {Sydney, Australia}, author = {Jonathan Krause and Michael Stark and Jia Deng and Li Fei-Fei} } ```
djghosh/wds_cars_test
[ "region:us" ]
2022-12-01T22:05:28+00:00
{}
2022-12-12T22:13:41+00:00
e26a882e4cedabc7bed82e2e24525dd7e328c14b
# Rendered SST2 (Test set only) Original paper: [The Visual Task Adaptation Benchmark](https://arxiv.org/abs/1910.04867) Homepage: https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md Derived from SST2: https://nlp.stanford.edu/sentiment/treebank.html Bibtex: ``` @article{zhai2019visual, title={The Visual Task Adaptation Benchmark}, author={Xiaohua Zhai and Joan Puigcerver and Alexander Kolesnikov and Pierre Ruyssen and Carlos Riquelme and Mario Lucic and Josip Djolonga and Andre Susano Pinto and Maxim Neumann and Alexey Dosovitskiy and Lucas Beyer and Olivier Bachem and Michael Tschannen and Marcin Michalski and Olivier Bousquet and Sylvain Gelly and Neil Houlsby}, year={2019}, eprint={1910.04867}, archivePrefix={arXiv}, primaryClass={cs.CV}, url = {https://arxiv.org/abs/1910.04867} } ```
djghosh/wds_renderedsst2_test
[ "arxiv:1910.04867", "region:us" ]
2022-12-01T22:06:34+00:00
{}
2022-12-12T22:07:24+00:00
9df2c4de5740764de4b1efce408da3291803b38e
# STL-10 (Test set only) Original paper: [An Analysis of Single Layer Networks in Unsupervised Feature Learning](http://cs.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf) Homepage: https://cs.stanford.edu/~acoates/stl10/ Bibtex: ``` @InProceedings{pmlr-v15-coates11a, title = {An Analysis of Single-Layer Networks in Unsupervised Feature Learning}, author = {Coates, Adam and Ng, Andrew and Lee, Honglak}, booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics}, pages = {215--223}, year = {2011}, editor = {Gordon, Geoffrey and Dunson, David and Dudík, Miroslav}, volume = {15}, series = {Proceedings of Machine Learning Research}, address = {Fort Lauderdale, FL, USA}, month = {11--13 Apr}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v15/coates11a/coates11a.pdf}, url = {https://proceedings.mlr.press/v15/coates11a.html} } ```
djghosh/wds_stl10_test
[ "region:us" ]
2022-12-01T22:07:23+00:00
{}
2022-12-12T22:16:13+00:00
877df5300e73eb86c7a7675f73d82bae6e5accf9
# Small NORB Azimuth (Test set only) Original paper: [Learning Methods for Generic Object Recognition with Invariance to Pose and Lighting](https://ieeexplore.ieee.org/document/1315150) Homepage: https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/ Bibtex: ``` @INPROCEEDINGS{1315150, author={LeCun, Y. and Fu Jie Huang and Bottou, L.}, booktitle={Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.}, title={Learning methods for generic object recognition with invariance to pose and lighting}, year={2004}, volume={2}, number={}, pages={II-104 Vol.2}, doi={10.1109/CVPR.2004.1315150}} ```
djghosh/wds_vtab-smallnorb_label_azimuth_test
[ "region:us" ]
2022-12-01T22:07:50+00:00
{}
2022-12-12T22:12:30+00:00
775c96dd8d75371e7d21bdf14ec704e20843df45
# Small NORB Elevation (Test set only) Original paper: [Learning Methods for Generic Object Recognition with Invariance to Pose and Lighting](https://ieeexplore.ieee.org/document/1315150) Homepage: https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/ Bibtex: ``` @INPROCEEDINGS{1315150, author={LeCun, Y. and Fu Jie Huang and Bottou, L.}, booktitle={Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.}, title={Learning methods for generic object recognition with invariance to pose and lighting}, year={2004}, volume={2}, number={}, pages={II-104 Vol.2}, doi={10.1109/CVPR.2004.1315150}} ```
djghosh/wds_vtab-smallnorb_label_elevation_test
[ "region:us" ]
2022-12-01T22:08:31+00:00
{}
2022-12-12T22:12:08+00:00
2b7f7ed640445230410588aa7dba1672a2f34ff0
manirai91/ebiquity-v2-stemmed
[ "region:us" ]
2022-12-01T22:28:32+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "config_name": "ebiquity-v2-stemmed", "splits": [{"name": "train", "num_bytes": 2192488, "num_examples": 3289}], "download_size": 1414009, "dataset_size": 2192488}}
2022-12-01T23:27:04+00:00
465849035bea10d0c58b0b8b0ae9a905b778d5ff
# AutoTrain Dataset for project: testjumeee ## Dataset Description This dataset has been automatically processed by AutoTrain for project testjumeee. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "context": "766", "question": "Mass analysis is based on analyzing debitage populations based on their size distribution across specified size grades.", "answers.text": [ "One form of debitage analysis is based on analyzing debitage populations based on their size distribution across specified size grades." ], "answers.answer_start": [ 0 ] }, { "context": "658", "question": "Just watched the first 15 minutes, got bored, skipped to the magic bit, it's funnier as a GIF.", "answers.text": [ "Just watched the first 30 minutes, got bored, skipped to the magic bit, it's funnier as a GIF." ], "answers.answer_start": [ 1 ] } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "context": "Value(dtype='string', id=None)", "question": "Value(dtype='string', id=None)", "answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 883 | | valid | 221 |
Jumus/autotrain-data-testjumeee
[ "language:en", "region:us" ]
2022-12-01T22:53:44+00:00
{"language": ["en"]}
2022-12-01T22:56:07+00:00
588a5356361dd19c5cfa551e9b1cbd88964ad026
Doxo/Tul_a
[ "license:artistic-2.0", "region:us" ]
2022-12-01T23:50:21+00:00
{"license": "artistic-2.0"}
2022-12-01T23:50:21+00:00
dfb480d142d00182cfefae60d6d9e492068c6348
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
DTU54DL/commonvoice10k
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-12-02T00:57:27+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]}
2022-12-02T00:58:26+00:00
ab22563431db30dda7acdc7c9c7045ef6062fee3
# SVHN (Test set only) Original paper: [Reading Digits in Natural Images with Unsupervised Feature Learning](http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf) Homepage: http://ufldl.stanford.edu/housenumbers/ Bibtex: ``` @inproceedings{Netzer2011ReadingDI, title={Reading Digits in Natural Images with Unsupervised Feature Learning}, author={Yuval Netzer and Tao Wang and Adam Coates and A. Bissacco and Bo Wu and A. Ng}, year={2011} } ```
djghosh/wds_vtab-svhn_test
[ "region:us" ]
2022-12-02T01:04:11+00:00
{}
2022-12-12T22:22:10+00:00
393ad3eedeb13606ed339a4d10cc33f3153e0068
# Dataset Card for "es_docvqa_donut" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Viccorza/es_docvqa_donut
[ "region:us" ]
2022-12-02T01:56:02+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "query", "struct": [{"name": "de", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "es", "dtype": "string"}, {"name": "fr", "dtype": "string"}, {"name": "it", "dtype": "string"}]}, {"name": "answers", "sequence": "string"}, {"name": "words", "sequence": "string"}, {"name": "bounding_boxes", "sequence": {"sequence": "float32", "length": 4}}, {"name": "answer", "struct": [{"name": "match_score", "dtype": "float64"}, {"name": "matched_text", "dtype": "string"}, {"name": "start", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 380254939.0, "num_examples": 1000}, {"name": "test", "num_bytes": 70616064.0, "num_examples": 200}], "download_size": 147813399, "dataset_size": 450871003.0}}
2022-12-02T01:56:26+00:00
b1079a8940e6fa79c020f6365f872758b085306e
# Dataset Card for "lat_en_loeb_whitaker" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
grosenthal/lat_en_loeb_whitaker
[ "region:us" ]
2022-12-02T01:56:48+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "la", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 34184558.73094817, "num_examples": 89176}, {"name": "test", "num_bytes": 1899056.965474088, "num_examples": 4954}, {"name": "valid", "num_bytes": 1899440.3035777363, "num_examples": 4955}], "download_size": 24273625, "dataset_size": 37983056.0}}
2023-01-25T17:47:57+00:00
75131116f9b64994c6e1ff565991b42895ab2850
# Dataset Card for "viewiki_segment_sent" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hieule/viewiki_segment_sent
[ "region:us" ]
2022-12-02T04:42:56+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1280366026, "num_examples": 8231949}], "download_size": 251969091, "dataset_size": 1280366026}}
2022-12-02T04:50:37+00:00
8310e3e15437045f6c8fd786922463802688862a
# Dataset Card for "wikivie" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hieule/wikivie
[ "region:us" ]
2022-12-02T05:00:49+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1292985940, "num_examples": 1273469}], "download_size": 603762953, "dataset_size": 1292985940}}
2022-12-02T05:03:41+00:00
30ffd8658765862a7a2d3e71643e732762a1ead1
# Dataset Card for "text_summarization_dataset7" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shahidul034/text_summarization_dataset7
[ "region:us" ]
2022-12-02T05:01:28+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 128244993, "num_examples": 108802}], "download_size": 44502964, "dataset_size": 128244993}}
2022-12-02T05:01:33+00:00
650b27b6a0dde78749495f023af652310ab9fd9a
# Dataset Card for "text_summarization_dataset8" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shahidul034/text_summarization_dataset8
[ "region:us" ]
2022-12-02T05:02:09+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 126184009, "num_examples": 101745}], "download_size": 44181954, "dataset_size": 126184009}}
2022-12-02T05:02:13+00:00
1c9ca31e0e4634b56561d81226d298ac600042b2
# Dataset Card for "text_summarization_dataset9" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shahidul034/text_summarization_dataset9
[ "region:us" ]
2022-12-02T05:02:47+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 130228352, "num_examples": 104575}], "download_size": 45376452, "dataset_size": 130228352}}
2022-12-02T05:02:51+00:00
8e0044eab75d6598568b466c01738db796b83897
# AutoTrain Dataset for project: summarizer ## Dataset Description This dataset has been automatically processed by AutoTrain for project summarizer. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "\n\u064a\u0648\u0627\u062c\u0647 \u0627\u0644\u0627\u0633\u0628\u0627\u0646\u064a \u0631\u0641\u0627\u0626\u064a\u0644 \u0646\u0627\u062f\u0627\u0644 \u064a\u0648\u0645 \u063a\u062f \u0627\u0644\u0627\u062d\u062f \u0627\u0646\u0637\u0644\u0627\u0642\u0627 \u0645\u0646 \u0627\u0644\u0633\u0627\u0639\u0629 21:00 \u0645\u0646\u0627\u0641\u0633\u0647 \u0627\u0644\u0633\u0648\u064a\u0633\u0631\u064a \u0631\u0648\u062c\u0631 \u0641\u064a\u062f\u0631\u0631 \u0641\u064a \u0646\u0647\u0627\u0626\u064a \u0628\u0637\u0648\u0644\u0629 \u0645\u064a\u0627\u0645\u064a \u0627\u0644\u0645\u0641\u062a\u0648\u062d\u0629 \u0644\u0644\u062a\u0646\u0633 .\n\u0648 \u064a\u0633\u0639\u0649 \u0641\u064a\u062f\u0631\u0631 \u0644\u062a\u062d\u0642\u064a\u0642 \u062b\u0627\u0644\u062b \u0627\u0644\u0642\u0627\u0628\u0647 \u0647\u0630\u0627 \u0627\u0644\u0645\u0648\u0633\u0645 \u0628\u0639\u062f \u062a\u062a\u0648\u064a\u062c\u0647 \u0628\u0644\u0642\u0628 \u0628\u0637\u0648\u0644\u0629 \u0627\u0633\u062a\u0631\u0627\u0644\u064a\u0627 \u0641\u064a \u062c\u0627\u0646\u0641\u064a \u0627\u0644\u0645\u0627\u0636\u064a \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0646\u0627\u062f\u0627\u0644 \u0648 \u062a\u062a\u0648\u064a\u062c\u0647\u00a0\u0628\u0628\u0637\u0648\u0644\u0629 \u0625\u0646\u062f\u064a\u0627\u0646 \u0648\u064a\u0644\u0632 \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0645\u0648\u0627\u0637\u0646\u0647 \u0641\u0627\u0641\u0631\u064a\u0646\u0643\u0627\u00a0\u00a0.\n", "target": "\u0648 \u064a\u0633\u0639\u0649 \u0641\u064a\u062f\u0631\u0631 \u0644\u062a\u062d\u0642\u064a\u0642 \u062b\u0627\u0644\u062b \u0627\u0644\u0642\u0627\u0628\u0647 \u0647\u0630\u0627 
\u0627\u0644\u0645\u0648\u0633\u0645 \u0628\u0639\u062f \u062a\u062a\u0648\u064a\u062c\u0647 \u0628\u0644\u0642\u0628 \u0628\u0637\u0648\u0644\u0629 \u0627\u0633\u062a\u0631\u0627\u0644\u064a\u0627 \u0641\u064a \u062c\u0627\u0646\u0641\u064a \u0627\u0644\u0645\u0627\u0636\u064a \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0646\u0627\u062f\u0627\u0644 \u0648 \u062a\u062a\u0648\u064a\u062c\u0647\u00a0\u0628\u0628\u0637\u0648\u0644\u0629 \u0625\u0646\u062f\u064a\u0627\u0646 \u0648\u064a\u0644\u0632 \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0645\u0648\u0627\u0637\u0646\u0647 \u0641\u0627\u0641\u0631\u064a\u0646\u0643\u0627\u00a0\u00a0. \n\u064a\u0648\u0627\u062c\u0647 \u0627\u0644\u0627\u0633\u0628\u0627\u0646\u064a \u0631\u0641\u0627\u0626\u064a\u0644 \u0646\u0627\u062f\u0627\u0644 \u064a\u0648\u0645 \u063a\u062f \u0627\u0644\u0627\u062d\u062f \u0627\u0646\u0637\u0644\u0627\u0642\u0627 \u0645\u0646 \u0627\u0644\u0633\u0627\u0639\u0629 21:00 \u0645\u0646\u0627\u0641\u0633\u0647 \u0627\u0644\u0633\u0648\u064a\u0633\u0631\u064a \u0631\u0648\u062c\u0631 \u0641\u064a\u062f\u0631\u0631 \u0641\u064a \u0646\u0647\u0627\u0626\u064a \u0628\u0637\u0648\u0644\u0629 \u0645\u064a\u0627\u0645\u064a \u0627\u0644\u0645\u0641\u062a\u0648\u062d\u0629 \u0644\u0644\u062a\u0646\u0633 ." 
}, { "text": "\n\u0642\u0627\u0644\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0627\u0644\u0623\u0645\u064a\u0631\u0643\u064a\u0629 \u0625\u0646 \u0631\u062c\u0644\u0627 \u0645\u0646 \u0648\u0644\u0627\u064a\u0629 \u0628\u0646\u0633\u0644\u0641\u0627\u0646\u064a\u0627 \u0627\u062a\u0635\u0644 \u0645\u0631\u0627\u0631\u0627 \u0628\u062e\u062f\u0645\u0629 \u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0628\u064a\u0646\u0645\u0627 \u0643\u0627\u0646 \u062a\u062d\u062a \u062a\u0623\u062b\u064a\u0631 \u0627\u0644\u0643\u062d\u0648\u0644 \u0648\u0628\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0634\u062e\u0635 \u064a\u062a\u062d\u062f\u062b \u0645\u0639\u0647.\n\u0648\u0630\u0643\u0631\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0646 \u0644\u0627\u0631\u0649 \u0643\u064a\u0632\u0631 \u0627\u0644\u0628\u0627\u0644\u063a 51 \u0639\u0627\u0645\u0627 \u0627\u062a\u0635\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0644\u0623\u0648\u0644 \u0645\u0631\u0629 \u062d\u0648\u0627\u0644\u064a \u0627\u0644\u0639\u0627\u0634\u0631\u0629 \u0648\u0627\u0644\u0646\u0635\u0641 \u0645\u0646 \u0645\u0633\u0627\u0621 \u0627\u0644\u062c\u0645\u0639\u0629\u060c \u0637\u0627\u0644\u0628\u0627 \u0627\u0644\u062a\u062d\u062f\u062b \u0645\u0639 \u0636\u0627\u0628\u0637 \u0634\u0631\u0637\u0629\u060c \u0645\u0646 \u062f\u0648\u0646 \u0627\u0644\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626. 
\u0648\u0648\u0641\u0642\u0627 \u0644\u0640\"\u0623\u0633\u0648\u0634\u064a\u062a\u062f\u0628\u0631\u0633\" \u0641\u0642\u062f \u0639\u0627\u0648\u062f \u0643\u064a\u0632\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 5 \u0645\u0631\u0627\u062a \u0623\u062e\u0631\u0649\u060c \u0627\u0644\u0623\u0645\u0631 \u0627\u0644\u0630\u064a \u0627\u0633\u062a\u062f\u0639\u0649 \u0642\u0648\u0629 \u0645\u0646 \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0644\u0649 \u0645\u0646\u0632\u0644\u0647 \u0641\u064a \u0628\u0644\u062f\u0629 \u0648\u0627\u064a\u062a\u0647\u0648\u0644 \u0627\u0644\u0634\u0645\u0627\u0644\u064a\u0629 \u0628\u0639\u062f \u0645\u0646\u062a\u0635\u0641 \u0627\u0644\u0644\u064a\u0644. \u0648\u0642\u0627\u0644 \u0627\u0644\u0631\u062c\u0644 \u0644\u0644\u0636\u0628\u0627\u0637 \u0625\u0646\u0647 \u0634\u0631\u0628 \u0627\u0644\u0639\u062f\u064a\u062f \u0645\u0646 \u0627\u0644\u062c\u0639\u0629 \u0644\u0623\u0646\u0647 \u0643\u0627\u0646 \u0645\u0646\u0632\u0639\u062c\u0627 \u0645\u0646 \u0645\u0634\u0627\u0643\u0644\u0647 \u0627\u0644\u0639\u0627\u0626\u0644\u064a\u0629\u060c \u0648\u062a\u0639\u0647\u062f \u0643\u064a\u0632\u0631 \u0628\u0639\u062f\u0645 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0645\u062c\u062f\u062f\u0627 \u0628\u0627\u0644\u0634\u0631\u0637\u0629\u060c \u0625\u0644\u0627 \u0625\u0630\u0627 \u0643\u0627\u0646 \u0647\u0646\u0627\u0643 \u062d\u0627\u0644\u0629 \u0637\u0648\u0627\u0631\u0626 \u062d\u0642\u064a\u0642\u0629 \u062a\u0633\u062a\u062f\u0639\u064a\u0647\u0645. \u0644\u0643\u0646 \u0627\u0644\u0631\u062c\u0644 \u0643\u0631\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0628\u0639\u062f \u062f\u0642\u064a\u0642\u0629 \u0648\u0627\u062d\u062f\u0629 \u0641\u0642\u0637 \u0645\u0646 \u0645\u063a\u0627\u062f\u0631\u0629 \u0627\u0644\u0636\u0628\u0627\u0637. 
\u0648\u0642\u062f \u0623\u0644\u0642\u064a \u0627\u0644\u0642\u0628\u0636 \u0639\u0644\u0649 \u0643\u064a\u0632\u0631\u060c \u0644\u064a\u0648\u0627\u062c\u0647 \u0627\u062a\u0647\u0627\u0645\u0627\u062a \u0628\u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0639\u0645\u062f\u0627 \u0644\u0623\u0633\u0628\u0627\u0628 \u063a\u064a\u0631 \u0637\u0627\u0631\u0626\u0629.\n", "target": "\u0648\u0648\u0641\u0642\u0627 \u0644\u0640\"\u0623\u0633\u0648\u0634\u064a\u062a\u062f\u0628\u0631\u0633\" \u0641\u0642\u062f \u0639\u0627\u0648\u062f \u0643\u064a\u0632\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 5 \u0645\u0631\u0627\u062a \u0623\u062e\u0631\u0649\u060c \u0627\u0644\u0623\u0645\u0631 \u0627\u0644\u0630\u064a \u0627\u0633\u062a\u062f\u0639\u0649 \u0642\u0648\u0629 \u0645\u0646 \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0644\u0649 \u0645\u0646\u0632\u0644\u0647 \u0641\u064a \u0628\u0644\u062f\u0629 \u0648\u0627\u064a\u062a\u0647\u0648\u0644 \u0627\u0644\u0634\u0645\u0627\u0644\u064a\u0629 \u0628\u0639\u062f \u0645\u0646\u062a\u0635\u0641 \u0627\u0644\u0644\u064a\u0644. \u0648\u0630\u0643\u0631\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0646 \u0644\u0627\u0631\u0649 \u0643\u064a\u0632\u0631 \u0627\u0644\u0628\u0627\u0644\u063a 51 \u0639\u0627\u0645\u0627 \u0627\u062a\u0635\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0644\u0623\u0648\u0644 \u0645\u0631\u0629 \u062d\u0648\u0627\u0644\u064a \u0627\u0644\u0639\u0627\u0634\u0631\u0629 \u0648\u0627\u0644\u0646\u0635\u0641 \u0645\u0646 \u0645\u0633\u0627\u0621 \u0627\u0644\u062c\u0645\u0639\u0629\u060c \u0637\u0627\u0644\u0628\u0627 \u0627\u0644\u062a\u062d\u062f\u062b \u0645\u0639 \u0636\u0627\u0628\u0637 \u0634\u0631\u0637\u0629\u060c \u0645\u0646 \u062f\u0648\u0646 \u0627\u0644\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626. 
\u0648\u0642\u0627\u0644 \u0627\u0644\u0631\u062c\u0644 \u0644\u0644\u0636\u0628\u0627\u0637 \u0625\u0646\u0647 \u0634\u0631\u0628 \u0627\u0644\u0639\u062f\u064a\u062f \u0645\u0646 \u0627\u0644\u062c\u0639\u0629 \u0644\u0623\u0646\u0647 \u0643\u0627\u0646 \u0645\u0646\u0632\u0639\u062c\u0627 \u0645\u0646 \u0645\u0634\u0627\u0643\u0644\u0647 \u0627\u0644\u0639\u0627\u0626\u0644\u064a\u0629\u060c \u0648\u062a\u0639\u0647\u062f \u0643\u064a\u0632\u0631 \u0628\u0639\u062f\u0645 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0645\u062c\u062f\u062f\u0627 \u0628\u0627\u0644\u0634\u0631\u0637\u0629\u060c \u0625\u0644\u0627 \u0625\u0630\u0627 \u0643\u0627\u0646 \u0647\u0646\u0627\u0643 \u062d\u0627\u0644\u0629 \u0637\u0648\u0627\u0631\u0626 \u062d\u0642\u064a\u0642\u0629 \u062a\u0633\u062a\u062f\u0639\u064a\u0647\u0645." } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 2400 | | valid | 600 |
abdalrahmanshahrour/data-summarizer
[ "region:us" ]
2022-12-02T07:33:52+00:00
{"task_categories": ["conditional-text-generation"]}
2022-12-02T07:36:02+00:00
2eb1ead2be30d7194d38537f9bb5c4f48a8c8f06
tqhuyen/MC_OCR2021
[ "license:unknown", "region:us" ]
2022-12-02T07:40:52+00:00
{"license": "unknown"}
2022-12-02T09:46:36+00:00
b59167e3d7a08086624ba70b082f28a5e15f8f1d
# Dataset Card for "flintstones_story" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dhruvrnaik/flintstones_story
[ "region:us" ]
2022-12-02T08:49:18+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4816061959.792, "num_examples": 20656}, {"name": "test", "num_bytes": 588052405.413, "num_examples": 2377}, {"name": "validation", "num_bytes": 529750545.045, "num_examples": 2135}], "download_size": 6232281749, "dataset_size": 5933864910.25}}
2022-12-02T08:52:40+00:00
73bae4d21d86036cc0f567e612633ca6d84c018c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-0cfbe1dd-f38d-4b9d-9a4a-48037e1ca217-114110
[ "autotrain", "evaluation", "region:us" ]
2022-12-02T09:36:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-12-02T09:37:11+00:00
b715b9252df462fe989989243931ee83cd8592e6
ahujack/bg_bilp_data
[ "license:bsd-2-clause", "region:us" ]
2022-12-02T09:40:36+00:00
{"license": "bsd-2-clause"}
2022-12-02T10:33:17+00:00
eaf3d089703e474e8a136ef6cc16ca6fef37832e
SciSearch/wiki-data
[ "license:other", "region:us" ]
2022-12-02T10:04:39+00:00
{"license": "other"}
2022-12-02T10:04:39+00:00
812e26ef4bb12d3de7e2469b687baedb198f7260
SciSearch/wiki
[ "license:unknown", "region:us" ]
2022-12-02T10:08:23+00:00
{"license": "unknown"}
2022-12-02T10:08:23+00:00
bf40b93a7991981bb711547bf9b73e8805df3a8f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-967d2d60-ff7d-4923-acf4-ea7cf37843b4-115111
[ "autotrain", "evaluation", "region:us" ]
2022-12-02T10:30:24+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-12-02T10:31:00+00:00
ef33677bb4ac8cb6db274e4c61c114d13689ab83
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-e40e4fc9-c0ca-4ccb-8d8b-09dce405aafc-116112
[ "autotrain", "evaluation", "region:us" ]
2022-12-02T10:34:09+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-12-02T10:34:46+00:00
87125a405b8ea8031d5b1e0939d33ea2cefeecf9
teknologipendidikan/EDT101-Javanese-Corpus
[ "license:cc-by-sa-4.0", "region:us" ]
2022-12-02T11:16:04+00:00
{"license": "cc-by-sa-4.0"}
2022-12-02T11:16:04+00:00
f0d0fcf0ab24f2635e16cad44403a3b8c1e3904b
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary SK-QuAD is the first QA dataset for the Slovak language. It is manually annotated, so it has no distortion caused by machine translation. The dataset is thematically diverse – it does not overlap with SQuAD – it brings new knowledge. It passed the second round of annotation – each question and the answer were seen by at least two annotators. ### Supported Tasks and Leaderboards - Question answering - Document retrieval ### Languages - Slovak ## Dataset Structure #### squad_v2 - **Size of downloaded dataset files:** 44.34 MB - **Size of the generated dataset:** 122.57 MB - **Total amount of disk used:** 166.91 MB - An example of 'validation' looks as follows. 
``` This example was too long and was cropped: { "answers": { "answer_start": [94, 87, 94, 94], "text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"] }, "context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...", "id": "56ddde6b9a695914005b9629", "question": "When were the Normans in Normandy?", "title": "Normans" } ``` ### Data Fields The data fields are the same among all splits. #### squad_v2 - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits | | Train | Dev | Translated | | ------------- | -----: | -----: | -------: | | Documents | 8,377 | 940 | 442 | | Paragraphs | 22,062 | 2,568 | 18,931 | | Questions | 81,582 | 9,583 | 120,239 | | Answers | 65,839 | 7,822 | 79,978 | | Unanswerable | 15,877 | 1,784 | 40,261 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators - Deutsche Telekom Systems Solutions Slovakia - Technical University of Košice ### Licensing Information Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
TUKE-DeutscheTelekom/skquad
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_ids:open-domain-qa", "task_ids:extractive-qa", "task_ids:document-retrieval", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:sk", "license:cc-by-sa-4.0", "license:cc-by-4.0", "wikipedia", "region:us" ]
2022-12-02T11:28:37+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["sk"], "license": ["cc-by-sa-4.0", "cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-retrieval"], "task_ids": ["open-domain-qa", "extractive-qa", "document-retrieval"], "paperswithcode_id": "squad", "pretty_name": "skquad", "tags": ["wikipedia"], "train-eval-index": [{"col_mapping": {"answers": {"answer_start": "answer_start", "text": "text"}, "context": "context", "question": "question"}, "config": "squad_v2", "metrics": [{"name": "SQuAD v2", "type": "squad_v2"}], "splits": {"eval_split": "validation", "train_split": "train"}, "task": "question-answering", "task_id": "extractive_question_answering"}]}
2022-12-05T14:10:32+00:00
b773d8156a84fa69dfa1cd79678d5fbd61249601
# Dataset Card for "github-discussion"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Rami/github-discussion
[ "region:us" ]
2022-12-02T12:06:43+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 585875, "num_examples": 286}, {"name": "valid", "num_bytes": 295046, "num_examples": 142}], "download_size": 0, "dataset_size": 880921}}
2022-12-02T12:16:25+00:00
11fb75cc9d0f73b5d58b7e8346c3801fa351cd3a
spaablauw/portraithelper
[ "license:cc", "region:us" ]
2022-12-02T13:01:16+00:00
{"license": "cc"}
2022-12-02T13:01:51+00:00
784dcd6ee754be1e8ee59c88fb8619978b142f31
diversoailab/results
[ "license:mit", "region:us" ]
2022-12-02T13:48:24+00:00
{"license": "mit"}
2022-12-12T09:49:09+00:00
5580bbdc10dec88015242539befc552822650c01
LadinoMendes/ladinomendes
[ "license:openrail", "region:us" ]
2022-12-02T14:27:08+00:00
{"license": "openrail"}
2022-12-02T14:27:08+00:00
869562ad2596641a72e7bdfe10861a055aae0f19
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Text Classification
* Model: autoevaluate/binary-classification-not-evaluated
* Dataset: glue
* Config: sst2
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-efa0c910-63e6-4e94-9ead-ecdfc9f84f6e-117113
[ "autotrain", "evaluation", "region:us" ]
2022-12-02T14:51:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification-not-evaluated", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-12-02T14:51:50+00:00
52c6fb5d2f42f46d0c6a7f3659e0c6fcc16b4516
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification-not-evaluated
* Dataset: emotion
* Config: default
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-e4791b21-302d-4702-9dba-a4a3a73498cd-118114
[ "autotrain", "evaluation", "region:us" ]
2022-12-02T14:55:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification-not-evaluated", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-12-02T14:56:37+00:00
d239374827c776e0592421bd11c8067e0d408634
NagaSaiAbhinay/CheckpointMergerSamples
[ "license:openrail", "region:us" ]
2022-12-02T15:11:49+00:00
{"license": "openrail"}
2022-12-02T15:39:33+00:00
7ed1bc361cf802f0b30f1fcabe6de517334c4e49
# Dataset Card for "IC-Satellites"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
excitedlord/IC-Satellites
[ "region:us" ]
2022-12-02T15:39:38+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 9018028.25, "num_examples": 1275}, {"name": "test", "num_bytes": 1584428.55, "num_examples": 225}], "download_size": 10777803, "dataset_size": 10602456.8}}
2022-12-02T15:39:46+00:00