sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---|
5762c9fbe217adf957425bf7d02c4cd154c6e46d | Monosis1995/KoiSam | [
"license:afl-3.0",
"region:us"
] | 2022-09-22T19:24:55+00:00 | {"license": "afl-3.0"} | 2022-09-22T19:26:43+00:00 |
|
40bdb13a08d7acbfdefc8757fcf8992b7963e060 | # Dataset Card for "gradio-dependents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | open-source-metrics/gradio-dependents | [
"region:us"
] | 2022-09-22T19:32:03+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 2413, "num_examples": 60}, {"name": "repository", "num_bytes": 185253, "num_examples": 3926}], "download_size": 112345, "dataset_size": 187666}} | 2024-02-16T20:56:10+00:00 |
aec7dd1b87ea54c67b2823ba5fc09c2b9ede8f6e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-ded028-2312 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-22T19:55:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-22T20:03:51+00:00 |
6abfd356ba7ac593c607c0fee3f8666e39db69a6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-ab10d5-2413 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-22T20:11:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-22T20:12:01+00:00 |
62eddd2262a1357f9574f59f54a6eac7794e6d07 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-914f2c-2514 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-22T20:16:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero-shot-classification-large-test"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "Tristan/zero-shot-classification-large-test", "dataset_config": "Tristan--zero-shot-classification-large-test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-22T21:03:52+00:00 |
6c3ed433023c6b7830a9f1f957ee511c31bb4ce9 |
## Description
This dataset contains triples of the form ("query1", "query2", "label"), where the labels are mapped as follows:
- similar: 1
- not similar: 0
- ambiguous: -1 | neeva/query2query_evaluation | [
"task_categories:sentence-similarity",
"region:us"
] | 2022-09-22T20:43:54+00:00 | {"task_categories": ["sentence-similarity"]} | 2022-09-22T21:58:34+00:00 |
69cb9d1035e5bbc34516d9dc016b50aa03e279c7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Jorgeutd/sagemaker-roberta-base-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@neehau](https://huggingface.co/neehau) for evaluating this model. | autoevaluate/autoeval-eval-emotion-default-98e72c-1536755281 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-22T20:50:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Jorgeutd/sagemaker-roberta-base-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-09-22T20:51:27+00:00 |
70ade0819ad2c1f3b42f83e859a489b457f667e8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model. | autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-eb4ad9-22 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-22T21:31:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero-shot-classification-large-test"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "Tristan/zero-shot-classification-large-test", "dataset_config": "Tristan--zero-shot-classification-large-test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-22T23:38:10+00:00 |
27fd6aba198ae571c71b11aefb2335f04cd151de | Oragani/BoneworksFord | [
"license:afl-3.0",
"region:us"
] | 2022-09-22T23:49:07+00:00 | {"license": "afl-3.0"} | 2022-09-22T23:49:07+00:00 |
|
52d9dd11f3f31e920f3b86b3fecb2655ecb94be1 | cjsojulz01/cjsojulz | [
"license:afl-3.0",
"region:us"
] | 2022-09-23T03:06:18+00:00 | {"license": "afl-3.0"} | 2022-09-23T03:06:43+00:00 |
|
37b92f99bbd820c24fc60cad5984a242bda86b4e | ourjames/Linda-Chase-Head-20170720 | [
"license:apache-2.0",
"region:us"
] | 2022-09-23T03:55:48+00:00 | {"license": "apache-2.0"} | 2022-09-23T04:17:44+00:00 |
|
f3e42bc8df06ce710946a8a14ef5ebacf1a4e19b | taskmasterpeace/d | [
"license:bigscience-openrail-m",
"region:us"
] | 2022-09-23T04:31:04+00:00 | {"license": "bigscience-openrail-m"} | 2022-09-23T04:31:04+00:00 |
|
eda21347985c2b59d4a050809ebc5ea8b322ae2f | Kris5/test | [
"license:other",
"region:us"
] | 2022-09-23T04:32:14+00:00 | {"license": "other"} | 2022-09-23T04:32:15+00:00 |
|
3398e8f029cb199893c036ee39f32ae1d3392ffb | SQexplorer/SQ | [
"license:openrail",
"region:us"
] | 2022-09-23T07:19:24+00:00 | {"license": "openrail"} | 2022-09-23T07:19:24+00:00 |
|
8075a09728578927f1984022df33907bcadba41c | varun-d/asdfasdfa | [
"license:openrail",
"region:us"
] | 2022-09-23T07:36:49+00:00 | {"license": "openrail"} | 2022-09-23T07:36:49+00:00 |
|
7772b4c915269a59f75a85f9875e82e3e33889c4 | j0hngou/ccmatrix_en-it_subsampled | [
"language:en",
"language:it",
"region:us"
] | 2022-09-23T10:40:11+00:00 | {"language": ["en", "it"]} | 2022-09-26T15:34:43+00:00 |
|
43b223a8643cbb2f5347d82f83a3c1770af49573 | jinyan438/hh | [
"region:us"
] | 2022-09-23T11:18:10+00:00 | {} | 2022-09-23T11:29:09+00:00 |
|
7047858126a84448d9d1c5b5a16abcb233f22243 | freddyaboulton/gradio-subapp | [
"license:mit",
"region:us"
] | 2022-09-23T15:17:40+00:00 | {"license": "mit"} | 2022-09-23T15:17:40+00:00 |
|
bc0e6e13bd30db81e45194b7e95ba06ea15c40f4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-66b-copy
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-d81307-16956302 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-23T17:13:34+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero-shot-classification-large-test"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-66b-copy", "metrics": [], "dataset_name": "Tristan/zero-shot-classification-large-test", "dataset_config": "Tristan--zero-shot-classification-large-test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-23T20:43:03+00:00 |
36753cc241cc2951be69b6e230f3d7a028e5b066 | # Dataset Card for "issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | open-source-metrics/issues | [
"region:us"
] | 2022-09-23T17:41:08+00:00 | {"dataset_info": {"features": [{"name": "dates", "dtype": "string"}, {"name": "type", "struct": [{"name": "authorAssociation", "dtype": "string"}, {"name": "comment", "dtype": "bool"}, {"name": "issue", "dtype": "bool"}]}], "splits": [{"name": "transformers", "num_bytes": 4712948, "num_examples": 133536}, {"name": "peft", "num_bytes": 228526, "num_examples": 6670}, {"name": "evaluate", "num_bytes": 63940, "num_examples": 1825}, {"name": "huggingface_hub", "num_bytes": 288140, "num_examples": 8274}, {"name": "accelerate", "num_bytes": 361197, "num_examples": 10324}, {"name": "datasets", "num_bytes": 821418, "num_examples": 23444}, {"name": "optimum", "num_bytes": 195473, "num_examples": 5630}, {"name": "pytorch_image_models", "num_bytes": 143735, "num_examples": 4167}, {"name": "gradio", "num_bytes": 1118865, "num_examples": 30797}, {"name": "tokenizers", "num_bytes": 195421, "num_examples": 5703}, {"name": "diffusers", "num_bytes": 1346732, "num_examples": 38439}, {"name": "safetensors", "num_bytes": 48986, "num_examples": 1418}, {"name": "candle", "num_bytes": 153795, "num_examples": 4054}, {"name": "text_generation_inference", "num_bytes": 204982, "num_examples": 6044}, {"name": "chat_ui", "num_bytes": 82128, "num_examples": 2360}, {"name": "hub_docs", "num_bytes": 137648, "num_examples": 3914}], "download_size": 3150086, "dataset_size": 10103934}, "configs": [{"config_name": "default", "data_files": [{"split": "peft", "path": "data/peft-*"}, {"split": "hub_docs", "path": "data/hub_docs-*"}, {"split": "evaluate", "path": "data/evaluate-*"}, {"split": "huggingface_hub", "path": "data/huggingface_hub-*"}, {"split": "accelerate", "path": "data/accelerate-*"}, {"split": "datasets", "path": "data/datasets-*"}, {"split": "optimum", "path": "data/optimum-*"}, {"split": "pytorch_image_models", "path": "data/pytorch_image_models-*"}, {"split": "gradio", "path": "data/gradio-*"}, {"split": "tokenizers", "path": "data/tokenizers-*"}, 
{"split": "diffusers", "path": "data/diffusers-*"}, {"split": "transformers", "path": "data/transformers-*"}, {"split": "safetensors", "path": "data/safetensors-*"}]}]} | 2024-02-15T12:00:57+00:00 |
51e1265fc8118bc9273550c3ade7ee4e546e0bb9 |
# Dataset Card for WinoGAViL
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Colab notebook code for Winogavil evaluation with CLIP](#colab-notebook-code-for-winogavil-evaluation-with-clip)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fit the association. The dataset was collected via the WinoGAViL online game, in which players create vision-and-language associations (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that the associations are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models: the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collected from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.
- **Homepage:**
https://winogavil.github.io/
- **Colab**
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
- **Repository:**
https://github.com/WinoGAViL/WinoGAViL-experiments/
- **Paper:**
https://arxiv.org/abs/2207.12576
- **Leaderboard:**
https://winogavil.github.io/leaderboard
- **Point of Contact:**
[email protected]; [email protected]
### Supported Tasks and Leaderboards
https://winogavil.github.io/leaderboard
https://paperswithcode.com/dataset/winogavil
## Colab notebook code for Winogavil evaluation with CLIP
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
### Languages
English.
## Dataset Structure
### Data Fields
candidates (list): ["bison", "shelter", "beard", "flea", "cattle", "shave"] - list of image candidates.
cue (string): pogonophile - the generated cue.
associations (string): ["bison", "beard", "shave"] - the images the user selected as associated with the cue.
score_fool_the_ai (int64): 80 - the spymaster's score (100 - model score) for fooling the AI, with the CLIP RN50 model.
num_associations (int64): 3 - the number of images selected as associated with the cue.
num_candidates (int64): 6 - the total number of candidates.
solvers_jaccard_mean (float64): 1.0 - the mean of three solvers' scores on the generated association instance.
solvers_jaccard_std (float64): 1.0 - the standard deviation of three solvers' scores on the generated association instance.
ID (int64): 367 - association ID.
### Data Splits
There is a single TEST split. In the accompanying paper and code we sample it to create different training sets, but the intended use is to treat WinoGAViL as a test set.
Instances have different numbers of candidates, which creates different difficulty levels:
-- With 5 candidates, the expected score of a random model is 38%.
-- With 6 candidates, the expected score of a random model is 34%.
-- With 10 candidates, the expected score of a random model is 24%.
-- With 12 candidates, the expected score of a random model is 19%.
<details>
<summary>Why is the expected random score 38% with 5 candidates?</summary>
It is a hypergeometric probability calculation (guessing without replacement).
Assuming N=5 candidates and K=2 associations, there are three possible events:
(1) The probability that a random guess gets 0 associations correct is 0.3 (elaborated below), and the Jaccard index is 0 (there is no intersection between the correct labels and the guesses). Therefore the contribution to the expected random score is 0.
(2) The probability that a random guess gets 1 association correct is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3: one correct guess and one wrong guess). Therefore the contribution is 0.6*0.33 = 0.198.
(3) The probability that a random guess gets 2 associations correct is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the contribution is 0.1*1 = 0.1.
* Together, when K=2, the expected score is 0+0.198+0.1 = 0.298.
To calculate (1), the first guess needs to be wrong. There are 3 "wrong" candidates among the 5, so the probability is 3/5. The next guess should also be wrong; now there are only 2 "wrong" candidates left among 4, so the probability is 2/4. Multiplying, 3/5 * 2/4 = 0.3.
The same reasoning gives (2) and (3).
Now we can perform the same calculation with K=3 associations.
Assuming N=5 candidates and K=3 associations, there are four possible events:
(4) The probability that a random guess gets 0 associations correct is 0, and the Jaccard index is 0. Therefore the contribution is 0.
(5) The probability that a random guess gets 1 association correct is 0.3, and the Jaccard index is 0.2 (intersection=1, union=4). Therefore the contribution is 0.3*0.2 = 0.06.
(6) The probability that a random guess gets 2 associations correct is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the contribution is 0.6*0.5 = 0.3.
(7) The probability that a random guess gets 3 associations correct is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the contribution is 0.1*1 = 0.1.
* Together, when K=3, the expected score is 0+0.06+0.3+0.1 = 0.46.
Taking the average of 0.298 and 0.46, we reach 0.379.
The same process can be repeated with 6 candidates (K=2,3,4), 10 candidates (K=2,3,4,5), and 12 candidates (K=2,3,4,5,6).
</details>
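The per-event calculation above can be reproduced with a short script (an illustrative sketch; `expected_random_jaccard` is a hypothetical helper name, not part of the released WinoGAViL code):

```python
from math import comb

def expected_random_jaccard(n, k):
    """Expected Jaccard index of a uniformly random k-subset guess
    against the true k-subset of associations, out of n candidates."""
    expected = 0.0
    for i in range(max(0, 2 * k - n), k + 1):
        # Hypergeometric probability of guessing exactly i associations correctly.
        p = comb(k, i) * comb(n - k, k - i) / comb(n, k)
        jaccard = i / (2 * k - i)  # intersection = i, union = 2k - i
        expected += p * jaccard
    return expected

# N=5 candidates, for K=2 and K=3 associations:
print(expected_random_jaccard(5, 2))  # ≈ 0.30
print(expected_random_jaccard(5, 3))  # ≈ 0.46
```

Averaging the K=2 and K=3 cases gives exactly 0.38; the 0.379 above comes from rounding the Jaccard index 1/3 to 0.33 in event (2).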
## Dataset Creation
Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players.
### Annotations
#### Annotation process
We paid Amazon Mechanical Turk Workers to play our game.
## Considerations for Using the Data
All associations were obtained with human annotators.
### Licensing Information
CC-By 4.0
### Citation Information
@article{bitton2022winogavil,
title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
journal={arXiv preprint arXiv:2207.12576},
year={2022}
}
| nlphuji/winogavil | [
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"commonsense-reasoning",
"visual-reasoning",
"arxiv:2207.12576",
"region:us"
] | 2022-09-23T18:27:29+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_ids": [], "paperswithcode_id": "winogavil", "pretty_name": "WinoGAViL", "tags": ["commonsense-reasoning", "visual-reasoning"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."} | 2022-11-26T19:56:27+00:00 |
24f850ea98b0582135f7ed9fdcf076ef5a85176a | claudio4525/testt | [
"license:afl-3.0",
"region:us"
] | 2022-09-23T18:46:08+00:00 | {"license": "afl-3.0"} | 2022-09-23T18:46:08+00:00 |
|
746385044ca49b021086113b88027e9563645c1e | tednc/images | [
"license:cc",
"region:us"
] | 2022-09-23T20:58:16+00:00 | {"license": "cc"} | 2022-09-23T21:04:38+00:00 |
|
c53614789f63256d057d584d40c10e2fc29212b1 | This dataset is designed to be used in testing multimodal text/image models. It's derived from cm4-10k dataset.
The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.
The `unique` ones ensure uniqueness across text entries.
The `repeat` ones repeat the same 10 unique records; these are useful for debugging memory leaks, since the records are always the same and thus remove record variation from the equation.
The default split is `100.unique`.
The full process of this dataset creation is documented inside [cm4-synthetic-testing.py](./cm4-synthetic-testing.py).
| HuggingFaceM4/cm4-synthetic-testing | [
"license:bigscience-openrail-m",
"region:us"
] | 2022-09-24T01:37:35+00:00 | {"license": "bigscience-openrail-m"} | 2022-11-22T16:24:24+00:00 |
3ea47d49efd28082366bf993f3d2cac18e3c153d |
# **Ariel Data Challenge NeurIPS 2022**
Dataset is part of the [**Ariel Machine Learning Data Challenge**](https://www.ariel-datachallenge.space/). The Ariel Space mission is a European Space Agency mission to be launched in 2029. Ariel will observe the atmospheres of 1000 extrasolar planets - planets around other stars - to determine how they are made, how they evolve, and how to put our own Solar System in the galactic context.
### **Understanding worlds in our Milky Way**
Today we know of roughly 5000 exoplanets in our Milky Way galaxy. Given that the first planet was only conclusively discovered in the mid-1990s, this is an impressive achievement. Yet, simple number counting does not tell us much about the nature of these worlds. One of the best ways to understand their formation and evolution histories is to understand the composition of their atmospheres. What are the chemistry, temperatures, cloud coverage, etc.? Can we see signs of possible bio-markers in the smaller Earth and super-Earth planets? Since we can't get in-situ measurements (even the closest exoplanet is light-years away), we rely on remote sensing and interpreting the stellar light that shines through the atmospheres of these planets. Model fitting these atmospheric exoplanet spectra is tricky and requires significant computational time. This is where you can help!
### **Speed up model fitting!**
Today, our atmospheric models are fit to the data using MCMC type approaches. This is sufficient if your atmospheric forward models are fast to run but convergence becomes problematic if this is not the case. This challenge looks at inverse modelling using machine learning. For more information on why we need your help, we provide more background in the about page and the documentation.
### **Many thanks to...**
[NeurIPS 2022](https://nips.cc/) for hosting the data challenge, and to the [UK Space Agency](https://www.gov.uk/government/organisations/uk-space-agency) and the [European Research Council](https://erc.europa.eu/) for supporting this effort. Also many thanks to the data challenge team and partnering institutes, and of course thanks to the [Ariel](https://arielmission.space/) team for technical support and building the space mission in the first place!
For more information, contact us at: exoai.ucl [at] gmail.com
| n1ghtf4l1/Ariel-Data-Challenge-NeurIPS-2022 | [
"license:mit",
"region:us"
] | 2022-09-24T04:33:24+00:00 | {"license": "mit"} | 2022-09-24T04:55:23+00:00 |
618847c234ccbaafd4238ac3113da2c20b0ef758 | This is a collection of embeddings that I decided to make public. Additionally, it will be where I host any future embeddings I decide to train. | BumblingOrange/Hanks_Embeddings | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | 2022-09-24T05:01:41+00:00 | {"license": "bigscience-bloom-rail-1.0"} | 2022-09-24T19:32:38+00:00 |
e0f1e2e8e3a85ca342d113fb4281eab0a23b237f | TKKG/inferno | [
"license:afl-3.0",
"region:us"
] | 2022-09-24T05:24:49+00:00 | {"license": "afl-3.0"} | 2022-09-24T08:41:53+00:00 |
|
ca494fba0970456f98f12e4db4241a737fa1db0c | MHCK/AI | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-09-24T09:15:08+00:00 | {"license": "cc-by-nc-nd-4.0"} | 2022-10-01T07:27:42+00:00 |
|
75b8d3472af2587f51d9f635e078372d308b344a |
# Dataset Card for pokemon-icons
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Pokemon Icons. Most of them are collected and cropped from screenshots captured in Pokémon Sword and Shield.
### Supported Tasks and Leaderboards
Image classification | zishuod/pokemon-icons | [
"task_categories:image-classification",
"license:mit",
"pokemon",
"region:us"
] | 2022-09-24T14:12:08+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": ["mit"], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "pokemon-icons", "tags": ["pokemon"]} | 2022-09-24T14:35:39+00:00 |
aa5a640053c19908b9a988c3c3f45cc9de300700 | amir7d0/laion20M-fa | [
"license:cc-by-4.0",
"region:us"
] | 2022-09-24T15:28:38+00:00 | {"license": "cc-by-4.0"} | 2022-11-04T15:51:21+00:00 |
|
8f854e3e4f7007134410f2040827bba7bf4c3dd8 | Bundesliga Videos dataset from Kaggle competition: https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout | dbal0503/Bundesliga | [
"region:us"
] | 2022-09-24T17:04:15+00:00 | {} | 2022-09-26T16:48:50+00:00 |
337bdbce29ebc97dadf443f34689e3e43d051fb4 | Sonrin/Thorneworks | [
"license:artistic-2.0",
"region:us"
] | 2022-09-24T17:33:12+00:00 | {"license": "artistic-2.0"} | 2022-09-24T17:33:12+00:00 |
|
fdc79ccc1674743e851455079f09cb935cf82c1d | Naimul/testingmyown | [
"license:mit",
"region:us"
] | 2022-09-24T18:07:55+00:00 | {"license": "mit"} | 2022-09-24T18:07:55+00:00 |
|
934a79d988c4507958e62c5c89b0057f5e1ce38f | quecopiones/twitter_extract_suicide_keywords | [
"license:afl-3.0",
"region:us"
] | 2022-09-24T18:33:48+00:00 | {"license": "afl-3.0"} | 2022-09-24T18:42:50+00:00 |
|
e3e2a63ffff66b9a9735524551e3818e96af03ee | https://github.com/karolpiczak/ESC-50
The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.
K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.
[DOI: http://dx.doi.org/10.1145/2733373.2806390] | ashraq/esc50 | [
"region:us"
] | 2022-09-24T18:51:49+00:00 | {} | 2023-01-07T08:35:28+00:00 |
7fd72a8472a14c6903b8e7b0fc80aac84f7b8a79 | Lubub/teste_sharp | [
"license:apache-2.0",
"region:us"
] | 2022-09-24T19:19:18+00:00 | {"license": "apache-2.0"} | 2022-09-24T19:19:18+00:00 |
|
afd9400721e19e44f4d28598cb73902558f02bbb | We partition the earnings22 dataset at https://huggingface.co/datasets/anton-l/earnings22_baseline_5_gram by source_id:
Validation: 4420696 4448760 4461799 4469836 4473238 4482110
Test: 4432298 4450488 4470290 4479741 4483338 4485244
Train: remainder
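The partition rule above can be sketched as a small helper (an illustration only; `split_of` is a hypothetical name, not from the official processing script):

```python
# Source IDs for each held-out split, as listed above.
VALIDATION_IDS = {"4420696", "4448760", "4461799", "4469836", "4473238", "4482110"}
TEST_IDS = {"4432298", "4450488", "4470290", "4479741", "4483338", "4485244"}

def split_of(source_id: str) -> str:
    """Map an earnings22 source_id to its split; everything else is train."""
    if source_id in VALIDATION_IDS:
        return "validation"
    if source_id in TEST_IDS:
        return "test"
    return "train"

print(split_of("4420696"))  # validation
```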
Official script for processing these splits will be released shortly. | sanchit-gandhi/earnings22_split_resampled | [
"region:us"
] | 2022-09-24T19:26:46+00:00 | {} | 2022-09-30T14:24:09+00:00 |
7ded3ffd0d2e9d1920cc456175edbabcaccaa479 | GeneralAwareness/Various | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-09-25T01:13:14+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2023-07-19T23:47:14+00:00 |
|
378947b09975046c1b92f73b0e6cc3f5c21f12ef | gabrielaltay/hacdc-wikipedia | [
"license:cc-by-sa-3.0",
"region:us"
] | 2022-09-25T02:14:01+00:00 | {"license": "cc-by-sa-3.0"} | 2022-10-02T22:05:37+00:00 |
|
505bb434cc751d0b5158ae82f368a7c63e7a94c6 |
# Dataset Card for Nouns auto-captioned
_Dataset used to train Nouns text to image model_
Automatically generated captions for Nouns from their attributes, colors and items. Help on the captioning script appreciated!
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Citation
If you use this dataset, please cite it as:
```
@misc{piedrafita2022nouns,
author = {Piedrafita, Miguel},
title = {Nouns auto-captioned},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/m1guelpf/nouns/}}
}
```
| m1guelpf/nouns | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-09-25T02:30:09+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Nouns auto-captioned", "tags": []} | 2022-09-25T05:18:40+00:00 |
bbfa20fac8083c90012bca77e55acd8aa4d5c824 | # Info
>Try to include embedding info in the commit description (model, author, artist, images, etc.)
>Naming: name-object/style | waifu-research-department/embeddings | [
"license:mit",
"region:us"
] | 2022-09-25T05:13:59+00:00 | {"license": "mit"} | 2022-09-29T01:50:05+00:00 |
f26800be885fa716afe26a59fe570a69ee700131 | kevinjesse/ManyRefactors4C | [
"license:cc-by-2.0",
"region:us"
] | 2022-09-25T05:28:58+00:00 | {"license": "cc-by-2.0"} | 2022-09-25T11:59:34+00:00 |
|
431ee067cc8976e255572f9d4f8c4434b24f99a0 | huynguyen208/assignment2 | [
"license:unknown",
"region:us"
] | 2022-09-25T09:25:42+00:00 | {"license": "unknown"} | 2022-09-27T10:57:00+00:00 |
|
9c9b738f010f33843d0bc076f1024d3ca7191fb4 | # Dataset Card for "Text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Miron/NLP_1 | [
"region:us"
] | 2022-09-25T14:43:59+00:00 | {"dataset_info": {"features": [{"name": "Science artilce's texts", "dtype": "string"}, {"name": "text_length", "dtype": "int64"}, {"name": "TEXT", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 54709956.09102402, "num_examples": 711}, {"name": "validation", "num_bytes": 6155831.908975979, "num_examples": 80}], "download_size": 26356400, "dataset_size": 60865788.0}} | 2022-11-10T08:00:19+00:00 |
268eb429954ebbfc5cd6ce7257bb867b14c85351 | wertyworld/taser_1_00 | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-09-25T15:24:15+00:00 | {"license": "cc-by-nc-nd-4.0"} | 2022-09-27T15:16:22+00:00 |
|
e9e568ffeceef632fcd4b08f39c9b2856e18f655 | SIGMitch/KDroid | [
"region:us"
] | 2022-09-25T18:38:39+00:00 | {} | 2022-11-30T13:52:48+00:00 |
|
ea53e978a3de1a239248dec0d089a4949ccc3093 | pane2k/pan | [
"license:afl-3.0",
"region:us"
] | 2022-09-25T23:57:22+00:00 | {"license": "afl-3.0"} | 2022-09-25T23:58:24+00:00 |
|
265821a55b2a6a358ce3585e4f4964c964b20669 | pane2k/paneModel | [
"license:mit",
"region:us"
] | 2022-09-26T00:26:33+00:00 | {"license": "mit"} | 2022-09-26T00:26:51+00:00 |
|
21f3313de37d60d45fb67a276d63ace9c4a0ac7d |
# Dataset Card for MedNLI
## Dataset Description
- **Homepage:** https://physionet.org/content/mednli/1.0.0/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
State-of-the-art models using deep neural networks have become very good at learning an accurate
mapping from inputs to outputs. However, they still lack generalization capabilities in conditions
that differ from the ones encountered during training. This is even more challenging in specialized,
and knowledge intensive domains, where training data is limited. To address this gap, we introduce
MedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI),
grounded in the medical history of patients. As the source of premise sentences, we used
MIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical
notes corresponding to deceased patients. The clinicians on our team suggested the Past Medical
History to be the most informative section of a clinical note, from which useful inferences can be
drawn about the patient.
## Citation Information
```
@misc{https://doi.org/10.13026/c2rs98,
title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
author = {Shivade, Chaitanya},
year = 2017,
publisher = {physionet.org},
doi = {10.13026/C2RS98},
url = {https://physionet.org/content/mednli/}
}
```
| bigbio/mednli | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-09-26T02:08:16+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "paperswithcode_id": "mednli", "pretty_name": "MedNLI", "bigbio_language": ["English"], "bigbio_license_short_name": "PHYSIONET_LICENSE_1p5", "homepage": "https://physionet.org/content/mednli/1.0.0/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXTUAL_ENTAILMENT"]} | 2022-12-22T15:24:43+00:00 |
8950f534f8012eef317e1b90b2a8b13fbec8746d |
# Dataset Card for GAD
## Dataset Description
- **Homepage:** https://geneticassociationdb.nih.gov/
- **Pubmed:** True
- **Public:** True
- **Tasks:** TXTCLASS
A corpus identifying associations between genes and diseases by a semi-automatic
annotation procedure based on the Genetic Association Database.
## Note about homepage
The homepage for this dataset is no longer reachable, but the url is recorded here.
Data for this dataset was originally downloaded from a Google Drive
folder (the link used in the [BLURB benchmark data download script](https://microsoft.github.io/BLURB/submit.html)).
However, we host the data on the Hugging Face Hub for more reliable downloads and access.
## Citation Information
```
@article{Bravo2015,
doi = {10.1186/s12859-015-0472-9},
url = {https://doi.org/10.1186/s12859-015-0472-9},
year = {2015},
month = feb,
publisher = {Springer Science and Business Media {LLC}},
volume = {16},
number = {1},
author = {{\`{A}}lex Bravo and Janet Pi{\~{n}}ero and N{\'{u}}ria Queralt-Rosinach and Michael Rautschka and Laura I Furlong},
title = {Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research},
journal = {{BMC} Bioinformatics}
}
```
| bigbio/gad | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-09-26T02:36:32+00:00 | {"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "paperswithcode_id": "gad", "pretty_name": "GAD", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://geneticassociationdb.nih.gov/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:25:28+00:00 |
12352e0e32ac93fa9edc8ea202f5383cc79b9991 | Greg3d/test | [
"license:afl-3.0",
"region:us"
] | 2022-09-26T02:55:47+00:00 | {"license": "afl-3.0"} | 2022-09-26T02:55:47+00:00 |
|
e0ca639ce1a5f1267ada3f8fae2fdad79737887c |
# Dataset Card for BioASQ Task B
## Dataset Description
- **Homepage:** http://participants-area.bioasq.org/datasets/
- **Pubmed:** True
- **Public:** False
- **Tasks:** QA
The BioASQ corpus contains multiple question
answering tasks annotated by biomedical experts, including yes/no, factoid, list,
and summary questions. Pertaining to our objective of comparing neural language
models, we focus on the yes/no questions (Task 7b), and leave the inclusion
of other tasks to future work. Each question is paired with a reference text
containing multiple sentences from a PubMed abstract and a yes/no answer. We use
the official train/dev/test split of 670/75/140 questions.
See 'Domain-Specific Language Model Pretraining for Biomedical
Natural Language Processing'
## Citation Information
```
@article{tsatsaronis2015overview,
title = {
An overview of the BIOASQ large-scale biomedical semantic indexing and
question answering competition
},
author = {
Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
Polychronopoulos, Dimitris and others
},
year = 2015,
journal = {BMC bioinformatics},
publisher = {BioMed Central Ltd},
volume = 16,
number = 1,
pages = 138
}
```
| bigbio/bioasq_task_b | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-09-26T03:05:28+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioASQ Task B", "bigbio_language": ["English"], "bigbio_license_shortname": "NLM_LICENSE", "homepage": "http://participants-area.bioasq.org/datasets/", "bigbio_pubmed": true, "bigbio_public": false, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2022-12-22T15:41:12+00:00 |
7f2fb24be7c82a385ee81a1152bc679b6400f41b |  | VirtualJesus/Anthonyface | [
"region:us"
] | 2022-09-26T04:01:42+00:00 | {} | 2022-09-26T07:48:44+00:00 |
5f481a733e7cfb4fec7507aca1720db7b28fbe9e | samuelchan/art | [
"license:afl-3.0",
"region:us"
] | 2022-09-26T05:38:45+00:00 | {"license": "afl-3.0"} | 2022-09-26T05:38:45+00:00 |
|
2be31cb9f5880cbce04b5b68299121992587ace7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Samuel-Fipps/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuel-fipps](https://huggingface.co/samuel-fipps) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-8a4c42-1554855493 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-26T05:54:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "Samuel-Fipps/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": ["mse"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-26T06:02:52+00:00 |
a35672081af08bf55b7cdcdd8f2864edcb50a2ff | train data | BraimComplexe/train_1 | [
"region:us"
] | 2022-09-26T08:02:31+00:00 | {} | 2022-09-26T08:13:22+00:00 |
911e1d214162fd11d2c78d3f1428cbfcbe07782c |
# Dataset Card for MultiLegalPile: A Large-Scale Multilingual Corpus for the Legal Domain
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [MultiLegalPile](https://arxiv.org/abs/2306.02069)
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:[email protected])
### Dataset Summary
The Multi_Legal_Pile is a large-scale multilingual legal dataset suited for pretraining language models.
It spans 24 languages and five legal text types.
### Supported Tasks and Leaderboards
The dataset supports the fill-mask task.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt,
ro, sk, sl, sv
## Dataset Structure
It is structured in the following format:
type -> language -> jurisdiction.jsonl.xz
type is one of the following:
- caselaw
- contracts
- legislation
- other
- legal_mc4
`legal_mc4` is a subset of the other type but is listed separately so it can be easily excluded since it is less
permissively licensed than the other types.
Use the dataset like this:
```python
from datasets import load_dataset
config = 'en_contracts' # {language}_{type}
dataset = load_dataset('joelniklaus/Multi_Legal_Pile', config, split='train', streaming=True)
```
'config' is a combination of language and text_type, e.g. 'en_contracts' or 'de_caselaw'.
To load all the languages or all the text_types, use 'all' instead of the language or text_type (e.g., 'all_legislation').
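As a sketch of how these config strings are composed (the helper below is illustrative and not part of the `datasets` API; the language and text-type lists are taken from this card):

```python
# Illustrative helper for building {language}_{text_type} config names.
# Not part of the datasets API -- just a sketch based on this card.
LANGUAGES = ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr",
             "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt",
             "ro", "sk", "sl", "sv"]
TEXT_TYPES = ["caselaw", "contracts", "legislation", "other", "legal_mc4"]

def config_name(language: str, text_type: str) -> str:
    """Build a config string such as 'en_contracts' or 'all_legislation'."""
    if language != "all" and language not in LANGUAGES:
        raise ValueError(f"unknown language: {language}")
    if text_type != "all" and text_type not in TEXT_TYPES:
        raise ValueError(f"unknown text type: {text_type}")
    return f"{language}_{text_type}"
```

For example, `config_name('de', 'caselaw')` gives `'de_caselaw'`, which can be passed as the second argument to `load_dataset` as shown above.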
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
The complete dataset (689GB) consists of four large subsets:
- Native Multi Legal Pile (112GB)
- Eurlex Resources (179GB)
- Legal MC4 (106GB)
- Pile of Law (292GB)
#### Native Multilingual Legal Pile data
| | Language | Text Type | Jurisdiction | Source | Size (MB) | Words | Documents | Words/Document | URL | License |
|---:|:-----------|:------------|:---------------|:-----------------------------------|------------:|------------:|------------:|-----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|
| 0 | bg | legislation | Bulgaria | MARCELL | 8015 | 308946116 | 82777 | 3732 | https://elrc-share.eu/repository/browse/marcell-bulgarian-legislative-subcorpus-v2/946267fe8d8711eb9c1a00155d026706d2c9267e5cdf4d75b5f02168f01906c6/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 1 | cs | caselaw | Czechia | CzCDC Constitutional Court | 11151 | 574336489 | 296652 | 1936 | https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-3052 | [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |
| 2 | cs | caselaw | Czechia | CzCDC Supreme Administrative Court | 11151 | 574336489 | 296652 | 1936 | https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-3052 | [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |
| 3 | cs | caselaw | Czechia | CzCDC Supreme Court | 11151 | 574336489 | 296652 | 1936 | https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-3052 | [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |
| 4 | da | caselaw | Denmark | DDSC | 3469 | 210730560 | 89702 | 2349 | https://huggingface.co/DDSC | [CC BY 4.0 and other, depending on the dataset](https://creativecommons.org/licenses/by-nc/4.0/) |
| 5 | da | legislation | Denmark | DDSC | 10736 | 653153146 | 265868 | 2456 | https://huggingface.co/DDSC | [CC BY 4.0 and other, depending on the dataset](https://creativecommons.org/licenses/by-nc/4.0/) |
| 6 | de | caselaw | Germany | openlegaldata | 31527 | 1785439383 | 596800 | 2991 | https://de.openlegaldata.io/ | [ODbL-1.0](https://opendatacommons.org/licenses/odbl/1-0/) |
| 7 | de | caselaw | Switzerland | entscheidsuche | 31527 | 1785439383 | 596800 | 2991 | https://entscheidsuche.ch/ | [See description](https://entscheidsuche.ch/dataUsage) |
| 8 | de | legislation | Germany | openlegaldata | 8934 | 512840663 | 276034 | 1857 | https://de.openlegaldata.io/ | [ODbL-1.0](https://opendatacommons.org/licenses/odbl/1-0/) |
| 9 | de | legislation | Switzerland | lexfind | 8934 | 512840663 | 276034 | 1857 | https://www.lexfind.ch/fe/de/search | No information provided |
| 10 | fr | caselaw | Switzerland | entscheidsuche | 18313 | 1170335690 | 435569 | 2686 | https://entscheidsuche.ch/ | [See description](https://entscheidsuche.ch/dataUsage) |
| 11 | fr | caselaw | Belgium | jurportal | 18313 | 1170335690 | 435569 | 2686 | https://juportal.be/home/welkom | [See description](https://juportal.be/home/disclaimer) |
| 12 | fr | caselaw | France | CASS | 18313 | 1170335690 | 435569 | 2686 | https://echanges.dila.gouv.fr/OPENDATA/CASS/ | [Open Licence 2.0](https://echanges.dila.gouv.fr/OPENDATA/CASS/DILA_CASS_Presentation_20170824.pdf) |
| 13 | fr | caselaw | Luxembourg | judoc | 18313 | 1170335690 | 435569 | 2686 | https://justice.public.lu/fr.html | [See description](https://justice.public.lu/fr/support/aspects-legaux/conditions-generales.html) |
| 14 | it | caselaw | Switzerland | entscheidsuche | 6483 | 406520336 | 156630 | 2595 | https://entscheidsuche.ch/ | [See description](https://entscheidsuche.ch/dataUsage) |
| 15 | en | legislation | Switzerland | lexfind | 36587 | 2537696894 | 657805 | 3857 | https://www.lexfind.ch/fe/de/search | No information provided |
| 16 | en | legislation | UK | uk-lex | 36587 | 2537696894 | 657805 | 3857 | https://zenodo.org/record/6355465 | [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode) |
| 17 | fr | legislation | Switzerland | lexfind | 9297 | 600170792 | 243313 | 2466 | https://www.lexfind.ch/fe/fr/search | No information provided |
| 18 | fr | legislation | Belgium | ejustice | 9297 | 600170792 | 243313 | 2466 | https://www.ejustice.just.fgov.be/cgi/welcome.pl | No information provided |
| 19 | it | legislation | Switzerland | lexfind | 8332 | 542579039 | 227968 | 2380 | https://www.lexfind.ch/fe/it/search | No information provided |
| 20 | nl | legislation | Belgium | ejustice | 8484 | 550788527 | 232204 | 2372 | https://www.ejustice.just.fgov.be/cgi/welcome.pl | No information provided |
| 21 | hu | legislation | Hungary | MARCELL | 5744 | 264572303 | 86862 | 3045 | https://elrc-share.eu/repository/browse/marcell-hungarian-legislative-subcorpus-v2/a87295ec8d6511eb9c1a00155d0267065f7e56dc7db34ce5aaae0b48a329daaa/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 22 | pl | legislation | Poland | MARCELL | 5459 | 299334705 | 89264 | 3353 | https://elrc-share.eu/repository/browse/marcell-polish-legislative-subcorpus-v2/dd14fa1c8d6811eb9c1a00155d026706c4718ddc9c6e4a92a88923816ca8b219/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 23 | pt | caselaw | Brazil | RulingBR | 196919 | 12611760973 | 17251236 | 731 | https://github.com/diego-feijo/rulingbr | No information provided |
| 24 | pt | caselaw | Brazil | CRETA | 196919 | 12611760973 | 17251236 | 731 | https://www.kaggle.com/datasets/eliasjacob/brcad5?resource=download&select=language_modeling_texts.parquet | [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
| 25 | pt | caselaw | Brazil | CJPG | 196919 | 12611760973 | 17251236 | 731 | https://esaj.tjsp.jus.br/cjsg/consultaCompleta.do?f=1 | No information provided |
| 26 | ro | legislation | Romania | MARCELL | 10464 | 559092153 | 215694 | 2592 | https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 27 | sk | legislation | Slovakia | MARCELL | 5208 | 280182047 | 76760 | 3650 | https://elrc-share.eu/repository/browse/marcell-slovak-legislative-subcorpus-v2/6bdee1d68c8311eb9c1a00155d0267063398d3f1a3af40e1b728468dcbd6efdd/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 28 | sl | legislation | Slovenia | MARCELL | 6057 | 365513763 | 88651 | 4123 | https://elrc-share.eu/repository/browse/marcell-slovenian-legislative-subcorpus-v2/e2a779868d4611eb9c1a00155d026706983c845a30d741b78e051faf91828b0d/ | [CC-BY-4.0](https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf)
| total | all | all | all | 1297609 | xxx | 81214262514 | 57305071 | 1417 | | |
#### Eurlex Resources
See [Eurlex Resources](https://huggingface.co/datasets/joelito/eurlex_resources#data-instances) for more information.
#### Legal-MC4
See [Legal-MC4](https://huggingface.co/datasets/joelito/legal-mc4#data-instances) for more information.
#### Pile-of-Law
See [Pile-of-Law](https://huggingface.co/datasets/pile-of-law/pile-of-law#data-instances) for more information.
| Language | Type | Jurisdiction | Source | Size (MB) | Tokens | Documents | Tokens/Document | Part of Multi_Legal_Pile |
|:-----------|:------------|:---------------|:-------------------------------------|------------:|------------:|------------:|------------------:|:---------------------------|
| en | all | all | all | 503712 | 50547777921 | 9872444 | 5120 | yes |
| en | caselaw | EU | echr | 298 | 28374996 | 8480 | 3346 | yes |
| en | caselaw | Canada | canadian_decisions | 486 | 45438083 | 11343 | 4005 | yes |
| en | caselaw | US | dol_ecab | 942 | 99113541 | 28211 | 3513 | no |
| en | caselaw | US | scotus_oral_arguments | 1092 | 108228951 | 7996 | 13535 | no |
| en | caselaw | US | tax_rulings | 1704 | 166915887 | 54064 | 3087 | no |
| en | caselaw | US | nlrb_decisions | 2652 | 294471818 | 32080 | 9179 | no |
| en | caselaw | US | scotus_filings | 4018 | 593870413 | 63775 | 9311 | yes |
| en | caselaw | US | bva_opinions | 35238 | 4084140080 | 839523 | 4864 | no |
| en | caselaw | US | courtlistener_docket_entry_documents | 139006 | 12713614864 | 1983436 | 6409 | yes |
| en | caselaw | US | courtlistener_opinions | 158110 | 15899704961 | 4518445 | 3518 | yes |
| en | contracts | -- | tos | 4 | 391890 | 50 | 7837 | no |
| en | contracts | US | cfpb_creditcard_contracts | 188 | 25984824 | 2638 | 9850 | yes |
| en | contracts | US | edgar | 28698 | 2936402810 | 987926 | 2972 | yes |
| en | contracts | US | atticus_contracts | 78300 | 7997013703 | 650833 | 12287 | yes |
| en | legislation | US | fre | 2 | 173325 | 68 | 2548 | no |
| en | legislation | US | frcp | 4 | 427614 | 92 | 4647 | no |
| en | legislation | US | eoir | 62 | 6109737 | 2229 | 2741 | no |
| en | legislation | -- | constitutions | 66 | 5984865 | 187 | 32004 | yes |
| en | legislation | US | federal_register | 424 | 39854787 | 5414 | 7361 | yes |
| en | legislation | US | uscode | 716 | 78466325 | 58 | 1352867 | yes |
| en | legislation | EU | euro_parl | 808 | 71344326 | 9672 | 7376 | no |
| en | legislation | US | cfr | 1788 | 160849007 | 243 | 661930 | yes |
| en | legislation | US | us_bills | 3394 | 320723838 | 112483 | 2851 | yes |
| en | legislation | EU | eurlex | 3504 | 401324829 | 142036 | 2825 | no |
| en | legislation | US | state_codes | 18066 | 1858333235 | 217 | 8563747 | yes |
| en | other | -- | bar_exam_outlines | 4 | 346924 | 59 | 5880 | no |
| en | other | US | ftc_advisory_opinions | 4 | 509025 | 145 | 3510 | no |
| en | other | US | olc_memos | 98 | 12764635 | 1384 | 9223 | yes |
| en | other | -- | cc_casebooks | 258 | 24857378 | 73 | 340512 | no |
| en | other | -- | un_debates | 360 | 31152497 | 8481 | 3673 | no |
| en | other | -- | r_legaladvice | 798 | 72605386 | 146671 | 495 | no |
| en | other | US | founding_docs | 1118 | 100390231 | 183664 | 546 | no |
| en | other | US | oig | 5056 | 566782244 | 38954 | 14550 | yes |
| en | other | US | congressional_hearings | 16448 | 1801110892 | 31514 | 57152 | no |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{niklaus2023multilegalpile,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho},
year={2023},
eprint={2306.02069},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| joelniklaus/Multi_Legal_Pile | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-nc-sa-4.0",
"arxiv:2306.02069",
"region:us"
] | 2022-09-26T09:28:06+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["fill-mask"], "pretty_name": "MultiLegalPile: A Large-Scale Multilingual Corpus for the Legal Domain"} | 2024-01-12T08:50:24+00:00 |
a2f60155ef84fbb118b337eafa391351277003b3 | huggingface-projects/contribute-a-dataset | [
"license:apache-2.0",
"region:us"
] | 2022-09-26T09:33:05+00:00 | {"license": "apache-2.0"} | 2022-09-26T09:33:05+00:00 |
|
1b6af9f6fbd19bb68f82515f4f6eca993d643b23 | Heisenbergzz1/abdullah-jaber | [
"license:afl-3.0",
"region:us"
] | 2022-09-26T09:56:14+00:00 | {"license": "afl-3.0"} | 2022-09-26T09:56:14+00:00 |
|
82fff01dfe20340fca20b50b66f61cd7e6d7a2e4 | dary/agagga_oaoa | [
"license:openrail",
"region:us"
] | 2022-09-26T09:59:06+00:00 | {"license": "openrail"} | 2022-09-26T09:59:06+00:00 |
|
db3f9a34f0c1c287db91e86861ca8bdff67f5935 |
# Download Zenodo dataset files using Hugging Face datasets
You can download a specific file from the Zenodo dataset using the following code:
- Zenodo id: `5172018`
- File name: `FDB-17-fragmentset.smi.gz`
```python
from datasets import load_dataset
load_dataset("osbm/zenodo", "5172018_FDB-17-fragmentset.smi.gz")
```
This command will also copy the file into your current directory so that you can use it directly.
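Once downloaded, this particular file is a gzip-compressed list of SMILES strings, one per line. A minimal stdlib sketch for reading it (assuming the file sits in the current directory, as after the command above):

```python
import gzip

def read_smiles(path: str) -> list[str]:
    """Read one SMILES string per line from a gzip-compressed .smi.gz file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# e.g. smiles = read_smiles("FDB-17-fragmentset.smi.gz")
```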
Here is an example notebook: https://gist.github.com/osbm/35a499f5756df22de30be20463aa6331
# Contribution
[The Hugging Face repository](https://huggingface.co/datasets/osbm/zenodo) is actually a mirror of the GitHub repository [osbm/zenodo](https://github.com/osbm/huggingface-zenodo-datasets). If you want to open an issue or PR, please do it on the GitHub repository. I chose to do it this way because I wanted to use GitHub Actions. Currently, a GitHub Action only mirrors the repository to Hugging Face. 😅
| osbm/zenodo | [
"region:us"
] | 2022-09-26T10:04:40+00:00 | {"pretty_name": "Download Zenodo Dataset files"} | 2023-06-12T10:36:45+00:00 |
e10538f40436c73126e8fbcf08502cbc6bdb751b | ChickenHiiro/Duc_Luu | [
"license:artistic-2.0",
"region:us"
] | 2022-09-26T10:30:33+00:00 | {"license": "artistic-2.0"} | 2022-09-27T01:02:03+00:00 |
|
feb76ecc5e78064880e0b784bc0fe3daa92fc330 | ali4546/ma | [
"license:afl-3.0",
"region:us"
] | 2022-09-26T11:23:43+00:00 | {"license": "afl-3.0"} | 2022-09-26T11:23:43+00:00 |
|
6b7cdd494e42ae91bea2ac6aceeeed38132b12cd | EMBO/sd-nlp-v2 | [
"license:cc-by-4.0",
"region:us"
] | 2022-09-26T11:38:27+00:00 | {"license": "cc-by-4.0"} | 2022-09-26T11:47:16+00:00 |
|
62245ed0664652b85c4360f2320b59bbb8a83cb8 | tomvo/test_images | [
"region:us"
] | 2022-09-26T12:51:37+00:00 | {} | 2022-09-26T17:28:16+00:00 |
|
05f2b9a2b864e04ec1a969f6d31923a776307c53 | ........ | datascopum/datascopum | [
"region:us"
] | 2022-09-26T13:56:42+00:00 | {} | 2022-09-29T15:33:40+00:00 |
0a661c385f1c7ceaa45f8f5cd72abb8ea76d3851 | FerdinandASH/Ferdinand | [
"license:afl-3.0",
"region:us"
] | 2022-09-26T14:15:46+00:00 | {"license": "afl-3.0"} | 2022-09-26T14:16:41+00:00 |
|
68d75d195c960726ab362a157dfa311e075295a8 | # Dataset Card for "model-repos-stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | open-source-metrics/model-repos-stats | [
"region:us"
] | 2022-09-26T14:54:28+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "repo_id", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "model_type", "dtype": "string"}, {"name": "files_per_repo", "dtype": "int64"}, {"name": "downloads_30d", "dtype": "int64"}, {"name": "library", "dtype": "string"}, {"name": "likes", "dtype": "int64"}, {"name": "pipeline", "dtype": "string"}, {"name": "pytorch", "dtype": "bool"}, {"name": "tensorflow", "dtype": "bool"}, {"name": "jax", "dtype": "bool"}, {"name": "license", "dtype": "string"}, {"name": "languages", "dtype": "string"}, {"name": "datasets", "dtype": "string"}, {"name": "co2", "dtype": "string"}, {"name": "prs_count", "dtype": "int64"}, {"name": "prs_open", "dtype": "int64"}, {"name": "prs_merged", "dtype": "int64"}, {"name": "prs_closed", "dtype": "int64"}, {"name": "discussions_count", "dtype": "int64"}, {"name": "discussions_open", "dtype": "int64"}, {"name": "discussions_closed", "dtype": "int64"}, {"name": "tags", "dtype": "string"}, {"name": "has_model_index", "dtype": "bool"}, {"name": "has_metadata", "dtype": "bool"}, {"name": "has_text", "dtype": "bool"}, {"name": "text_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 68539081, "num_examples": 245197}], "download_size": 14926618, "dataset_size": 68539081}} | 2023-07-03T00:35:17+00:00 |
da31b6c38403a4811b20342486bdf0ec2a724a2a | **Context**
Generating humor is a complex task in the domain of machine learning, and it requires the models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems, however, are difficult to solve due to a number of reasons, one of which is the lack of a database that gives an elaborate list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites containing funny and short jokes.
You can visit the [GitHub repository](https://github.com/amoudgl/short-jokes-dataset) from [amoudgl](https://github.com/amoudgl) for more information on how the data was collected and the scripts used.
**Content**
This dataset is a CSV file containing 231,657 jokes. Joke length ranges from 10 to 200 characters. Each line in the file contains a unique ID and a joke.
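A minimal sketch of reading the file with the standard library (the `ID` and `Joke` column names are assumed from the Kaggle version of the file):

```python
import csv

def load_jokes(path: str) -> dict[int, str]:
    """Map each joke ID to its text, assuming 'ID' and 'Joke' columns
    (an assumption based on the Kaggle file, not verified here)."""
    with open(path, newline="", encoding="utf-8") as f:
        return {int(row["ID"]): row["Joke"] for row in csv.DictReader(f)}
```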
**Disclaimer**
The jokes have been kept as clean as possible. However, since the data was collected by scraping websites, a few jokes may be inappropriate or offensive to some people.
**Note**
This dataset is taken from Kaggle dataset that can be found [here](https://www.kaggle.com/datasets/abhinavmoudgil95/short-jokes). | ysharma/short_jokes | [
"license:mit",
"region:us"
] | 2022-09-26T15:57:00+00:00 | {"license": "mit"} | 2022-09-26T16:11:06+00:00 |
7f7c09a2950eca4bbafefca78196015ffaa3059f | Worldwars/caka | [
"license:cc0-1.0",
"region:us"
] | 2022-09-26T16:05:32+00:00 | {"license": "cc0-1.0"} | 2022-09-26T16:15:44+00:00 |
|
de93f205b1d46c99e45e3da694207776da2bbf63 |
# Dataset Card for CoSimLex
### Dataset Summary
The dataset contains human similarity ratings for pairs of words. The annotators were presented with contexts that contained both of the words in the pair, and the dataset features two different contexts per pair. The words were sourced from the English, Croatian, Finnish and Slovenian versions of the original SimLex dataset.
Statistics:
- 340 English pairs (config `en`),
- 112 Croatian pairs (config `hr`),
- 111 Slovenian pairs (config `sl`),
- 24 Finnish pairs (config `fi`).
### Supported Tasks and Leaderboards
Graded word similarity in context.
### Languages
English, Croatian, Slovenian, Finnish.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'word1': 'absence',
'word2': 'presence',
'context1': 'African slaves from Angola and Mozambique were also present, but in fewer numbers than in other Brazilian areas, because Paraná was a poor region that did not need much slave manpower. The immigration grew in the mid-19th century, mostly composed of Italian, German, Polish, Ukrainian, and Japanese peoples. While Poles and Ukrainians are present in Paraná, their <strong>presence</strong> in the rest of Brazil is almost <strong>absence</strong>.',
'context2': 'The Chinese had become almost impossible to deal with because of the turmoil associated with the cultural revolution. The North Vietnamese <strong>presence</strong> in Eastern Cambodia had grown so large that it was destabilizing Cambodia politically and economically. Further, when the Cambodian left went underground in the late 1960s, Sihanouk had to make concessions to the right in the <strong>absence</strong> of any force that he could play off against them.',
'sim1': 2.2699999809265137,
'sim2': 1.3700000047683716,
'stdev1': 2.890000104904175,
'stdev2': 1.7899999618530273,
'pvalue': 0.2409999966621399,
'word1_context1': 'absence',
'word2_context1': 'presence',
'word1_context2': 'absence',
'word2_context2': 'presence'
}
```
### Data Fields
- `word1`: a string representing the first word in the pair. Uninflected form.
- `word2`: a string representing the second word in the pair. Uninflected form.
- `context1`: a string representing the first context containing the pair of words. The target words are marked with `<strong></strong>` tags.
- `context2`: a string representing the second context containing the pair of words. The target words are marked with `<strong></strong>` tags.
- `sim1`: a float representing the mean of the similarity scores within the first context.
- `sim2`: a float representing the mean of the similarity scores within the second context.
- `stdev1`: a float representing the standard deviation of the scores within the first context.
- `stdev2`: a float representing the standard deviation of the scores within the second context.
- `pvalue`: a float representing the p-value calculated using the Mann-Whitney U test.
- `word1_context1`: a string representing the inflected version of the first word as it appears in the first context.
- `word2_context1`: a string representing the inflected version of the second word as it appears in the first context.
- `word1_context2`: a string representing the inflected version of the first word as it appears in the second context.
- `word2_context2`: a string representing the inflected version of the second word as it appears in the second context.
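
The fields above support simple analyses directly. Below is a minimal sketch (not part of the original card) that selects pairs whose similarity shifts significantly between the two contexts; instances are assumed to be dicts shaped like the sample above, and the 0.05 threshold is an illustrative choice:

```python
# Illustrative sketch: find pairs whose similarity rating shifts
# significantly between the two contexts, using the Mann-Whitney
# p-value stored in each instance. Sample values are taken from the
# data instance shown above.

def context_sensitive_pairs(instances, alpha=0.05):
    """Return (word1, word2, sim1, sim2) for pairs with a significant shift."""
    return [
        (ex["word1"], ex["word2"], ex["sim1"], ex["sim2"])
        for ex in instances
        if ex["pvalue"] < alpha
    ]

sample = {
    "word1": "absence", "word2": "presence",
    "sim1": 2.27, "sim2": 1.37, "pvalue": 0.241,
}
print(context_sensitive_pairs([sample]))  # [] -- p-value 0.241 is above 0.05
```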
## Additional Information
### Dataset Curators
Carlos Armendariz et al. (please see http://hdl.handle.net/11356/1308 for the full list)
### Licensing Information
GNU GPL v3.0.
### Citation Information
```
@inproceedings{armendariz-etal-2020-semeval,
title = "{SemEval-2020} {T}ask 3: Graded Word Similarity in Context ({GWSC})",
author = "Armendariz, Carlos S. and
Purver, Matthew and
Pollak, Senja and
Ljube{\v{s}}i{\'{c}}, Nikola and
Ul{\v{c}}ar, Matej and
Robnik-{\v{S}}ikonja, Marko and
Vuli{\'{c}}, Ivan and
Pilehvar, Mohammad Taher",
booktitle = "Proceedings of the 14th International Workshop on Semantic Evaluation",
year = "2020",
address="Online"
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| cjvt/cosimlex | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"language:en",
"language:hr",
"language:sl",
"language:fi",
"license:gpl-3.0",
"graded-word-similarity-in-context",
"region:us"
] | 2022-09-26T17:13:05+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en", "hr", "sl", "fi"], "license": ["gpl-3.0"], "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "CoSimLex", "tags": ["graded-word-similarity-in-context"]} | 2022-10-21T06:34:58+00:00 |
16e24521436eaf961e62b0406744617666a741ba |
# Dataset Card for Airplane Crashes and Fatalities
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/thedevastator/airplane-crashes-and-fatalities
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
## Airplane Crashes and Fatalities
_____
This dataset showcases Boeing 707 accidents that have occurred since 1948. The data includes information on the date, time, location, operator, flight number, route, type of aircraft, registration number, cn/In, number of persons on board, fatalities, ground fatalities, and a summary of the accident.
### How to use the dataset
This dataset includes information on over 5,000 airplane crashes around the world.
This is an absolutely essential dataset for anyone interested in aviation safety! Here you will find information on when and where each crash occurred, what type of plane was involved, how many people were killed, and much more.
This dataset is perfect for anyone interested in data visualization or analysis. With so much information available, there are endless possibilities for interesting stories and insights that can be gleaned from this data.
Whether you're a seasoned data pro or just getting started, this dataset is sure to give you plenty to work with. Get started today and see what you can discover!
### Research Ideas
1. Plot a map of all flight routes
2. Analyze what type of aircraft is involved in the most crashes
3. Identify patterns in where/when crashes occur
### Columns
- **index:** the index of the row
- **Date:** the date of the incident
- **Time:** the time of the incident
- **Location:** the location of the incident
- **Operator:** the operator of the aircraft
- **Flight #:** the flight number of the aircraft
- **Route:** the route of the aircraft
- **Type:** the type of aircraft
- **Registration:** the registration of the aircraft
- **cn/In:** the construction number/serial number of the aircraft
- **Aboard:** the number of people on board the aircraft
- **Fatalities:** the number of fatalities in the incident
- **Ground:** the number of people on the ground killed in the incident
- **Summary:** a summary of the incident
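
As a quick illustration of how these columns can be combined (a sketch, not part of the original card; the sample rows are invented for demonstration), one can compute a per-incident fatality rate from `Aboard` and `Fatalities`:

```python
# Illustrative sketch: compute the share of people aboard who died in
# each incident. Rows are dicts keyed by the column names above; the
# sample values here are invented for demonstration only.

def fatality_rate(row):
    aboard = row["Aboard"]
    return row["Fatalities"] / aboard if aboard else 0.0

rows = [
    {"Type": "Boeing 707", "Aboard": 100, "Fatalities": 25},
    {"Type": "Douglas DC-3", "Aboard": 20, "Fatalities": 20},
]
rates = [fatality_rate(r) for r in rows]
print(rates)  # [0.25, 1.0]
```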
### Acknowledgements
This dataset was obtained from the Data Society. If you use this dataset in your research, please credit the Data Society.
Columns: index, Date, Time, Location, Operator, Flight #, Route, Type, Registration, cn/In, Aboard, Fatalities, Ground, Summary
> [Data Source](https://data.world/data-society)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@thedevastator](https://kaggle.com/thedevastator)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | nateraw/airplane-crashes-and-fatalities | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-09-26T18:02:55+00:00 | {"license": ["cc-by-nc-sa-4.0"], "converted_from": "kaggle", "kaggle_id": "thedevastator/airplane-crashes-and-fatalities"} | 2022-09-27T16:55:18+00:00 |
408981fdb52b04955973f83fa16827f73f351971 | cfilt/AI-OpenMic | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-09-26T19:25:07+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-09-26T19:41:52+00:00 |
|
ad5e82960d05d634773914859e9e47c70614823c | Simple English Wikipedia has only about 170k articles. We split these articles into paragraphs.
import os
from sentence_transformers import util  # provides http_get

wikipedia_filepath = 'simplewiki-2020-11-01.jsonl.gz'
if not os.path.exists(wikipedia_filepath):
    util.http_get('http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz', wikipedia_filepath) | gfhayworth/wiki_mini | [
"region:us"
] | 2022-09-26T19:42:59+00:00 | {} | 2023-01-28T23:28:54+00:00 |
1a417e7ef6997cabeb2e864470118d1d5ed93b40 | valentinabrzt/datasettttttttt | [
"license:afl-3.0",
"region:us"
] | 2022-09-26T20:13:37+00:00 | {"license": "afl-3.0"} | 2022-09-26T20:13:37+00:00 |
|
f4daca16419351170bc5d882b03459f60524c9c7 | Kunling/layoutlm_resume_data | [
"license:bsd",
"region:us"
] | 2022-09-26T20:48:22+00:00 | {"license": "bsd"} | 2022-09-29T04:18:32+00:00 |
|
209c2baf698f5693e8b2f755a21cdcb804814b3e | srvs/training | [
"license:artistic-2.0",
"region:us"
] | 2022-09-26T22:21:44+00:00 | {"license": "artistic-2.0"} | 2022-09-26T22:21:44+00:00 |
|
d4548d8a0d713c364d69e6dafeec59d3c7717026 | Tweets containing '#Mets' from early August through late September | Ceetar/MetsTweets | [
"region:us"
] | 2022-09-26T22:22:51+00:00 | {} | 2022-09-26T23:08:51+00:00 |
5e26419ab91ed4a212eb945097dfc3b5d0687401 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-66b-copy
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-08a58b-1563555688 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-27T00:44:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-66b-copy", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev", "dataset_config": "mathemakitten--winobias_antistereotype_dev", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-27T03:26:16+00:00 |
ec2cb334401dfe22f8b85a56ed47018c56350a44 | ltjabc/sanguosha | [
"license:other",
"region:us"
] | 2022-09-27T01:04:44+00:00 | {"license": "other"} | 2022-09-27T01:04:44+00:00 |
|
bdd784fd553e9e6546ca8167a7e23e7189e42c2f | cays/LX0 | [
"license:artistic-2.0",
"region:us"
] | 2022-09-27T01:25:48+00:00 | {"license": "artistic-2.0"} | 2022-09-27T01:29:16+00:00 |
|
5bf51cd1b371b4c8aa0fe48d64123e20b25cdaf7 |
# Aggregated Captcha Images and Text
## Credits
All the images (not the texts) here contained have been downloaded and selected from various datasets on kaggle.com
### What is this?
This is a dataset containing some hundreds of thousands of images taken from real, in-use captchas (reCaptcha, hCaptcha and various others), together with an equally large number of random 4-8 character texts, each rendered in one of 363 different fonts with random noise, sizes, colors and scratches.
While the text part may prove difficult for the models you train to recognize, the sheer quantity of images gives a model a significant chance of recognizing captcha images.
### Disclaimer
This dataset is NOT intended to break the ToS of any website or to enable malicious, illegal or unethical actions. This dataset is distributed for purely informative and educational purposes, namely the study of the weaknesses and strengths of current protection systems.
You will, for example, notice how puzzle-based captchas are highly resistant to this kind of analysis. | tcsenpai/aggregated_captcha_images_and_text | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-09-27T01:36:22+00:00 | {"license": "cc-by-nc-4.0"} | 2022-09-27T02:31:17+00:00 |
76aeb129b64a67d72998420da80c2e51032c6907 |
# Dataset Card for Lexicap
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
-
## Dataset Structure
### Data Instances
Train and test dataset.
### Data Fields
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
### Contributions
| shubhamg2208/lexicap | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"karpathy,whisper,openai",
"region:us"
] | 2022-09-27T02:59:08+00:00 | {"language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-generation"], "task_ids": ["sentiment-analysis", "dialogue-modeling", "language-modeling"], "pretty_name": "Lexicap: Lex Fridman Podcast Whisper captions", "lexicap": ["found"], "tags": ["karpathy,whisper,openai"]} | 2022-09-27T03:41:00+00:00 |
840a29a57e1be9102cd03a752c7512ad0ecd1bee | Worldwars/caka1 | [
"license:artistic-2.0",
"region:us"
] | 2022-09-27T06:59:56+00:00 | {"license": "artistic-2.0"} | 2022-09-27T07:00:24+00:00 |
|
d5dfe0d2fdc72e5d881a47cd3e8e8e57c2ca5b1b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-ba6080-1564655701 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-27T07:14:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-09-28T11:45:02+00:00 |
5b001451c8a86ecabf3e8aa1486ab7780534b48a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-billsum-default-37bdaa-1564755702 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-27T07:14:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-09-28T13:20:08+00:00 |
ee5cf7dc24900b58bd4a0f8c0de335ad4f7bdb4d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-launch__gov_report-plain_text-45e121-1564955705 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-27T07:14:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17", "metrics": [], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-09-27T22:02:40+00:00 |
b080eb0ef952f2c8283f6bf0186d2e03bf88b527 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-launch__gov_report-plain_text-45e121-1564955706 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-27T07:14:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15", "metrics": [], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-09-27T22:17:35+00:00 |
64ce816c8fa6cffd09a52c77ed4bffe769228cb4 | asdfdvfbdb/efefw | [
"license:afl-3.0",
"region:us"
] | 2022-09-27T07:40:27+00:00 | {"license": "afl-3.0"} | 2022-09-27T16:22:01+00:00 |
|
5923b9d0b341ae83cb27a529a920ad206724c689 | astronomou/me | [
"license:other",
"region:us"
] | 2022-09-27T08:20:51+00:00 | {"license": "other"} | 2022-09-27T09:50:07+00:00 |
|
702c3ff0bee31d2479f7f98a1095210683c3fec0 | #### automatic-dissection
# **HuBMAP + HPA - Hacking the Human Body**
##### **Segment multi-organ functional tissue units in biopsy slides from several different organs.**
### **Overview**
When you think of "life hacks," normally you’d imagine productivity techniques. But how about the kind that helps you understand your body at a molecular level? It may be possible! Researchers must first determine the function and relationships among the 37 trillion cells that make up the human body. A better understanding of our cellular composition could help people live healthier, longer lives.
A previous Kaggle [competition](https://www.kaggle.com/c/hubmap-kidney-segmentation) aimed to annotate cell population neighborhoods that perform an organ’s main physiologic function, also called functional tissue units (FTUs). Manually annotating FTUs (e.g., glomeruli in kidney or alveoli in the lung) is a time-consuming process. In the average kidney, there are over 1 million glomeruli FTUs. While there are existing cell and FTU segmentation methods, we want to push the boundaries by building algorithms that generalize across different organs and are robust across different dataset differences.
The [Human BioMolecular Atlas Program](https://hubmapconsortium.org/) (HuBMAP) is working to create a [Human Reference Atlas](https://www.nature.com/articles/s41556-021-00788-6) at the cellular level. Sponsored by the National Institutes of Health (NIH), HuBMAP and Indiana University’s Cyberinfrastructure for Network Science Center (CNS) have partnered with institutions across the globe for this endeavor. A major partner is the [Human Protein Atlas](https://www.proteinatlas.org/) (HPA), a Swedish research program aiming to map the protein expression in human cells, tissues, and organs, funded by the Knut and Alice Wallenberg Foundation.
In this repository, we [aim](https://www.kaggle.com/competitions/hubmap-organ-segmentation/) to identify and segment functional tissue units (FTUs) across five human organs. We have to build a model using a dataset of tissue section images, with the best submissions segmenting FTUs as accurately as possible.
If successful, we can help accelerate the world’s understanding of the relationships between cell and tissue organization. With a better idea of the relationship of cells, researchers will have more insight into the function of cells that impact human health. Further, the Human Reference Atlas constructed by HuBMAP will be freely available for use by researchers and pharmaceutical companies alike, potentially improving and prolonging human life.
### **Dataset Description**
The goal is to identify the locations of each functional tissue unit (FTU) in biopsy slides from several different organs. The underlying data includes imagery from different sources prepared with different protocols at a variety of resolutions, reflecting typical challenges for working with medical data.
This project uses [data](https://huggingface.co/datasets/n1ghtf4l1/automatic-dissection) from two different consortia, the [Human Protein Atlas](https://www.proteinatlas.org/) (HPA) and [Human BioMolecular Atlas Program](https://hubmapconsortium.org/) (HuBMAP). The training dataset consists of data from public HPA data, the public test set is a combination of private HPA data and HuBMAP data, and the private test set contains only HuBMAP data. Adapting models to function properly when presented with data that was prepared using a different protocol will be one of the core challenges of this competition. While this is expected to make the problem more difficult, developing models that generalize is a key goal of this endeavor.
### **Files**
**[train/test].csv** Metadata for the train/test set. Only the first few rows of the test set are available for download.
- ```id``` - The image ID.
- ```organ``` - The organ that the biopsy sample was taken from.
- ```data_source``` - Whether the image was provided by HuBMAP or HPA.
- ```img_height``` - The height of the image in pixels.
- ```img_width``` - The width of the image in pixels.
- ```pixel_size``` - The height/width of a single pixel from this image in micrometers. All HPA images have a pixel size of 0.4 µm. For HuBMAP imagery the pixel size is 0.5 µm for kidney, 0.2290 µm for large intestine, 0.7562 µm for lung, 0.4945 µm for spleen, and 6.263 µm for prostate.
- ```tissue_thickness``` - The thickness of the biopsy sample in micrometers. All HPA images have a thickness of 4 µm. The HuBMAP samples have tissue slice thicknesses 10 µm for kidney, 8 µm for large intestine, 4 µm for spleen, 5 µm for lung, and 5 µm for prostate.
- ```rle``` - The target column. A run length encoded copy of the annotations. Provided for the training set only.
- ```age``` - The patient's age in years. Provided for the training set only.
- ```sex``` - The gender of the patient. Provided for the training set only.
**sample_submission.csv**
- ```id``` - The image ID.
- ```rle``` - A run length encoded mask of the FTUs in the image.
**[train/test]_images/** The images. Expect roughly 550 images in the hidden test set. All HPA images are 3000 x 3000 pixels with a tissue area within the image around 2500 x 2500 pixels. The HuBMAP images range in size from 4500x4500 down to 160x160 pixels. HPA samples were stained with antibodies visualized with 3,3'-diaminobenzidine (DAB) and counterstained with hematoxylin. HuBMAP images were prepared using Periodic acid-Schiff (PAS)/hematoxylin and eosin (H&E) stains. All images used have at least one FTU. All tissue data used in this competition is from healthy donors that pathologists identified as pathologically unremarkable tissue.
**train_annotations/** The annotations provided in the format of points that define the boundaries of the polygon masks of the FTUs.
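
The `rle` column above stores masks as run-length encodings. Below is a minimal decoding sketch (not from the original card), assuming the common Kaggle convention of space-separated, 1-indexed `(start, length)` pairs over a flattened mask; verify against the competition's exact specification before relying on it:

```python
# Illustrative sketch: decode a run-length-encoded string into a flat
# binary mask. Assumes space-separated, 1-indexed (start, length)
# pairs, the usual Kaggle segmentation convention.

def rle_decode(rle: str, n_pixels: int) -> list:
    mask = [0] * n_pixels
    tokens = list(map(int, rle.split()))
    for start, length in zip(tokens[0::2], tokens[1::2]):
        for i in range(start - 1, start - 1 + length):
            mask[i] = 1
    return mask

print(rle_decode("1 3 10 2", 12))  # [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0]
```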
| n1ghtf4l1/automatic-dissection | [
"license:mit",
"region:us"
] | 2022-09-27T08:45:43+00:00 | {"license": "mit"} | 2022-11-01T07:08:47+00:00 |
8be2f1f757989d37ca17221661f6a9f66e0b57c8 | Spammie/rev-stable-diff | [
"license:gpl-3.0",
"region:us"
] | 2022-09-27T10:12:05+00:00 | {"license": "gpl-3.0"} | 2022-09-27T10:12:05+00:00 |
|
3b0559e997b2dc1a5eb080364ba2420e29e4dd2d |
A JSON-converted version of the dataset from [Koziev/NLP_Datasets](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data/extract_dialogues_from_anekdots.tar.xz) | artemsnegirev/dialogs_from_jokes | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ru",
"license:cc0-1.0",
"region:us"
] | 2022-09-27T10:32:40+00:00 | {"language": ["ru"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["conversational"], "task_ids": ["dialogue-generation"], "pretty_name": "Dialogs from Jokes"} | 2022-09-27T10:43:32+00:00 |
5500a07ad0e88dae61f0f78a46f17751d5a95c7f |
```sh
# Build a CSV of (repo_name, path, content, license) rows from every Rust source file
git clone https://github.com/rust-bio/rust-bio-tools
rm -f RustBioGPT-validate.csv && for i in `find . -name "*.rs"`;do paste -d "," <(echo "rust-bio-tools"|perl -pe "s/(.+)/\"\1\"/g") <(echo $i|perl -pe "s/(.+)/\"\1\"/g") <(perl -pe "s/\n/\\\n/g" $i|perl -pe s"/\"/\'/g" |perl -pe "s/(.+)/\"\1\"/g") <(echo "mit"|perl -pe "s/(.+)/\"\1\"/g") >> RustBioGPT-validate.csv; done
sed -i '1i "repo_name","path","content","license"' RustBioGPT-validate.csv
``` | jelber2/RustBioGPT-valid | [
"license:mit",
"region:us"
] | 2022-09-27T10:52:42+00:00 | {"license": "mit"} | 2022-09-27T11:01:37+00:00 |
9faf4c6b77e44eef775cb951bd9cb094db9f301a | musper/hr_dataset_repo | [
"license:unlicense",
"region:us"
] | 2022-09-27T11:05:51+00:00 | {"license": "unlicense"} | 2022-09-27T13:13:23+00:00 |
|
58684b7a75ae57ed0dbcfcb87bdbd8ff3541aade |
# laion2B-multi-chinese-subset
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
取自Laion2B多语言多模态数据集中的中文部分,一共143M个图文对。
A subset from Laion2B (a multimodal dataset), around 143M image-text pairs (only Chinese).
## 数据集信息 Dataset Information
大约一共143M个中文图文对。大约占用19GB空间(仅仅是url等文本信息,不包含图片)。
Around 143M Chinese image-text pairs in total, occupying about 19GB (URL and other text information only; the images themselves are not included).
- Homepage: [laion-5b](https://laion.ai/blog/laion-5b/)
- Huggingface: [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## 下载 Download
```bash
mkdir laion2b_chinese_release && cd laion2b_chinese_release
for i in {00000..00012}; do wget https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset/resolve/main/data/train-$i-of-00013.parquet; done
cd ..
```
## Lisence
CC-BY-4.0
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| IDEA-CCNL/laion2B-multi-chinese-subset | [
"task_categories:feature-extraction",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:zh",
"license:cc-by-4.0",
"arxiv:2209.02970",
"region:us"
] | 2022-09-27T11:22:38+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["zh"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "task_categories": ["feature-extraction"], "pretty_name": "laion2B-multi-chinese-subset"} | 2023-04-06T05:32:18+00:00 |
89fe7518d3d92bdf8c688e815b0b66fe86153978 | dhruvs00/whatever_dataset1 | [
"license:openrail",
"region:us"
] | 2022-09-27T11:26:35+00:00 | {"license": "openrail"} | 2022-09-27T11:26:35+00:00 |
|
4d17ebae87690692e4ce9f102f35d28fa7ed5b66 |
# Dataset Card for WinoGAViL
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Colab notebook code for Winogavil evaluation with CLIP](#colab-notebook-code-for-winogavil-evaluation-with-clip)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fit the association. This dataset was collected via the WinoGAViL online game, which gathers vision-and-language associations (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that the associations are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collect from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.
- **Homepage:**
https://winogavil.github.io/
- **Colab**
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
- **Repository:**
https://github.com/WinoGAViL/WinoGAViL-experiments/
- **Paper:**
https://arxiv.org/abs/2207.12576
- **Leaderboard:**
https://winogavil.github.io/leaderboard
- **Point of Contact:**
[email protected]; [email protected]
### Supported Tasks and Leaderboards
https://winogavil.github.io/leaderboard.
https://paperswithcode.com/dataset/winogavil.
## Colab notebook code for Winogavil evaluation with CLIP
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
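The evaluation idea in the notebook can be sketched as follows. This is a toy illustration with mock 2-d embeddings, not the notebook's actual code — the real evaluation embeds the cue with CLIP's text encoder and the candidate images with its image encoder, then ranks candidates by cosine similarity to the cue and selects the top K as the model's associations:

```python
# Sketch of CLIP-style zero-shot association (mock embeddings):
# rank candidates by cosine similarity to the cue, keep the top K.
import numpy as np

def select_top_k(cue_vec, candidate_vecs, k):
    """Return indices of the k candidates most similar to the cue."""
    cue = cue_vec / np.linalg.norm(cue_vec)
    cands = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    sims = cands @ cue                      # cosine similarities to the cue
    return sorted(np.argsort(-sims)[:k].tolist())

# Toy vectors: candidates 0 and 2 point roughly along the cue direction.
cue = np.array([1.0, 0.0])
candidates = np.array([[0.9, 0.1], [0.0, 1.0], [0.8, 0.2], [-1.0, 0.0]])
print(select_top_k(cue, candidates, k=2))  # → [0, 2]
```

With real CLIP embeddings, the selected indices are compared against the human-chosen associations via the Jaccard index.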
### Languages
English.
## Dataset Structure
### Data Fields
candidates (list): ["bison", "shelter", "beard", "flea", "cattle", "shave"] - list of image candidates.
cue (string): pogonophile - the generated cue.
associations (string): ["bison", "beard", "shave"] - the images associated with the cue selected by the user.
score_fool_the_ai (int64): 80 - the spymaster score (100 - model score) for fooling the AI, computed with the CLIP RN50 model.
num_associations (int64): 3 - the number of images selected as associated with the cue.
num_candidates (int64): 6 - the total number of candidates.
solvers_jaccard_mean (float64): 1.0 - the average of the three solvers' scores on the generated association instance.
solvers_jaccard_std (float64): 1.0 - the standard deviation of the three solvers' scores on the generated association instance.
ID (int64): 367 - association ID.
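As a sketch of how these fields fit together, the Jaccard index used throughout the card compares a model's guessed associations with the human-selected `associations` field. The model guess below is hypothetical, chosen only to illustrate the scoring:

```python
# Sketch: Jaccard index between human-selected associations and a
# (hypothetical) model guess, as used to score WinoGAViL instances.
def jaccard(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    return len(gold & predicted) / len(gold | predicted)

associations = ["bison", "beard", "shave"]   # human-selected (example above)
model_guess = ["bison", "beard", "cattle"]   # hypothetical model output
print(jaccard(associations, model_guess))    # intersection=2, union=4 → 0.5
```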
### Data Splits
There is a single TEST split. In the accompanying paper and code we sample it to create different training sets, but the intended use of WinoGAViL is as a test set.
There are different numbers of candidates, which create different difficulty levels:
-- With 5 candidates, random model expected score is 38%.
-- With 6 candidates, random model expected score is 34%.
-- With 10 candidates, random model expected score is 24%.
-- With 12 candidates, random model expected score is 19%.
<details>
<summary>Why is the expected random score with 5 candidates 38%?</summary>
It is a binomial distribution probability calculation.
Assuming N=5 candidates, and K=2 associations, there could be three events:
(1) The probability that a random guess is correct on 0 associations is 0.3 (elaborated below), and the Jaccard index is 0 (there is no intersection between the correct labels and the wrong guesses). Therefore the expected random score is 0.
(2) The probability that a random guess is correct on 1 association is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3: one of the correct guesses and one of the wrong guesses). Therefore the expected random score is 0.6*0.33 = 0.198.
(3) The probability that a random guess is correct on 2 associations is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the expected random score is 0.1*1 = 0.1.
* Together, when K=2, the expected score is 0+0.198+0.1 = 0.298.
To calculate (1), the first guess needs to be wrong. There are 3 "wrong" guesses and 5 candidates, so the probability for it is 3/5. The next guess should also be wrong. Now there are only 2 "wrong" guesses, and 4 candidates, so the probability for it is 2/4. Multiplying 3/5 * 2/4 = 0.3.
Same goes for (2) and (3).
Now we can perform the same calculation with K=3 associations.
Assuming N=5 candidates, and K=3 associations, there could be four events:
(4) The probability that a random guess is correct on 0 associations is 0, and the Jaccard index is 0. Therefore the expected random score is 0.
(5) The probability that a random guess is correct on 1 association is 0.3, and the Jaccard index is 0.2 (intersection=1, union=4). Therefore the expected random score is 0.3*0.2 = 0.06.
(6) The probability that a random guess is correct on 2 associations is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the expected random score is 0.6*0.5 = 0.3.
(7) The probability that a random guess is correct on 3 associations is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the expected random score is 0.1*1 = 0.1.
* Together, when K=3, the expected score is 0+0.06+0.3+0.1 = 0.46.
Taking the average of 0.298 and 0.46 we reach 0.379.
The same process can be repeated with 6 candidates (K=2,3,4), 10 candidates (K=2,3,4,5) and 12 candidates (K=2,3,4,5,6).
</details>
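The calculation above can also be checked by brute force. The sketch below (not part of the original card) enumerates every possible K-sized guess; note that exact enumeration gives 0.3 and 0.46 rather than 0.298 — the small difference above comes from rounding 1/3 to 0.33 — and the average 0.38 matches the quoted 38%:

```python
# Verify the expected random Jaccard score for N candidates by
# enumerating every possible guess of size K (uniformly random model).
from itertools import combinations

def expected_random_jaccard(n_candidates, k_associations):
    """Average Jaccard index of a uniformly random K-sized guess."""
    truth = set(range(k_associations))  # which K items are correct is symmetric
    scores = []
    for guess in combinations(range(n_candidates), k_associations):
        guess = set(guess)
        scores.append(len(truth & guess) / len(truth | guess))
    return sum(scores) / len(scores)

k2 = expected_random_jaccard(5, 2)  # ≈ 0.3
k3 = expected_random_jaccard(5, 3)  # ≈ 0.46
print(round(k2, 3), round(k3, 3), round((k2 + k3) / 2, 3))
```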
## Dataset Creation
Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players.
### Annotations
#### Annotation process
We paid Amazon Mechanical Turk Workers to play our game.
## Considerations for Using the Data
All associations were obtained with human annotators.
### Licensing Information
CC-By 4.0
### Citation Information
@article{bitton2022winogavil,
title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
journal={arXiv preprint arXiv:2207.12576},
year={2022}
}
| severo/winogavil | [
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"commonsense-reasoning",
"visual-reasoning",
"arxiv:2207.12576",
"region:us"
] | 2022-09-27T13:06:01+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_ids": [], "paperswithcode_id": "winogavil", "pretty_name": "WinoGAViL", "tags": ["commonsense-reasoning", "visual-reasoning"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."} | 2022-09-27T13:00:32+00:00 |
c20fb7cdff2c4b197e4c4125f850db01a559b4ab | Dataset for paper: Learning the Solution Operator of Boundary Value Problems using Graph Neural Networks
https://arxiv.org/abs/2206.14092 | winfried/gnn_bvp_solver | [
"license:mit",
"arxiv:2206.14092",
"region:us"
] | 2022-09-27T14:14:07+00:00 | {"license": "mit"} | 2022-09-27T15:52:13+00:00 |
6b02cd3afdb4739ec50cd9d492fb9fbfbc2f584d | dracoglacius/timit | [
"license:mit",
"region:us"
] | 2022-09-27T14:19:11+00:00 | {"license": "mit"} | 2022-09-27T14:39:35+00:00 |