| sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
e7d1d9a84343c125f093d40e051a7420a0a10481
|
# xnli_vi
- Num examples:
- 5,010 (test)
- 2,490 (validation)
- 392,702 (train)
- Language: Vietnamese, English
|
nlplabtdtu/xnli_vi
|
[
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:vi",
"language:en",
"LM",
"region:us"
] |
2023-04-06T11:16:34+00:00
|
{"language": ["vi", "en"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "test"}, {"name": "validation"}, {"name": "train"}]}, "tags": ["LM"]}
|
2023-04-09T00:44:26+00:00
|
a39f3bb369464605f4f1982a2a760206079d67d2
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
WilliamWen/combination_two
|
[
"task_categories:token-classification",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-04-06T11:41:11+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["token-classification"]}
|
2023-04-06T12:51:17+00:00
|
bfce5c5c81059d417613363ab962df9aada93e6e
|
# Dataset Card for "mtg-pauper-blip-captions-human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vedalken/mtg-pauper-blip-captions-human
|
[
"region:us"
] |
2023-04-06T12:00:00+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37106171.0, "num_examples": 450}], "download_size": 37094749, "dataset_size": 37106171.0}}
|
2023-04-06T12:00:06+00:00
|
fee5d3a65b4e8f508ac99db7c873f72eed052ea7
|
moneygreen/secdocs1
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-06T12:01:22+00:00
|
{"license": "apache-2.0"}
|
2023-04-06T12:01:22+00:00
|
|
8664c087511628be0938855c08383015a965ff94
|
# Dataset Card for "m2e_6_4_padded_no_tags_no_augs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Sunbird/m2e_6_4_padded_no_tags_no_augs
|
[
"region:us"
] |
2023-04-06T12:02:08+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 4401362036, "num_examples": 2626111}, {"name": "valid", "num_bytes": 4190000, "num_examples": 2500}], "download_size": 384950492, "dataset_size": 4405552036}}
|
2023-04-06T12:03:25+00:00
|
12ccecce99bacde949661780b59805e5a36578e1
|
# Northwind Invoices and Related Documents
This dataset contains a collection of invoices and related documents from the Northwind database, a sample database used by Microsoft for demonstrating database functionalities.
The invoices include information about the customer, the salesperson, the order date, order ID, product IDs, product names, quantities, unit prices, and total prices. The related documents include shipping documents and stock documents.
This dataset was created by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/) for the purpose of classifying company documents. It can be used for document classification and other NLP tasks.
# Northwind Invoices
This dataset contains a collection of invoices generated from the Northwind database, a sample database that represents a fictional food trading company.
The dataset includes invoice information such as the invoice number, customer name, invoice date, product names, quantities, unit prices, and total prices. The data spans several years and covers customers from various countries.
# Data
The dataset contains 2144 rows and 9 columns. The columns are as follows:
Customer Name: The name of the customer who made the purchase.
Salesperson Name: The name of the salesperson who generated the invoice.
Order Date: The date the order was placed.
Order ID: The unique ID of the order.
ProductID: The unique ID of the product.
Products: The name of the product.
Quantities: The quantity of the product ordered.
UnitPrice: The unit price of the product.
Total Price: The total price of the product ordered.
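Since the dataset is distributed as CSV, rows can be consumed with nothing more than the standard library. A minimal sketch using the nine documented columns; the row values below are invented for illustration, not taken from the dataset:

```python
import csv
import io

# Hypothetical row mirroring the nine documented columns (invented values).
sample_csv = """Customer Name,Salesperson Name,Order Date,Order ID,ProductID,Products,Quantities,UnitPrice,Total Price
Example Customer,Example Seller,1997-08-25,10643,28,Example Product,15,45.50,682.50
"""

for row in csv.DictReader(io.StringIO(sample_csv)):
    # Sanity check: Total Price should equal Quantities x UnitPrice.
    computed = float(row["Quantities"]) * float(row["UnitPrice"])
    print(row["Order ID"], computed == float(row["Total Price"]))  # 10643 True
```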
# Acknowledgements
The Northwind database was originally created by Microsoft for use in its Access and SQL Server software, and has been widely used as a sample database for educational and demonstration purposes. This dataset was extracted from the database and made available in CSV format for research purposes.
# Citation
If you use this dataset in your research, please cite it as follows:
```bibtex
@misc{northwind_invoices,
  author = {CHERGUELAINE, Ayoub and BOUBEKRI, Faycal},
  title = {Northwind Invoices},
  year = {2023},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/AyoubChLin/north_wind_invoices},
}
```
|
AyoubChLin/northwind_invocies
|
[
"task_categories:feature-extraction",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"finance",
"region:us"
] |
2023-04-06T12:15:14+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["feature-extraction", "text-classification"], "pretty_name": "PDF northwind", "tags": ["finance"]}
|
2023-04-06T13:55:23+00:00
|
6eae146f079cec1326bfdc40d97f6528cc29c5f4
|
# Dataset Card for "stack-exchange-sample10000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ArmelR/stack-exchange-sample10000
|
[
"region:us"
] |
2023-04-06T12:19:39+00:00
|
{"dataset_info": {"features": [{"name": "qid", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "metadata", "sequence": "string"}, {"name": "response_j", "dtype": "string"}, {"name": "response_k", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27983797.447734267, "num_examples": 10000}], "download_size": 15522939, "dataset_size": 27983797.447734267}}
|
2023-04-06T12:19:45+00:00
|
e141c58145a39e4fc3b259d58b2d58e0b0a8f736
|
# Dataset Card for "insurance-qa-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rvpierre/insurance-qa-en
|
[
"region:us"
] |
2023-04-06T12:38:01+00:00
|
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "topic_en", "dtype": "string"}, {"name": "question_en", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1044899, "num_examples": 12888}, {"name": "test", "num_bytes": 162551, "num_examples": 1999}, {"name": "valid", "num_bytes": 162498, "num_examples": 1999}], "download_size": 126622, "dataset_size": 1369948}}
|
2023-04-07T08:33:50+00:00
|
2b998e0fb744125cc9a2691a5acf4e367f02e85b
|
# Hill
The [Hill dataset](https://archive.ics.uci.edu/ml/datasets/Hill) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Do the plotted coordinates draw a hill?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------------|
| hill | Binary classification | Do the plotted coordinates draw a hill? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/hill")["train"]
```
# Features
Features are the coordinates of the plotted points: feature `X{i}` is the `y` coordinate of the point `(i, X{i})`.
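Read that way, a row can be turned back into plottable points. A minimal sketch, assuming features named `X1` through `X100` (the row values here are stand-ins, not real data):

```python
# Stand-in row with features X1..X100; real rows come from load_dataset.
row = {f"X{i}": float(i) for i in range(1, 101)}  # invented values

# Feature X{i} is the y coordinate of the point (i, X{i}).
points = [(i, row[f"X{i}"]) for i in range(1, 101)]
print(points[:3])  # [(1, 1.0), (2, 2.0), (3, 3.0)]
```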
|
mstz/hill
|
[
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"hill",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-04-06T12:42:23+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Hill", "tags": ["hill", "tabular_classification", "binary_classification", "UCI"], "configs": ["hill"]}
|
2023-04-16T16:31:39+00:00
|
fdba38f7a5f871334afd07938c29c511ca7a59fc
|
# ILPD
The [ILPD dataset](https://archive.ics.uci.edu/ml/datasets/ILPD) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------|
| liver | Binary classification | Does the patient have liver problems? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/liver")["train"]
```
|
mstz/liver
|
[
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"ilpd",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] |
2023-04-06T12:53:51+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Liver", "tags": ["ilpd", "tabular_classification", "binary_classification", "multiclass_classification", "UCI"], "configs": ["liver"]}
|
2023-04-16T16:33:33+00:00
|
5593f7e4527c1e3d780c92cdbf57d54652b1b48a
|
<img src="https://s3.amazonaws.com/moonup/production/uploads/632eed9e04b24dbdb9eaa6d4/ToFJ26XGVkO2FTJ4dH-yH.png" width="256" height="256">
|
chavinlo/tempofunk-old
|
[
"task_categories:video-classification",
"task_categories:visual-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:agpl-3.0",
"video",
"video generation",
"region:us"
] |
2023-04-06T13:07:31+00:00
|
{"language": ["en"], "license": "agpl-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["video-classification", "visual-question-answering"], "pretty_name": "TempoFunk!", "tags": ["video", "video generation"]}
|
2023-04-14T05:19:04+00:00
|
28354db55c96b1f58635c05c59a83df9f4d586cf
|
# Ionosphere
The [Ionosphere dataset](https://archive.ics.uci.edu/ml/datasets/Ionosphere) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Radar data from the ionosphere: each sample describes a received radar signal, and the task is to determine whether the signal shows evidence of free electrons in the ionosphere.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| ionosphere        | Binary classification     | Does the received signal indicate electrons in the ionosphere? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/ionosphere")["train"]
```
|
mstz/ionosphere
|
[
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"ionosphere",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-04-06T13:08:12+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Ionosphere", "tags": ["ionosphere", "tabular_classification", "binary_classification", "UCI"], "configs": ["ionosphere"]}
|
2023-04-16T16:32:10+00:00
|
d65199e3d5fabb98fd8fb72feaf1a2224c24240b
|
# Magic
The [Magic dataset](https://archive.ics.uci.edu/ml/datasets/Magic) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| magic             | Binary classification     | Was the registered shower caused by a gamma ray (signal) rather than a hadron (background)? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/magic")["train"]
```
|
mstz/magic
|
[
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"magic",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-04-06T13:33:36+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["10K<n<100K"], "task_categories": ["tabular-classification"], "pretty_name": "Magic", "tags": ["magic", "tabular_classification", "binary_classification", "UCI"], "configs": ["magic"]}
|
2023-04-16T16:34:16+00:00
|
a4de1916794c7f7b4bda04a8b61f0197591584ae
|
# Mammography
The [Mammography dataset](https://archive.ics.uci.edu/ml/datasets/Mammography) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------|
| mammography | Binary classification | Is the lesion benign? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/mammography")["train"]
```
|
mstz/mammography
|
[
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"mammography",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-04-06T13:54:30+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Mammography", "tags": ["mammography", "tabular_classification", "binary_classification", "UCI"], "configs": ["mammography"]}
|
2023-04-16T16:34:26+00:00
|
8032a539ba99e72523656ef0383198c297dfad27
|
LeslieC21/Mr_Red
|
[
"license:other",
"region:us"
] |
2023-04-06T13:57:55+00:00
|
{"license": "other"}
|
2023-04-06T13:57:55+00:00
|
|
15e787fd92585c2bbad08604a731da5430aeb99d
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
albert1234/albert1234
|
[
"task_categories:translation",
"size_categories:n<1K",
"language:en",
"license:mit",
"code",
"region:us"
] |
2023-04-06T14:12:26+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["translation"], "tags": ["code"]}
|
2023-04-06T14:20:28+00:00
|
80338a4c87974bc9bd0bd506976b51a3707c67fb
|
# Dataset Card for "jomleh"
## Dataset Summary
"Jomleh" is a high-quality Farsi language dataset of sentences that have been carefully preprocessed to contain only Farsi characters, without contamination from other languages. The data is drawn from multiple sources and has undergone deduplication so that each sentence is unique. While the text in the dataset is not original, the focus on quality over quantity ensures that each sentence is useful and informative. Each sample in "Jomleh" is a single sentence, making it a valuable resource for natural language processing tasks and language modeling.
This dataset comprises 227M Farsi sentences, taking up 13 GB in compressed files (39 GB decompressed).
## Sample code to load this dataset
This is how you can use this dataset:
```python
from datasets import load_dataset
dataset = load_dataset("mlengineer-ai/jomleh", split="train")
for example in dataset:
print("id: ", example["id"])
print("sentence: ", example["text"])
print("source: ", example["source"])
```
Since the whole dataset ships as a single `train` split, if you need a test (or any other) split you can slice it however you like:
```python
from datasets import load_dataset
dataset = load_dataset("mlengineer-ai/jomleh", split="train[:95%]")
for example in dataset:
print("id: ", example["id"])
print("sentence: ", example["text"])
print("source: ", example["source"])
```
## Source Data
The data used to curate Jomleh is taken from the following sources:
- [OSCAR](https://huggingface.co/datasets/oscar) (fa):
* [OSCAR-2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)
* [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201)
* [OSCAR-2301](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301)
- [CommonCrawl](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt)
- [Leipzig](https://wortschatz.uni-leipzig.de/en/download/Iranian%20Persian):
* Community:
- Year: 2017 -> Alle
* Web
- Year: 2011, Country: Iran -> 10K, 30K, 100K
- Year: 2015, Country: Iran -> 10K, 30K, 100K
- Year: 2019, Country: Iran -> 10K, 30K, 100K, 300K, 1M
* Web-public
- Year: 2019, Country: Iran -> 10K, 30K, 100K, 300K, 1M
* Web.public
- Year: 2019, Country: Iran -> 10K, 30K, 100K, 300K, 1M
* Wikipedia
- Year: 2016, Country: Iran -> 10K, 30K, 100K, 300K, 1M
- Year: 2021, Country: Iran -> 10K, 30K, 100K, 300K, 1M
- [VOA Persian](https://jon.dehdari.org/corpora/)
- [Persian poems corpus](https://github.com/amnghd/Persian_poems_corpus)
- [Web to Corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9)
- [TEP](https://opus.nlpl.eu/TEP.php): Tehran English-Persian parallel corpus
### Number of samples contributed by each source
| Source | Code | Number of samples |
|----|----|-----:|
| OSCAR | oscar_2109 | 72,646,870 |
| OSCAR | oscar_2201 | 53,583,646 |
| OSCAR | oscar_2301 | 72,157,974 |
| CommonCrawl | cc | 22,596,629 |
| Leipzig | web-2019_1M | 387,098 |
| Leipzig | web-2019_10K | 3,597 |
| Leipzig | web-2019_30K | 10,790 |
| Leipzig | web-2019_100K | 35,833 |
| Leipzig | web-2019_300K | 106,932 |
| Leipzig | news_2019_10K | 3,542 |
| Leipzig | news_2019_30K | 10,256 |
| Leipzig | news_2019_100K | 31,967 |
| Leipzig | news_2019_300K | 75,117 |
| Leipzig | news_2020_10K | 2,609 |
| Leipzig | news_2020_30K | 7,714 |
| Leipzig | news_2020_100K | 24,815 |
| Leipzig | news_2020_300K | 65,336 |
| Leipzig | newscrawl_2011_1M | 419,538 |
| Leipzig | newscrawl_2015_1M | 419,455 |
| Leipzig | newscrawl_2015_10K | 3,569 |
| Leipzig | newscrawl_2015_30K | 10,779 |
| Leipzig | newscrawl_2015_100K | 35,481 |
| Leipzig | newscrawl_2015_300K | 105,316 |
| Leipzig | newscrawl_2016_1M | 332,953 |
| Leipzig | newscrawl_2016_10K | 2,225 |
| Leipzig | newscrawl_2016_30K | 6,396 |
| Leipzig | newscrawl_2016_100K | 21,312 |
| Leipzig | newscrawl_2016_300K | 61,081 |
| Leipzig | newscrawl_2017_1M | 246,362 |
| Leipzig | newscrawl_2017_10K | 1,368 |
| Leipzig | newscrawl_2017_30K | 4,016 |
| Leipzig | newscrawl_2017_100K | 13,334 |
| Leipzig | newscrawl_2017_300K | 38,218 |
| Leipzig | newscrawl_2019_1M | 298,688 |
| Leipzig | newscrawl_2019_10K | 1,954 |
| Leipzig | newscrawl_2019_30K | 5,641 |
| Leipzig | newscrawl_2019_100K | 18,821 |
| Leipzig | newscrawl_2019_300K | 53,830 |
| Leipzig | wikipedia_2010_10K | 2,143 |
| Leipzig | wikipedia_2010_30K | 6,262 |
| Leipzig | wikipedia_2010_100K | 19,379 |
| Leipzig | wikipedia_2010_300K | 46,844 |
| Leipzig | wikipedia_2012_10K | 1,525 |
| Leipzig | wikipedia_2012_30K | 4,517 |
| Leipzig | wikipedia_2012_100K | 14,503 |
| Leipzig | wikipedia_2012_300K | 38,298 |
| Leipzig | wikipedia_2014_1M | 143,336 |
| Leipzig | wikipedia_2014_10K | 597 |
| Leipzig | wikipedia_2014_30K | 1,931 |
| Leipzig | wikipedia_2014_100K | 6,031 |
| Leipzig | wikipedia_2014_300K | 16,645 |
| VOA Persian | voa | 116,671 |
| Persian poems corpus | poems | 1,016,806 |
| Web to Corpus| w2c | 1,629,616 |
| TEP | tep | 488,558 |
## Layout and Structure
The dataset is composed of 60 JSON-line files. Since the samples are spread across these files at random (using a uniform distribution), the per-file counts are not identical, but each file holds roughly the same number of samples (about 3.8 million per file, given 227M sentences over 60 files).
Each line of a file is a sample formatted in JSON with the following layout:
```json
{
"id": <A sequential integer>,
"text": "<A Farsi sentence>",
"source": "<One of codes mentioned in the table above>"
}
```
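One line of a file can therefore be parsed with the standard `json` module. The sample line below is illustrative, not a real record:

```python
import json

# An illustrative JSON line following the layout above (not a real record).
line = '{"id": 1, "text": "این درگه ما درگه نومیدی نیست.", "source": "poems"}'

sample = json.loads(line)
print(sample["id"], sample["source"])  # 1 poems
```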
## Data curation process
### 1. Preprocessing
The value of this dataset lies in its preprocessing step. The main struggle when working with Farsi text is that, for historical reasons, many different encodings have been used to store it. On top of that comes the complexity of multiple character codes for the same letter: in Farsi, the rendered shape of a character depends on its neighbouring characters. For example, consider the very last letter of the Farsi alphabet, "Yeh":
It has an isolated form:
<pre><font size="5">ﯼ - Unicode: &#64508;</font></pre>
But when surrounded by other characters, its medial form is used:
<pre><font size="5">ﯿ - Unicode: &#64511;</font></pre>
The correct way to type the "Yeh" letter is to use its canonical character code (Unicode U+06CC, a.k.a. &#1740;); the correct presentation form is then selected at render time based on the letter's surroundings. This is usually handled by the "substitution table", a feature of fonts. At the same time, some texts do not rely on the font and directly use the code points designed for the specific presentation forms of the letters. To a reader both look identical, but printing the code points yields different numbers. This complicates text processing in Farsi, since each character needs a unique code regardless of its position in the word. Add to that the problem of Arabic characters, which are sometimes used to type Farsi text: since the two languages share visually very similar alphabets, one can successfully read a Farsi text even though it was typed with Arabic characters.
To address these problems, the preprocessing used for Jomleh does its best to map all look-alike characters to their Farsi counterparts. This is a best-effort process, not an exact science. For instance, if a sentence is actually Arabic, the preprocessing script will make things worse. But assuming that all the source text is genuinely Farsi, the script should help make it uniform.
The same cleaning process is also applied to digits and punctuation.
At the end, any character that can be found in the Jomleh dataset is one of the following:
- a Farsi alphabet letter (`ا` to `ی`)
- one of the: `آ`, `أ`, `ؤ`, `ئ`
- a Farsi digit (`۰` to `۹`)
- a zero-width non-joiner (`\u200c`)
- a space
- one of the Farsi punctuations (`.`, `!`, `؟`, `،`, `؛`)
Any other character found in the text is eliminated on a best-effort basis; if eliminating such characters would harm the integrity of the sentence, the sentence is removed from the dataset altogether.
The script used for the preprocessing can be found [here](/datasets/mlengineer-ai/jomleh/blob/main/preprocess.py).
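As a rough illustration of the whitelist described above (this is not the actual `preprocess.py`, just a sketch of the idea; the exact character ranges are an assumption):

```python
import re

# Sketch of the character whitelist -- an illustration, not preprocess.py.
# The ranges below are approximate assumptions, not the script's exact set.
ALLOWED = re.compile(
    r"^["
    r"\u0621-\u063A\u0641-\u064A"            # base Arabic-script letters
    r"\u067E\u0686\u0698\u06A9\u06AF\u06CC"  # Farsi-specific: پ چ ژ ک گ ی
    r"\u06F0-\u06F9"                         # Farsi digits ۰..۹
    r"\u200C"                                # zero-width non-joiner
    r" .!\u061F\u060C\u061B"                 # space and punctuation . ! ؟ ، ؛
    r"]+$"
)

def is_clean(sentence: str) -> bool:
    """True if the sentence contains only whitelisted characters."""
    return bool(ALLOWED.match(sentence))

print(is_clean("\u0633\u0644\u0627\u0645"))  # "سلام" -> True
print(is_clean("hello"))                     # Latin letters -> False
```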
It's also worth mentioning that the preprocessing script converts the text into the vertical format expected by the third step (deduplication). Simply put, in vertical format each space is replaced with a line feed, and each sample is wrapped in a `<doc>` tag. Here's a sample converted into vertical format:
```
<doc id="poems_merged.txt">
این
درگه
ما
درگه
نومیدی
نیست.
</doc>
```
In this example, the `id` attribute of the `<doc>` tag points to the file where the sample is coming from.
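The conversion itself is simple enough to sketch in a few lines (an illustration of the format, not the actual script):

```python
# Sketch of the vertical-format conversion: spaces become line feeds and
# the sample is wrapped in a <doc> tag carrying the source file name.
def to_vertical(sentence: str, source_file: str) -> str:
    body = "\n".join(sentence.split())
    return f'<doc id="{source_file}">\n{body}\n</doc>'

print(to_vertical("این درگه ما درگه نومیدی نیست.", "poems_merged.txt"))
```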
This is the command that executes the preprocessing script:
```
find 1_prepared -name "*.txt" | parallel 'python ./preprocess.py $(basename {}) < {} > ./2_cleaned_vertical/$(basename {})'
```
### 2. Merging into one text file
Once the raw source data has been preprocessed, it is merged into a single large text file. This can be accomplished with a single command:
```
cat ./2_cleaned_vertical/* > ./3_temp/clean_merged.vert
```
### 3. Deduplication
Once all the text is transformed into vertical format and saved into a single text file, the `onion` program is used to eliminate duplicate samples. `onion` is available from [this website](https://corpus.tools/wiki/Onion) and is invoked like this:
```
onion -sm -n 5 -t 0.5 ./3_temp/clean_merged.vert > ./3_temp/deduplicated.vert
```
### 4. Postprocessing
The postprocessing involves:
1. Converting back from vertical format into a single line per sample format.
2. Mapping the file names mentioned in the `id` attribute of the `<doc>` tag into one of the codes mentioned above.
3. Formatting each sample as a JSON-line (one json per line).
4. Distributing and saving the samples randomly across 60 files, aiming for roughly the same number of samples per file.
These steps are run using the following command:
```
python ./postprocess.py ./3_temp < ./3_temp/deduplicated.vert | parallel "echo '{}' | python ./add_id.py ./3_temp ./jomleh/files"
```
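Step 1 of this postprocessing can be sketched as the inverse of the vertical conversion (again, an illustration rather than the actual `postprocess.py`):

```python
# Sketch of step 1: collapse a vertical-format block back into a single
# sentence, dropping the surrounding <doc> tags.
def from_vertical(block: str) -> str:
    words = [
        line for line in block.strip().splitlines()
        if not line.startswith("<doc") and line != "</doc>"
    ]
    return " ".join(words)

print(from_vertical('<doc id="poems_merged.txt">\nاین\nدرگه\n</doc>'))  # این درگه
```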
### 5. Compressing the files
The generated JSON-line files are compressed with Zstandard, a real-time data compression algorithm:
```
find ./jomleh/files/*.jsonl -type f | parallel 'zstd --rm {}'
```
### 6. Generating the checksum file
The checksum file plays a dual role. First, it records the checksum of each of the 60 files for future verification. Second, it serves as an index that the loading script uses to list and load the files. This is how the checksum file is generated:
```
ls ./jomleh/files/*.zst | sort -t _ -k 2 -n | xargs sha256sum > ./jomleh/files/checksum.sha256
```
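The verification half of that dual role can be sketched in Python; the data below is an in-memory stand-in, not an actual dataset file:

```python
import hashlib

# Sketch of verifying a recorded SHA-256 digest, as `sha256sum -c` does
# against checksum.sha256. The bytes here are stand-ins for file contents.
def verify(data: bytes, expected_hex: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_hex

recorded = hashlib.sha256(b"file contents").hexdigest()
print(verify(b"file contents", recorded))      # True
print(verify(b"tampered contents", recorded))  # False
```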
## Statistics
After applying all the steps mentioned above, the curated dataset has the following statistics:
| | Statistics on the collected sentences |
|---:|:---|
| Total number of sentences: | 227,404,724 |
| Average number of characters in a sentence: | 101.16 |
| Standard deviation of the number of characters in a sentence: | 88.86 |
| Average number of words in a sentence: | 19.93 |
| Standard deviation of the number of words in a sentence: | 17.54 |
| Average number of characters in a word: | 4.12 |
| Standard deviation of the number of characters in a word: | 1.99 |
|
mlengineer-ai/jomleh
|
[
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10B<n<100B",
"language:fa",
"license:cc0-1.0",
"region:us"
] |
2023-04-06T14:21:34+00:00
|
{"language": ["fa"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "size_categories": ["10B<n<100B"], "source_datasets": ["OSCAR (fa)", "CommonCrawl", "Leipzig", "VOA Persian", "Persian poems corpus", "Web to Corpus", {"TEP": "Tehran English-Persian parallel corpus"}], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Jomleh"}
|
2023-04-23T06:13:07+00:00
|
a94d6a9f5e3e0da162c137f4fb2aa52bda7db399
|
# Dataset Card for "Splits_Subset_HotpotQa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TimoImhof/Splits_Subset_HotpotQa
|
[
"region:us"
] |
2023-04-06T14:26:40+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "split_0", "num_bytes": 630133, "num_examples": 500}, {"name": "split_1", "num_bytes": 627711, "num_examples": 500}, {"name": "split_2", "num_bytes": 623620, "num_examples": 500}, {"name": "split_3", "num_bytes": 617119, "num_examples": 500}, {"name": "split_4", "num_bytes": 630174, "num_examples": 500}, {"name": "split_5", "num_bytes": 629078, "num_examples": 500}, {"name": "split_6", "num_bytes": 628646, "num_examples": 500}, {"name": "split_7", "num_bytes": 635427, "num_examples": 500}, {"name": "split_8", "num_bytes": 627200, "num_examples": 500}, {"name": "split_9", "num_bytes": 616542, "num_examples": 500}, {"name": "split_10", "num_bytes": 617491, "num_examples": 500}, {"name": "split_11", "num_bytes": 635742, "num_examples": 500}, {"name": "no_split", "num_bytes": 7519377, "num_examples": 6000}, {"name": "shortcut", "num_bytes": 7526101, "num_examples": 6000}], "download_size": 12592682, "dataset_size": 22564361}}
|
2023-05-11T09:05:35+00:00
|
926bcf72c8aa2ecd1e0b44edbc2ec22d7e984973
|
# Dataset Card for "Splits_Subset_TriviaQa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TimoImhof/Splits_Subset_TriviaQa
|
[
"region:us"
] |
2023-04-06T14:27:19+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "split_0", "num_bytes": 751653, "num_examples": 500}, {"name": "split_1", "num_bytes": 740490, "num_examples": 500}, {"name": "split_2", "num_bytes": 746131, "num_examples": 500}, {"name": "split_3", "num_bytes": 742024, "num_examples": 500}, {"name": "split_4", "num_bytes": 748022, "num_examples": 500}, {"name": "split_5", "num_bytes": 738794, "num_examples": 500}, {"name": "split_6", "num_bytes": 741090, "num_examples": 500}, {"name": "split_7", "num_bytes": 738081, "num_examples": 500}, {"name": "split_8", "num_bytes": 742030, "num_examples": 500}, {"name": "split_9", "num_bytes": 745126, "num_examples": 500}, {"name": "split_10", "num_bytes": 745942, "num_examples": 500}, {"name": "split_11", "num_bytes": 740459, "num_examples": 500}, {"name": "no_split", "num_bytes": 8919842, "num_examples": 6000}, {"name": "shortcut", "num_bytes": 8964710, "num_examples": 6000}], "download_size": 16948194, "dataset_size": 26804394}}
|
2023-05-11T09:13:23+00:00
|
cf1b60d49c9bf7676fa45d2ad879f2a78c88b13a
|
# Dataset Card for "Splits_Subset_SQuAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TimoImhof/Splits_Subset_SQuAD
|
[
"region:us"
] |
2023-04-06T14:27:43+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "split_0", "num_bytes": 449716, "num_examples": 500}, {"name": "split_1", "num_bytes": 444641, "num_examples": 500}, {"name": "split_2", "num_bytes": 464228, "num_examples": 500}, {"name": "split_3", "num_bytes": 445871, "num_examples": 500}, {"name": "split_4", "num_bytes": 456437, "num_examples": 500}, {"name": "split_5", "num_bytes": 460414, "num_examples": 500}, {"name": "split_6", "num_bytes": 452482, "num_examples": 500}, {"name": "split_7", "num_bytes": 454860, "num_examples": 500}, {"name": "split_8", "num_bytes": 452647, "num_examples": 500}, {"name": "split_9", "num_bytes": 457041, "num_examples": 500}, {"name": "split_10", "num_bytes": 457992, "num_examples": 500}, {"name": "split_11", "num_bytes": 463472, "num_examples": 500}, {"name": "no_split", "num_bytes": 5459801, "num_examples": 6000}, {"name": "shortcut", "num_bytes": 5452074, "num_examples": 6000}], "download_size": 9566317, "dataset_size": 16371676}}
|
2023-05-11T09:05:06+00:00
|
a268b49134b081edfcf6908ea28c827d2cf7905d
|
# Promoters
The [Promoters dataset](https://archive.ics.uci.edu/ml/datasets/Promoters) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------|
| promoters | Binary classification | Is this DNA string a promoter? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/promoters")["train"]
```
|
mstz/promoters
|
[
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"promoters",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-04-06T14:47:50+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Promoters", "tags": ["promoters", "tabular_classification", "binary_classification", "UCI"], "configs": ["promoters"]}
|
2023-04-16T16:58:13+00:00
|
1b0008072d8583c56f711d530c809e2ce0c6ef87
|
# Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ytiriyar/sv_corpora_parliament_processed
|
[
"region:us"
] |
2023-04-06T15:04:58+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 292351437, "num_examples": 1892723}], "download_size": 0, "dataset_size": 292351437}}
|
2023-04-06T17:47:56+00:00
|
4a5b1c2af87435cab835824e658bd3b355718a4a
|
# Northwind Shipping Orders and Related Documents
This dataset contains a collection of Shipping Orders and related documents from the Northwind database, a sample database used by Microsoft for demonstrating database functionalities.
The Shipping Orders include information about the ship name, address, region, postal code, country, customer, employee, shipped date, product names, quantities, unit prices, and total prices. The related documents include shipping documents and stock documents.
This dataset was created by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/) for the purpose of classifying company documents. It can be used for document classification and other NLP tasks.
## Northwind Shipping Orders Dataset
# Overview
The Northwind Shipping Orders dataset contains data on shipping orders from the Northwind database. It includes information such as order ID, shipping details, customer information, employee information, shipper information, order dates, and product information.
# Dataset Details
The dataset is stored as PDF files and contains the following columns:
OrderID: Unique identifier for each order
Ship Name: Name of the person or company that received the order
Ship Address: Address where the order was shipped
Ship City: City where the order was shipped
Ship Region: Region where the order was shipped
Ship Postal Code: Postal code of the shipping address
Ship Country: Country where the order was shipped
Customer ID: Unique identifier for the customer who placed the order
Customer Name: Name of the customer who placed the order
Employee Name: Name of the employee who processed the order
Shipper ID: Unique identifier for the shipping company
Shipper Name: Name of the shipping company
Order Date: Date the order was placed
Shipped Date: Date the order was shipped
Product Name: Name of the product
Quantity: Number of units of the product ordered
Unit Price: Price per unit of the product
Total Price: Total price of the order
# Usage
This dataset can be used for a variety of purposes, such as:
Analyzing sales and order trends
Identifying popular products
Identifying popular shipping companies
Analyzing customer behavior
Predicting future sales and trends
# Acknowledgements
This dataset was sourced from the Northwind database, which is a sample database used by Microsoft for educational purposes.
|
AyoubChLin/northwind_Shipping_orders
|
[
"task_categories:feature-extraction",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"finance",
"region:us"
] |
2023-04-06T15:07:12+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["feature-extraction", "text-classification"], "pretty_name": "PDF northwind", "tags": ["finance"]}
|
2023-04-06T15:14:34+00:00
|
f48ea381cb3ae21178d8a573bb1a8995d4ab6b11
|
# CORD 19
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/allen-institute-for-ai/CORD-19-research-challenge
### Dataset Summary
In response to the COVID-19 pandemic, the White House and a coalition of leading research groups have prepared the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a resource of over 1,000,000 scholarly articles, including over 400,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. This freely available dataset is provided to the global research community to apply recent advances in natural language processing and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease.
This is a processed version of the dataset, where we removed some empty entries and formatted it to be compatible with Alpaca training. For more details on the data, please refer to the original publication.
### Citation Information
```
@inproceedings{wang-etal-2020-cord,
title = "{CORD-19}: The {COVID-19} Open Research Dataset",
author = "Wang, Lucy Lu and Lo, Kyle and Chandrasekhar, Yoganand and Reas, Russell and Yang, Jiangjiang and Burdick, Doug and Eide, Darrin and Funk, Kathryn and Katsis, Yannis and Kinney, Rodney Michael and Li, Yunyao and Liu, Ziyang and Merrill, William and Mooney, Paul and Murdick, Dewey A. and Rishi, Devvret and Sheehan, Jerry and Shen, Zhihong and Stilson, Brandon and Wade, Alex D. and Wang, Kuansan and Wang, Nancy Xin Ru and Wilhelm, Christopher and Xie, Boya and Raymond, Douglas M. and Weld, Daniel S. and Etzioni, Oren and Kohlmeier, Sebastian",
booktitle = "Proceedings of the 1st Workshop on {NLP} for {COVID-19} at {ACL} 2020",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.nlpcovid19-acl.1"
}
```
|
medalpaca/medical_meadow_cord19
|
[
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] |
2023-04-06T15:24:06+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["summarization"]}
|
2023-04-06T15:47:03+00:00
|
861ce9d15bac7ae8c4c5527a3acdd93a3f7b6761
|
# Dataset Card for "alpaca-es-hackaton-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mserras/alpaca-es-hackaton-test
|
[
"region:us"
] |
2023-04-06T15:24:54+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "en_index", "dtype": "int64"}, {"name": "sf-unprocessable-score", "dtype": "float64"}, {"name": "tr-flag-1-instruction", "dtype": "bool"}, {"name": "tr-flag-2-input", "dtype": "bool"}, {"name": "tr-flag-3-output", "dtype": "bool"}]}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 984283413, "num_examples": 51942}], "download_size": 652179041, "dataset_size": 984283413}}
|
2023-04-06T15:26:18+00:00
|
3e463817c02767cd64d9ad0276c6d291c7f120aa
|
# Dataset Card for "stack-exchange-instruction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ArmelR/stack-exchange-instruction
|
[
"region:us"
] |
2023-04-06T15:31:58+00:00
|
{"pretty_name": "stack exchange instruction"}
|
2023-05-26T07:37:42+00:00
|
bb29545aed53e31011ce124f2ae65cf21e6b20c4
|
This dataset was collected from Wikipedia: https://hu.wikipedia.org/wiki/Magyarorsz%C3%A1gon_anyak%C3%B6nyvezhet%C5%91_ut%C3%B3nevek_list%C3%A1ja
|
AlhitawiMohammed22/HungarianNames
|
[
"task_categories:text-generation",
"task_categories:feature-extraction",
"size_categories:n<1K",
"language:hu",
"doi:10.57967/hf/0595",
"region:us"
] |
2023-04-06T15:35:50+00:00
|
{"language": ["hu"], "size_categories": ["n<1K"], "task_categories": ["text-generation", "feature-extraction"]}
|
2023-05-19T10:06:03+00:00
|
d92798159eb6ebd9232331d42f17d1c933df1bc4
|
# Monks
The [Monk dataset](https://archive-beta.ics.uci.edu/dataset/70/monk+s+problems) from UCI.
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| monks1 | Binary classification |
| monks2 | Binary classification |
| monks3 | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/monks", "monks1")["train"]
```
|
mstz/monks
|
[
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"student performance",
"tabular_classification",
"UCI",
"region:us"
] |
2023-04-06T15:43:42+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Monk", "tags": ["student performance", "tabular_classification", "UCI"], "configs": ["monks1", "monks2", "monks3"]}
|
2023-04-16T16:34:32+00:00
|
f76bc618ed2a3c56e7487875a7c12c249bf08c31
|
# Health Advice
## Dataset Description
- **Paper:** https://experts.syr.edu/en/publications/detecting-causal-language-use-in-science-findings
### Dataset Summary
This is the dataset used in the paper: Detecting Causal Language Use in Science Findings.
It was cleaned and formatted to fit the Alpaca template.
### Citation Information
```
@inproceedings{yu-etal-2019-detecting,
title = "Detecting Causal Language Use in Science Findings",
author = "Yu, Bei and
Li, Yingya and
Wang, Jun",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1473",
doi = "10.18653/v1/D19-1473",
pages = "4664--4674",
}
```
|
medalpaca/medical_meadow_health_advice
|
[
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"region:us"
] |
2023-04-06T15:47:45+00:00
|
{"language": ["en"], "task_categories": ["question-answering", "text-classification"]}
|
2023-04-06T15:51:22+00:00
|
73472b260aa9c3216de8fd1c6a9e2e64a9a0b984
|
# MediQA
## Dataset Description
MEDIQA is a dataset of manually generated, question-driven summaries of multi and single document answers to consumer health questions.
- **Homepage:** https://osf.io/fyg46/?view_only=
### Citation Information
```
@article{savery2020question,
title={Question-driven summarization of answers to consumer health questions},
author={Savery, Max and Abacha, Asma Ben and Gayen, Soumya and Demner-Fushman, Dina},
journal={Scientific Data},
volume={7},
number={1},
pages={322},
year={2020},
publisher={Nature Publishing Group UK London}
}
```
|
medalpaca/medical_meadow_mediqa
|
[
"task_categories:question-answering",
"language:en",
"region:us"
] |
2023-04-06T15:51:50+00:00
|
{"language": ["en"], "task_categories": ["question-answering"]}
|
2023-04-16T15:30:36+00:00
|
c1b14eb49d9208bac653410f2badd64b2a9fa238
|
# Dataset Card for MedQA
## Dataset Description
- **Paper:**
### Dataset Summary
This is the data and baseline source code for the paper: Jin, Di, et al. "What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams."
From https://github.com/jind11/MedQA:
>The data that contains both the QAs and textbooks can be downloaded from [this google drive folder](https://drive.google.com/file/d/1ImYUSLk9JbgHXOemfvyiDiirluZHPeQw/view?usp=sharing). A bit of details of data are explained as below:
>
> For QAs, we have three sources: US, Mainland of China, and Taiwan District, which are put in folders, respectively. All files for QAs are in jsonl file format, where each line is a data sample as a dict. The "XX_qbank.jsonl" files contain all data samples while we also provide an official random split into train, dev, and test sets. Those files in the "metamap" folders are extracted medical related phrases using the Metamap tool.
>
> For QAs, we also include the "4_options" version in for US and Mainland of China since we reported results for 4 options in the paper.
>
> For textbooks, we have two languages: English and simplified Chinese. For simplified Chinese, we provide two kinds of sentence spliting: one is split by sentences, and the other is split by paragraphs.
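A minimal sketch of reading the jsonl layout described in the quote above, where each line is one data sample as a dict (the function name is our own; the file path would be one of the "XX_qbank.jsonl" files):

```python
import json

def read_jsonl(path):
    """Read a jsonl file where each non-empty line is one sample as a dict."""
    samples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                samples.append(json.loads(line))
    return samples
```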
### Citation Information
```
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
}
```
|
medalpaca/medical_meadow_medqa
|
[
"task_categories:question-answering",
"language:en",
"language:zh",
"medical",
"region:us"
] |
2023-04-06T15:56:15+00:00
|
{"language": ["en", "zh"], "task_categories": ["question-answering"], "tags": ["medical"]}
|
2023-04-06T15:59:02+00:00
|
8a80e781d33544d22b95d26a8d68cafbe8a6470e
|
# Dataset Card for Pubmed Causal
## Dataset Description
- **Paper:** https://aclanthology.org/D19-1473/
### Dataset Summary
This is the dataset used in the paper: Detecting Causal Language Use in Science Findings.
### Citation Information
```
@inproceedings{yu-etal-2019-detecting,
title = "Detecting Causal Language Use in Science Findings",
author = "Yu, Bei and
Li, Yingya and
Wang, Jun",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1473",
doi = "10.18653/v1/D19-1473",
pages = "4664--4674",
}
```
|
medalpaca/medical_meadow_pubmed_causal
|
[
"task_categories:question-answering",
"language:en",
"region:us"
] |
2023-04-06T15:59:22+00:00
|
{"language": ["en"], "task_categories": ["question-answering"]}
|
2023-04-06T16:01:00+00:00
|
976bce16fd66f55e6828422b71249a7405a11aa4
|
# Dataset Card for WikiDoc
For the dataset containing patient information from wikidoc refer to [this dataset](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information)
## Dataset Description
- **Source:** https://www.wikidoc.org/index.php/Main_Page
- **Repository:** https://github.com/kbressem/medalpaca
- **Paper:** TBA
### Dataset Summary
This dataset contains medical question-answer pairs extracted from [WikiDoc](https://www.wikidoc.org/index.php/Main_Page),
a collaborative platform for medical professionals to share and contribute to up-to-date medical knowledge.
The platform has two main subsites, the "Living Textbook" and "Patient Information". The "Living Textbook"
contains chapters for various medical specialties, which we crawled. We then used GPT-3.5-Turbo to rephrase
the paragraph headings into questions and used the paragraphs as answers. "Patient Information" is structured differently,
in that each section subheading is already a question, making rephrasing obsolete.
**Note:** This dataset is still a WIP. While the Q/A pairs from the patient information seem to be mostly correct,
the conversion using GPT-3.5-Turbo yielded unsatisfactory results in approximately 30% of cases. We are in the process of cleaning this dataset.
### Citation Information
TBA
|
medalpaca/medical_meadow_wikidoc
|
[
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] |
2023-04-06T16:01:20+00:00
|
{"language": ["en"], "license": "cc", "task_categories": ["question-answering"]}
|
2023-04-06T16:05:18+00:00
|
2b4a6c47feeea572a3f8a8d68ff4dce93feb8bee
|
# Dataset Card for WikiDoc
For the dataset containing rephrased content from the living textbook refer to [this dataset](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc)
## Dataset Description
- **Source:** https://www.wikidoc.org/index.php/Main_Page
- **Repository:** https://github.com/kbressem/medalpaca
- **Paper:** TBA
### Dataset Summary
This dataset contains medical question-answer pairs extracted from [WikiDoc](https://www.wikidoc.org/index.php/Main_Page),
a collaborative platform for medical professionals to share and contribute to up-to-date medical knowledge.
The platform has two main subsites, the "Living Textbook" and "Patient Information". The "Living Textbook"
contains chapters for various medical specialties, which we crawled. We then used GPT-3.5-Turbo to rephrase
the paragraph headings into questions and used the paragraphs as answers. "Patient Information" is structured differently,
in that each section subheading is already a question, making rephrasing obsolete.
**Note:** This dataset is still a WIP. While the Q/A pairs from the patient information seem to be mostly correct,
the conversion using GPT-3.5-Turbo yielded unsatisfactory results in approximately 30% of cases. We are in the process of cleaning this dataset.
### Citation Information
TBA
|
medalpaca/medical_meadow_wikidoc_patient_information
|
[
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] |
2023-04-06T16:05:50+00:00
|
{"language": ["en"], "license": "cc", "task_categories": ["question-answering"]}
|
2023-04-06T16:08:53+00:00
|
7597b32036d67c731cb91bae4f49717fcfe5d5f0
|
# Dataset Card for Medical Flashcards
## Dataset Description
- **Repository:** https://github.com/kbressem/medalpaca
- **Paper:** TBA
### Dataset Summary
Medicine as a whole encompasses a wide range of subjects that medical students and graduates must master
in order to practice effectively. This includes a deep understanding of basic medical sciences, clinical knowledge,
and clinical skills. The Anki Medical Curriculum flashcards are created and updated by medical students and cover the
entirety of this curriculum, addressing subjects such as anatomy, physiology, pathology, pharmacology, and more.
These flashcards frequently feature succinct summaries and mnemonics to aid in learning and retention of vital medical concepts.
In our study, we employed the flashcards as a resource for generating question-answer pairs for training purposes.
After removing cards that contained images, we utilized OpenAI's GPT-3.5-turbo to rephrase the cards into coherent,
contextually relevant question-answer pairs. In general, the questions and answers are short and focused, as the flashcard
format does not allow for much additional information.
### Citation Information
TBA
|
medalpaca/medical_meadow_medical_flashcards
|
[
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] |
2023-04-06T16:09:17+00:00
|
{"language": ["en"], "license": "cc", "task_categories": ["question-answering"]}
|
2023-04-06T16:12:17+00:00
|
60adee206c73c3befa6799d3687ec001261819e1
|
# Dataset Card for "eval_harness_vs_multipl-e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/eval_harness_vs_multipl-e
|
[
"region:us"
] |
2023-04-06T16:30:13+00:00
|
{"dataset_info": {"features": [{"name": "eval_harness", "sequence": "string"}, {"name": "multipl-e", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 53177250, "num_examples": 161}], "download_size": 9998845, "dataset_size": 53177250}}
|
2023-04-06T16:30:16+00:00
|
e5457dbfe414a749a1883de981967a3105dfa4de
|
# Mushroom
The [Mushroom dataset](https://archive.ics.uci.edu/ml/datasets/Mushroom) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------|
| mushroom | Binary classification | Is the mushroom poisonous?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/mushroom")["train"]
```
|
mstz/mushroom
|
[
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"mushroom",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-04-06T16:42:03+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["1K<n<10K"], "task_categories": ["tabular-classification"], "pretty_name": "Mushroom", "tags": ["mushroom", "tabular_classification", "binary_classification", "UCI"], "configs": ["mushroom"]}
|
2023-04-16T16:34:40+00:00
|
0227dd5ed204e3f99ff34e48d2c684b753f10467
|
# Dataset Card for "ts-training"
This is a subset of the TypeScript portion of [The Stack (dedup)](https://huggingface.co/datasets/bigcode/the-stack-dedup), uploaded to the Hugging Face Hub for convenience.
Files with dates _after_ the December 31, 2021 cutoff are excluded from this dataset, since we are using those files for evaluation. Therefore, the remaining files (in this dataset) are available for training.
A file is considered to be after the cutoff if all of `max_{stars|forks|issues}_repo_{stars|forks|issues}_event_min_datetime` (i.e., the first timestamp for a `{stars|forks|issues}` event) are after the cutoff. Otherwise (or if all timestamps are missing), the file is included in this dataset.
## Versions
The default version (`main`) is currently `v1.1`.
|Version|Description|
|-|-|
|`v1.1` | Original version of the training dataset, based on v1.1 of the Stack. Applies the training cutoff (December 31, 2021). Used to train OpenTau. |
|`v1.1full` | Training dataset based on v1.1 of the Stack. Does not apply the training cutoff (December 31, 2021), but applies a filter to remove files that do not parse as valid TypeScript. |
|`v1.1p1` | Revision of v1.1. Applies a filter to remove files that do not parse as valid TypeScript. |
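The cutoff rule described above can be sketched as a predicate over a dataset row. This is a paraphrase of the rule in this card, not the official filtering script; the field names follow the dataset schema, and the timestamp format is assumed to be the ISO-8601 strings used in the Stack:

```python
from datetime import datetime, timezone

CUTOFF = datetime(2021, 12, 31, tzinfo=timezone.utc)

def _parse(ts):
    # Timestamps look like "2021-05-01T12:00:00.000Z"; missing values are None.
    if ts is None:
        return None
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def is_after_cutoff(row):
    """A file counts as 'after the cutoff' only if every available first-event
    timestamp is after the cutoff; files with no timestamps at all are kept
    in the training set (i.e., they are not 'after the cutoff')."""
    keys = [
        "max_stars_repo_stars_event_min_datetime",
        "max_issues_repo_issues_event_min_datetime",
        "max_forks_repo_forks_event_min_datetime",
    ]
    stamps = [_parse(row.get(k)) for k in keys]
    stamps = [s for s in stamps if s is not None]
    if not stamps:
        return False  # all timestamps missing -> included in this dataset
    return all(s > CUTOFF for s in stamps)
```

Rows for which `is_after_cutoff` returns `True` are the ones excluded here and reserved for evaluation.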
|
nuprl/ts-training
|
[
"region:us"
] |
2023-04-06T16:42:26+00:00
|
{"dataset_info": {"features": [{"name": "hexsha", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "ext", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_repo_head_hexsha", "dtype": "string"}, {"name": "max_stars_repo_licenses", "sequence": "string"}, {"name": "max_stars_count", "dtype": "float64"}, {"name": "max_stars_repo_stars_event_min_datetime", "dtype": "string"}, {"name": "max_stars_repo_stars_event_max_datetime", "dtype": "string"}, {"name": "max_issues_repo_path", "dtype": "string"}, {"name": "max_issues_repo_name", "dtype": "string"}, {"name": "max_issues_repo_head_hexsha", "dtype": "string"}, {"name": "max_issues_repo_licenses", "sequence": "string"}, {"name": "max_issues_count", "dtype": "float64"}, {"name": "max_issues_repo_issues_event_min_datetime", "dtype": "string"}, {"name": "max_issues_repo_issues_event_max_datetime", "dtype": "string"}, {"name": "max_forks_repo_path", "dtype": "string"}, {"name": "max_forks_repo_name", "dtype": "string"}, {"name": "max_forks_repo_head_hexsha", "dtype": "string"}, {"name": "max_forks_repo_licenses", "sequence": "string"}, {"name": "max_forks_count", "dtype": "float64"}, {"name": "max_forks_repo_forks_event_min_datetime", "dtype": "string"}, {"name": "max_forks_repo_forks_event_max_datetime", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "avg_line_length", "dtype": "float64"}, {"name": "max_line_length", "dtype": "int64"}, {"name": "alphanum_fraction", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 42270977435, "num_examples": 12133148}], "download_size": 17360072228, "dataset_size": 42270977435}, "extra_gated_prompt": "## Terms of Use for The Stack\nThe Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:\n1. 
The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.\n2. The Stack is regularly updated to enact validated data removal requests. By clicking on \"Access repository\", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset\u2019s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.\n3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.\n\nBy clicking on \"Access repository\" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.", "extra_gated_fields": {"Email": "text", "I have read the License and agree with its terms": "checkbox"}}
|
2023-05-23T18:34:07+00:00
|
1597f41354148dac50aa15b7d25c8d31aa2e9d2c
|
# Dataset Card for "VALUE_rte_dey_it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_rte_dey_it
|
[
"region:us"
] |
2023-04-06T17:48:57+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "dev", "num_bytes": 4759, "num_examples": 12}, {"name": "test", "num_bytes": 47590, "num_examples": 117}, {"name": "train", "num_bytes": 59365, "num_examples": 125}], "download_size": 6768, "dataset_size": 111714}}
|
2023-04-09T00:26:32+00:00
|
dce75511f0df396c15043332d0fb29956fa4c3c5
|
# Dataset Card for "VALUE_rte_got"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_rte_got
|
[
"region:us"
] |
2023-04-06T17:49:14+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "dev", "num_bytes": 4498, "num_examples": 12}, {"name": "test", "num_bytes": 60760, "num_examples": 141}, {"name": "train", "num_bytes": 65507, "num_examples": 148}], "download_size": 8240, "dataset_size": 130765}}
|
2023-04-09T00:26:43+00:00
|
04f73d533d3c192bcc7cb32505d5a83dc0613ac8
|
# Dataset Card for "VALUE_rte_drop_aux"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_rte_drop_aux
|
[
"region:us"
] |
2023-04-06T17:49:16+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "dev", "num_bytes": 28610, "num_examples": 72}, {"name": "test", "num_bytes": 341914, "num_examples": 859}, {"name": "train", "num_bytes": 313116, "num_examples": 763}], "download_size": 25441, "dataset_size": 683640}}
|
2023-04-09T00:26:37+00:00
|
eb155daa8de8e4c1bdd72bf0513c22aa1c587cdb
|
# Dataset Card for "VALUE_rte_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_rte_negative_concord
|
[
"region:us"
] |
2023-04-06T17:49:17+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "dev", "num_bytes": 6788, "num_examples": 12}, {"name": "test", "num_bytes": 81330, "num_examples": 164}, {"name": "train", "num_bytes": 76553, "num_examples": 149}], "download_size": 11963, "dataset_size": 164671}}
|
2023-04-09T00:26:55+00:00
|
613c5d5978a6e9791749e3b517bb68df66835601
|
# Dataset Card for "VALUE_rte_been_done"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_rte_been_done
|
[
"region:us"
] |
2023-04-06T17:49:22+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "dev", "num_bytes": 44547, "num_examples": 105}, {"name": "test", "num_bytes": 492663, "num_examples": 1175}, {"name": "train", "num_bytes": 438233, "num_examples": 990}], "download_size": 637646, "dataset_size": 975443}}
|
2023-04-09T00:26:54+00:00
|
b232c7d11c26ba80b3b6f4e6c7f6becbe18c0483
|
# Dataset Card for "VALUE_rte_null_genetive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_rte_null_genetive
|
[
"region:us"
] |
2023-04-06T17:49:26+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "dev", "num_bytes": 34867, "num_examples": 86}, {"name": "test", "num_bytes": 379955, "num_examples": 919}, {"name": "train", "num_bytes": 355576, "num_examples": 850}], "download_size": 30976, "dataset_size": 770398}}
|
2023-04-09T00:26:56+00:00
|
fb98dece6f24f3b89e8e738e4e8f6e098869ee42
|
# Dataset Card for "MULTI_VALUE_rte_regularized_reflexives"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_regularized_reflexives
|
[
"region:us"
] |
2023-04-06T17:51:42+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 14223, "num_examples": 30}, {"name": "train", "num_bytes": 15681, "num_examples": 34}], "download_size": 30171, "dataset_size": 29904}}
|
2023-04-06T17:51:46+00:00
|
ef474ae0365af4a3cfb4d02a42cf832660c86cd5
|
# Dataset Card for "MULTI_VALUE_rte_preposition_chopping"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_preposition_chopping
|
[
"region:us"
] |
2023-04-06T17:51:45+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 977, "num_examples": 4}, {"name": "train", "num_bytes": 1210, "num_examples": 5}], "download_size": 7922, "dataset_size": 2187}}
|
2023-04-06T17:51:49+00:00
|
aace4a1d4c56948aefdf417a2b2d4c5f741662dd
|
# Dataset Card for "MULTI_VALUE_rte_shadow_pronouns"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_shadow_pronouns
|
[
"region:us"
] |
2023-04-06T17:51:45+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 157300, "num_examples": 330}, {"name": "train", "num_bytes": 132738, "num_examples": 283}], "download_size": 195666, "dataset_size": 290038}}
|
2023-04-06T17:51:49+00:00
|
8aa15b6e756a74b600cddd52197170af0edba968
|
# Dataset Card for "MULTI_VALUE_rte_generalized_third_person_s"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_generalized_third_person_s
|
[
"region:us"
] |
2023-04-06T17:51:46+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 289136, "num_examples": 701}, {"name": "train", "num_bytes": 275487, "num_examples": 628}], "download_size": 372900, "dataset_size": 564623}}
|
2023-04-06T17:51:50+00:00
|
01b73e9396ed45799a5e0e6524ca8d90ab4632f6
|
# Dataset Card for "MULTI_VALUE_rte_drop_aux_yn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_drop_aux_yn
|
[
"region:us"
] |
2023-04-06T17:51:46+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 10968, "num_examples": 25}, {"name": "train", "num_bytes": 7291, "num_examples": 17}], "download_size": 23708, "dataset_size": 18259}}
|
2023-04-06T17:51:49+00:00
|
d35f6daed6e597bb02c5802d5fcd23b5c218d8e8
|
# Dataset Card for "MULTI_VALUE_rte_indefinite_for_zero"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_indefinite_for_zero
|
[
"region:us"
] |
2023-04-06T17:51:46+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1020772, "num_examples": 2898}, {"name": "train", "num_bytes": 877252, "num_examples": 2380}], "download_size": 1205787, "dataset_size": 1898024}}
|
2023-04-06T17:51:50+00:00
|
b56b2e0b1dc94b3bbdc4387241a58fd95b5e84da
|
# Dataset Card for "MULTI_VALUE_rte_serial_verb_go"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_serial_verb_go
|
[
"region:us"
] |
2023-04-06T17:51:47+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 82654, "num_examples": 171}, {"name": "train", "num_bytes": 81323, "num_examples": 174}], "download_size": 116727, "dataset_size": 163977}}
|
2023-04-06T17:51:50+00:00
|
e37a72472c31d06f101bafd6497aad6a1af68129
|
# Dataset Card for "MULTI_VALUE_rte_corr_conjunction_doubling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_corr_conjunction_doubling
|
[
"region:us"
] |
2023-04-06T17:51:48+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 154422, "num_examples": 328}, {"name": "train", "num_bytes": 143934, "num_examples": 283}], "download_size": 202310, "dataset_size": 298356}}
|
2023-04-06T17:51:52+00:00
|
c72ee96d08655252c46c28eac01b4d90f6df3989
|
# Dataset Card for "MULTI_VALUE_rte_null_genitive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_null_genitive
|
[
"region:us"
] |
2023-04-06T17:51:48+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 368405, "num_examples": 898}, {"name": "train", "num_bytes": 340967, "num_examples": 822}], "download_size": 456509, "dataset_size": 709372}}
|
2023-04-06T17:51:52+00:00
|
09c6b898f056d051855c1a941529ef19fae116ed
|
# Dataset Card for "MULTI_VALUE_rte_simple_past_for_present_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_simple_past_for_present_perfect
|
[
"region:us"
] |
2023-04-06T17:51:49+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 267619, "num_examples": 623}, {"name": "train", "num_bytes": 231640, "num_examples": 497}], "download_size": 327349, "dataset_size": 499259}}
|
2023-04-06T17:51:52+00:00
|
8f314993841ad378ecb203756b35df56dba0a64c
|
# Dataset Card for "MULTI_VALUE_rte_chaining_main_verbs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_chaining_main_verbs
|
[
"region:us"
] |
2023-04-06T17:51:50+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 13509, "num_examples": 30}, {"name": "train", "num_bytes": 18902, "num_examples": 37}], "download_size": 30925, "dataset_size": 32411}}
|
2023-04-06T17:51:53+00:00
|
60ef5b5bdb0b52b59be05d8f2ff8f530e38101a5
|
# Dataset Card for "MULTI_VALUE_rte_analytic_superlative"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_analytic_superlative
|
[
"region:us"
] |
2023-04-06T17:51:50+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 101226, "num_examples": 242}, {"name": "train", "num_bytes": 83833, "num_examples": 200}], "download_size": 127296, "dataset_size": 185059}}
|
2023-04-06T17:51:54+00:00
|
87c3c32a956e9a6a634ccf2653695659f917614e
|
# Dataset Card for "MULTI_VALUE_rte_possessives_for_post"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_possessives_for_post
|
[
"region:us"
] |
2023-04-06T17:51:52+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 617408, "num_examples": 1492}, {"name": "train", "num_bytes": 554538, "num_examples": 1311}], "download_size": 754396, "dataset_size": 1171946}}
|
2023-04-06T17:51:56+00:00
|
bead4ce4964c1d07bd19be4d7646ba878d0572ad
|
# Dataset Card for "MULTI_VALUE_rte_drop_aux_have"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_drop_aux_have
|
[
"region:us"
] |
2023-04-06T17:51:53+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 376004, "num_examples": 896}, {"name": "train", "num_bytes": 319949, "num_examples": 718}], "download_size": 451775, "dataset_size": 695953}}
|
2023-04-06T17:51:56+00:00
|
2f06ae0d43b402dab4377f46093192efc6ca356a
|
# Dataset Card for "MULTI_VALUE_rte_participle_past_tense"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_participle_past_tense
|
[
"region:us"
] |
2023-04-06T17:51:53+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 199017, "num_examples": 488}, {"name": "train", "num_bytes": 155512, "num_examples": 361}], "download_size": 235435, "dataset_size": 354529}}
|
2023-04-06T17:51:56+00:00
|
75e96697515be0df35ac75b82a63b2195ecb6554
|
# Dataset Card for "MULTI_VALUE_rte_bare_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_bare_perfect
|
[
"region:us"
] |
2023-04-06T17:51:54+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 638983, "num_examples": 1605}, {"name": "train", "num_bytes": 566268, "num_examples": 1378}], "download_size": 783592, "dataset_size": 1205251}}
|
2023-04-06T17:51:58+00:00
|
4dd9a81749267d23bb1c018e1f2121359c9f7af6
|
# Dataset Card for "MULTI_VALUE_rte_relativizer_where"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_relativizer_where
|
[
"region:us"
] |
2023-04-06T17:51:55+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 225959, "num_examples": 491}, {"name": "train", "num_bytes": 212749, "num_examples": 457}], "download_size": 289780, "dataset_size": 438708}}
|
2023-04-06T17:51:59+00:00
|
2c56083e5e99fbaf3951d27d5f15e7695f4e5061
|
# Dataset Card for "MULTI_VALUE_rte_remove_det_indefinite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_remove_det_indefinite
|
[
"region:us"
] |
2023-04-06T17:51:56+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 564445, "num_examples": 1401}, {"name": "train", "num_bytes": 492437, "num_examples": 1192}], "download_size": 693006, "dataset_size": 1056882}}
|
2023-04-06T17:51:59+00:00
|
ddf852080ecd2a9ef121d0e0ae043a375ec35151
|
# Dataset Card for "MULTI_VALUE_rte_demonstrative_no_number"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_demonstrative_no_number
|
[
"region:us"
] |
2023-04-06T17:51:56+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 60455, "num_examples": 116}, {"name": "train", "num_bytes": 55337, "num_examples": 103}], "download_size": 86937, "dataset_size": 115792}}
|
2023-04-06T17:52:00+00:00
|
7d7a7165fd43f4e35435cfe9b31808a0510c8b55
|
# Dataset Card for "MULTI_VALUE_rte_proximal_distal_demonstratives"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_proximal_distal_demonstratives
|
[
"region:us"
] |
2023-04-06T17:51:57+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 144054, "num_examples": 298}, {"name": "train", "num_bytes": 140356, "num_examples": 279}], "download_size": 194080, "dataset_size": 284410}}
|
2023-04-06T17:52:00+00:00
|
23722eaf1a1e192e8bf0f0dbe2dc5ad1518851da
|
# Dataset Card for "MULTI_VALUE_rte_who_what"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_who_what
|
[
"region:us"
] |
2023-04-06T17:51:57+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 339500, "num_examples": 755}, {"name": "train", "num_bytes": 301036, "num_examples": 660}], "download_size": 416577, "dataset_size": 640536}}
|
2023-04-06T17:52:01+00:00
|
5ad22556674a245ffc4df221034b4361d01f4b21
|
# Dataset Card for "MULTI_VALUE_rte_fixin_future"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_fixin_future
|
[
"region:us"
] |
2023-04-06T17:51:58+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 112582, "num_examples": 242}, {"name": "train", "num_bytes": 95330, "num_examples": 203}], "download_size": 140715, "dataset_size": 207912}}
|
2023-04-06T17:52:01+00:00
|
8fd047e44f161628f796f973673e3e294ee37851
|
# Dataset Card for "MULTI_VALUE_rte_double_past"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_double_past
|
[
"region:us"
] |
2023-04-06T17:52:00+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 171024, "num_examples": 359}, {"name": "train", "num_bytes": 134062, "num_examples": 282}], "download_size": 204183, "dataset_size": 305086}}
|
2023-04-06T17:52:03+00:00
|
c4dffd38991c70337aa7505c21e7e50800a30ef0
|
# Dataset Card for "MULTI_VALUE_rte_it_dobj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_it_dobj
|
[
"region:us"
] |
2023-04-06T17:52:00+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 56744, "num_examples": 115}, {"name": "train", "num_bytes": 53956, "num_examples": 112}], "download_size": 85755, "dataset_size": 110700}}
|
2023-04-06T17:52:03+00:00
|
3834c011222e9b268703e34171020407b23a2411
|
# Dataset Card for "MULTI_VALUE_rte_drop_copula_be_locative"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_drop_copula_be_locative
|
[
"region:us"
] |
2023-04-06T17:52:00+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 65049, "num_examples": 134}, {"name": "train", "num_bytes": 63855, "num_examples": 156}], "download_size": 92353, "dataset_size": 128904}}
|
2023-04-06T17:52:03+00:00
|
dbd75947c484f32f0172c38451c17692347f3946
|
# Dataset Card for "MULTI_VALUE_rte_never_negator"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_never_negator
|
[
"region:us"
] |
2023-04-06T17:52:00+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 26205, "num_examples": 55}, {"name": "train", "num_bytes": 18752, "num_examples": 37}], "download_size": 40224, "dataset_size": 44957}}
|
2023-04-06T17:52:03+00:00
|
fa5662c2d207b6fcb9f933e7a8c566b9e769ae43
|
# Dataset Card for "MULTI_VALUE_rte_emphatic_reflex"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_emphatic_reflex
|
[
"region:us"
] |
2023-04-06T17:52:02+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 8293, "num_examples": 18}, {"name": "train", "num_bytes": 6961, "num_examples": 17}], "download_size": 17891, "dataset_size": 15254}}
|
2023-04-06T17:52:05+00:00
|
a1c8fcffa01ca7c1c5966909d017c7e7c8c18c8a
|
# Dataset Card for "MULTI_VALUE_rte_drop_copula_be_AP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_drop_copula_be_AP
|
[
"region:us"
] |
2023-04-06T17:52:02+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 68851, "num_examples": 161}, {"name": "train", "num_bytes": 72469, "num_examples": 161}], "download_size": 101656, "dataset_size": 141320}}
|
2023-04-06T17:52:05+00:00
|
cb17cc4c819501cfec0b58308d329b79b5a3ed27
|
# Dataset Card for "MULTI_VALUE_rte_regularized_plurals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_regularized_plurals
|
[
"region:us"
] |
2023-04-06T17:52:04+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 255574, "num_examples": 570}, {"name": "train", "num_bytes": 228228, "num_examples": 515}], "download_size": 316182, "dataset_size": 483802}}
|
2023-04-06T17:52:08+00:00
|
bbb3f67ee6fc2d46b5d0f51a82fffa1d9d19ff3e
|
# Dataset Card for "MULTI_VALUE_rte_aint_be"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_aint_be
|
[
"region:us"
] |
2023-04-06T17:52:04+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 84396, "num_examples": 160}, {"name": "train", "num_bytes": 66026, "num_examples": 124}], "download_size": 111407, "dataset_size": 150422}}
|
2023-04-06T17:52:07+00:00
|
954ee57bd3f99af51cd5cd798b76c03640822cef
|
# Dataset Card for "MULTI_VALUE_rte_anaphoric_it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_anaphoric_it
|
[
"region:us"
] |
2023-04-06T17:52:06+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 81072, "num_examples": 157}, {"name": "train", "num_bytes": 78377, "num_examples": 162}], "download_size": 113524, "dataset_size": 159449}}
|
2023-04-06T17:52:09+00:00
|
c1eff2ca13354dcde08947de57b8c070b9a154db
|
# Dataset Card for "MULTI_VALUE_rte_drop_aux_be_progressive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_drop_aux_be_progressive
|
[
"region:us"
] |
2023-04-06T17:52:06+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 97837, "num_examples": 211}, {"name": "train", "num_bytes": 78434, "num_examples": 169}], "download_size": 125103, "dataset_size": 176271}}
|
2023-04-06T17:52:09+00:00
|
529c4ac60e2eb620ea517a657f483cc9bdf46bfe
|
# Dataset Card for "MULTI_VALUE_rte_come_future"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_come_future
|
[
"region:us"
] |
2023-04-06T17:52:07+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 112888, "num_examples": 242}, {"name": "train", "num_bytes": 95599, "num_examples": 203}], "download_size": 140603, "dataset_size": 208487}}
|
2023-04-06T17:52:10+00:00
|
72eb850a7b294e4049e5fa5d28327541609ae0f3
|
# Dataset Card for "MULTI_VALUE_rte_em_subj_pronoun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_em_subj_pronoun
|
[
"region:us"
] |
2023-04-06T17:52:07+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 318976, "num_examples": 664}, {"name": "train", "num_bytes": 264563, "num_examples": 550}], "download_size": 385119, "dataset_size": 583539}}
|
2023-04-06T17:52:10+00:00
|
5621c78f392fbbf78c9eee2572db032959bf6ac9
|
# Dataset Card for "MULTI_VALUE_rte_present_for_exp_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_present_for_exp_perfect
|
[
"region:us"
] |
2023-04-06T17:52:07+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 267218, "num_examples": 623}, {"name": "train", "num_bytes": 231287, "num_examples": 497}], "download_size": 327918, "dataset_size": 498505}}
|
2023-04-06T17:52:10+00:00
|
a810333f9aa57f5553be282112f52a24a0e5ec22
|
# Dataset Card for "MULTI_VALUE_rte_degree_adj_for_adv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_degree_adj_for_adv
|
[
"region:us"
] |
2023-04-06T17:52:07+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 4837, "num_examples": 12}, {"name": "train", "num_bytes": 5343, "num_examples": 12}], "download_size": 16693, "dataset_size": 10180}}
|
2023-04-06T17:52:10+00:00
|
9d0cc5723d0d0da3756fd74f34fa5419a22cc05b
|
# Dataset Card for "MULTI_VALUE_rte_future_sub_gon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_future_sub_gon
|
[
"region:us"
] |
2023-04-06T17:52:08+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 110115, "num_examples": 242}, {"name": "train", "num_bytes": 93154, "num_examples": 203}], "download_size": 139911, "dataset_size": 203269}}
|
2023-04-06T17:52:11+00:00
|
c6a0b642e8ea96abe68c4227902705097a0ef05b
|
# Dataset Card for "MULTI_VALUE_rte_correlative_constructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_correlative_constructions
|
[
"region:us"
] |
2023-04-06T17:52:08+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 78081, "num_examples": 157}, {"name": "train", "num_bytes": 53024, "num_examples": 116}], "download_size": 94086, "dataset_size": 131105}}
|
2023-04-06T17:52:11+00:00
|
5ca3334b86d405e47cbd6b68a05dce64ede787ec
|
# Dataset Card for "MULTI_VALUE_rte_bare_ccomp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_bare_ccomp
|
[
"region:us"
] |
2023-04-06T17:52:09+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 47993, "num_examples": 108}, {"name": "train", "num_bytes": 41966, "num_examples": 89}], "download_size": 70451, "dataset_size": 89959}}
|
2023-04-06T17:52:13+00:00
|
3cb419ef6aedc8dd8f8f0d30696a198fc74b2a92
|
# Dataset Card for "MULTI_VALUE_rte_nasal_possessive_pron"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_nasal_possessive_pron
|
[
"region:us"
] |
2023-04-06T17:52:09+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 408613, "num_examples": 946}, {"name": "train", "num_bytes": 366231, "num_examples": 829}], "download_size": 500740, "dataset_size": 774844}}
|
2023-04-06T17:52:13+00:00
|
9d251046a0b255547664acfa9fa2fdea4aec8a6f
|
# Dataset Card for "MULTI_VALUE_rte_say_complementizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_say_complementizer
|
[
"region:us"
] |
2023-04-06T17:52:09+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 298321, "num_examples": 627}, {"name": "train", "num_bytes": 286475, "num_examples": 601}], "download_size": 381820, "dataset_size": 584796}}
|
2023-04-06T17:52:13+00:00
|
4c40324603b49573fd95cf050c5992fd4d1d7919
|
# Dataset Card for "MULTI_VALUE_rte_comparative_as_to"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_comparative_as_to
|
[
"region:us"
] |
2023-04-06T17:52:09+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 15337, "num_examples": 32}, {"name": "train", "num_bytes": 15304, "num_examples": 33}], "download_size": 30090, "dataset_size": 30641}}
|
2023-04-06T17:52:13+00:00
|
c18d31e79aa76376f4e0b0d0a0ae5ae8da06cf7f
|
# Dataset Card for "MULTI_VALUE_rte_our_us"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_our_us
|
[
"region:us"
] |
2023-04-06T17:52:10+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 21298, "num_examples": 39}, {"name": "train", "num_bytes": 18967, "num_examples": 37}], "download_size": 37018, "dataset_size": 40265}}
|
2023-04-06T17:52:14+00:00
|
349fe5866a29d809b9f225562318da24aa5dbf5c
|
# Dataset Card for "MULTI_VALUE_rte_existential_got"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_existential_got
|
[
"region:us"
] |
2023-04-06T17:52:10+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 45160, "num_examples": 112}, {"name": "train", "num_bytes": 53000, "num_examples": 115}], "download_size": 72445, "dataset_size": 98160}}
|
2023-04-06T17:52:14+00:00
|
2a1c8ecb353b569a245fc6149b8f28bb5c326554
|
# Dataset Card for "MULTI_VALUE_rte_more_much"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_more_much
|
[
"region:us"
] |
2023-04-06T17:52:11+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 80145, "num_examples": 187}, {"name": "train", "num_bytes": 84239, "num_examples": 189}], "download_size": 117525, "dataset_size": 164384}}
|
2023-04-06T17:52:14+00:00
|
5ea7cf6de0d3df629e787750de019cfd485af420
|
# Dataset Card for "MULTI_VALUE_rte_he_inanimate_objects"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_he_inanimate_objects
|
[
"region:us"
] |
2023-04-06T17:52:11+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 117412, "num_examples": 240}, {"name": "train", "num_bytes": 111211, "num_examples": 227}], "download_size": 157920, "dataset_size": 228623}}
|
2023-04-06T17:52:15+00:00
|
b62c6ffd54ee41ef4be4e926718d58da64720a4a
|
# Dataset Card for "MULTI_VALUE_rte_zero_degree"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_zero_degree
|
[
"region:us"
] |
2023-04-06T17:52:11+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 60122, "num_examples": 141}, {"name": "train", "num_bytes": 55937, "num_examples": 126}], "download_size": 84266, "dataset_size": 116059}}
|
2023-04-06T17:52:14+00:00
|
145e6315a26411d364db7831a41f5226607306ee
|
# Dataset Card for "MULTI_VALUE_rte_existential_it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_existential_it
|
[
"region:us"
] |
2023-04-06T17:52:11+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 45793, "num_examples": 113}, {"name": "train", "num_bytes": 54576, "num_examples": 117}], "download_size": 73601, "dataset_size": 100369}}
|
2023-04-06T17:52:15+00:00
|
c2fc3e3e9721a4c173132b91f9c7baa510317703
|
# Dataset Card for "MULTI_VALUE_rte_negative_inversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_negative_inversion
|
[
"region:us"
] |
2023-04-06T17:52:11+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 5440, "num_examples": 9}, {"name": "train", "num_bytes": 3661, "num_examples": 7}], "download_size": 18249, "dataset_size": 9101}}
|
2023-04-06T17:52:14+00:00
|
466f798a52bddb7b9d7c04b191c8342ffaaac3f5
|
# Dataset Card for "MULTI_VALUE_rte_who_which"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_who_which
|
[
"region:us"
] |
2023-04-06T17:52:11+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 153268, "num_examples": 324}, {"name": "train", "num_bytes": 120711, "num_examples": 257}], "download_size": 184670, "dataset_size": 273979}}
|
2023-04-06T17:52:14+00:00
|
1b52a900c6267a0f0a95b845aa0846ba5864c597
|
# Dataset Card for "MULTI_VALUE_rte_those_them"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_those_them
|
[
"region:us"
] |
2023-04-06T17:52:11+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 32717, "num_examples": 63}, {"name": "train", "num_bytes": 22627, "num_examples": 44}], "download_size": 47207, "dataset_size": 55344}}
|
2023-04-06T17:52:15+00:00
|
2d645987458b120d083eebe8f719743cfd48e799
|
# Dataset Card for "MULTI_VALUE_rte_null_referential_pronouns"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_null_referential_pronouns
|
[
"region:us"
] |
2023-04-06T17:52:12+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 343545, "num_examples": 715}, {"name": "train", "num_bytes": 300545, "num_examples": 622}], "download_size": 423359, "dataset_size": 644090}}
|
2023-04-06T17:52:16+00:00
|
9e9aeab18f001616e067046c80ee260e61a99b34
|
# Dataset Card for "MULTI_VALUE_rte_double_modals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_double_modals
|
[
"region:us"
] |
2023-04-06T17:52:12+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 341284, "num_examples": 759}, {"name": "train", "num_bytes": 296520, "num_examples": 658}], "download_size": 411372, "dataset_size": 637804}}
|
2023-04-06T17:52:16+00:00
|
071cb46d17fb9838abcca3fc9a5e4f7da93cf610
|
# Dataset Card for "MULTI_VALUE_rte_dont"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_dont
|
[
"region:us"
] |
2023-04-06T17:52:12+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 6997, "num_examples": 13}, {"name": "train", "num_bytes": 1780, "num_examples": 4}], "download_size": 17848, "dataset_size": 8777}}
|
2023-04-06T17:52:15+00:00
|
48a216b179a59c1b88cca64547f2537f7fd053e2
|
# Dataset Card for "MULTI_VALUE_rte_completive_done"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_completive_done
|
[
"region:us"
] |
2023-04-06T17:52:13+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 481976, "num_examples": 1156}, {"name": "train", "num_bytes": 421696, "num_examples": 962}], "download_size": 578766, "dataset_size": 903672}}
|
2023-04-06T17:52:16+00:00
|
7152f39098b3ad8ab3afb946a89632187eb10da3
|
# Dataset Card for "MULTI_VALUE_rte_definite_abstract"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/MULTI_VALUE_rte_definite_abstract
|
[
"region:us"
] |
2023-04-06T17:52:13+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 273439, "num_examples": 613}, {"name": "train", "num_bytes": 250785, "num_examples": 557}], "download_size": 342132, "dataset_size": 524224}}
|
2023-04-06T17:52:17+00:00
|