sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---|
a14b7f79f47327c8a8376caa2c9a5a92266ae40f
|
An imitation learning environment for the mujoco_ant environment, sample for the policy mujoco_ant_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_mujoco_ant_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-24T08:15:22+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-27T08:56:23+00:00
|
b8b5a606d1beac38978fb6cac27249264b235433
|
NimaBoscarino/fuego-20230224-001529-ccd0a4
|
[
"fuego",
"region:us"
] |
2023-02-24T08:15:31+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230224-001529-ccd0a4", "status": "preparing", "script": "train.py", "requirements_file": "requirements.txt", "space_id": "NimaBoscarino/fuego-20230224-001529-ccd0a4", "space_hardware": "cpu-basic"}}
|
2023-02-24T08:15:35+00:00
|
|
c77de937fae4582741c990a324f740ed7a851e41
|
NimaBoscarino/fuego-20230224-001852-4ba85c
|
[
"fuego",
"region:us"
] |
2023-02-24T08:18:54+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230224-001852-4ba85c", "status": "preparing", "script": "train.py", "requirements_file": "requirements.txt", "space_id": "NimaBoscarino/fuego-20230224-001852-4ba85c", "space_hardware": "cpu-basic"}}
|
2023-02-24T08:18:58+00:00
|
|
a84e216e166baa9e8dce8b5e08e193abf329867b
|
NimaBoscarino/fuego-20230224-002224-7dec99
|
[
"fuego",
"region:us"
] |
2023-02-24T08:22:25+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230224-002224-7dec99", "status": "preparing", "script": "train.py", "requirements_file": "requirements.txt", "space_id": "NimaBoscarino/fuego-20230224-002224-7dec99", "space_hardware": "cpu-basic"}}
|
2023-02-24T08:22:29+00:00
|
|
e2d92def551805aaec0b903eaac9c52234b33409
|
NimaBoscarino/fuego-20230224-002946-1eb1c5
|
[
"fuego",
"region:us"
] |
2023-02-24T08:29:48+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230224-002946-1eb1c5", "status": "done", "script": "train.py", "requirements_file": "requirements.txt", "space_id": "NimaBoscarino/fuego-20230224-002946-1eb1c5", "space_hardware": "cpu-basic"}}
|
2023-02-24T08:50:21+00:00
|
|
981cfe1eb94a6ff327fb1cc45a742ffd4c557359
|
An imitation learning environment for the mujoco_halfcheetah environment, sample for the policy mujoco_halfcheetah_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_mujoco_halfcheetah_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-24T08:33:21+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-27T08:57:46+00:00
|
72b6dd69ccd82a9118ca6a16e00548d97df66440
|
NimaBoscarino/fuego-20230224-003323-ca1442
|
[
"fuego",
"region:us"
] |
2023-02-24T08:33:24+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230224-003323-ca1442", "status": "preparing", "script": "train.py", "requirements_file": "requirements.txt", "space_id": "NimaBoscarino/fuego-20230224-003323-ca1442", "space_hardware": "cpu-basic"}}
|
2023-02-24T08:33:28+00:00
|
|
16fb4cab7a86ce27ad0fbfb423357f728fd9b16f
|
An imitation learning environment for the mujoco_hopper environment, sample for the policy mujoco_hopper_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_mujoco_hopper_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-24T08:34:58+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-27T08:59:21+00:00
|
f049ac8a423c5eca230b199cb6d21a7241e60241
|
An imitation learning environment for the mujoco_doublependulum environment, sample for the policy mujoco_doublependulum_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_mujoco_doublependulum_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-24T08:36:24+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-27T09:00:44+00:00
|
8501300bc82a779d33c7eb56fce642326506ee9e
|
An imitation learning environment for the mujoco_pendulum environment, sample for the policy mujoco_pendulum_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_mujoco_pendulum_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-24T08:37:47+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-27T09:02:10+00:00
|
722419d7c3d970cd665cb24dd90318e02f4b0f92
|
An imitation learning environment for the mujoco_reacher environment, sample for the policy mujoco_reacher_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_mujoco_reacher_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-24T08:40:51+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-27T09:05:27+00:00
|
227c9167907bf1c79fe2ffb60a94404d77e24778
|
An imitation learning environment for the mujoco_swimmer environment, sample for the policy mujoco_swimmer_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_mujoco_swimmer_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-24T08:42:15+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-27T09:06:54+00:00
|
b4d1c920b24d7c09b95b692230a7e8d2d1b14708
|
An imitation learning environment for the mujoco_walker environment, sample for the policy mujoco_walker_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_mujoco_walker_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-24T08:43:46+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-27T09:08:27+00:00
|
f00ab21518ceab68bc30400716868c4a4442345b
|
NimaBoscarino/fuego-20230224-005635-529b94
|
[
"fuego",
"region:us"
] |
2023-02-24T08:56:36+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230224-005635-529b94", "status": "done", "script": "train.py", "requirements_file": "requirements.txt", "space_id": "NimaBoscarino/fuego-20230224-005635-529b94", "space_hardware": "cpu-basic"}}
|
2023-02-24T08:59:34+00:00
|
|
0d72edc61587548ab910eaa1cd5cecb233d7ef9e
|
## Source:
Creator:
David J. Slate
Odesta Corporation; 1890 Maple Ave; Suite 115; Evanston, IL 60201
Donor:
David J. Slate (dave '@' math.nwu.edu) (708) 491-3867
## Data Set Information:
The objective is to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital letters in the English alphabet. The character images were based on 20 different fonts and each letter within these 20 fonts was randomly distorted to produce a file of 20,000 unique stimuli. Each stimulus was converted into 16 primitive numerical attributes (statistical moments and edge counts) which were then scaled to fit into a range of integer values from 0 through 15. We typically train on the first 16000 items and then use the resulting model to predict the letter category for the remaining 4000. See the article cited above for more details.
### Attribute Information:
1. x-box - horizontal position of box (integer)
2. y-box - vertical position of box (integer)
3. width - width of box (integer)
4. high - height of box (integer)
5. onpix - total # of on pixels (integer)
6. x-bar - mean x of on pixels in box (integer)
7. y-bar - mean y of on pixels in box (integer)
8. x2bar - mean x variance (integer)
9. y2bar - mean y variance (integer)
10. xybar - mean x-y correlation (integer)
11. x2ybr - mean of x * x * y (integer)
12. xy2br - mean of x * y * y (integer)
13. x-ege - mean edge count left to right (integer)
14. xegvy - correlation of x-ege with y (integer)
15. y-ege - mean edge count bottom to top (integer)
16. yegvx - correlation of y-ege with x (integer)
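A minimal split sketch of the train/test protocol described above (train on the first 16,000 stimuli, evaluate on the remaining 4,000). This is illustrative only and uses synthetic stand-in data in place of the real attribute vectors:

```python
import numpy as np

def split_letter_data(X, y, n_train=16000):
    """Split the 20,000 stimuli: first 16,000 for training, rest for evaluation."""
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])

# Synthetic stand-in data: 20,000 rows of the 16 integer attributes, scaled 0..15.
rng = np.random.default_rng(0)
X = rng.integers(0, 16, size=(20000, 16))
y = rng.integers(0, 26, size=20000)  # one of the 26 capital letters
(X_train, y_train), (X_test, y_test) = split_letter_data(X, y)
```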
|
wwydmanski/tabular-letter-recognition
|
[
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"tabular",
"region:us"
] |
2023-02-24T09:09:00+00:00
|
{"size_categories": ["10K<n<100K"], "task_categories": ["tabular-classification"], "pretty_name": "Tabular letter recognition", "tags": ["tabular"]}
|
2023-02-24T09:36:30+00:00
|
7f137bd1ff4ba1bdca9d8a4a709094e3970a3a27
|
# Dataset Card for "diffusion.4.text_to_image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/diffusion.4.text_to_image
|
[
"region:us"
] |
2023-02-24T10:14:17+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 119636585.0, "num_examples": 833}], "download_size": 0, "dataset_size": 119636585.0}}
|
2023-04-07T07:48:17+00:00
|
732320613bd46e6f6d01570c4cabe9c21e484e54
|
# Dataset Card for "bad_good_method2test_10k_tokonized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Minata/bad_good_method2test_10k_tokonized
|
[
"region:us"
] |
2023-02-24T10:17:34+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 16822480, "num_examples": 10000}], "download_size": 4814929, "dataset_size": 16822480}}
|
2023-02-24T20:22:57+00:00
|
49bd43ab6fc78dc0ea0ac98aac6e351e4e9df224
|
## Publication Abstract
Colorectal cancer, a commonly diagnosed cancer in the elderly, often develops slowly from benign polyps called adenoma. The gut microbiota is believed to be directly involved in colorectal carcinogenesis. The identity and functional capacity of the adenoma- or carcinoma-related gut microbe(s), however, have not been surveyed in a comprehensive manner. Here we perform a metagenome-wide association study (MGWAS) on stools from advanced adenoma and carcinoma patients and from healthy subjects, revealing microbial genes, strains and functions enriched in each group. An analysis of potential risk factors indicates that high intake of red meat relative to fruits and vegetables appears to associate with outgrowth of bacteria that might contribute to a more hostile gut environment. These findings suggest that faecal microbiome-based strategies may be useful for early diagnosis and treatment of colorectal adenoma or carcinoma.
## Dataset
156 metagenomic shotgun-sequenced faecal samples from colorectal adenoma and carcinoma patients and healthy controls
### Configurations
- `presence-absence`
- `CLR`
## Usage
```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("wwydmanski/colorectal-carcinoma-microbiome-fengq", "presence-absence")
train_dataset, test_dataset = dataset['train'], dataset['test']

X_train = np.array(train_dataset['values'])
y_train = np.array(train_dataset['target'])
X_test = np.array(test_dataset['values'])
y_test = np.array(test_dataset['target'])
```
|
wwydmanski/colorectal-carcinoma-microbiome-fengq
|
[
"task_categories:tabular-classification",
"size_categories:n<1K",
"microbiome",
"tabular",
"gut-microbiota",
"region:us"
] |
2023-02-24T10:27:04+00:00
|
{"size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Colorectal Carcinoma Feng Q 2015", "tags": ["microbiome", "tabular", "gut-microbiota"]}
|
2023-02-25T15:34:21+00:00
|
3a1fe03a983610d426ba707e80d26e1af0c09ad8
|
### Context
The dataset consists of listings of unique objects from popular real estate portals in Russia: more than 540,000 objects in total.
### Content
The Russian real estate market has a relatively short history. In the Soviet era, all properties were state-owned; people only had the right to use them with apartments allocated based on one's place of work. As a result, options for moving were fairly limited. However, after the fall of the Soviet Union, the Russian real estate market emerged and Muscovites could privatize and subsequently sell and buy properties for the first time. Today, Russian real estate is booming. It offers many exciting opportunities and high returns for lifestyle and investment.
The real estate market has been in a growth phase for several years, which means that you can still find properties at very attractive prices, but with good chances of increasing their value in the future.
### Dataset
The dataset has 13 fields.
- date - date of publication of the announcement
- time - time when the ad was published
- geo_lat - latitude
- geo_lon - longitude
- region - region of Russia (the country has 85 federal subjects in total)
- building_type - facade type: 0 - other; 1 - panel; 2 - monolithic; 3 - brick; 4 - blocky; 5 - wooden
- object_type - apartment type: 1 - secondary real estate market; 2 - new building
- level - apartment floor
- levels - number of storeys
- rooms - number of living rooms ("-1" denotes a studio apartment)
- area - total area of the apartment
- kitchen_area - kitchen area
- price - price in rubles
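The categorical encodings above can be decoded with a small lookup table. This is a hypothetical sketch based only on the field descriptions in this card:

```python
# Mappings taken from the building_type and object_type descriptions above.
BUILDING_TYPE = {0: "other", 1: "panel", 2: "monolithic", 3: "brick", 4: "blocky", 5: "wooden"}
OBJECT_TYPE = {1: "secondary real estate market", 2: "new building"}

def describe_rooms(rooms: int) -> str:
    """The card notes that rooms == -1 denotes a studio apartment."""
    return "studio apartment" if rooms == -1 else f"{rooms} room(s)"

print(BUILDING_TYPE[3], "/", OBJECT_TYPE[2], "/", describe_rooms(-1))
```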
### Attention.
The dataset may contain erroneous data due to input errors on services, as well as outliers, and so on.
### :)
Using this dataset, we invite Kagglers to develop algorithms that use a wide range of features to predict real estate prices. Competitors can rely on a vast dataset that includes housing data and macroeconomic models. An accurate forecasting model gives clients more confidence in a volatile economy.
|
daniilak/Russia_Real_Estate_2018_2021
|
[
"license:cc",
"region:us"
] |
2023-02-24T10:34:39+00:00
|
{"license": "cc"}
|
2023-02-24T14:42:01+00:00
|
2d2e8c704b810f97afe76188e66ab4708735cda4
|
The Sentiment DKSF (Digikala/Snappfood comments) is a dataset for sentiment analysis.
|
hezarai/sentiment-dksf
|
[
"task_categories:text-classification",
"language:fa",
"region:us"
] |
2023-02-24T10:39:43+00:00
|
{"language": ["fa"], "task_categories": ["text-classification"], "pretty_name": "Digikala/SnappFood comments sentiment analysis"}
|
2023-09-02T09:33:35+00:00
|
910c5ce511484f18e31b50712c5da72ad9e746cf
|
Real estate ads in Russia are published on the websites avito.ru, realty.yandex.ru, cian.ru, sob.ru, youla.ru, n1.ru, and moyareklama.ru. The ads-api.ru service allows you to download real estate ads for a fee. The service's parser behaves oddly and duplicates ads in the database when their authors extend them after some time. The Russian market also has many resellers (unscrupulous realtors) who steal ads and republish them on their own behalf. Before publishing this dataset, my task was to select the original ad from each group of duplicates.
Russian real estate services let ad authors enter apartment or house data manually, so users often publish ads with errors or typos. A user may also simply not know, for example, the wall type of their building.
The user also specifies the address of the object being sold. They may make a mistake and give only a partial address, for example, "Moscow". Which street? Which house? We will never know.
# Dataset
The Russian real estate market is of two types; the dataset encodes this as object type: 0 - secondary real estate market; 2 - new building.
I found it necessary to determine the geolocation for each ad address and add the coordinates to this dataset. There is also the number of the region of Russia; for example, the number of the Chuvash region is 21. Additionally, there is a house number synchronized through "FIAS", the federal public database of the Federal Tax Service. Since the data is obtained through a paid third-party service, I cannot publish the raw results; however, I can anonymize them and publish parameters such as street ID and house ID.
Houses are generally built from materials such as brick, wood, panel, and others. I marked them with numbers: building type: 0 - unknown; 1 - other; 2 - panel; 3 - monolithic; 4 - brick; 5 - blocky; 6 - wooden.
The number of rooms can be 1, 2, or more. However, there is a type of apartment called a studio apartment; I've labeled these "-1".
# Ideas
I hope that the publication of this dataset will improve developments in the field of global real estate.
You can create apartment price forecasts.
You can analyze real estate markets.
You can understand that there is a need to publish free real estate datasets.
And much more
# Others
The license for this dataset is public: you can use it in your scientific research, design work, and other projects. The only condition is that you publish a link to this dataset.
You can send suggestions (or complaints) on the dataset by mail [email protected]
You can find more information about the data on the website https://dom.realtycloud.ru/
|
daniilak/Russia_Real_Estate_2021
|
[
"license:cc",
"region:us"
] |
2023-02-24T10:52:04+00:00
|
{"license": "cc"}
|
2023-02-24T14:41:47+00:00
|
ffc7f1d3062d0fb05c98b7ac4c9061fd8324b16d
|
## Shaded relief image dataset for geomorphological studies of Polish postglacial landscape
This dataset contains 138 PNG images of shaded relief cut into 128x128 arrays. The area covered by the dataset spans the two main geomorphological zones in Poland: postglacial denuded and non-denuded landscapes. Arrays representing each of the two categories are labeled accordingly.
The shaded relief scenes were calculated with exposure and sunlight parameters set to due south (in this case, 180 degrees).
|
Pacoch/postglacial-shaded-relief
|
[
"task_categories:image-classification",
"task_categories:feature-extraction",
"size_categories:1M<n<10M",
"license:mit",
"geomorphology",
"image",
"png",
"region:us"
] |
2023-02-24T10:59:37+00:00
|
{"license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["image-classification", "feature-extraction"], "pretty_name": "Shaded relief image dataset for geomorphological studies of Polish postglacial landscape", "tags": ["geomorphology", "image", "png"]}
|
2023-02-24T11:35:00+00:00
|
de997f1ead35161a98384db6f8cafa4a9670c09d
|
ymalusare/yash
|
[
"license:openrail",
"region:us"
] |
2023-02-24T11:30:47+00:00
|
{"license": "openrail"}
|
2023-02-24T11:30:48+00:00
|
|
977fa674b9f29f0d5446aab4797d2e70c5ec5cb2
|
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
polinaeterna/test-user
|
[
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:pl",
"license:mit",
"region:us"
] |
2023-02-24T11:51:23+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["pl"], "license": ["mit"], "multilinguality": ["monolingual"], "dataset_info": [{"config_name": "config", "features": [{"name": "audio_id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}]}], "duplicated_from": "j-krzywdziak/test"}
|
2023-02-24T12:06:17+00:00
|
13f358a1e069609324b7046f3376a5b325f57348
|
ynt/collabba
|
[
"license:unknown",
"region:us"
] |
2023-02-24T12:01:02+00:00
|
{"license": "unknown"}
|
2023-02-28T10:15:18+00:00
|
|
704290ae05ae4b267de4827f216ebaf68a929721
|
# Dataset Card for "music-tags-to-spectrogram"
This dataset is extracted from the [MTG-Jamendo Dataset](https://github.com/MTG/mtg-jamendo-dataset/tree/ef1248f0fc295a4bd2189531b0e2b4d158d219dc).
### We applied the following transformations:
- Convert audio to a spectrogram diagram.
- Join the tag list into one line of text.
For example, if a genre list has three tags (chillout, downtempo, and easylistening),
the raw text of this example will be `"chillout downtempo easylistening"`.
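The tag-joining step can be sketched as follows (an illustrative one-liner, not the project's actual preprocessing code):

```python
def tags_to_text(tags):
    """Join a list of tags into a single one-line, space-separated string."""
    return " ".join(tags)

print(tags_to_text(["chillout", "downtempo", "easylistening"]))
# chillout downtempo easylistening
```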
|
luli0034/music-tags-to-spectrogram
|
[
"region:us"
] |
2023-02-24T12:18:40+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12031171376.431, "num_examples": 1543}], "download_size": 12061543955, "dataset_size": 12031171376.431}}
|
2023-02-24T15:56:26+00:00
|
cfa72f9ee1131fdf19abdb77520fd2e9c82d744f
|
MeilingShi/legal_argument_mining
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-02-24T12:27:27+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
|
2023-02-26T11:08:04+00:00
|
|
40691c5059923d37bb9bdebeb0aaf389b72469b2
|
# AutoTrain Dataset for project: tax_issues
## Dataset Description
This dataset has been automatically processed by AutoTrain for project tax_issues.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "How is Inheritance Tax calculated?",
"target": 10,
"feat_Unnamed: 2": null,
"feat_Unnamed: 3": null,
"feat_Unnamed: 4": null
},
{
"text": "What happens if I work part-time or have multiple jobs as an international student in the UK?",
"target": 13,
"feat_Unnamed: 2": null,
"feat_Unnamed: 3": null,
"feat_Unnamed: 4": null
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['Question1', 'Question10', 'Question11', 'Question12', 'Question13', 'Question14', 'Question15', 'Question16', 'Question17', 'Question18', 'Question19', 'Question2', 'Question20', 'Question21', 'Question22', 'Question23', 'Question24', 'Question25', 'Question26', 'Question27', 'Question28', 'Question29', 'Question3', 'Question30', 'Question31', 'Question32', 'Question33', 'Question34', 'Question35', 'Question36', 'Question37', 'Question38', 'Question39', 'Question4', 'Question40', 'Question41', 'Question42', 'Question43', 'Question44', 'Question45', 'Question46', 'Question47', 'Question49', 'Question5', 'Question50', 'Question6', 'Question7', 'Question8', 'Question9', 'question48'], id=None)",
"feat_Unnamed: 2": "Value(dtype='float64', id=None)",
"feat_Unnamed: 3": "Value(dtype='float64', id=None)",
"feat_Unnamed: 4": "Value(dtype='float64', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2377 |
| valid | 622 |
|
ram119900/TAX_ISSUES
|
[
"task_categories:text-classification",
"region:us"
] |
2023-02-24T13:19:03+00:00
|
{"task_categories": ["text-classification"]}
|
2023-02-24T13:35:01+00:00
|
185eada63ac04543145fa315c136796f294c9252
|
# Dataset Card for "ecthr_a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
huynguyendayrui/ecthr_a
|
[
"region:us"
] |
2023-02-24T13:27:20+00:00
|
{"dataset_info": {"features": [{"name": "text", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "2", "1": "3", "2": "5", "3": "6", "4": "8", "5": "9", "6": "10", "7": "11", "8": "14", "9": "P1-1"}}}}, {"name": "law", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 267388077, "num_examples": 9000}, {"name": "test", "num_bytes": 35341614, "num_examples": 1000}, {"name": "validation", "num_bytes": 33910427, "num_examples": 1000}], "download_size": 157580405, "dataset_size": 336640118}}
|
2023-02-24T13:29:03+00:00
|
111caa2717651b92277935b36a29c5d146d5ef63
|
ronnelrobles/baybayin_characters
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-24T13:28:31+00:00
|
{"license": "apache-2.0"}
|
2023-02-24T13:38:50+00:00
|
|
aee513058486de75958cb2c909a617cd7f641de3
|
# Dataset Card for "bot_issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/bot_issues
|
[
"region:us"
] |
2023-02-24T13:31:51+00:00
|
{"dataset_info": {"features": [{"name": "repo", "dtype": "string"}, {"name": "org", "dtype": "string"}, {"name": "issue_id", "dtype": "int64"}, {"name": "issue_number", "dtype": "int64"}, {"name": "pull_request", "struct": [{"name": "number", "dtype": "int64"}, {"name": "repo", "dtype": "string"}, {"name": "user_login", "dtype": "string"}]}, {"name": "events", "list": [{"name": "action", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "comment_id", "dtype": "float64"}, {"name": "datetime", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "user_count", "dtype": "int64"}, {"name": "event_count", "dtype": "int64"}, {"name": "text_size", "dtype": "int64"}, {"name": "old_events", "list": [{"name": "action", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "comment_id", "dtype": "float64"}, {"name": "datetime", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "bot_issue", "dtype": "bool"}, {"name": "modified_by_bot", "dtype": "bool"}, {"name": "text_size_bots", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7767638, "num_examples": 1155}], "download_size": 2267409, "dataset_size": 7767638}}
|
2023-02-24T13:32:00+00:00
|
c2f68cb79058d5c59b1f3a33f2a8738ff5b5d330
|
# Dataset Card for "VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_ns_25994_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_ns_25994_open_ended
|
[
"region:us"
] |
2023-02-24T13:35:37+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_128", "num_bytes": 3760069, "num_examples": 25994}], "download_size": 0, "dataset_size": 3760069}}
|
2023-02-24T17:03:09+00:00
|
4076709e2e4b16d63386a0848a6c1eb5ebf062ca
|
herman925/mysef
|
[
"license:openrail",
"region:us"
] |
2023-02-24T14:32:13+00:00
|
{"license": "openrail"}
|
2023-02-24T14:37:01+00:00
|
|
9329931203c58e00373f9cc4bac258ee44c5d9c8
|
# Dataset Card for "preprocessed-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/preprocessed-issues
|
[
"region:us"
] |
2023-02-24T14:54:11+00:00
|
{"dataset_info": {"features": [{"name": "repo", "dtype": "string"}, {"name": "org", "dtype": "string"}, {"name": "issue_id", "dtype": "int64"}, {"name": "issue_number", "dtype": "int64"}, {"name": "pull_request", "struct": [{"name": "number", "dtype": "int64"}, {"name": "repo", "dtype": "string"}, {"name": "user_login", "dtype": "string"}]}, {"name": "events", "list": [{"name": "action", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "comment_id", "dtype": "float64"}, {"name": "datetime", "dtype": "int64"}, {"name": "masked_author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "user_count", "dtype": "int64"}, {"name": "event_count", "dtype": "int64"}, {"name": "text_size", "dtype": "int64"}, {"name": "bot_issue", "dtype": "bool"}, {"name": "modified_by_bot", "dtype": "bool"}, {"name": "text_size_no_bots", "dtype": "int64"}, {"name": "modified_usernames", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 15868077, "num_examples": 7351}], "download_size": 7504145, "dataset_size": 15868077}}
|
2023-02-24T14:54:29+00:00
|
aa665b890915f7a42f8615bee868a9f3447e178f
|
# Dataset Card for InstructPix2Pix CLIP-filtered
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.timothybrooks.com/instruct-pix2pix
- **Repository:** https://github.com/timothybrooks/instruct-pix2pix
- **Paper:** https://arxiv.org/abs/2211.09800
## Dataset Summary
The dataset can be used to train models to follow edit instructions. Edit instructions
are available in the `edit_prompt`. `original_image` can be used with the `edit_prompt` and
`edited_image` denotes the image after applying the `edit_prompt` on the `original_image`.
Refer to the [GitHub repository](https://github.com/timothybrooks/instruct-pix2pix) to learn more about
how this dataset can be used to train a model that can follow edit instructions.
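Each record thus pairs an input image and an edit instruction with the resulting edited image. A minimal sketch of how a single example is shaped (field names follow the dataset schema; the values here, including the image placeholders, are made up for illustration):

```python
# Illustrative shape of one training example; real records hold PIL images
# in the image fields, a string placeholder is used here for clarity.
example = {
    "original_prompt": "a photo of a mountain lake",
    "original_image": "<PIL.Image.Image>",   # placeholder, not a real image
    "edit_prompt": "make it winter",
    "edited_prompt": "a photo of a mountain lake in winter",
    "edited_image": "<PIL.Image.Image>",     # placeholder, not a real image
}

# A model is trained to map (original_image, edit_prompt) -> edited_image.
model_input = (example["original_image"], example["edit_prompt"])
target = example["edited_image"]
print(sorted(example.keys()))
```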
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text descriptions are in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The license for this dataset is a custom license. Refer to the licensing file to know more.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@sayakpaul](https://github.com/sayakpaul) for contributing this dataset.
|
timbrooks/instructpix2pix-clip-filtered
|
[
"size_categories:100K<n<1M",
"language:en",
"arxiv:2211.09800",
"region:us"
] |
2023-02-24T14:55:53+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "dataset_info": {"features": [{"name": "original_prompt", "dtype": "string"}, {"name": "original_image", "dtype": "image"}, {"name": "edit_prompt", "dtype": "string"}, {"name": "edited_prompt", "dtype": "string"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 130930966429.88, "num_examples": 313010}], "download_size": 63067247926, "dataset_size": 130930966429.88}}
|
2023-03-02T11:19:16+00:00
|
eea253c8f415887e99256974b46e6be1934adb36
|
As the title suggests, this dataset is a derivative of the cc100-sr Common Crawl-based dataset for Serbian.
It was deduplicated on the sentence level and transliterated into the Latin alphabet.
The dataset consists of JSON files; the textual sentences are located in the "sents" attribute of the object root and can be obtained via:
```python
from json import load
with open("cc100-sr-ded") as jf:
sentences = load(jf)["sents"]
```
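The sentence-level deduplication mentioned above can be sketched in plain Python (an illustration of the idea only; the corpus's actual preprocessing pipeline may differ):

```python
# Order-preserving, sentence-level deduplication: dict.fromkeys keeps the
# first occurrence of each sentence and preserves the original order.
sentences = [
    "Ovo je prva rečenica.",
    "Ovo je druga rečenica.",
    "Ovo je prva rečenica.",  # exact duplicate, dropped below
]

deduplicated = list(dict.fromkeys(sentences))
print(deduplicated)  # ['Ovo je prva rečenica.', 'Ovo je druga rečenica.']
```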
|
jerteh/cc100-sr-jerteh
|
[
"language:sr",
"region:us"
] |
2023-02-24T15:11:59+00:00
|
{"language": ["sr"], "pretty_name": "cc100-sr derivation by JeRTeh"}
|
2023-02-24T17:56:39+00:00
|
4688373bb270bf6fac69277394c97fc7e1d6fc69
|
# Dataset Card for "VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_detection_ns_25994_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_detection_ns_25994_open_ended
|
[
"region:us"
] |
2023-02-24T15:20:49+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 3799851, "num_examples": 25994}], "download_size": 0, "dataset_size": 3799851}}
|
2023-02-24T18:20:22+00:00
|
1d5378f9cc83f95113228dfb6e863b32b1ceddc4
|
NimaBoscarino/fuego-20230224-073501-22f4ff
|
[
"fuego",
"region:us"
] |
2023-02-24T15:35:02+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230224-073501-22f4ff", "status": "done", "script": "train.py", "requirements_file": "requirements.txt", "space_id": "NimaBoscarino/fuego-20230224-073501-22f4ff", "space_hardware": "cpu-basic"}}
|
2023-02-24T15:38:42+00:00
|
|
6c03499df609f4a9ad87f63a531bbf46011f354e
|
botmaster/mother-2-battle-sprites
|
[
"task_categories:text-to-image",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] |
2023-02-24T15:55:12+00:00
|
{"annotations_creators": [], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Mother 2 sprites", "tags": []}
|
2023-02-24T17:38:18+00:00
|
|
664da407ba422304b1500dfc64011688075030a0
|
# Dataset Card for "VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_detection_ns_10_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_detection_ns_10_open_ended
|
[
"region:us"
] |
2023-02-24T16:06:16+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 1822, "num_examples": 10}], "download_size": 3607, "dataset_size": 1822}}
|
2023-02-24T16:06:18+00:00
|
8d8f396f88f286741f3b1ca08a859a1ad027c156
|
# Dataset Card for "large-text-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/large-text-issues
|
[
"region:us"
] |
2023-02-24T16:14:49+00:00
|
{"dataset_info": {"features": [{"name": "repo", "dtype": "string"}, {"name": "org", "dtype": "string"}, {"name": "issue_id", "dtype": "int64"}, {"name": "issue_number", "dtype": "int64"}, {"name": "pull_request", "struct": [{"name": "number", "dtype": "int64"}, {"name": "repo", "dtype": "string"}, {"name": "user_login", "dtype": "string"}]}, {"name": "events", "list": [{"name": "action", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "comment_id", "dtype": "float64"}, {"name": "datetime", "dtype": "int64"}, {"name": "large_text", "dtype": "bool"}, {"name": "masked_author", "dtype": "string"}, {"name": "nb_lines", "dtype": "int64"}, {"name": "size", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "user_count", "dtype": "int64"}, {"name": "event_count", "dtype": "int64"}, {"name": "text_size", "dtype": "int64"}, {"name": "bot_issue", "dtype": "bool"}, {"name": "modified_by_bot", "dtype": "bool"}, {"name": "text_size_no_bots", "dtype": "int64"}, {"name": "modified_usernames", "dtype": "bool"}, {"name": "contains_large", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 3807857, "num_examples": 163}], "download_size": 1040266, "dataset_size": 3807857}}
|
2023-03-01T19:20:20+00:00
|
943bee78d757ea4f58faa9da487684994b62f95c
|
# Dataset Card for "VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_detection_ns_4_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_minival_google_flan_t5_xxl_mode_VQAv2_visclues_detection_ns_4_open_ended
|
[
"region:us"
] |
2023-02-24T16:18:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 783, "num_examples": 4}], "download_size": 0, "dataset_size": 783}}
|
2023-02-24T16:45:58+00:00
|
48ffae0db42808186b81d7d39579dff124a08587
|
# AutoTrain Dataset for project: klasifikasi-tutupan-lahan
## Dataset Description
This dataset has been automatically processed by AutoTrain for project klasifikasi-tutupan-lahan.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<64x64 RGB PIL image>",
"target": 8
},
{
"image": "<64x64 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['AnnualCrop', 'Forest', 'HerbaceousVegetation', 'Highway', 'Industrial', 'Pasture', 'PermanentCrop', 'Residential', 'River', 'SeaLake'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 400 |
| valid | 100 |
|
martinms20/eurosat50-land-cover
|
[
"task_categories:image-classification",
"region:us"
] |
2023-02-24T16:26:41+00:00
|
{"task_categories": ["image-classification"]}
|
2023-02-24T16:30:39+00:00
|
893b20897a66275ef9a5105da5b4a98847cc1694
|
## Dataset Description
Spanish-BFF is the first Spanish AI-generated dictionary, built using GPT-3.
- **Paper:** [Spanish Built Factual Freectianary (Spanish-BFF): the first IA-generated free dictionary](https://arxiv.org/abs/2302.12746)
- **Point of Contact:** [email protected] , [email protected]
### Dataset Summary
Spanish-BFF contains a total of 66,353 lemmas with their definitions (only one definition per lemma).
These lemmas correspond to nominal, adjectival, verbal, and adverbial classes.
### Languages
- Spanish (es)
## Dataset Structure
### Data Instances
<pre>
{
'id': 'b0o8',
'lemma': 'fomo',
'definition': 'FOMO es un acrónimo de "miedo a perderse", y se refiere a la ansiedad que uno puede sentir cuando ve que otros están disfrutando de algo que él o ella no está haciendo.',
}
</pre>
### Data Fields
<pre>
{
id: str
lemma: str
definition: str
}
</pre>
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 66,353 |
## Content analysis
### Number of nouns, adjectives, adverbs and verbs
* Number of nouns: 38093 (57.41 %)
* Number of adjectives: 17424 (26.26 %)
* Number of verbs: 9296 (14.01 %)
* Number of adverbs: 1540 (2.32 %)
### Statistics
Uncertainties are given with a coverage factor k=1, i.e., one standard deviation of the population of definitions.
* Total words in definitions: 551878
* Average words/definition: 8.3 +/- 5.1 words
* Average characters/definitions: 49.1 +/- 28.4 characters
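Statistics like the above can be reproduced from the definitions with the standard library (a sketch on toy data; the real values come from the dataset's `definition` field described earlier):

```python
from statistics import mean, stdev

# Toy sample of definitions; in practice these come from the dataset's
# `definition` field.
definitions = [
    "Acción y efecto de definir.",
    "Animal doméstico de la familia de los cánidos.",
    "Que tiene relación con el mar.",
]

word_counts = [len(d.split()) for d in definitions]
char_counts = [len(d) for d in definitions]

# stdev gives the k=1 uncertainty used in the figures above
print(f"Average words/definition: {mean(word_counts):.1f} +/- {stdev(word_counts):.1f}")
print(f"Average characters/definition: {mean(char_counts):.1f} +/- {stdev(char_counts):.1f}")
```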
## Dataset Creation
### Prompting
Each of the definitions was generated in batches using the following prompt:
<pre>
Generate in Spanish a definition of the word "[word]"
</pre>
## Considerations for Using the Data
### Social Impact of Dataset
This corpus is the first open-source complete dictionary produced by LLMs. We intend to contribute to a better understanding and development of NLP and promote responsible use.
### Biases and Hallucinations
This version has not been postprocessed to mitigate potential errors, biases or hallucinations the AI model could have generated.
## Citation
```
@misc{https://doi.org/10.48550/arxiv.2302.12746,
doi = {10.48550/ARXIV.2302.12746},
url = {https://arxiv.org/abs/2302.12746},
author = {Ortega-Martín, Miguel and García-Sierra, Óscar and Ardoiz, Alfonso and Armenteros, Juan Carlos and Álvarez, Jorge and Alonso, Adrián},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Spanish Built Factual Freectianary (Spanish-BFF): the first AI-generated free dictionary},
publisher = {arXiv},
year = {2023},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
MMG/SpanishBFF
|
[
"annotations_creators:AI-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:es",
"license:gpl-3.0",
"arxiv:2302.12746",
"region:us"
] |
2023-02-24T16:28:06+00:00
|
{"annotations_creators": ["AI-generated"], "language": ["es"], "license": "gpl-3.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "pretty_name": "Spanish Built Factual Freectianary (Spanish-BFF)"}
|
2023-03-01T13:27:48+00:00
|
e9ee3e926789efd03d1d5d4737b447a37e844c99
|
# Dataset Card for "pre-processed-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/pre-processed-issues
|
[
"region:us"
] |
2023-02-24T16:53:07+00:00
|
{"dataset_info": {"features": [{"name": "repo", "dtype": "string"}, {"name": "org", "dtype": "string"}, {"name": "issue_id", "dtype": "int64"}, {"name": "issue_number", "dtype": "int64"}, {"name": "pull_request", "struct": [{"name": "number", "dtype": "int64"}, {"name": "repo", "dtype": "string"}, {"name": "user_login", "dtype": "string"}]}, {"name": "events", "list": [{"name": "action", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "comment_id", "dtype": "float64"}, {"name": "datetime", "dtype": "int64"}, {"name": "masked_author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "text_size", "dtype": "int64"}, {"name": "bot_issue", "dtype": "bool"}, {"name": "modified_by_bot", "dtype": "bool"}, {"name": "user_count", "dtype": "int64"}, {"name": "event_count", "dtype": "int64"}, {"name": "modified_usernames", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 15607937, "num_examples": 6759}], "download_size": 7397345, "dataset_size": 15607937}}
|
2023-02-24T16:53:22+00:00
|
4dc56e367663251eef564b67f51c532038ff1707
|
SrpELTeC is a corpus of old Serbian novels first published in the period 1840-1920, digitized within COST Action CA16204: Distant Reading for European Literary History (2018-2022).
The corpus includes 120 novels with 5,263,071 words, 22,700 pages, 2,557 chapters, 158,317 passages, 567 songs, 2,972 verses, 803 segments in a foreign language, and 949 mentioned works.
The dataset consists of JSON files; the textual sentences are located in the "sents" attribute of the object root and can be obtained via:
```python
from json import load
with open("ELTeC.json") as jf:
sentences = load(jf)["sents"]
```
|
jerteh/SrpELTeC
|
[
"size_categories:1M<n<10M",
"language:sr",
"license:cc-by-4.0",
"region:us"
] |
2023-02-24T17:41:53+00:00
|
{"language": ["sr"], "license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "pretty_name": "Serbian Literary Text Collection", "field": "sents"}
|
2023-09-10T05:13:01+00:00
|
dd72c79733e21d00028273a9998ef501140eb156
|
The dataset contains text from Wikipedia articles in Serbian (obtained in early 2020), totaling 477,473 articles, as well as some of WikiSource.
The dataset consists of JSON files; the textual sentences are located in the "sents" attribute of the object root and can be obtained via:
```python
from json import load
with open("WikiKorpus.json") as jf:
sentences = load(jf)["sents"]
```
|
jerteh/SrpWiki
|
[
"language:sr",
"license:cc-by-4.0",
"region:us"
] |
2023-02-24T17:50:51+00:00
|
{"language": ["sr"], "license": "cc-by-4.0", "pretty_name": "Serbian WikiMedia dataset"}
|
2023-02-24T18:00:15+00:00
|
bbab3c0f434dbb177ddfaa0189ad1e0b34528b37
|
# Dataset Card for "sample_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Maiia/sample_dataset
|
[
"region:us"
] |
2023-02-24T18:09:36+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int64"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "I-education", "1": "I-title", "2": "O", "3": "B-title", "4": "B-education", "5": "I-collaborate_with", "6": "B-skill", "7": "I-skill", "8": "I-certification", "9": "B-collaborate_with", "10": "B-certification"}}}}], "splits": [{"name": "train", "num_bytes": 1363328, "num_examples": 5000}], "download_size": 233389, "dataset_size": 1363328}}
|
2023-02-24T18:09:44+00:00
|
4272fb8a9f930a4e89acc4541407a2a9cf7e9b17
|
# Dataset Card for "text2text-10-predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidberenstein1957/text2text-10-predictions
|
[
"region:us"
] |
2023-02-24T18:24:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 104132.0, "num_examples": 72}, {"name": "test", "num_bytes": 26033.0, "num_examples": 18}], "download_size": 86213, "dataset_size": 130165.0}}
|
2023-03-02T10:37:03+00:00
|
67b65cfa1347e0fd914fa27161a0b2eed2458a0c
|
Thanks and please support:
Ecigator is one of the well-known vape brands spun off from Giftsoar Technology Co., Ltd; it is an ISO-certified [disposable vape manufacturer](https://ecigator.com/) serving OEMs, ODMs, and OBMs since 2010.
[https://ecigator.com/](https://ecigator.com/)
|
vapecig/promptsai
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:bsd",
"region:us"
] |
2023-02-24T18:37:23+00:00
|
{"language": ["en"], "license": "bsd", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "Awesome chatGPT prompts"}
|
2023-02-24T19:56:14+00:00
|
dabe901881c7bec30a0b1dadfafab826a4ad0b96
|
# Dataset Card for "wikipedia_512_pretraining"
Wikipedia preprocessed for pretraining of models. Each sample in the dataset has an average tokenized length of 512 `RoBERTa-Base` tokens.
|
lucadiliello/wikipedia_512_pretraining
|
[
"size_categories:1M<n<10M",
"language:en",
"region:us"
] |
2023-02-24T18:40:57+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "pretty_name": "Wikipedia preprocessed for 512 tokens pretraining.", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9828026640.785877, "num_examples": 6699666}, {"name": "dev", "num_bytes": 146694277.60706097, "num_examples": 100000}, {"name": "test", "num_bytes": 146694277.60706097, "num_examples": 100000}], "download_size": 6454536577, "dataset_size": 10121415196}}
|
2023-03-24T08:03:19+00:00
|
f62928f52da2c0b8b666df90939d1cab743acd6f
|
# Dataset Card for "SRV-NLLB-Europarl-mt-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-NLLB-Europarl-mt-es
|
[
"region:us"
] |
2023-02-24T19:18:12+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 404359305, "num_examples": 473141}, {"name": "valid", "num_bytes": 56341952, "num_examples": 64517}, {"name": "test", "num_bytes": 59615690, "num_examples": 68017}], "download_size": 222999862, "dataset_size": 520316947}}
|
2023-02-24T19:19:08+00:00
|
16951295d18831440403037902df0df4201cc377
|
# Dataset Card for "SRV-NLLB-Europarl-mt-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-NLLB-Europarl-mt-en
|
[
"region:us"
] |
2023-02-24T19:21:41+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 403651116, "num_examples": 498086}, {"name": "valid", "num_bytes": 57524298, "num_examples": 69178}, {"name": "test", "num_bytes": 61047362, "num_examples": 72950}], "download_size": 221747155, "dataset_size": 522222776}}
|
2023-02-24T19:22:34+00:00
|
7a109ed1fa8347991607fd9c2d75d914a70ec603
|
# Dataset Card for "pgvs_166k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tamisalex/pgvs_166k
|
[
"region:us"
] |
2023-02-24T19:29:27+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18305310384.0, "num_examples": 133230}, {"name": "test", "num_bytes": 2201461998.552, "num_examples": 16612}, {"name": "validation", "num_bytes": 2329645319.194, "num_examples": 16593}], "download_size": 0, "dataset_size": 22836417701.746002}}
|
2023-02-28T20:12:16+00:00
|
1b86bfd46e12f9906938fb5f9102acaa8d09f32a
|
# Dataset Card for "comments_preceding_bots"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/comments_preceding_bots
|
[
"region:us"
] |
2023-02-24T19:34:28+00:00
|
{"dataset_info": {"features": [{"name": "events", "list": [{"name": "action", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "comment_id", "dtype": "float64"}, {"name": "datetime", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "data_with_bots", "list": [{"name": "bot", "dtype": "string"}, {"name": "previous", "struct": [{"name": "meta", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}, {"name": "data_without_bots", "list": [{"name": "previous", "struct": [{"name": "meta", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 286942.17, "num_examples": 183}], "download_size": 236154, "dataset_size": 286942.17}}
|
2023-02-24T19:50:14+00:00
|
e1396c12a47074eba3246b5f4cc7dc4b357bb64c
|
# Dataset Card for "FontsSmall"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arbml/FontsSmall
|
[
"region:us"
] |
2023-02-24T19:34:52+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4309913.0, "num_examples": 999}], "download_size": 2730806, "dataset_size": 4309913.0}}
|
2023-02-24T19:34:54+00:00
|
51477db22cb1cdd3842c3c55cefab6b7ba491f55
|
# AutoTrain Dataset for project: multifamily
## Dataset Description
This dataset has been automatically processed by AutoTrain for project multifamily.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<500x500 RGB PIL image>",
"target": 40
},
{
"image": "<500x500 RGB PIL image>",
"target": 34
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Balcony-Patio', 'Bathroom', 'Bedroom', 'Bike', 'Building', 'Business Center', 'Business Center Conference Room', 'Closet', 'Clubhouse', 'Clubhouse Dining Room', 'Construction', 'Dining Room', 'Dog Park', 'Fire Pit', 'Fitness Center', 'Floorplan', 'Fountain', 'Green Space', 'Grilling Area', 'Hallway', 'Headshot', 'Home Office', 'Hot Tub', 'Kitchen', 'Laundry Facility', 'Laundry Washer-Dryer', 'Leasing Office', 'Living Room', 'Living Room Fireplace', 'Logo', 'Lounge Area', 'Mail Box', 'Monument Sign', 'Neighborhood', 'Packages', 'Parking', 'Pet Washing', 'Picnic Area', 'Play Park', 'Pool', 'Pool Cabanas', 'Pool Table', 'Private Garage', 'Site-plan', 'Stock Photo', 'Tennis Court', 'View-Aerial', 'room'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 681 |
| valid | 185 |
|
lineups-io/autotrain-data-multifamily
|
[
"task_categories:image-classification",
"region:us"
] |
2023-02-24T19:46:41+00:00
|
{"task_categories": ["image-classification"]}
|
2023-02-25T00:44:29+00:00
|
8395d7463af19931263b2dff5e54d2b91bfcd382
|
williamtg/TestAudioDataSet
|
[
"license:openrail",
"region:us"
] |
2023-02-24T20:28:04+00:00
|
{"license": "openrail"}
|
2023-03-09T10:22:09+00:00
|
|
b9c4e6adf7a46cf3a0e5325455add07a21bef106
|
# Dataset Card for dev_mode-wtq
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [WikiTableQuestions homepage](https://nlp.stanford.edu/software/sempre/wikitable)
- **Repository:** [WikiTableQuestions repository](https://github.com/ppasupat/WikiTableQuestions)
- **Paper:** [Compositional Semantic Parsing on Semi-Structured Tables](https://arxiv.org/abs/1508.00305)
- **Leaderboard:** [WikiTableQuestions leaderboard on PaperWithCode](https://paperswithcode.com/dataset/wikitablequestions)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The dev_mode-wtq dataset is a small-scale dataset for the task of question answering on semi-structured tables.
This data includes the `aggregation_label` and `answer_coordinates` fields, making it easy to fine-tune any [TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas#usage-finetuning)-based model.
### Supported Tasks and Leaderboards
question-answering, table-question-answering
### Languages
en
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 27.91 MB
- **Size of the generated dataset:** 45.68 MB
- **Total amount of disk used:** 73.60 MB
An example of 'validation' looks as follows:
```
{
"id": "nt-0",
"question": "What is the duration for the last invocation?",
"answers": [
"340 ms"
],
"table": {
"header": [
"recent",
"type",
"spans",
"logs",
"errors",
"warnings",
"duration",
"resource"
],
"rows": [
[
"1",
"span",
"1",
"1",
"1",
"2",
"340 ms",
"aws-lambda-typescript-express-dev-express"
]
]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a `list` of `string` feature.
- `answer_coordinates`: a `list` of `(int, int)` tuples.
- `aggregation_label`: a `string` feature.
- `table`: a dictionary feature containing:
- `header`: a `list` of `string` features.
- `rows`: a `list` of `list` of `string` features:
- `name`: a `string` feature.
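The `answer_coordinates` field ties each answer string to its (row, column) position in the table. A sketch (hypothetical helper, not part of the dataset tooling) using the table from the example above, where "340 ms" sits in the `duration` column:

```python
# Locate answer strings in a table as (row, column) coordinates, the format
# TAPAS-style models consume. Hypothetical helper for illustration only.
header = ["recent", "type", "spans", "logs", "errors", "warnings",
          "duration", "resource"]
rows = [["1", "span", "1", "1", "1", "2", "340 ms",
         "aws-lambda-typescript-express-dev-express"]]

def answer_coordinates(rows, answers):
    coords = []
    for r, row in enumerate(rows):
        for c, cell in enumerate(row):
            if cell in answers:  # exact string match against the cell text
                coords.append((r, c))
    return coords

print(answer_coordinates(rows, ["340 ms"]))  # [(0, 6)]
```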
### Data Splits
TBA
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Panupong Pasupat and Percy Liang
### Licensing Information
Creative Commons Attribution Share Alike 4.0 International
### Citation Information
```
@inproceedings{pasupat-liang-2015-compositional,
title = "Compositional Semantic Parsing on Semi-Structured Tables",
author = "Pasupat, Panupong and Liang, Percy",
booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = jul,
year = "2015",
address = "Beijing, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P15-1142",
doi = "10.3115/v1/P15-1142",
pages = "1470--1480",
}
```
### Contributions
Thanks to [@SivilTaram](https://github.com/SivilTaram) for adding this dataset.
|
Serverless/dev_mode-wtq
|
[
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:wikitablequestions",
"language:en",
"license:cc-by-4.0",
"table-question-answering",
"arxiv:1508.00305",
"region:us"
] |
2023-02-24T20:28:22+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["wikitablequestions"], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "WikiTableQuestions-wtq", "tags": ["table-question-answering"]}
|
2023-02-24T20:33:26+00:00
|
750847f1d6a6dbe218a55f064a9f30f56f78441b
|
# Dataset for project: Pet-Ray
## Dataset Description
This G-Ray dataset has been processed by AutoTrain for Pet-Ray.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<1800x4000 RGB PIL image>",
"target": 0
},
{
"image": "<1800x4000 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['chubs'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 11 |
| valid | 3 |
|
Squirrl/autotrain-data-petscan
|
[
"task_categories:image-classification",
"region:us"
] |
2023-02-24T21:04:15+00:00
|
{"task_categories": ["image-classification"]}
|
2023-02-24T23:30:54+00:00
|
ca1455915d9c2bc260a10fde63511599b5649bdf
|
# Dataset Card for "VQAv2_sample_validation_with_img2prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_with_img2prompt
|
[
"region:us"
] |
2023-02-24T21:51:00+00:00
|
{"dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14", "sequence": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, 
{"name": "caption", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "detections_img2prompt", "dtype": "string"}, {"name": "new_info_captions", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": {"sequence": "string"}}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "new_info_captions2", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": {"sequence": "string"}}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "new_info_captions3", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": {"sequence": "string"}}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}], "splits": [{"name": "validation", "num_bytes": 17864005.0, "num_examples": 100}], "download_size": 15925587, "dataset_size": 17864005.0}}
|
2023-02-26T23:11:26+00:00
|
70e038b67d6310daa37d3089abe9da0768eb4a91
|
- This dataset has been downloaded from PubMed
- It contains abstracts and titles related to HIV
- The data has been cleaned before uploading
- It can be used for any NLP task, such as domain adaptation
|
Gaborandi/HIV_pubmed_abstracts
|
[
"region:us"
] |
2023-02-24T21:52:15+00:00
|
{}
|
2023-02-24T21:53:22+00:00
|
b1e1da9881b01d86c4771c89c057160ec694d69c
|
# Dataset Card for "dev_mode-wtq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
danwakeem/dev_mode-wtq
|
[
"region:us"
] |
2023-02-24T22:14:50+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}]}, {"name": "aggregation_label", "dtype": "string"}, {"name": "answer_coordinates", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 3765, "num_examples": 6}], "download_size": 5600, "dataset_size": 3765}}
|
2023-02-24T22:14:55+00:00
|
a549ac7d74db5a3af3f157d3216cb2292366975e
|
mzakany23/dataset1
|
[
"license:mit",
"region:us"
] |
2023-02-24T22:30:50+00:00
|
{"license": "mit"}
|
2023-02-24T22:32:31+00:00
|
|
4d73f08344589f55c3d547476370fd2015020010
|
# Dataset Card for "patched_test_p_150_f_membrane_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_150_f_membrane_m1_predictions
|
[
"region:us"
] |
2023-02-24T22:31:01+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1561849554, "num_examples": 2394171}], "download_size": 136945465, "dataset_size": 1561849554}}
|
2023-02-24T22:31:24+00:00
|
f80887743f290365b2a21e428cc6b6d07edcaf62
|
# Dataset Card for "method2test_10k_tokonized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Minata/method2test_10k_tokonized
|
[
"region:us"
] |
2023-02-24T22:34:11+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 502280468, "num_examples": 75335}, {"name": "train", "num_bytes": 66680000, "num_examples": 10000}], "download_size": 34924994, "dataset_size": 568960468}}
|
2023-02-24T23:07:18+00:00
|
56442b23a1ac967f004824efe49bcd30ab763304
|
Jalinvel3/Geneautry
|
[
"license:artistic-2.0",
"region:us"
] |
2023-02-24T22:39:38+00:00
|
{"license": "artistic-2.0"}
|
2023-02-24T22:39:38+00:00
|
|
6722554083b14f942d01d0d20c47d2685af892b6
|
# Dataset Card for "method2test_10k_tokonizedv2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Minata/method2test_10k_tokonizedv2
|
[
"region:us"
] |
2023-02-24T23:10:18+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 66680000, "num_examples": 10000}], "download_size": 4572262, "dataset_size": 66680000}}
|
2023-02-24T23:10:28+00:00
|
f66a447c8bcc75cc0d393ba629cd8b70444ee774
|
# Dataset Card for "Zhihu-KOL"
Zhihu data for training Open Assistant
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
wangrui6/Zhihu-KOL
|
[
"task_categories:question-answering",
"language:zh",
"region:us"
] |
2023-02-25T00:21:29+00:00
|
{"language": ["zh"], "task_categories": ["question-answering"], "dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}, {"name": "METADATA", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2295601241, "num_examples": 1006218}], "download_size": 1501204472, "dataset_size": 2295601241}}
|
2023-04-23T12:26:03+00:00
|
d6dc31a234c13535906c9b30dd42e71227db78e5
|
dog/fuego-20230225-025842-d1b8af
|
[
"fuego",
"region:us"
] |
2023-02-25T01:58:43+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230225-025842-d1b8af", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/actlearn-fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-25T02:03:32+00:00
|
|
51d076e073f1c1a65ee7a16fa1886486aad9fd8f
|
# pCLUE
pCLUE: Large-scale Prompt-based Dataset for Multi-task and Zero-shot Learning in Chinese
### Converted data
Data volume: 1.2 million training examples, 73 prompts
1. Training set train.json: 1,200,705
2. Validation set dev.json: 100,000
3. Public test set test_public.json: 129,556
4. Test set test.json: 250,461
For the full data, see: ./datasets
### Currently covers 9 datasets:
1. Single-label classification: tnews
2. Single-label classification: iflytek
3. Natural language inference: ocnli
4. Semantic matching: afqmc
5. Coreference resolution: cluewsc2020
6. Keyword recognition: csl
7. Free-form reading comprehension: c3
8. Extractive reading comprehension: cmrc2018
9. Idiom cloze reading comprehension: chid
### Field descriptions and evaluation metrics:
input: the model input
target: the model output
type: the task type, one of reading comprehension (mrc), classification (classify), generation (generate), natural language inference (nli)
Evaluation metrics: reading comprehension (em), classification (acc), generation (em), natural language inference (acc)
answer_choices: the answer options (only present for classification and inference tasks)
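As a rough illustrative sketch (not code from the official repo), one line of a pCLUE JSON file can be parsed into the fields above and routed to its evaluation metric like this; the `parse_pclue_line` helper and the metric table are assumptions based on the field descriptions:

```python
import json

# Per-task-type metric, following the evaluation standard above:
# mrc and generate use exact match (em); classify and nli use accuracy (acc).
METRIC_BY_TYPE = {"mrc": "em", "generate": "em", "classify": "acc", "nli": "acc"}

def parse_pclue_line(line):
    """Parse one JSON line of pCLUE into its fields."""
    ex = json.loads(line)
    return {
        "input": ex["input"],
        "target": ex["target"],
        "type": ex["type"],
        # answer_choices is only present for classification/inference tasks
        "answer_choices": ex.get("answer_choices", []),
        "metric": METRIC_BY_TYPE.get(ex["type"], "em"),
    }

sample = '{"input": "以下两句话的意思相同的吗?\\n答案:", "target": "是的", "answer_choices": ["是的", "不是"], "type": "classify"}'
parsed = parse_pclue_line(sample)
print(parsed["metric"])  # acc
```

Types outside the table (e.g. the anaphora_resolution examples below) fall back to exact match here; that default is a guess, not part of the official spec.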
### Submission format:
See resources/promptclue_submit_examples. Submit a single file in which each line is a JSON object, e.g. {"target": "2000万元"}
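A minimal sketch of producing such a submission file; `write_submission` is a hypothetical helper, and the predictions passed in would come from your own model:

```python
import json

def write_submission(predictions, path):
    """Write one {"target": ...} JSON object per line, as the format requires."""
    with open(path, "w", encoding="utf-8") as f:
        for pred in predictions:
            # ensure_ascii=False keeps Chinese answers readable in the file
            f.write(json.dumps({"target": pred}, ensure_ascii=False) + "\n")

write_submission(["2000万元", "是的"], "submission.json")
```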
### Examples:
{"input": "哪个类别最好的描述了这篇新闻?扣篮王拉文:精彩暴扣表演!炸\n选项:故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏\n答案:", "target": "电竞", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "你会把这个描述推荐给哪方面的人?银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿快来施放属于你的寒冰魔法吧特殊效果雪花缓缓从上方飘落,手指触碰之处有冰魔法出现爱莎女王脱掉了封印魔法她的手套,在冰雪天地中建造了属于她一个人的辉煌宫殿。安娜中了冰魔法需要真爱之吻才能获救,最终姐妹二人齐心揭穿了异国王子的阴谋拯救了阿伦戴尔。解锁方法随意滑动屏幕一定距离后解锁要是觉得好玩,记得推荐给好朋友哦,,1.新增多张精美冰雪奇缘壁纸2.增加冰雪图钉,锁定当前壁纸功能3.内存,减小电量消耗\n答案:", "target": "休闲益智", "answer_choices": ["银行", "社区", "电商", "支付", "经营", "卡牌", "借贷", "驾校", "理财", "职考", "新闻", "旅游", "交通", "魔幻", "医疗", "影像", "动作", "工具", "体育", "小说", "运动", "相机", "工具", "快递", "教育", "股票", "菜谱", "行车", "仙侠", "亲子", "购物", "射击", "漫画", "小学", "同城", "成人", "求职", "电子", "艺术", "赚钱", "约会", "经营", "兼职", "视频", "音乐", "英语", "棋牌", "摄影", "养生", "办公", "政务", "视频", "论坛", "彩票", "直播", "其他", "休闲", "策略", "通讯", "买车", "违章", "地图", "民航", "电台", "语言", "搞笑", "婚恋", "超市", "养车", "杂志", "在线", "家政", "影视", "装修", "资讯", "社交", "餐饮", "美颜", "挂号", "飞行", "预定", "票务", "笔记", "买房", "外卖", "母婴", "打车", "情侣", "日程", "租车", "博客", "百科", "绘画", "铁路", "生活", "租房", "酒店", "保险", "问答", "收款", "竞技", "唱歌", "技术", "减肥", "工作", "团购", "记账", "女性", "公务", "二手", "美妆", "汽车", "行程", "免费", "教辅", "两性", "出国", "婚庆", "民宿"], "type": "classify"}
{"input": "阅读以下文章,并选择一个合适的成语。文章:\n赵宝刚导演表示,当看到温家宝总理在灾区安慰失去亲人__的孩子时,他再也控制不住自己的感情,不禁潸然泪下。他非常关心灾区的孤儿,目前正计划为孩子们做一些更有意义的事情。当记者问到是否会考虑日后拍一部地震题材的影片时,赵宝刚导演则明确表示自己更愿意为灾区做一些实事,目前正在积极了解灾区儿童的需要,为下一步援助工作做准备。\n 候选成语:忧心忡忡,提心吊胆,后顾之忧,土豪劣绅,叫苦不迭,用武之地,无计可施,明眸皓齿,孤立无援,步步为营。答案是:", "target": "孤立无援", "answer_choices": ["忧心忡忡", "提心吊胆", "后顾之忧", "土豪劣绅", "叫苦不迭", "用武之地", "无计可施", "明眸皓齿", "孤立无援", "步步为营"], "type": "mrc"}
{"input": "这是关于哪方面的新闻?黄埔军校老师有哪些?\n选项:故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏\n答案:", "target": "军事", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "这个是关于哪方面的App应用程序的描述?银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿“魅爱同城美女主动视频陪聊神器,女神绝密私照,一对一视频畅聊,保护你的私密。清纯的萌妹子、火辣的舞女郎,惊艳的时装秀,浪漫的午夜邂逅,伴你告别寂寞和美女主播视频聊天、交友、热舞零距离互动。让你随时随地享受偶遇的激情与惊喜与网红视频网红主播与你在线视频交友,浪漫邂逅。生活动态圈高颜值女神用短视频和照片与你分享生活中的点滴。\n答案:", "target": "约会社交", "answer_choices": ["银行", "社区", "电商", "支付", "经营", "卡牌", "借贷", "驾校", "理财", "职考", "新闻", "旅游", "交通", "魔幻", "医疗", "影像", "动作", "工具", "体育", "小说", "运动", "相机", "工具", "快递", "教育", "股票", "菜谱", "行车", "仙侠", "亲子", "购物", "射击", "漫画", "小学", "同城", "成人", "求职", "电子", "艺术", "赚钱", "约会", "经营", "兼职", "视频", "音乐", "英语", "棋牌", "摄影", "养生", "办公", "政务", "视频", "论坛", "彩票", "直播", "其他", "休闲", "策略", "通讯", "买车", "违章", "地图", "民航", "电台", "语言", "搞笑", "婚恋", "超市", "养车", "杂志", "在线", "家政", "影视", "装修", "资讯", "社交", "餐饮", "美颜", "挂号", "飞行", "预定", "票务", "笔记", "买房", "外卖", "母婴", "打车", "情侣", "日程", "租车", "博客", "百科", "绘画", "铁路", "生活", "租房", "酒店", "保险", "问答", "收款", "竞技", "唱歌", "技术", "减肥", "工作", "团购", "记账", "女性", "公务", "二手", "美妆", "汽车", "行程", "免费", "教辅", "两性", "出国", "婚庆", "民宿"], "type": "classify"}
{"input": "阅读理解:\n有一次,有人问马克·吐温是否记得他第一次是怎样挣到钱的。他想了很久,然后说:“对,我还记得很清楚,那是我在小学读书的时候。那时,小学生们都不尊重自己的老师,而且不爱惜学校的财产,经常弄坏桌椅。所以我们学校就定了一条规则,哪个学生用铅笔或小刀弄坏了桌椅,他就得在全校学生面前挨老师的打,或者交五元罚款。有一天,我弄坏了我的书桌,只好回家对父亲说,我违反了学校的规定,要么罚五元,要么在全校学生面前受到挨打的处分。父亲说当着全校学生的面挨打真是太丢脸了,他答应给我五块钱,让我交给学校。但是在给我这五块钱之前,他把我带到楼上,狠狠地打了我一顿。我想,既然我已经挨过一顿打了,那就干脆当着全校学生的面再挨一顿,这样就可以把那五块钱留下来。我真的这样做了,那就是我第一次挣到的钱。” \n问:父亲为什么给马克·吐温钱? 选项:喜欢他,奖励他,怕丢脸,感谢他\n答案:", "target": "怕丢脸", "type": "mrc", "answer_choices": ["喜欢他", "奖励他", "怕丢脸", "感谢他"]}
{"input": "“全面加强教师特别是农村教师培训,鼓励大学生、师范生到基层、农村任教”根据前面的段落,以下是否是真的“农村教师的培训需要特别重视”?是的,不是,或也许?\n答案:", "target": "是的", "answer_choices": ["是的", "不是", "也许"], "type": "nli"}
{"input": "给定“国民经济保持较快增长”我们应该假定“国民经济一个月内还会保持快速增长”是真的吗?是的,不是,或也许?\n答案:", "target": "也许", "answer_choices": ["是的", "不是", "也许"], "type": "nli"}
{"input": "这个是关于哪方面的App应用程序的描述?银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿移动吧是移动官方面向青海移动用户推出的移动智能终端网上营业厅。新版的移动吧为用户提供方便快捷的账单查询、业务办理、积分查询、通讯录等功能。随时随地尽享青海移动的贴心服务,方便触手可及。查询更丰富直观准确、消费透明充值更优惠专享优惠、充值赠费办理更便捷套餐流量、随时办理好友更亲密相互关注、贴心关怀活动更精彩活动不停、优惠不断更新内容1修复已知Bug;2优化客户端访问速度;3提升活动体验,丰富奖励资源。\n答案:", "target": "工具", "answer_choices": ["银行", "社区", "电商", "支付", "经营", "卡牌", "借贷", "驾校", "理财", "职考", "新闻", "旅游", "交通", "魔幻", "医疗", "影像", "动作", "工具", "体育", "小说", "运动", "相机", "工具", "快递", "教育", "股票", "菜谱", "行车", "仙侠", "亲子", "购物", "射击", "漫画", "小学", "同城", "成人", "求职", "电子", "艺术", "赚钱", "约会", "经营", "兼职", "视频", "音乐", "英语", "棋牌", "摄影", "养生", "办公", "政务", "视频", "论坛", "彩票", "直播", "其他", "休闲", "策略", "通讯", "买车", "违章", "地图", "民航", "电台", "语言", "搞笑", "婚恋", "超市", "养车", "杂志", "在线", "家政", "影视", "装修", "资讯", "社交", "餐饮", "美颜", "挂号", "飞行", "预定", "票务", "笔记", "买房", "外卖", "母婴", "打车", "情侣", "日程", "租车", "博客", "百科", "绘画", "铁路", "生活", "租房", "酒店", "保险", "问答", "收款", "竞技", "唱歌", "技术", "减肥", "工作", "团购", "记账", "女性", "公务", "二手", "美妆", "汽车", "行程", "免费", "教辅", "两性", "出国", "婚庆", "民宿"], "type": "classify"}
{"input": "足三两()是麦当劳推出的一种汉堡包,为继巨无霸后的另一招牌食品。英文名称的意思是「四分之一磅」,因为牛肉重量大约等如四分之一磅(烹调前计),而四分之一磅大约等于三两重,故在香港被称为「足-{}-三两」。在麦当劳于1975年进入香港市场时,Quarter Pounder曾被命名为「大汉-{}-堡」,而Quarter Pounder with Cheese则被命名为「大芝-{}-士汉-{}-堡」,但于1980年代后停售。2000年代初,曾经作为推广产品重新命名为「足-{}-三两」(或写作足-{}-三両),但推广期后便继续停售。直至2007年起,麦当劳在香港推出「Double足-{}-三两」(Double Quarter Pounder,即是双重份量的足-{}-三两)作为MacTonight套餐,于香港时间每晚21:00至翌日凌晨04:00间供应。由于反应理想,香港麦当劳于2009年将其发售时段提早至上午11时开始,并重新引入常规版的「足-{}-三两」作为长期发售的项目。Double足-{}-三两已于2017年初停售,常规版足-{}-三两亦于同年3月9日起停售。事实上,在香港售卖的「足-{}-三两」实际重量只有100克。香港麦当劳的餐牌上足-{}-三两及Double足-{}-三两都会以小字体加上「烹调前」标签,以符合香港海关《商品说明条例》的规定。一个正常的足三两,包括有四分之一磅(113.4克)牛肉(烹调前计)、两块芝麻面包、酸瓜、茄酱及生洋葱,而很多时候足三两也会有一块芝士。\n 从上面的段落中,根据一个合理的答案:麦当劳\n那么问题可能是:", "target": "足三两是哪个品牌的招牌食品之一?", "type": "mrc"}
{"input": "“切实转变工作作风”根据前面的段落,以下是否是真的“这是公文话语”?是的,不是,或也许?\n答案:", "target": "是的", "answer_choices": ["是的", "不是", "也许"], "type": "nli"}
{"input": "“逐步实行中等职业教育免费,今年先从农村家庭经济困难学生和涉农专业做起”记住上面的文字,考虑:“后年就能够全面实现中等职业教育免费”这是总是,绝不,或有时正确的?\n答案:", "target": "有时", "answer_choices": ["总是", "绝不", "有时"], "type": "nli"}
{"input": "阅读下列论文的摘要,然后生成这篇摘要的多个关键词。摘要:通过对泥河湾盆地43条剖面和6个钻孔晚新生代地层和微体古生物(介形类和有孔虫)的调查研究,发现非常丰富的介形类,计26属70余种,有孔虫4属4种,其中介形类自下而上可明显地划分为5个组合带:(1)Potamocyprisplana-Candoniella-Ilyocypris组合带;(2)Leucocythere-Ilyocypris-Candoniella组合带;(3)Leucocythere-Cytherissa-Limnocythere组合带;(4)Ilyocypris-Limnocythereflexa-Limnocytheredubiosa组合带;(5)Limnocytheredubiosa-Limnocytheresancti-Patricii-Ilyocypris组合带.按以上5个介形类组合带的分布,第1组合带及所含地层红崖村组和石匣组的时代为上新世;第2~4组合带及所含地层泥河湾组的时代为早更新世;第5组合带为中-晚更新世,分布于虎头梁组和许家窑组,虎头梁组置中更新世为宜,许家窑组为晚更新世.根据5个介形类组合带和有孔虫的分布及介形类的始现、繁盛、兴衰的演替特征,对泥河湾古湖和盆地的形成经历了上新世的起始,早更新世早期的扩展,中、晚期稳定、发展、湖面最大,中更新世向西部退缩和晚更新世消亡、桑干河水系形成五个发展阶段的演化进行了探讨.。摘要的关键词有这些:\n答案:", "target": "介形类,晚新生代,环境演化,生物地层", "answer_choices": "", "type": "generate"}
{"input": "这个App应用程序的描述会出现在哪个栏目?•只需随身携带手机即可随时了解您步行、跑步和骑车的运动情况。达成健身目标•设定时长或步数目标,并了解自己的进度。•获得根据健身效果提供的运动目标建议。全面掌握健身情况•将第三方设备和应用与Google健身关联后,您就可以在一个地方集中查看您的所有健身数据。随时随地使用•兼容所有AndroidWer设备。•还可以通过浏览器www.google.com/fit和平板电脑使用Google健身。更新内容提升体验,修复部分问题。\n选项:银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿\n答案:", "target": "运动健身", "answer_choices": ["银行", "社区", "电商", "支付", "经营", "卡牌", "借贷", "驾校", "理财", "职考", "新闻", "旅游", "交通", "魔幻", "医疗", "影像", "动作", "工具", "体育", "小说", "运动", "相机", "工具", "快递", "教育", "股票", "菜谱", "行车", "仙侠", "亲子", "购物", "射击", "漫画", "小学", "同城", "成人", "求职", "电子", "艺术", "赚钱", "约会", "经营", "兼职", "视频", "音乐", "英语", "棋牌", "摄影", "养生", "办公", "政务", "视频", "论坛", "彩票", "直播", "其他", "休闲", "策略", "通讯", "买车", "违章", "地图", "民航", "电台", "语言", "搞笑", "婚恋", "超市", "养车", "杂志", "在线", "家政", "影视", "装修", "资讯", "社交", "餐饮", "美颜", "挂号", "飞行", "预定", "票务", "笔记", "买房", "外卖", "母婴", "打车", "情侣", "日程", "租车", "博客", "百科", "绘画", "铁路", "生活", "租房", "酒店", "保险", "问答", "收款", "竞技", "唱歌", "技术", "减肥", "工作", "团购", "记账", "女性", "公务", "二手", "美妆", "汽车", "行程", "免费", "教辅", "两性", "出国", "婚庆", "民宿"], "type": "classify"}
{"input": "这个是关于哪方面的App应用程序的描述?银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿神秘又惊喜的万圣节到啦快来宝宝超市挑选你最爱的南瓜灯和面具吧还可以挑个礼服画个妆,打造超炫的万圣节造型呢和奇奇一起学会在超市购物,成为妈妈购物的好帮手吧丰富商品水果,蔬菜,玩具,零食…各种商品一应俱全模拟真实超市购物的场景,让宝宝体验超市购物的乐趣。根据清单购物你能帮妈妈买到清单上的东西吗对照清单购买需要的东西,让孩子有目的性的逛超市,帮宝宝树立正确的消费观。模拟结账别忘记结账哟~所有商品一共8元,付了10元,该找回多少钱呢,你能帮奇奇算一算吗丰富小游戏鱼缸捞鱼、搭配你喜欢的蛋糕、帮试妆员化上美丽的妆…丰富趣味小游戏,乐趣无穷宝宝巴士以孩子的兴趣启蒙为出发点,从健康、语言、社会、科学、艺术五大领域关注幼儿成长,吸取蒙氏教育精髓,根据幼儿不同年龄段左右脑发育、敏感期特点和学习重点来设计产品,打造“年龄+能力”的多元化产品体系。让孩子在游戏中独立思考,自由学习,享受探索世界的乐趣。宝宝巴士儿童早教pp,众多儿童早教产品的一致选择,孩子从小学宝宝巴士儿歌,贝瓦儿歌,儿歌点点,宝宝树,小伴龙,贝乐虎儿歌,咔哒故事,伴鱼绘本,宝宝手工零食,宝宝时尚设计师等使用者的一致推荐。设计理念宝宝巴士BbyBus,专注启蒙,而不仅仅是教育。我们专注于启发,而不只是学习。我们专注于能力培养,而不只是单一认知。我们专注于寓教于乐,而不是填鸭式教学。宝宝巴士,快乐启蒙全球3.5亿家庭用户的早教首选,您身边的幼儿教育专家搜索宝宝巴士,就可以下载宝宝巴士的所有早教APP了哦~欢迎联系微信宝宝巴士微博@宝宝巴士官网http//www.bbybus.com邮箱[email protected]更新内容不放过任何可以提升体验的地方,优化细节,让游戏体验更上一层楼贴心的小bug修复,提升稳定性和流畅度,畅玩无压力搜索宝宝巴士,就可以下载宝宝巴士的所有早教APP了哦~欢迎加入宝宝巴士官方Q群288190979,一起为孩子做更多更好的产品。\n答案:", "target": "亲子儿童", "answer_choices": ["银行", "社区", "电商", "支付", "经营", "卡牌", "借贷", "驾校", "理财", "职考", "新闻", "旅游", "交通", "魔幻", "医疗", "影像", "动作", "工具", "体育", "小说", "运动", "相机", "工具", "快递", "教育", "股票", "菜谱", "行车", "仙侠", "亲子", "购物", "射击", "漫画", "小学", "同城", "成人", "求职", "电子", "艺术", "赚钱", "约会", "经营", "兼职", "视频", "音乐", "英语", "棋牌", "摄影", "养生", "办公", "政务", "视频", "论坛", "彩票", "直播", "其他", "休闲", "策略", "通讯", "买车", "违章", "地图", "民航", "电台", "语言", "搞笑", "婚恋", "超市", "养车", "杂志", "在线", "家政", "影视", "装修", "资讯", "社交", "餐饮", "美颜", "挂号", "飞行", "预定", "票务", "笔记", "买房", "外卖", "母婴", "打车", "情侣", "日程", "租车", "博客", "百科", "绘画", "铁路", "生活", "租房", "酒店", "保险", "问答", "收款", "竞技", "唱歌", "技术", "减肥", "工作", "团购", "记账", "女性", "公务", "二手", "美妆", "汽车", "行程", "免费", "教辅", "两性", "出国", "婚庆", "民宿"], "type": "classify"}
{"input": "参考下面的段落,回答下列问题:\n段落:因吊钟的花朵通常在农历新年前后开花,故英文又名为Chinese New Year Flower,意即中国新年花。在清代中叶开始已有吊钟作为年花的习俗,取其「金钟一响,黄金万两」的吉兆,同时吊钟花的花朵都是生长在枝顶上,亦有高中科举之寓意,古时百姓因希望子弟能高中科举,就砍伐吊钟花带回家作为年花。不过近年因人们觉“吊钟”和“吊终”谐音,不吉利,所以较少人以吊钟作为年花。吊钟是一种落叶或半常绿灌木,可高约7米,但常高3米。树皮呈灰黄色,多分枝,小枝呈淡褐色。叶长圆形或倒卵状长圆形,先端渐尖,基部渐狭而成短柄,常密集生于枝顶,互生,革质,表面绿色而背面淡绿色,长5-10厘米,阔2-4厘米,全缘或顶部疏生细齿,叶两面无毛,侧脉6-7对,中脉两面清晰呈羽状伸出,网脉两面清晰,叶短柄长约5-20厘米,灰黄色呈圆柱状无毛。花为伞房花序顶生,花粉红色或红色,常5-8朵,下垂呈钟型,从枝顶覆瓦状排列的红色大苞片内生出,苞片长圆形或长方形,膜质,花梗绿色无毛,长约1.5-2厘米,花萼5裂,披针形先端披纤毛,长约2-4厘米,花冠呈宽钟状,口部5裂,裂片长约1-1.2厘米,裂片钝圆,轻微反卷白色,雄蕊8枚,雌蕊1枚,雌蕊较雄蕊长。果为蒴果,椭圆形无毛,淡黄色,具5梭,长约8-12厘米,果柄直立粗壮,长约3-5厘米。种子有3-5角或翅。喜温暖湿润,日光充足,土壤肥沃含腐殖质及排水良好的土壤。可以使用播种、扦插法及压条法繁殖。\n问题:吊钟花如何进行繁殖?\n答案:", "target": "播种、扦插法及压条法", "type": "mrc"}
{"input": "从医院打完针、开了药回来。母亲就赶到单位去上班了。走前,她把我托付给禾寡妇(候选词),请她(代词)关照我。。上面的句子中,代词“她”指代的是“禾寡妇”吗?选项:是的,不是。答案:", "target": "是的", "type": "anaphora_resolution", "answer_choices": ["是的", "不是"]}
{"input": "《1997年郡尉职权法案》()于1997年生效,是一项英国国会法案,来厘订大不列颠委任的郡尉(Lord Lieutenant)所管辖的地区。根据《1888年地方政府法案》,郡尉是被委派到每一个郡。可是,这个法案所定义的区域混杂了新的行政郡及郡的自治区。实际上,影响很微小,因为只有少数行政郡的边界跟原来的不一样。直到1965年大伦敦及亨廷登-彼得伯勒郡的成立,导致米德尔塞克斯郡尉办公室、伦敦郡郡尉办公室、亨廷登郡郡尉办公室被废除,取而代之就是大伦敦郡尉及亨廷登-彼得伯勒郡尉。1974年,英格兰及威尔斯内的行政郡及郡自治区被废除。一项大型改革也同时推行。所有郡尉辖区都被划分为都会郡和非都会郡。而1973年《苏格兰地方政府法案》则不跟从新的苏格兰地区来厘订郡尉辖区,反而从传统郡中拼合起来。因此,两者结合导致产生出来的郡尉辖区完全不跟从原有的郡。大部分这些郡尉辖区都没有留下来。在1990年代中期的英国地方政府改革中,很多非都会郡都开始重组成为单一管理区。苏格兰及威尔斯的地方政府过渡成为只由单一管理区所组成。这个时候开始草拟这个法案的计划,把郡尉辖区从地方政府再次分出来。虽然法案没有使用这个计划,但这些地方成了英格兰的名誉郡。\n 参考上述上下文,改革推行后,所有郡尉辖区被划分为什么?\n答案:", "target": "都会郡和非都会郡", "type": "mrc"}
{"input": "香港2004年继去年七一游行后再次经历了巨大政治争议,4月全国人民代表大会常务委员会第二次行使权力解释基本法,并否决了0708年双普选。5月,商业电台多名著名节目主持人指受到压力相继暂停节目,发生了「商台名嘴封咪事件」。7月1日,仍有数以十万计市民参与七一游行表达争取民主诉求。9月,第三届立法会选举刷新了历届投票纪录,有178万多人投票(投票率55.64%)。经济方面,去年发生沙士事件后情况逐渐改善,失业率下跌至2004年第四季的6.5%,是近三年以来的低位,年内本地生产总值增长8.1%,是自1987年以来的第二快增长,历时68个月的通缩终于结束,经济复苏主要受惠于东亚、欧美国等主要市场的强劲需求,以及中国内地对外贸易畅旺和内部需求殷切所带动。然而去年沙士期间,带来经济下滑以及增加开支,政府账目录得赤字401亿。下列节庆,如无注明,均是香港的公众假期,同时亦是法定假日(俗称劳工假期)。有 # 号者,不是公众假期或法定假日(除非适逢星期日或其它假期),但在商业炒作下,市面上有一定节庆气氛,传媒亦对其活动有所报导。详情可参看香港节日与公众假期。\n 从上面的段落中,根据一个合理的答案:受惠于东亚、欧美国等主要市场的强劲需求,以及中国内地对外贸易畅旺和内部需求殷切所带动。\n那么问题可能是:", "target": "香港2004年经济复苏的原因是什么?", "type": "mrc"}
{"input": "这是关于哪方面的新闻: 故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏?首次承认落后,美媒披露中国高超音速导弹技术领先美国\n答案:", "target": "军事", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "这是关于哪方面的新闻: 故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏?未来5年,教师会成为高收入人群吗?\n答案:", "target": "国际", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "阅读下面短文,从短文后给出的候选项中选出最佳选项。\n 新浪体育讯叠泉自开业以来,以其球场精良的设计、球会周到的服务,在业界的影响力不断提高,吸引了大批高尔夫爱好者慕名来到球会,这其中包括大家__的各界知名人士,政界、财经、实业、演艺界等有社会公众影响力的人物#idiom593805#。然而他们却拥有着很多共同点:他们都是社会各界的领袖精英;他们都在各自的领域颇有建树;他们都在接触叠泉后被其美丽而又富有挑战的场地所折服,#idiom593806#。 \n 候选项:神龙见首,各式各样,耳熟能详,不一而足,一应俱全,流连忘反,不胜枚举,沾沾自喜,一无所有,衣食住行。最佳选项是:", "target": "耳熟能详", "answer_choices": ["神龙见首", "各式各样", "耳熟能详", "不一而足", "一应俱全", "流连忘反", "不胜枚举", "沾沾自喜", "一无所有", "衣食住行"], "type": "mrc"}
{"input": "唐音是日本汉字音(音读)的一类。广义的「唐音」(唐宋音)指镰仓时代以后直至近代传入日本的汉字音,也就是明清时期的南方标准语「南京官话」。包含室町时代传入的「宋音」与狭义的「唐音」,即江户时代(明清)传入的汉字音。「唐音」的「唐」与「吴音」的「吴」和「汉音」的「汉」一样,并非指朝代,而是对中国的泛称。本文以论述狭义的唐音为主。江户时代传入的「唐音」与之前的「宋音」一样,主要限于佛典诵读及学问研究等,对一般用语的影响很小,仅限于特定的词语。唐音内部尚有不同的系统。就来源而言,大体分为以下三系。第一是隐元隆琦(福州府福清县人)于承应三年(1654)渡日后建立的黄檗宗所传承的用于诵读清规的明代音。第二是延宝五年(1677)渡日的曹洞宗心越派开祖心越兴俦(杭州人)所传的清规和琴谱(明乐)的诵读音。第三是江户时代的汉语学者(1674-1728)及韵镜学者文雄(1700-1763)等研究者通过长崎的通事(翻译官)等所学的中国音。有坂秀世氏将此三类分别称为黄檗唐音、心越系唐音和译官系唐音。这些音皆主要源于明末清初的南京官话音。相比于镰仓时代的宋音反映出更新的音韵变化。唐音由于母胎音的关系,带有明显的类似于现代官话和吴语发音的特色。甚至宕摄入声字也有的以エツ表示,如 阁ケツ。反映这些韵的韵腹为中母音。唐音的例词如下列举(此处一并列举可能为宋音的词)。椅子(イス) 蒲団(フトン) 行灯(アンドン) 行脚(アンギャ) 馅(アン)明(ミン) 清(シン) 普请(フシン) 白汤(パイタン) 石灰(シックイ) 馒头(マンジュウ)\n 从上面的段落中产生一个问题:", "target": "「唐音」的「唐」与「吴音」的「吴」和「汉音」的「汉」都指什么", "type": "mrc"}
{"input": "“还还没有,没有回来呢.”仅使用以上描述和你对世界所了解的,“有人还没有回来”是正确,错误,或未知?\n答案:", "target": "正确", "answer_choices": ["正确", "错误", "未知"], "type": "nli"}
{"input": "这些关键词“通用航空,导航系统,航图管理,航空器”代表了这篇论文的摘要:“为满足通用航空器对结构简单、价格低廉的导航系统的需求,提出一种机载便携式导航系统方案。系统以航路图作为背景,通过标定技术实现航图像素坐标与经纬度坐标的配准,并通过对航图的分割与四叉树管理,降低了对设备内存的需求,随着航空器位置更新,系统通过平移、旋转航图实现对航空器的导航。仿真实验结果表明,航空器在航图上定位精确,系统对于航图的平移、旋转响应准确,便携式导航系统可以满足通用航空器导航的需求,对通航飞行安全提供了一定的技术支持。”。这是正确的吗?\n选项:是的,不是\n答案:", "target": "不是", "answer_choices": ["是的", "不是"], "type": "classify"}
{"input": "根据短文内容,选出缺少的成语填在下划线处。\n 梅柏肯__。“你未经我的许可就擅自结婚,对我而言,要废除这个婚姻#idiom588293#。”他的眼睛闪着微光。“事实上,我相信你会发现登记你们结婚的记录员已经神秘失踪,而替你们主持婚礼的牧师已搬到法国。你想要证明自己结了婚恐怕是难上加难。” \n 候选成语:借花献佛,嗤之以鼻,易如反掌,投桃报李,求之不得,大失所望,虚位以待,无人之境,喜出望外,落井下石。 正确答案是:", "target": "嗤之以鼻", "answer_choices": ["借花献佛", "嗤之以鼻", "易如反掌", "投桃报李", "求之不得", "大失所望", "虚位以待", "无人之境", "喜出望外", "落井下石"], "type": "mrc"}
{"input": "这是关于哪方面的新闻?买家付了款却没有购房资格,卖家能解除房屋买卖合同吗?\n选项:故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏\n答案:", "target": "房产", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "阅读短文:\n 方宏进在与律师商量后决定于今日将__于天下。方宏进昨日接受了个别媒体的电话采访,并不避讳自己现在很麻烦。据悉,方宏进身上牵扯的官司不止此次今麦郎这一起,之前还和多家企业发生矛盾,精通金融知识的他一直希望在商业场上大展拳脚,加之其之前央视名嘴的身份,他一直坚信自己能成功。不过,成立了北京澳卫时代广告公司(简称澳卫)的他生意方面却不顺利,记者昨日得悉,该公司已被吊销了营业执照,公司原址也已易主。记者从方宏进一位朋友那边了解到,方宏进经常用酒精麻痹自己,日前接受记者电话采访,还用一起喝酒来“打掩护”,拒绝回应实质性内容。 \n 从候选成语“扫地出门,一网打尽,顺藤摸瓜,狗血喷头,真相大白,走投无路,逍遥法外,治病救人,东窗事发,名正言顺”中选出最适合填在下划线处的成语。正确答案是:", "target": "真相大白", "answer_choices": ["扫地出门", "一网打尽", "顺藤摸瓜", "狗血喷头", "真相大白", "走投无路", "逍遥法外", "治病救人", "东窗事发", "名正言顺"], "type": "mrc"}
{"input": "“也是作践你自己,好歹我总是你的女儿”我们这样说有道理吗“我是你的女儿改变不了”?是的,不是,或也许?\n答案:", "target": "是的", "answer_choices": ["是的", "不是", "也许"], "type": "nli"}
{"input": "阅读以下文章,并选择一个合适的成语。文章:\n新浪娱乐讯一向在银幕上保持文艺、内敛气质的黄璐,近日在最新写真中彰显出自身阳光、青春的一面,粉色系运动装扮搭配__的绿茵场背景,如夏日般朝气蓬勃的年轻气息扑面而来,吸引众人目光。\n 候选成语:郁郁葱葱,万家灯火,高楼大厦,车水马龙,欣欣向荣,浮光掠影,东西南北,乔装打扮,下里巴人,四通八达。答案是:", "target": "郁郁葱葱", "answer_choices": ["郁郁葱葱", "万家灯火", "高楼大厦", "车水马龙", "欣欣向荣", "浮光掠影", "东西南北", "乔装打扮", "下里巴人", "四通八达"], "type": "mrc"}
{"input": "阅读以下对话并回答问题。\n女:今天已经三月十五号了,那个调研报告什么时候可以完成?男:下个月中旬应该可以。问题:男的打算什么时候完成报告?选项:3月初,3月15号,4月中旬,4月底\n答案:", "target": "4月中旬", "answer_choices": ["3月初", "3月15号", "4月中旬", "4月底"], "type": "mrc"}
{"input": "阅读下列论文摘要,然后判断下面的这些关键词是否都是论文摘要合适的关键词?\n摘要:集成多跳中继技术的WiMAXMesh网络中,当发送功率和信道数目一定时,用户接入链路的传输速率直接取决于用户到中继的距离.在满足用户到中继距离要求的条件下,研究最少中继部署问题具有保证网络性能、降低组网成本的意义.文中将该问题转化为最少团划分问题,基于用户邻居信息提出启发式算法MAXDCP,基于用户位置信息提出启发式算法GEOCP.模拟结果表明:与该问题的最新算法MIS相比,在相同时间复杂度下,MAXDCP部署中继的个数平均减少23.8%,GEOCP平均减少35%;与已有PTAS算法HS相比,GEOCP部署中继个数平均减少18.5%,且时间复杂度更低.MAXDCP和GEOCP很好地保证了网络性能、降低了组网成本.\n关键词:问题,信息,中继,组网。答案是:\n选项:是的,不是\n答案:", "target": "不是", "answer_choices": ["是的", "不是"], "type": "classify"}
{"input": "哪个类别最好的描述了这篇新闻?芦淞区档案史志局指导档案规范化管理工作\n选项:故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏\n答案:", "target": "财经", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "根据短文内容,选出缺少的成语填在下划线处。\n 慢慢地,“朝圣”变成对亚洲无法满足的好奇,而不是倒拨世纪之钟的时针,寻觅历史的源头。于是,他想到哪儿就到哪儿,不管亚历山大大帝是不是到过那个地方。他骑马翻过东土耳其的__,看见积雪覆盖着山坡,从撒哈拉大沙漠#idiom598242#吹来的黄沙,又将那山坡变成粉红色。现在,让他#idiom598243#的是,大自然神奇的力量和人类如何面对大自然、改造大自然。 \n 候选成语:崇山峻岭,冰天雪地,肃然起敬,一望无际,翻山越岭,各抒己见,一马平川,玄之又玄,开诚布公,成年累月。 正确答案是:", "target": "崇山峻岭", "answer_choices": ["崇山峻岭", "冰天雪地", "肃然起敬", "一望无际", "翻山越岭", "各抒己见", "一马平川", "玄之又玄", "开诚布公", "成年累月"], "type": "mrc"}
{"input": "摘要:为了解汉族民间童帽所隐含的民俗审美及民俗文化,以江南大学民间服饰传习馆藏品为研究对象,通过实物归纳法对其装饰用色、图案、配件,以及装饰元素的布局特点、装饰纹样造型特点进行分析研究.结果表明:近代汉族民间童帽装饰元素丰富,充满童趣,形成了自己的装饰规范,较其他类服饰更具特色;童帽装饰元素与民间生活密切相关,并非偶然形成.其丰富的文化内涵为研究与儿童相关的民俗风俗提供参考,为儿童服饰设计提供了丰富的素材.\n 以下的关键词都是这篇摘要合适的关键词吗?关键词:童帽,图案,装饰。答案是:\n选项:是的,不是\n答案:", "target": "不是", "answer_choices": ["是的", "不是"], "type": "classify"}
{"input": "给定“王琦瑶嘴里说抱歉的话,心里却想:严师母的意思其实是说她不识抬举”保证是真实的吗“王琦瑶在心里反思以后该怎么做的更好”?是的,不是,或也许?\n答案:", "target": "不是", "answer_choices": ["是的", "不是", "也许"], "type": "nli"}
{"input": "给定“当然了,当然我这身材等于男模横着放,所以我不走秀,我坐秀”保证是真实的吗““我”喜欢坐着不爱动”?是的,不是,或也许?\n答案:", "target": "也许", "answer_choices": ["是的", "不是", "也许"], "type": "nli"}
{"input": "哪个类别最好的描述了这篇新闻?魅力乡村|忻州岢岚宋家沟村新貌\n选项:故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏\n答案:", "target": "旅游", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "\n段落:日本传统歌舞剧场有一条奇特的规定:观众即使看到入迷处,也只能心领神会,而不准喝彩,否则会被他人侧目而视。而台下寥寥无几的喝彩者则是剧院特邀的职业喝彩师,受过专门的喝彩训练,熟谙什么时候用什么方式喝彩,以便同台上的演员上下呼应,使演出更加趣味盎然。这些职业喝彩师多为男性,社会地位颇高,著名的喝彩大师甚至同演员齐名。他们可以自由出入剧场,坐特等包厢,有的剧团和剧院还特邀大名鼎鼎的喝彩大师光临以抬高身价。自然,喝彩大师领取的报酬也很高。不过,现在日本的喝彩师已越来越少,因而培养职业喝彩师已成为日本传统歌舞的当务之急。 \n问:目前急需解决的是什么? 选项:邀请喝彩大师,抬高喝彩大师身份,喝彩大师能自由出入,尽快培养职业喝彩师 \n答案:", "target": "尽快培养职业喝彩师", "type": "mrc", "answer_choices": ["邀请喝彩大师", "抬高喝彩大师身份", "喝彩大师能自由出入", "尽快培养职业喝彩师"]}
{"input": "摘要:针对采用一次二阶矩法计算复杂、高度非线性功能函数的可靠指标时,求解功能函数对随机变量的偏导数极其困难,并且偏导数形式非常复杂等问题,提出用响应面函数代替原功能函数的方法,使其求导过程方便,并且使偏导数形式转化为随机变量的线性表达式,便于程序化求解.然后以计算三维Hoek-Brown强度准则的可靠度为例,确认响应面法在复杂、高度非线性功能函数可靠度计算中的可行性,并与变量代换法和复合函数求导法则的计算结果进行比较,说明利用响应面法计算的结果具有较高的精度.最后,用响应面法分析强度准则参数分布类型和岩体参数之间的相关性对三维Hoek-Brown准则可靠度的影响规律.研究结果表明:该方法具有较高精度;强度准则参数分布类型对可靠指标的敏感性较弱;岩体参数的负相关系数与可靠指标线性相关,对可靠指标的影响不大.\n 以下的关键词都是这篇摘要合适的关键词吗?关键词:Hoek-Brown准则,功能,响应面法。答案是:\n选项:是的,不是\n答案:", "target": "不是", "answer_choices": ["是的", "不是"], "type": "classify"}
{"input": "以下两句话的意思相同的吗?“怎么我的蚂蚁借呗不能用了”,“怎么我不能使用蚂蚁借呗”。选项:是的,不是。答案:", "target": "是的", "answer_choices": ["是的", "不是"], "type": "classify"}
{"input": "“现在婴儿的健康状况仍很严重”记住上面的文字,考虑:“婴儿已经完全康复了。”这是总是,绝不,或有时正确的?\n答案:", "target": "绝不", "answer_choices": ["总是", "绝不", "有时"], "type": "nli"}
{"input": "这是一个成语填空任务。上文是:早上锻炼还可以提高你一天的。 \n下文是:,所以调整一下作息时间,早起30分钟,锻炼一下吧。导语:如果你2011年的计划之一是减肥,希望你在1号的时候没有满脑子想着“从明天开始”减肥没有捷径,但是可以有“jumpstart”,就是一个见效快的开始。那些“常年”减肥的女性朋友们,都应当知道减肥最难得是后期的坚持和养成一个健康的生活方式。\n候选的成语:安然无恙,误打误撞,起死回生,新陈代谢,故态复萌,自食其力,死里逃生,因祸得福,返老还童,开山祖师。请问:我们应该填写哪个成语?\n答案:", "target": "新陈代谢", "answer_choices": ["安然无恙", "误打误撞", "起死回生", "新陈代谢", "故态复萌", "自食其力", "死里逃生", "因祸得福", "返老还童", "开山祖师"], "type": "mrc"}
{"input": "阅读以下段落:\n我想找个演外国旧片的影院,走了两家都满座。走到一家剧场,有人迎上来问我要不要退票。我只肯出一张电影票的价,那人踌躇一下,索性把票子白送给我,我进剧场时不禁有些怀疑。剧场里只有稀稀拉拉儿个观众,台上一个古装少女在跳着徐缓但十分舒展的中国古典舞。水袖在淡蓝的光中拖来曳去,腰肢婀娜地扭动,筝和琵琶流水般地倾泻,天幕一片辽远清丽的冷调子。曲终舞罢,灯光暗下来。尽管我很入迷,也没鼓掌。舞台再次亮起来时,这个姑娘穿得很少地跳出来。跳了一会儿我才明白,她跳的是一个神话中的女英雄。在共工那个倒霉蛋头触不周山、造成__的严重后果后,这个女人像瓦匠一样把天重新砌好,使我们人类得以继续繁衍。据说,也是这个女人,同她的同胞交尾产卵,提供了第一批人种。值得欣慰的是编导没让这个女孩子裹上一层蛇皮,否则,她就不能向我们展现她那双极富表现力、#idiom598598#的腿。最后,我还是觉得扫兴。我以为不该让一个女孩子向成年人表现雄壮、慈悲,即使她是好心眼。我对这个女孩子印象深刻,因为她表现#idiom598599#后接踵而来的死亡很传神,简直可以说死得#idiom598600#。\n其中下划线处需要填写成语,有以下候选项:生气勃勃,洋洋得意,明媒正娶,怨气冲天,内忧外患,阒其无人,功成名遂,祸从天降,祸不单行,天塌地陷。下划线处合适的成语是:", "target": "天塌地陷", "answer_choices": ["生气勃勃", "洋洋得意", "明媒正娶", "怨气冲天", "内忧外患", "阒其无人", "功成名遂", "祸从天降", "祸不单行", "天塌地陷"], "type": "mrc"}
{"input": "这个是关于哪方面的App应用程序的描述?银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿界面简洁清晰,没有多余的装饰,方便您更加直观的查阅分析各彩种信息动态。主推时下热门彩种的开奖信息、历史开奖、走势分析、预测选号、彩种排行等。是您分析走势的必备工具。,,提升体验,修复部分问题。\n答案:", "target": "彩票", "answer_choices": ["银行", "社区", "电商", "支付", "经营", "卡牌", "借贷", "驾校", "理财", "职考", "新闻", "旅游", "交通", "魔幻", "医疗", "影像", "动作", "工具", "体育", "小说", "运动", "相机", "工具", "快递", "教育", "股票", "菜谱", "行车", "仙侠", "亲子", "购物", "射击", "漫画", "小学", "同城", "成人", "求职", "电子", "艺术", "赚钱", "约会", "经营", "兼职", "视频", "音乐", "英语", "棋牌", "摄影", "养生", "办公", "政务", "视频", "论坛", "彩票", "直播", "其他", "休闲", "策略", "通讯", "买车", "违章", "地图", "民航", "电台", "语言", "搞笑", "婚恋", "超市", "养车", "杂志", "在线", "家政", "影视", "装修", "资讯", "社交", "餐饮", "美颜", "挂号", "飞行", "预定", "票务", "笔记", "买房", "外卖", "母婴", "打车", "情侣", "日程", "租车", "博客", "百科", "绘画", "铁路", "生活", "租房", "酒店", "保险", "问答", "收款", "竞技", "唱歌", "技术", "减肥", "工作", "团购", "记账", "女性", "公务", "二手", "美妆", "汽车", "行程", "免费", "教辅", "两性", "出国", "婚庆", "民宿"], "type": "classify"}
{"input": "带着问题来阅读文章并回答问题:\n问:教授想说明什么道理? \n选项:装满杯子可以有多种方式,如何去解决生活中的问题,人生必须要实现一些目标,别让烦恼和忧郁占据生活 \n段落:一位教授在一个空杯子里装满大石块,又倒进一些小石子,并轻轻摇动杯子,让小石子滚进石块之间的空隙;然后教授拿出一些沙子倒进杯子,摇动杯子,把小石子间的空隙都填满;最后他又往杯子里倒水,把杯子所有的空间都填满。做完这些,教授对学生们说:“现在,我想让大家把这个杯子理解为生活。里面的大石块代表生命中最珍贵的东西,比如说家庭、伴侣、健康、孩子等等,所有这些对我们来说都极为重要,一旦失去将永远无法弥补;小石子代表生命中较为重要的东西,如工作、房子、车子等等;沙子代表生命中的日常小事;水代表烦恼、忧郁。请记住,如果我们先把水和沙子装进杯子,那就没有空间去装大石块和小石子了。”\n答案:", "target": "别让烦恼和忧郁占据生活", "type": "mrc", "answer_choices": ["装满杯子可以有多种方式", "如何去解决生活中的问题", "人生必须要实现一些目标", "别让烦恼和忧郁占据生活"]}
{"input": "对话:男:欢迎你,刘经理,好久不见了。女:是啊,如果不是因为工作,我们还真是难得见一次面。男:这次我要好好儿请你吃个饭,上次你走得太急了。女:那就太谢谢你了。问题:他们可能是什么关系?选项:夫妻,朋友,师生\n答案:", "target": "朋友", "answer_choices": ["夫妻", "朋友", "师生"], "type": "mrc"}
{"input": "阅读文章:\n“没关系,”他尽量__地说,“我也迟到了。杰克和米莉。布坎南打架了,我正要走的时候他来到我家。我给他吃了一杯酒,打发他上床了。”他为她倒了一杯酒,可她没有接杯子。“他就是你办公室的那位吗?我是说,在卡尔参议员办公室工作的那位吗?”她虽然没见过他的同事,但是他们的\n其中下划线的地方需要填写成语,有以下候选的成语:心平气和,以理服人,认祖归宗,开诚布公,依然故我,生吞活剥,和颜悦色,将心比心,不动声色,一本正经。正确的成语是:", "target": "心平气和", "answer_choices": ["心平气和", "以理服人", "认祖归宗", "开诚布公", "依然故我", "生吞活剥", "和颜悦色", "将心比心", "不动声色", "一本正经"], "type": "mrc"}
{"input": "这是关于哪方面的新闻?有哪些娱乐圈里面的明星追星?\n选项:故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏\n答案:", "target": "娱乐", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "摘要:提应用常规观测资料、NCEP再分析资料,对比分析了山东两次春季黄淮气旋暴雨落区异同点。发现春季影响山东的黄淮气旋暴雨区集中出现在气旋中心北侧的偏东风中,且主要位于东北气流中。暴雨区偏北的程度,与影响系统的后倾程度及我国东北地区是否存在高压有关。当系统明显后倾时,锋面坡度小,暖湿气流沿锋面向北爬升的更远,暴雨区更偏北;当我国东北地区存在高压时,其南侧东北气流经渤海侵入850hPa低涡后部,与低涡前东南气流在风向上渐近辐合,在低涡北侧产生辐合中心,从而产生暴雨区。此外,地面东北风形成的冷垫,有利于南方暖湿气流向北爬升。实际暴雨落区预报中,需综合分析系统的空间结构、周围系统的影响及温度场的配置等。 \n关键词:hPa低涡,5,暴雨落区,系统空间结构。请问:上面的关键词都是这篇摘要合适的关键词吗?\n选项:是的,不是\n答案:", "target": "是的", "answer_choices": ["是的", "不是"], "type": "classify"}
### Fine-tuning models with the pCLUE dataset
* Train, predict, and evaluate with the pCLUE dataset on Colab (PyTorch implementation)
[](https://colab.research.google.com/drive/1QIQDWAACkV7-iRrkrk18XrRjEekMhOtv?usp=sharing)
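The records above are plain JSON lines with `input`, `target`, optional `answer_choices`, and `type` fields. Before fine-tuning, it is worth sanity-checking each line; a minimal standard-library sketch (the sample content is abridged from the records above):

```python
import json

def parse_pclue_record(line: str) -> dict:
    """Parse one pCLUE JSON line and sanity-check its fields."""
    record = json.loads(line)
    # Every pCLUE example carries a prompt ("input"), a gold answer ("target"),
    # and a task type ("mrc", "classify", ...).
    assert "input" in record and "target" in record and "type" in record
    # Multiple-choice tasks also list candidate answers; the gold answer
    # must be one of them.
    if "answer_choices" in record:
        assert record["target"] in record["answer_choices"]
    return record

# A record in the same shape as the samples above (content shortened).
sample = ('{"input": "对话:...问题:他们可能是什么关系?选项:夫妻,朋友,师生\\n答案:", '
          '"target": "朋友", "answer_choices": ["夫妻", "朋友", "师生"], "type": "mrc"}')
record = parse_pclue_record(sample)
print(record["type"])  # mrc
```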
|
wbbbbb/pclue
|
[
"task_categories:text-generation",
"language:zh",
"license:apache-2.0",
"region:us"
] |
2023-02-25T02:39:00+00:00
|
{"language": ["zh"], "license": "apache-2.0", "task_categories": ["text-generation"]}
|
2023-02-25T08:20:02+00:00
|
039f4147d0c4dffbdcee5e91976aefda2048857c
|
# Dataset Card for "fanfiction_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roborovski/fanfiction_dataset
|
[
"region:us"
] |
2023-02-25T02:50:39+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26068982, "num_examples": 180704}, {"name": "test", "num_bytes": 6302044, "num_examples": 43636}], "download_size": 19782089, "dataset_size": 32371026}}
|
2023-02-25T02:50:45+00:00
|
142f48624088ac150dd80345decb21a82a6c21ee
|
# Dataset Card for "wiki_book_corpus_complete_raw_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gokuls/wiki_book_corpus_complete_raw_dataset
|
[
"region:us"
] |
2023-02-25T03:51:56+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24500165181, "num_examples": 80462898}], "download_size": 14400389437, "dataset_size": 24500165181}}
|
2023-02-25T03:58:51+00:00
|
feceaecbf45a14e9065ab25c8e6fea89e52ae18d
|
# Dataset Card for "SNLI_French"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dhananjay1210/SNLI_French
|
[
"region:us"
] |
2023-02-25T05:15:15+00:00
|
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11099017, "num_examples": 100000}, {"name": "validation", "num_bytes": 1102463, "num_examples": 9842}, {"name": "test", "num_bytes": 1097688, "num_examples": 9824}], "download_size": 4310301, "dataset_size": 13299168}}
|
2023-02-25T05:15:34+00:00
|
8829d83be6ebd8ef390871d4c0d9adc55557083c
|
# Dataset Card for "fnli-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
parsa-mz/fnli-dataset
|
[
"region:us"
] |
2023-02-25T05:53:58+00:00
|
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1092127, "num_examples": 9824}, {"name": "dev", "num_bytes": 1097461, "num_examples": 9842}, {"name": "train", "num_bytes": 60781325, "num_examples": 549367}], "download_size": 20372252, "dataset_size": 62970913}}
|
2023-02-25T05:59:07+00:00
|
5fa0555e051b4f5e3ef48a0e7c6af77669664b75
|
# Dataset Card for "SROIE_image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ssunbell/SROIE_image
|
[
"region:us"
] |
2023-02-25T06:29:27+00:00
|
{"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}], "splits": [{"name": "image", "num_bytes": 149110305, "num_examples": 973}], "download_size": 64353573, "dataset_size": 149110305}}
|
2023-02-25T06:29:53+00:00
|
8d70336d606cc4d2567ca2f49c54c749a5b41017
|
# Dataset Card for UTS_Dictionary
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
The UTS_Dictionary is an open source Vietnamese dictionary that provides users with an extensive collection of words and definitions in the Vietnamese language. It is a free resource that is available to anyone who wishes to use it, and it has been designed to be easily accessible and user-friendly.
The UTS_Dictionary is a collaborative project that has been developed by a team of passionate volunteers who are dedicated to promoting the Vietnamese language and culture. The project is aimed at providing an accurate and comprehensive dictionary that can be used by both native speakers and those who are learning the language.
With the UTS_Dictionary, users can search for words in Vietnamese and find their definitions in a matter of seconds. The dictionary includes a wide range of words, including technical terms, slang, idioms, and colloquialisms, making it a valuable resource for anyone who wants to understand the nuances of the Vietnamese language.
Overall, the UTS_Dictionary is a valuable resource for anyone who wants to learn or improve their Vietnamese language skills. Its open source nature allows for continuous improvement and expansion, making it an essential tool for anyone interested in the Vietnamese language and culture.
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
| name | train |
|---------|---------:|
| data | 72547 |
## Dataset Creation
### Curation Rationale
### Source Data
### Annotations
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
### Contributions
|
undertheseanlp/UTS_Dictionary
|
[
"task_categories:text-generation",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:vi",
"license:apache-2.0",
"region:us"
] |
2023-02-25T06:32:48+00:00
|
{"annotations_creators": ["no-annotation"], "language": ["vi"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "UTS_Text"}
|
2023-07-26T12:59:21+00:00
|
afe96c6ab9dfac906e06fe49ffcfb1c1f7b89722
|
# AutoTrain Dataset for project: pick_a_card
## Dataset Description
This dataset has been automatically processed by AutoTrain for project pick_a_card.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<224x224 RGB PIL image>",
"target": 0
},
{
"image": "<224x224 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['ace of clubs', 'ace of diamonds', 'ace of hearts', 'ace of spades', 'eight of clubs', 'eight of diamonds', 'eight of hearts', 'eight of spades', 'five of clubs', 'five of diamonds', 'five of hearts', 'five of spades', 'four of clubs', 'four of diamonds', 'four of hearts', 'four of spades', 'jack of clubs', 'jack of diamonds', 'jack of hearts', 'jack of spades', 'joker', 'king of clubs', 'king of diamonds', 'king of hearts', 'king of spades', 'nine of clubs', 'nine of diamonds', 'nine of hearts', 'nine of spades', 'queen of clubs', 'queen of diamonds', 'queen of hearts', 'queen of spades', 'seven of clubs', 'seven of diamonds', 'seven of hearts', 'seven of spades', 'six of clubs', 'six of diamonds', 'six of hearts', 'six of spades', 'ten of clubs', 'ten of diamonds', 'ten of hearts', 'ten of spades', 'three of clubs', 'three of diamonds', 'three of hearts', 'three of spades', 'two of clubs', 'two of diamonds', 'two of hearts', 'two of spades'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7624 |
| valid | 265 |
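The `target` field above is an integer index into the `ClassLabel` names, which are sorted alphabetically (52 cards plus a joker). A small sketch that reconstructs the mapping and decodes a prediction; generating the names from ranks and suits is an assumption that happens to reproduce the listed order:

```python
# Reconstruct the 53 ClassLabel names (52 cards plus the joker).
# The AutoTrain ordering is plain alphabetical, so sorting the generated
# names reproduces the list from the card above.
RANKS = ["ace", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "jack", "queen", "king"]
SUITS = ["clubs", "diamonds", "hearts", "spades"]
NAMES = sorted([f"{rank} of {suit}" for rank in RANKS for suit in SUITS] + ["joker"])

def decode_target(target: int) -> str:
    """Map an integer `target` back to its human-readable class name."""
    return NAMES[target]

print(decode_target(0))   # ace of clubs
print(decode_target(20))  # joker
```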
|
rwcuffney/autotrain-data-pick_a_card
|
[
"task_categories:image-classification",
"region:us"
] |
2023-02-25T06:37:35+00:00
|
{"task_categories": ["image-classification"]}
|
2023-02-27T20:30:50+00:00
|
c070de872542f19bfbd8f45cdbde01f077b47561
|
dog/fuego-20230225-074209-a2dfb7
|
[
"fuego",
"region:us"
] |
2023-02-25T06:42:10+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230225-074209-a2dfb7", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/actlearn-fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-25T06:47:01+00:00
|
|
51cd488df39a5d8db011301580446195331bb773
|
# Dataset Card for "ISSAI_KSC_335RS_v_1_1"
Kazakh Speech Corpus (KSC)
- Identifier: SLR102
- Summary: A crowdsourced open-source Kazakh speech corpus developed by ISSAI (330 hours)
- Category: Speech
- License: Attribution 4.0 International (CC BY 4.0)
- Downloads (use a mirror closer to you): ISSAI_KSC_335RS_v1.1_flac.tar.gz [19G] (speech, transcripts, and metadata); mirrors: [US] [EU] [CN]
About this resource:
A crowdsourced open-source speech corpus for the Kazakh language. The KSC contains around 332 hours of transcribed audio comprising over 153,000 utterances spoken by participants from different regions and age groups, as well as both genders. It was carefully inspected by native Kazakh speakers to ensure high quality. The dataset is primarily intended to be used for training automatic speech recognition systems.
You can find more information about the dataset here.
To cite the dataset, please use the following BibTeX entry:
```bibtex
@inproceedings{khassanov-etal-2021-crowdsourced,
  title     = "A Crowdsourced Open-Source {K}azakh Speech Corpus and Initial Speech Recognition Baseline",
  author    = {Yerbolat Khassanov and Saida Mussakhojayeva and Almas Mirzakhmetov and Alen Adiyev and Mukhamet Nurpeiissov and Huseyin Atakan Varol},
  booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
  month     = apr,
  year      = "2021",
  address   = "Online",
  publisher = "Association for Computational Linguistics",
  url       = "https://aclanthology.org/2021.eacl-main.58",
  doi       = "10.18653/v1/2021.eacl-main.58",
  pages     = "697--706"
}
```
|
Shirali/ISSAI_KSC_335RS_v_1_1
|
[
"task_categories:automatic-speech-recognition",
"language:kk",
"region:us"
] |
2023-02-25T06:43:34+00:00
|
{"language": ["kk"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "uttID", "dtype": "string"}, {"name": "deviceID", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "dev", "num_bytes": 391608860.227, "num_examples": 3283}, {"name": "test", "num_bytes": 372725363.792, "num_examples": 3334}, {"name": "train", "num_bytes": 19832618976.144, "num_examples": 147236}], "download_size": 19079278086, "dataset_size": 20596953200.163002}}
|
2023-03-07T03:18:44+00:00
|
79501d98eff3649315622efb7849a9e3c4da767a
|
# Dataset Card for "als_classification_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LightFury9/als_classification_data
|
[
"region:us"
] |
2023-02-25T06:50:09+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text_label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3570, "num_examples": 54}], "download_size": 2351, "dataset_size": 3570}}
|
2023-02-25T06:50:13+00:00
|
c3266a5956ee5e202426e349fe71e346deaee24d
|
# Dataset Card for "SROIE_baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ssunbell/SROIE_baseline
|
[
"region:us"
] |
2023-02-25T07:06:12+00:00
|
{"dataset_info": {"features": [{"name": "guid", "dtype": "string"}, {"name": "words", "dtype": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "boxes", "sequence": {"sequence": "int64"}}, {"name": "actual_bboxes", "sequence": {"sequence": "int64"}}, {"name": "file_name", "dtype": "string"}, {"name": "page_size", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 9880768, "num_examples": 65151}, {"name": "val", "num_bytes": 1097601, "num_examples": 7239}, {"name": "test", "num_bytes": 5973104, "num_examples": 39682}], "download_size": 4547916, "dataset_size": 16951473}}
|
2023-02-25T07:06:39+00:00
|
bf7d7a9c3feccac8e934b78a5543c58322d23b7c
|
# Dataset Card for "wiki_book_corpus_complete_processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gokuls/wiki_book_corpus_complete_processed_bert_dataset
|
[
"region:us"
] |
2023-02-25T07:22:50+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 22201610400.0, "num_examples": 6167114}], "download_size": 2763194793, "dataset_size": 22201610400.0}}
|
2023-02-25T19:22:14+00:00
|
21c5b6f46b76af9f386a67362c67c5f41de98666
|
# ABC Open Archives
Created from the ABC Open Archives.
Image files can either be downloaded with your own script using the url column, or read directly from the image data saved in the image column.
<https://www.flickr.com/people/abcarchives/>
Parquet file created here: <https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/images2parq.py>
File can also be extracted from here: <https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/parq2folder.py>
The available metadata in the text columns:
- TEXT: Original text from the source archives
- alt_text_a: GIT/COCO generated captions
- alt_text_b: CLIP captions
- tags: Tags from the source archives
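As described above, each row offers two access paths: inline image bytes or a download via the url column. A minimal standard-library sketch of that choice; the exact column types are an assumption, so adapt it to the parquet schema you actually load:

```python
import urllib.request

def get_image_bytes(row: dict) -> bytes:
    """Return the image bytes for one dataset row.

    Prefers the inline `image` column; falls back to fetching the `url`
    column with your own request. Column names follow the card's
    description, but the concrete value types are assumptions.
    """
    if row.get("image"):
        return row["image"]
    with urllib.request.urlopen(row["url"]) as resp:
        return resp.read()

# A row with inline bytes needs no network access at all.
row = {"image": b"\x89PNG\r\n...", "url": None}
print(len(get_image_bytes(row)))
```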
|
Mediocreatmybest/ABC_Open_Archive
|
[
"language:en",
"license:cc0-1.0",
"region:us"
] |
2023-02-25T08:13:16+00:00
|
{"language": ["en"], "license": "cc0-1.0"}
|
2023-02-25T11:00:27+00:00
|
be56f4e4bad95a799f0a3ce644b3fde1e8d76080
|
dog/fuego-20230225-091905-51505b
|
[
"fuego",
"region:us"
] |
2023-02-25T08:19:06+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230225-091905-51505b", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/actlearn-fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-25T08:23:08+00:00
|
|
61adaedfa1070d12e5e484ef61b879f5b4909956
|
dog/fuego-20230225-093257-f43db5
|
[
"fuego",
"region:us"
] |
2023-02-25T08:32:58+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230225-093257-f43db5", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/actlearn-fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-25T08:37:01+00:00
|
|
d425a20a41c3a19575a47feecd43bd0eb978be48
|
# Dataset Card for "actlearn_labeled_samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dog/actlearn_labeled_samples
|
[
"region:us"
] |
2023-02-25T08:52:21+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9"}}}}], "splits": [{"name": "train", "num_bytes": 1696094.375, "num_examples": 6005}], "download_size": 1402264, "dataset_size": 1696094.375}}
|
2023-02-25T09:12:06+00:00
|
410730000610092922b7b165b94a96a9ca76768d
|
# Dataset Card for "actlearn_unlabeled_samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dog/actlearn_unlabeled_samples
|
[
"region:us"
] |
2023-02-25T08:52:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 15047431.461246412, "num_examples": 53990}], "download_size": 12834484, "dataset_size": 15047431.461246412}}
|
2023-02-25T09:15:57+00:00
|
1df280706d351d73a50fecd9a745be2940710017
|
# Dataset Card for "actlearn_test_mnist"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dog/actlearn_test_mnist
|
[
"region:us"
] |
2023-02-25T08:52:25+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9"}}}}], "splits": [{"name": "test", "num_bytes": 2875182.0, "num_examples": 10000}], "download_size": 2383462, "dataset_size": 2875182.0}}
|
2023-02-25T08:55:48+00:00
|
e494746ec193a4147d830bd05d5cde759540e0b4
|
# Dataset Card for "actlearn_to_label_samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dog/actlearn_to_label_samples
|
[
"region:us"
] |
2023-02-25T09:12:03+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1393.5387535882953, "num_examples": 5}], "download_size": 3840, "dataset_size": 1393.5387535882953}}
|
2023-02-25T09:15:50+00:00
|
833d44bd5da90fc2808a06631156c4f3997b11c7
|
dog/fuego-20230225-101207-2bab16
|
[
"fuego",
"region:us"
] |
2023-02-25T09:12:08+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230225-101207-2bab16", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/actlearn-fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-25T09:15:59+00:00
|
|
788810eb003cf5dd9a52ced292dd988c97c20870
|
# Dataset Card for "boudoir-dataset"
### Dataset Summary
Images scraped from selected galleries on Behance.
|
soymia/boudoir-dataset
|
[
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
] |
2023-02-25T09:36:56+00:00
|
{"license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "pretty_name": "Boudoir Dataset", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 96479861.365, "num_examples": 1055}], "download_size": 95036573, "dataset_size": 96479861.365}}
|
2023-03-01T10:39:34+00:00
|
9bb788d7c946955dd27a20c817b51a9c4e81f0dc
|
# Dataset Card for "SROIE_sequence"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ssunbell/SROIE_sequence
|
[
"region:us"
] |
2023-02-25T10:01:08+00:00
|
{"dataset_info": {"features": [{"name": "guid", "sequence": "string"}, {"name": "words", "sequence": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "boxes", "sequence": {"sequence": "int64"}}, {"name": "actual_bboxes", "sequence": {"sequence": "int64"}}, {"name": "page_size", "sequence": {"sequence": "int64"}}, {"name": "file_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8546028, "num_examples": 594}, {"name": "val", "num_bytes": 430461, "num_examples": 32}, {"name": "test", "num_bytes": 4875824, "num_examples": 347}], "download_size": 2540498, "dataset_size": 13852313}}
|
2023-02-25T10:01:34+00:00
|
d11421a4229bb64f8975614abb64ef9ed02c817a
|
PersianRonin/I
|
[
"license:openrail",
"region:us"
] |
2023-02-25T10:10:44+00:00
|
{"license": "openrail"}
|
2023-02-25T10:10:44+00:00
|
|
eb1c4c6aa0532082f26fe58002584c94db4d418c
|
Vuno/Sk
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-25T11:05:17+00:00
|
{"license": "apache-2.0"}
|
2023-02-25T11:05:17+00:00
|
|
4cd355c1fa4cbd28e2eb0452119f548becaf6ea1
|
samu/emb_tutor_dataset
|
[
"license:mit",
"region:us"
] |
2023-02-25T11:07:13+00:00
|
{"license": "mit"}
|
2023-02-25T11:09:28+00:00
|
|
944a9cde125d75edb6c767613454952907e3bd95
|
Hantao/ChemImages
|
[
"license:mit",
"region:us"
] |
2023-02-25T11:33:16+00:00
|
{"license": "mit"}
|
2023-02-25T11:41:13+00:00
|
|
19917f1b6388303748983c69b58946054fc9b583
|
# Dataset Card for "AutoGenArabicDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arbml/AutoGenArabicDataset
|
[
"region:us"
] |
2023-02-25T11:48:42+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1408506.0, "num_examples": 379}], "download_size": 790726, "dataset_size": 1408506.0}}
|
2023-02-25T11:48:43+00:00
|
8d5cc40e525aab39b9ffeb9bc633a0e70cd2fd1b
|
# Dataset Card for "twitter_he_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carexl8/twitter_he_ru
|
[
"region:us"
] |
2023-02-25T11:55:49+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "tokens", "sequence": "string"}, {"name": "language tags", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 603593, "num_examples": 853}], "download_size": 0, "dataset_size": 603593}}
|
2023-05-09T13:25:37+00:00
|
97ec858c7886876298e010f6b2f0f7c99f6e0cd0
|
# Dataset Card for "telegram_he_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carexl8/telegram_he_ru
|
[
"region:us"
] |
2023-02-25T11:55:59+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "time", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "language tags", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 30629039, "num_examples": 43336}], "download_size": 8829228, "dataset_size": 30629039}}
|
2023-04-07T10:23:47+00:00
|