sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
617b74132044bd7d52d8f88f91a7966406a660f5
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_16
|
[
"region:us"
] |
2023-02-14T20:43:51+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 404667, "num_examples": 16}], "download_size": 80966, "dataset_size": 404667}}
|
2023-02-14T20:43:54+00:00
|
ab28bdcabff9cd5b24c0319998deba86fe6b483e
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_32"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_32
|
[
"region:us"
] |
2023-02-14T20:47:20+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 809098, "num_examples": 32}], "download_size": 152186, "dataset_size": 809098}}
|
2023-02-14T20:47:23+00:00
|
2863c97d29665a148241081b85db57957d803d5d
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_64
|
[
"region:us"
] |
2023-02-14T20:50:19+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 1618060, "num_examples": 64}], "download_size": 293591, "dataset_size": 1618060}}
|
2023-02-14T20:50:22+00:00
|
c03a9cfbfffedb0b5300e593a0b170eb6bc8aefe
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_6.7b_VQAv2_visclues_ns_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_6.7b_VQAv2_visclues_ns_64
|
[
"region:us"
] |
2023-02-14T20:53:53+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 1618019, "num_examples": 64}], "download_size": 318934, "dataset_size": 1618019}}
|
2023-02-14T20:53:56+00:00
|
dd4110f7dde72bb5134b43dbd30e416d90ef5992
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_6.7b_VQAv2_visclues_ns_128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_6.7b_VQAv2_visclues_ns_128
|
[
"region:us"
] |
2023-02-14T21:02:15+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 3262469, "num_examples": 128}, {"name": "fewshot_1_bs_16", "num_bytes": 3301613, "num_examples": 128}], "download_size": 1302280, "dataset_size": 6564082}}
|
2023-02-14T23:50:49+00:00
|
665b3f7fd78e3099d6949adedfb90eb9213d97e3
|
# Dataset Card for "mscoco_100k_30k_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JotDe/mscoco_100k_30k_test
|
[
"region:us"
] |
2023-02-14T21:08:24+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2740227007.482, "num_examples": 29997}], "download_size": 507305710, "dataset_size": 2740227007.482}}
|
2023-02-14T21:13:33+00:00
|
b3cfd179ea2a7f078181e7a208a6dc7b219d1d68
|
# 岁己SUI's live-stream audio and most of its subtitles
Previews are unavailable because aac is not supported; there is no real need to preview anyway.
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Audio from 岁己's monthly live streams. Because the stream recordings were affected by unstable networks and dropped connections, some files have incorrect timecodes; it is recommended to transcode them to a lossless format such as wav/flac before use.
Subtitle files ending in PM cover both that day's stream and the recording that ran into the early hours of the next day (those familiar with the streamer's schedule will understand).
Below is a simple PowerShell script for converting aac to wav:
```powershell
# Output directory and input/output extensions.
$OutputPath = ".\"
$InputSuffix = "aac"
$OutputSuffix = "wav"
New-Item $OutputPath -Type Directory -Force
foreach ($File in Get-ChildItem -Filter "*.$InputSuffix") {
    $OutputFile = $OutputPath + $File.BaseName + "." + $OutputSuffix
    ffmpeg.exe -i $File.FullName $OutputFile
    # To also downmix to mono, use instead:
    # ffmpeg.exe -i $File.FullName -ac 1 $OutputFile
}
Pause
```
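Run the script from the directory containing the `.aac` files; it assumes `ffmpeg.exe` is available on `PATH`.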
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Chinese (98%)
English (1%)
Japanese (1%)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Miuzarte/SUILiveAudio
|
[
"language:zh",
"AIvtuber",
"VirtuaReal",
"region:us"
] |
2023-02-14T21:15:36+00:00
|
{"language": ["zh"], "tags": ["AIvtuber", "VirtuaReal"]}
|
2023-04-20T03:15:12+00:00
|
f9c69d9cbc5a36025e04debcf517eda664c64e4c
|
# Dataset Card for "mscoco_100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JotDe/mscoco_100k
|
[
"region:us"
] |
2023-02-14T21:15:55+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8199285732.23, "num_examples": 99990}], "download_size": 2449411067, "dataset_size": 8199285732.23}}
|
2023-02-14T23:22:53+00:00
|
18dd8037d570a2a638eaaf3148c6295b033a4371
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_13b_VQAv2_visclues_ns_128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_13b_VQAv2_visclues_ns_128
|
[
"region:us"
] |
2023-02-14T21:33:47+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 3262495, "num_examples": 128}], "download_size": 641392, "dataset_size": 3262495}}
|
2023-02-14T21:33:50+00:00
|
b82a88ac3d097432af1f5dc23dae96c27d1676c8
|
# Amazon Berkeley Objects (c) by Amazon.com
## License
This work is licensed under the Creative Commons Attribution-NonCommercial 4.0
International Public License. To obtain a copy of the full license, see
`LICENSE-CC-BY-NC-4.0.txt`, visit
[CreativeCommons.org](https://creativecommons.org/licenses/by-nc/4.0/)
or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
Under the following terms:
* Attribution — You must give appropriate credit, provide a link to the
license, and indicate if changes were made. You may do so in any reasonable
manner, but not in any way that suggests the licensor endorses you or your
use.
* NonCommercial — You may not use the material for commercial purposes.
* No additional restrictions — You may not apply legal terms or technological
measures that legally restrict others from doing anything the license
permits.
## Attribution
Credit for the data, including all images and 3d models, must be given to:
> Amazon.com
Credit for building the dataset, archives and benchmark sets must be given to:
> Matthieu Guillaumin (Amazon.com), Thomas Dideriksen (Amazon.com),
> Kenan Deng (Amazon.com), Himanshu Arora (Amazon.com),
> Jasmine Collins (UC Berkeley) and Jitendra Malik (UC Berkeley)
## Description
Amazon Berkeley Objects is a collection of 147,702 product listings with
multilingual metadata and 398,212 unique catalog images. 8,222 listings come
with turntable photography (also referred to as *spin* or *360º-View* images), as
sequences of 24 or 72 images, for a total of 586,584 images in 8,209 unique
sequences. For 7,953 products, the collection also provides high-quality 3d
models, as glTF 2.0 files.
The collection is made of the following files:
* `README.md` - The present file.
* `LICENSE-CC-BY-NC-4.0.txt` - The License file. You must read, agree to, and
comply with the License before using the Amazon Berkeley Objects data.
* `listings/metadata/listings_<i>.json.gz` - Product description and metadata.
Each of the 16 files is encoded with UTF-8 and gzip-compressed. Each line of
the decompressed files corresponds to one product as a JSON object (see
http://ndjson.org/ or https://jsonlines.org/ ); see the Python sketch after
this file list for one way to read these files. Each product JSON object
(a.k.a. dictionary) has any number of the following keys:
- `brand`
- Content: Brand name
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `bullet_point`
- Content: Important features of the products
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `color`
- Content: Color of the product as text
- Format: `[{"language_tag": <str>, "standardized_values": [<str>],
"value": <str>}, ...]`
- `color_code`
- Content: Color of the product as HTML color code
- Format: `[<str>, ...]`
- `country`
- Content: Country of the marketplace, as an
[ISO 3166-1 alpha 2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)
code
- Format: `<str>`
- `domain_name`
- Content: Domain name of the marketplace where the product is found.
A product listing in this collection is uniquely identified by
(`item_id`, `domain_name`)
- Format: `<str>`
- `fabric_type`
- Content: Description of product fabric
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `finish_type`
- Content: Description of product finish
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `item_dimensions`
- Content: Dimensions of the product (height, width, length)
- Format: `{"height": {"normalized_value": {"unit": <str>, "value":
<float>}, "unit": <str>, "value": <float>}, "length":
{"normalized_value": {"unit": <str>, "value": <float>}, "unit": <str>,
"value": <float>}, "width": {"normalized_value": {"unit": <str>,
"value": <float>}, "unit": <str>, "value": <float>}}}`
- `item_id`
- Content: The product reference id. A product listing in this
collection is uniquely identified by (`item_id`, `domain_name`).
A corresponding product page may exist at
`https://www.<domain_name>/dp/<item_id>`
- Format: `<str>`
- `item_keywords`
- Content: Keywords for the product
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `item_name`
- Content: The product name
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `item_shape`
- Content: Description of the product shape
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `item_weight`
- Content: The product weight
- Format: `[{"normalized_value": {"unit": <str>, "value": <float>},
"unit": <str>, "value": <float>}, ...]`
- `main_image_id`
- Content: The main product image, provided as an `image_id`. See the
description of `images/metadata/images.csv.gz` below
- Format: `<str>`
- `marketplace`
- Content: Retail website name (Amazon, AmazonFresh, AmazonGo, ...)
- Format: `<str>`
- `material`
- Content: Description of the product material
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `model_name`
- Content: Model name
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `model_number`
- Content: Model number
- Format: `[{ "language_tag": <str>, "value": <str> }, ...]`
- `model_year`
- Content: Model year
- Format: `[{ "language_tag": <str>, "value": <int> }, ...]`
- `node`
- Content: Location of the product in the category tree. A node page
may exist at `https://www.<domain_name>/b/?node=<node_id>` for
browsing
- Format: `[{ "node_id": <int>, "path": <str>}, ...]`
- `other_image_id`
- Content: Other available images for the product, provided as
`image_id`. See the description of `images/metadata/images.csv.gz`
below
- Format: `[<str>, ...]`
- `pattern`
- Content: Product pattern
- Format: `[{ "language_tag": <str>, "value": <int> }, ...]`
- `product_description`
- Content: Product description as HTML
- Format: `[{ "language_tag": <str>, "value": <int> }, ...]`
- `product_type`
- Content: Product type (category)
- Format: `<str>`
- `spin_id`
- Content: Reference to the 360º View image sequence. See the
description of `spins/metadata/spins.csv.gz` below
- Format: `<str>`
- `style`
- Content: Style of the product
- Format: `[{ "language_tag": <str>, "value": <int> }, ...]`
- `3dmodel_id`
- Content: Reference to the 3d model of the product. See the description
of `3dmodels/metadata/3dmodels.csv.gz`
- Format: `<str>`
* `images/metadata/images.csv.gz` - Image metadata. This file is a
gzip-compressed comma-separated value (CSV) file with the following
columns: `image_id`, `height`, `width`, and `path`.
- `image_id` (string): this id uniquely refers to a product image. This id
can be used to retrieve the image data from Amazon's Content Delivery
Network (CDN) using the template:
`https://m.media-amazon.com/image/I/<image_id>.<extension>` [^1],
where `<extension>` is composed of the characters following the dot in the
`path` field. Any value occurring in the `main_image_id` and `other_image_id`
attributes of product metadata is an `image_id` present in this file.
- `height` (int) and `width` (int): respectively, the height and width of
the original image.
- `path`: the location of the image file relative to the `images/original/`
or `images/small/` directories. A path is composed of lowercase hex
characters (`0-9a-f`) that also uniquely identifies images. The first two
characters are used to build a file hierarchy and reduce the number of
images in a single directory. The extension is `jpg` except for a few `png`
files.
Below are the first 10 lines of `images/metadata/images.csv.gz`:
```
image_id,height,width,path
010-mllS7JL,106,106,14/14fe8812.jpg
01dkn0Gyx0L,122,122,da/daab0cad.jpg
01sUPg0387L,111,111,d2/d2daaae9.jpg
1168jc-5r1L,186,186,3a/3a4e88e6.jpg
11RUV5Fs65L,30,500,d9/d91ab9cf.jpg
11X4pFHqYOL,35,500,20/20098c4d.jpg
11Y+Xpt1lfL,103,196,99/9987a1c8.jpg
11rL64ZLPYL,64,500,89/89a2ff4d.jpg
11xjmNF5TAL,117,88,ee/ee239f0f.jpg
```
* `images/original/<path>` - Original image data. This directory contains the
original high-resolution version of the images. See
`images/metadata/images.csv.gz` for details of image naming.
* `images/small/<path>` - Downscaled image data. This directory contains a
version of the images that has been downscaled such that their
largest axis (height or width) is at most 256 pixels. See
`images/metadata/images.csv.gz` for details of image naming.
* `spins/metadata/spins.csv.gz` - Spin / 360º-View image metadata. This file
is a gzip-compressed comma-separated value (CSV) file with the following
fields: `spin_id`, `azimuth`, `image_id`, `height`, `width`, and `path`.
- `spin_id`: a unique identifier for the image sequence.
- `azimuth`: an integer between 0 and 71, representing the index in the spin
sequence and the azimuth of the camera (in steps of 5º).
- `image_id`: this id uniquely refers to an image. It can be used
to retrieve the image data using the template:
`https://m.media-amazon.com/image/I/<image_id>.jpg` [^1].
- `height` and `width`: respectively, the height and width of the image.
- `path`: the location of the image file relative to the `spins/original/`
directory. The extension is `jpg` for all the images. The `path` is built
from the `spin_id` and `azimuth` using the template
`<spin_id>_<azimuth:02d>.jpg` and the first two characters of `spin_id`
are used to build a file hierarchy and reduce the number of files in a
single directory.
Below are the first 10 lines of `spins/metadata/spins.csv.gz`:
```
spin_id,azimuth,image_id,height,width,path
61c91265,0,41wqHws7a6L,248,1075,61/61c91265/61c91265_00.jpg
61c91265,1,41++eZZHP9L,248,1075,61/61c91265/61c91265_01.jpg
61c91265,2,41YF86LhGDL,248,1075,61/61c91265/61c91265_02.jpg
61c91265,3,41I5Zz-kbAL,248,1075,61/61c91265/61c91265_03.jpg
61c91265,4,41lAQM2Ys5L,248,1075,61/61c91265/61c91265_04.jpg
61c91265,5,41OJT+p8JgL,248,1075,61/61c91265/61c91265_05.jpg
61c91265,6,412kYqOnqHL,248,1075,61/61c91265/61c91265_06.jpg
61c91265,7,41rgUZ0NuFL,248,1075,61/61c91265/61c91265_07.jpg
61c91265,8,41PJ4ks-cWL,248,1075,61/61c91265/61c91265_08.jpg
```
* `spins/original/<path>` - Spin / 360º-View image files. Each file
corresponds to one row in `spins/metadata/spins.csv.gz`, named by the value
of the `path` column.
* `3dmodels/metadata/3dmodels.csv.gz` - 3d model metadata. This file is a
gzip-compressed comma-separated value (CSV) file with the following fields:
`3dmodel_id`, `path`, `meshes`, `materials`, `textures`, `images`,
`image_height_max`, `image_height_min`, `image_width_max`,
`image_width_min`, `vertices`, `faces`, `extent_x`, `extent_y`, `extent_z`.
- `3dmodel_id`: Reference for the 3d model, as provided in the `3dmodel_id`
field of the listings metadata
- `path`: Location of the 3d model, relative to `3dmodels/original/`
- `meshes`: Number of meshes in the geometry
- `materials`: Number of materials in the 3d model
- `textures`: Number of textures in the 3d model
- `images`: Number of image resources in the 3d model
- `image_{height,width}_{min,max}`: Minimal and maximal dimensions of the
image resources in the 3d model
- `vertices`: Number of vertices in the geometry
- `faces`: Number of faces in the geometry
- `extent_{x,y,z}`: Extent of the geometry in each dimension
Below are the first 10 lines of `3dmodels/metadata/3dmodels.csv.gz`:
```
3dmodel_id,path,meshes,materials,textures,images,image_height_max,image_height_min,image_width_max,image_width_min,vertices,faces,extent_x,extent_y,extent_z
B01N2PLWIL,L/B01N2PLWIL.glb,1,1,3,3,4096,4096,4096,4096,10990,14380,0.571499943733216,0.11684001048680606,0.07111982014177629
B075QFCHM9,9/B075QFCHM9.glb,1,1,3,3,2048,2048,2048,2048,11973,19568,1.840071976184845,1.0669103860855103,2.3675915002822876
B07H469871,1/B07H469871.glb,1,1,3,3,4096,4096,4096,4096,1602,1950,1.1113524436950684,1.3880813121795654,0.39794909954071045
B07H8V49M2,2/B07H8V49M2.glb,1,1,3,3,4096,4096,4096,4096,3760,5710,1.4998703368605968,2.11988401412964,0.5897879977087402
B07DBHPK4G,G/B07DBHPK4G.glb,1,1,3,3,4096,4096,4096,4096,13704,22736,0.37921489775180817,1.6228150129318237,0.37921497225761414
B0842LM2DN,N/B0842LM2DN.glb,1,1,3,3,4096,4096,4096,4096,4078,7584,0.22779017686843872,0.2348586767911911,0.22779015451669693
B07HK6B4D7,7/B07HK6B4D7.glb,1,1,4,4,2048,2048,2048,2048,12221,19268,0.1887289583683014,0.6650936603546143,0.421692430973053
B07B4FZN9H,H/B07B4FZN9H.glb,1,1,3,3,2048,2048,2048,2048,13595,22644,3.3838289976119995,0.9963648915290833,2.048073887825012
B07B4Z9BS4,4/B07B4Z9BS4.glb,1,1,4,4,2048,2048,2048,2048,9259,16178,0.2793540060520172,0.2693932056427002,0.2793540358543396
```
* `3dmodels/original/<path>` - 3d model files. The 3d models are provided in
the [glTF-2.0 format](
https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md
) (GLB/binary representation). All models adhere to the following
conventions:
1. Positive `Y` direction is up.
2. Positive `Z` direction is pointing to the *natural* front-side of the
product, wherever applicable.
3. Products that are designed to stand on a surface (e.g. a floor) are
centered on the origin, but translated up (towards positive `Y`) such
that they *stand* on the `Y=0` plane.
4. Products that are designed to hang from a surface (e.g. a ceiling) are
centered on the origin, but translated down (towards negative `Y`) such
that they *hang* from the `Y=0` plane.
5. Products that are designed to hang on a wall are centered on the origin,
but translated forward (towards positive `Z`) such that their backside
aligns with the `Z=0` plane.
* `archives/abo-listings.tar` - Contains all the files in `listings/` as a
tar archive.
* `archives/abo-images-original.tar` - Contains the metadata and original
images from `images/original/` as a tar archive.
* `archives/abo-images-small.tar` - Contains the metadata and downscaled
images from `images/small/` as a tar archive.
* `archives/abo-spins.tar` - Contains the metadata and images from `spins/`
as a tar archive.
* `archives/abo-3dmodels.tar` - Contains the metadata and 3d models from `3dmodels/`
as a tar archive.
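To tie the layout above together, here is a minimal Python sketch (an illustration, not part of the official ABO tooling; it assumes the archives have been extracted into the current directory) that streams the gzipped NDJSON listings and resolves each product's main image to a local path and a CDN URL:
```python
import csv
import gzip
import json

# Build an image_id -> row lookup from the image metadata CSV.
with gzip.open("images/metadata/images.csv.gz", "rt", encoding="utf-8") as f:
    images = {row["image_id"]: row for row in csv.DictReader(f)}

# Listings are gzipped NDJSON: one product JSON object per line.
with gzip.open("listings/metadata/listings_0.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        product = json.loads(line)
        image_id = product.get("main_image_id")
        if image_id not in images:
            continue  # product has no main image, or the id is not in the CSV
        meta = images[image_id]
        # Local copy under images/original/ (or images/small/ for the downscaled set).
        local_path = "images/original/" + meta["path"]
        # CDN URL template from this README; see footnote [^1] about URL stability.
        extension = meta["path"].rsplit(".", 1)[-1]
        url = f"https://m.media-amazon.com/image/I/{image_id}.{extension}"
        print(product["item_id"], local_path, url)
```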
## Footnotes
[^1]: Importantly, there is no guarantee that those URLs will remain unchanged
and available in the long term; we therefore recommend using the images provided
in this archive instead.
|
bstds/abo_listings
|
[
"region:us"
] |
2023-02-14T21:48:21+00:00
|
{}
|
2023-02-14T21:49:49+00:00
|
75f0bc9fde5a110864cf39a840469c0bb64d1903
|
# Dataset Card for "samsum_nor_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jkorsvik/samsum_nor_final
|
[
"region:us"
] |
2023-02-14T21:50:01+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6447450, "num_examples": 14732}, {"name": "test", "num_bytes": 358829, "num_examples": 819}, {"name": "validation", "num_bytes": 354580, "num_examples": 818}], "download_size": 4502615, "dataset_size": 7160859}}
|
2023-02-14T21:50:30+00:00
|
6e69ccfa82a536d6a3ab0e6b91df8cfeaf804801
|
# Dataset Card for "wikitext2_VALUE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/wikitext2_VALUE
|
[
"region:us"
] |
2023-02-14T22:19:50+00:00
|
{"dataset_info": {"features": [{"name": "sentence-glue", "dtype": "string"}, {"name": "sentence-glue-html", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "sentence-ass", "dtype": "int64"}, {"name": "sentence-been_done", "dtype": "int64"}, {"name": "sentence-dey_it", "dtype": "int64"}, {"name": "sentence-drop_aux", "dtype": "int64"}, {"name": "sentence-got", "dtype": "int64"}, {"name": "sentence-lexical", "dtype": "int64"}, {"name": "sentence-negative_concord", "dtype": "int64"}, {"name": "sentence-negative_inversion", "dtype": "int64"}, {"name": "sentence-null_genetive", "dtype": "int64"}, {"name": "sentence-null_relcl", "dtype": "int64"}, {"name": "sentence-total", "dtype": "int64"}, {"name": "sentence-uninflect", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 4493287, "num_examples": 2891}, {"name": "train", "num_bytes": 38101936, "num_examples": 23754}, {"name": "validation", "num_bytes": 3962066, "num_examples": 2411}], "download_size": 24290252, "dataset_size": 46557289}}
|
2023-02-14T22:19:56+00:00
|
ed95c56542702cfac8cbd172a18400dfb3e00e0d
|
# Dataset Card for "dilbert-comic-sample-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
akadhim-ai/dilbert-comic-sample-dataset
|
[
"region:us"
] |
2023-02-14T23:35:56+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 530433.0, "num_examples": 7}], "download_size": 531593, "dataset_size": 530433.0}}
|
2023-02-14T23:36:04+00:00
|
3937b259bc835478f6a14ba7dd016d8a4f08f808
|
# Dataset Card for "iris"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
comet-team/iris
|
[
"region:us"
] |
2023-02-15T00:53:32+00:00
|
{"dataset_info": {"features": [{"name": "Id", "dtype": "int64"}, {"name": "SepalLengthCm", "dtype": "float64"}, {"name": "SepalWidthCm", "dtype": "float64"}, {"name": "PetalLengthCm", "dtype": "float64"}, {"name": "PetalWidthCm", "dtype": "float64"}, {"name": "Species", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8600, "num_examples": 150}], "download_size": 4333, "dataset_size": 8600}}
|
2023-02-15T00:56:17+00:00
|
e1ee838c6f7894c8d695bfd14c1b5f1ce5134c7d
|
# svg_icons
## Dataset Description
- **Homepage: [text_to_icon.kmoz.dev](https://text_to_icon.kmoz.dev)**
- **Repository: [@KM8Oz/text_to_icon](https://github.com/KM8Oz/text_to_icon)**
### Dataset Summary
This dataset classifies SVG icons into an image/label set.
## Dataset Structure
- dataset_info:
  - features:
    - name: word
      dtype: string
    - name: icon
      dtype: string
|
niceblueman/icons_dataset
|
[
"size_categories:100M<n<1B",
"language:en",
"license:apache-2.0",
"icons",
"svgs",
"doi:10.57967/hf/0375",
"region:us"
] |
2023-02-15T02:07:41+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100M<n<1B"], "pretty_name": "SQuAD", "dataset_info": {"features": [{"name": "word", "dtype": "string"}, {"name": "icon", "dtype": "string"}], "config_name": "svg_icons", "splits": [{"name": "train", "num_bytes": 240641, "num_examples": 22}], "download_size": 0, "dataset_size": 240641}, "tags": ["icons", "svgs"]}
|
2023-02-15T15:01:32+00:00
|
08eca7f2a1d782c4ae429b1eb0ce2b68147ea317
|
dog/fuego-20230214-213154-0a6d57
|
[
"fuego",
"region:us"
] |
2023-02-15T02:31:55+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230214-213154-0a6d57", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/fuego-20230214-213154-0a6d57", "space_hardware": "cpu-basic"}}
|
2023-02-15T02:35:23+00:00
|
|
371c1c897229e71d824a1273ae6ec716aaf01008
|
dog/fuego-20230214-214112-1d6fb3
|
[
"fuego",
"region:us"
] |
2023-02-15T02:41:14+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230214-214112-1d6fb3", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/fuego-20230214-214112-1d6fb3", "space_hardware": "cpu-basic"}}
|
2023-02-15T02:48:33+00:00
|
|
7249ed81841aaddd38e440daa09bab6ff924bb43
|
dog/fuego-20230214-215051-12eb46
|
[
"fuego",
"region:us"
] |
2023-02-15T02:50:52+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230214-215051-12eb46", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/fuego-20230214-215051-12eb46", "space_hardware": "cpu-basic"}}
|
2023-02-15T02:54:24+00:00
|
|
758ea4b2c7b8ba339ae0af055ee9a40813dc8a95
|
# AutoTrain Dataset for project: code-mixed-language-identification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project code-mixed-language-identification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": 1104,
"tokens": [
"@user",
"salah",
"satu",
"dari",
"4",
"anak",
"dr",
"sunardi",
"ada",
"yg",
"berprofesi",
"sbg",
"dokter",
"juga",
",",
"lulusan",
"unair",
",",
"sudah",
"selesai",
"koas",
"dan",
"intern",
"tolong",
"disupport",
"pak",
"anak",
"beliau"
],
"tags": [
6,
1,
1,
1,
6,
1,
6,
6,
1,
1,
1,
1,
1,
1,
6,
1,
6,
6,
1,
1,
1,
1,
0,
1,
3,
1,
1,
1
]
},
{
"feat_Unnamed: 0": 239,
"tokens": [
"@user",
"kamu",
"pake",
"apa",
"toh",
"?",
"aku",
"pake",
"xl",
"banter",
"lho",
"di",
"apartemen",
"pun",
"bisa",
"download",
"yutub"
],
"tags": [
6,
1,
1,
1,
1,
6,
1,
1,
6,
1,
1,
1,
1,
1,
1,
0,
6
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['EN', 'ID', 'JV', 'MIX_ID_EN', 'MIX_ID_JV', 'MIX_JV_EN', 'OTH'], id=None), length=-1, id=None)"
}
```
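As an illustration (the repository id is taken from the listing below; loading details are otherwise an assumption), the `ClassLabel` feature described above can decode the integer `tags` back into their language labels:
```python
from datasets import load_dataset

# Repository id as listed for this card; adjust if it differs.
ds = load_dataset("fathan/autotrain-data-code-mixed-language-identification", split="train")
label_feature = ds.features["tags"].feature  # ClassLabel(names=['EN', 'ID', 'JV', ...])
example = ds[0]
labels = [label_feature.int2str(tag) for tag in example["tags"]]
print(list(zip(example["tokens"], labels)))
```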
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1105 |
| valid | 438 |
|
fathan/autotrain-data-code-mixed-language-identification
|
[
"task_categories:token-classification",
"region:us"
] |
2023-02-15T02:54:00+00:00
|
{"task_categories": ["token-classification"]}
|
2023-02-15T03:19:07+00:00
|
12a1afae069f33e6b223411807297870e7e87262
|
dog/fuego-20230214-215453-17bd4b
|
[
"fuego",
"region:us"
] |
2023-02-15T02:54:55+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230214-215453-17bd4b", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/fuego-20230214-215453-17bd4b", "space_hardware": "cpu-basic"}}
|
2023-02-15T02:58:17+00:00
|
|
611c3b1a210b3b1fc2f487d613ff131c91efda33
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_350m_VQAv2_visclues_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_350m_VQAv2_visclues_ns_1000
|
[
"region:us"
] |
2023-02-15T02:57:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 25491823, "num_examples": 1000}], "download_size": 4914865, "dataset_size": 25491823}}
|
2023-02-15T02:57:11+00:00
|
034beb7c55289273e23c00633cf3c26fe55c38e8
|
Seven datasets used for RSE.
Each contains sentence pairs and their respective relations.
|
binwang/RSE-sentence-relational-data
|
[
"region:us"
] |
2023-02-15T03:05:15+00:00
|
{}
|
2023-02-15T03:25:34+00:00
|
62997fd132563623648bfd27b7fa0b2187c47e6d
|
dog/fuego-20230215-041847-955498
|
[
"fuego",
"region:us"
] |
2023-02-15T03:18:48+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230215-041847-955498", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/fuego-20230215-041847-955498", "space_hardware": "cpu-basic"}}
|
2023-02-15T03:23:25+00:00
|
|
bae822a80b5b6b6ba58ff78cdd08ac25888a505b
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_1.3b_VQAv2_visclues_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_1.3b_VQAv2_visclues_ns_1000
|
[
"region:us"
] |
2023-02-15T03:24:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 25491982, "num_examples": 1000}], "download_size": 4915915, "dataset_size": 25491982}}
|
2023-02-15T03:24:32+00:00
|
5f72aa2ee768ef6df7368bb78dcca0a654e0ff6c
|
Static split of Anthropic's Helpful and Harmless (HH) dataset. Contains base-online and rejection-sampled outputs.
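A minimal loading sketch (field names follow the dataset metadata below; everything else is an assumption to be checked against the repo):
```python
from datasets import load_dataset

ds = load_dataset("Dahoas/static-hh")
row = ds["train"][0]
# Each row pairs a prompt with a chosen and a rejected response.
print(row["prompt"])
print("chosen:", row["chosen"])
print("rejected:", row["rejected"])
```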
|
Dahoas/static-hh
|
[
"region:us"
] |
2023-02-15T03:53:36+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 143664651, "num_examples": 96256}, {"name": "test", "num_bytes": 7649255, "num_examples": 5103}], "download_size": 90825631, "dataset_size": 151313906}}
|
2023-03-06T00:11:55+00:00
|
8b6f8f0a92044af89b17ac29fbcfe509c64d8e45
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_1000
|
[
"region:us"
] |
2023-02-15T04:19:26+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 25492223, "num_examples": 1000}], "download_size": 4915735, "dataset_size": 25492223}}
|
2023-02-15T04:19:29+00:00
|
fea8e4d61fb7d9d253d0fc8a9f1a93719285ade8
|
# Misc Datasets
Here I will upload datasets (images + captions) of concepts/styles/characters for anyone to use in their models, since I am not able to make LoRAs myself, alongside other datasets I've used for other models.</br>
Some are hand-cropped and/or hand-picked, some not. If it's a big dataset, it's probably automatically cropped (https://www.birme.net, 1280x1280, JPEG 95% quality) and not hand-picked.
I've also included a Python script for anyone who wants to use gallery-dl to download images, since its tags are pretty fucked up.</br>
It basically fixes the main problems, removes meta tags like 'commentary', 'translated' and similar, and gives the option to replace underscores with spaces, among other things.
<details>
<summary>Characters</summary>
- [Neru (Blue Archive)](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/characters/neru_ba.rar)
- [Jibril (No Game No Life)](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/characters/jibril.rar)
- [Fubuki (One Punch Man)](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/characters/fubuki.rar) Doesn't include captions! You might want to use something like WD Tagger.
</details>
<details>
<summary>Styles</summary>
- [Cutesexyrobutts](https://huggingface.co/datasets/Cosk/cutesexyrobutts)
- [One Punch Man - Yuusuke Murata](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/styles/opm_murata.rar) Doesn't include captions! You might want to use something like WD Tagger.
- [Phantom IX Row](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/styles/phantom_ix_row.rar)
- [Mamimi (Mamamimi)](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/styles/mamimi.rar)
</details>
<details>
<summary>Concepts</summary>
- [Breasts On Glass](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/concepts/brst_gls.rar) Doesn't include captions! You might want to use something like WD Tagger.
- [Fingering](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/concepts/fingering.rar)
- [Oversized Breast Cup](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/concepts/oversized_cup.rar)
- [White Eyelashes](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/concepts/white_eyelashes.rar)
- [Mizumizuni Fellatio](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/concepts/mizumizuni.rar)
- [Unaligned Breasts Doggystyle](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/concepts/unbr_doggy.rar)
- [Milking Handjob](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/concepts/mlk_handjob.rar)
- [Fellatio + View Between Legs](https://huggingface.co/datasets/Cosk/misc-datasets/resolve/main/concepts/between_legs_fella.rar)
</details>
|
cosc/misc-datasets
|
[
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"art",
"dataset",
"concept",
"character",
"style",
"dreambooth",
"lora",
"textual inversion",
"region:us"
] |
2023-02-15T05:38:11+00:00
|
{"language": ["en"], "license": "creativeml-openrail-m", "pipeline_tag": "text-to-image", "tags": ["stable-diffusion", "art", "dataset", "concept", "character", "style", "dreambooth", "lora", "textual inversion"]}
|
2023-03-14T02:57:49+00:00
|
cabb3f889c7812202a8f4807a4016f7ba3ecfab7
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_6.7b_VQAv2_visclues_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_6.7b_VQAv2_visclues_ns_1000
|
[
"region:us"
] |
2023-02-15T06:05:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 25491851, "num_examples": 1000}], "download_size": 4916575, "dataset_size": 25491851}}
|
2023-02-15T06:05:12+00:00
|
edc4edb5bded6de6c469fc518ce3cc35d2a9d1bc
|
# Dataset Card for "VALUE_wikitext103_got"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_got
|
[
"region:us"
] |
2023-02-15T06:27:19+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 99055, "num_examples": 123}, {"name": "train", "num_bytes": 44057216, "num_examples": 53727}, {"name": "validation", "num_bytes": 77641, "num_examples": 91}], "download_size": 27214474, "dataset_size": 44233912}}
|
2023-02-15T06:27:26+00:00
|
7d6f4c3b0af9c9c55e99914691a2002da4b0d4dd
|
# Dataset Card for "VALUE_wikitext103_negative_inversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_negative_inversion
|
[
"region:us"
] |
2023-02-15T06:30:47+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 647, "num_examples": 1}, {"name": "train", "num_bytes": 1026, "num_examples": 1}, {"name": "validation", "num_bytes": 631, "num_examples": 1}], "download_size": 18138, "dataset_size": 2304}}
|
2023-02-15T06:30:52+00:00
|
d75bb029e05380ece15280d09668add7c493848e
|
# Dataset Card for "VALUE_wikitext103_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_negative_concord
|
[
"region:us"
] |
2023-02-15T06:32:03+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 168320, "num_examples": 185}, {"name": "train", "num_bytes": 76726939, "num_examples": 83977}, {"name": "validation", "num_bytes": 151295, "num_examples": 173}], "download_size": 47218418, "dataset_size": 77046554}}
|
2023-02-15T06:32:09+00:00
|
0fb05489dbc22df7f7c5e1658024b99081645e40
|
# Dataset Card for "patched_test_p_20_m1_predictions_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_20_m1_predictions_v3
|
[
"region:us"
] |
2023-02-15T06:42:26+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1534784568, "num_examples": 2775054}], "download_size": 135840078, "dataset_size": 1534784568}}
|
2023-02-15T06:42:49+00:00
|
7739c1b59de9e7d167cbc36d812ad0e2236de6da
|
# Dataset Card for "VALUE_wikitext103_drop_aux"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_drop_aux
|
[
"region:us"
] |
2023-02-15T06:53:16+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 292533, "num_examples": 396}, {"name": "train", "num_bytes": 131089043, "num_examples": 174077}, {"name": "validation", "num_bytes": 232418, "num_examples": 340}], "download_size": 78562593, "dataset_size": 131613994}}
|
2023-02-15T06:53:23+00:00
|
fa5094290caf692b28f495eba8916a4641988e07
|
# Dataset Card for "VALUE_wikitext103_dey_it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_dey_it
|
[
"region:us"
] |
2023-02-15T07:06:29+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 121288, "num_examples": 141}, {"name": "train", "num_bytes": 50962893, "num_examples": 58184}, {"name": "validation", "num_bytes": 101616, "num_examples": 126}], "download_size": 31535043, "dataset_size": 51185797}}
|
2023-02-15T07:06:34+00:00
|
47c100c11102b3f059f35adafcee173a477168e2
|
# Dataset Card for "VALUE_wikitext103_null_relcl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_null_relcl
|
[
"region:us"
] |
2023-02-15T07:10:10+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 465774, "num_examples": 528}, {"name": "train", "num_bytes": 199511886, "num_examples": 229800}, {"name": "validation", "num_bytes": 389980, "num_examples": 465}], "download_size": 120237073, "dataset_size": 200367640}}
|
2023-02-15T07:10:18+00:00
|
4f6f34d75b1c8cffe833a240b8848bb5045c8858
|
# Dataset Card for "VALUE_wikitext103_been_done"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_been_done
|
[
"region:us"
] |
2023-02-15T07:12:40+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 631030, "num_examples": 752}, {"name": "train", "num_bytes": 250823074, "num_examples": 294089}, {"name": "validation", "num_bytes": 553077, "num_examples": 673}], "download_size": 148523130, "dataset_size": 252007181}}
|
2023-02-15T07:12:48+00:00
|
db8df122d901f065d3708725bb0869e62500f54f
|
dog/fuego-20230215-081313-fc71e8
|
[
"fuego",
"region:us"
] |
2023-02-15T07:13:14+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230215-081313-fc71e8", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/actlearn-fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-15T07:16:00+00:00
|
|
965dc8b6a47349e0494941396c23a8b66e7be2ee
|
# Dataset Card for "CHISTES_spanish_jokes"
Dataset from [Workshop for NLP introduction with Spanish jokes](https://github.com/liopic/chistes-nlp)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mrm8488/CHISTES_spanish_jokes
|
[
"task_categories:text-classification",
"task_categories:text-generation",
"language:es",
"region:us"
] |
2023-02-15T07:19:30+00:00
|
{"language": ["es"], "task_categories": ["text-classification", "text-generation"], "pretty_name": "chistes", "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "keywords", "dtype": "string"}, {"name": "funny", "dtype": "int64"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 814817, "num_examples": 2419}], "download_size": 504749, "dataset_size": 814817}}
|
2023-02-17T10:26:57+00:00
|
d3d9ea1690b70e5e4fcfe093807ca00ab2e780a8
|
# Dataset Card for "VALUE_wikitext103_null_genetive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_null_genetive
|
[
"region:us"
] |
2023-02-15T07:19:36+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 657089, "num_examples": 838}, {"name": "train", "num_bytes": 313568270, "num_examples": 385999}, {"name": "validation", "num_bytes": 619642, "num_examples": 793}], "download_size": 184632397, "dataset_size": 314845001}}
|
2023-02-15T07:19:49+00:00
|
c0d53aa05de6ebc0edd97dee39d94339ace34a25
|
# Dataset Card for OSCAR-2019-Burmese-fix
## Dataset Description
This dataset is a cleaned version of the Myanmar-language portion of the OSCAR 2019 dataset.
### Contributions
[Swan Htet Aung](https://github.com/swanhtet1992)
|
5w4n/OSCAR-2019-Burmese-fix
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|oscar",
"language:my",
"license:cc0-1.0",
"burmese",
"myanmar",
"myanmar-news",
"myanmar-corpus",
"region:us"
] |
2023-02-15T07:31:43+00:00
|
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["my"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|oscar"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "oscar", "pretty_name": "OSCAR-2019-Burmese-fix", "configs": ["unshuffled_deduplicated_cleaned_my"], "tags": ["burmese", "myanmar", "myanmar-news", "myanmar-corpus"]}
|
2023-02-16T09:01:07+00:00
|
e18cf38bfbb91ef0867cfb5d6f111e80eeb10690
|
Joe02/uneo_refs
|
[
"license:other",
"region:us"
] |
2023-02-15T08:18:41+00:00
|
{"license": "other"}
|
2023-02-15T08:19:21+00:00
|
|
677c2ef1fbd49b53f9c3a945f8c4cf5255fc493c
|
A dataset of translated novels (English-Dutch) based on https://opus.nlpl.eu/Books.php.
To be used as part of the Machine Translation assignment (week 3: fine-tuning NMT),
BSc Information Science course, RUG.
|
GroNLP/ik-mt-2023-books
|
[
"region:us"
] |
2023-02-15T08:22:32+00:00
|
{}
|
2023-02-15T08:32:14+00:00
|
db7d54b864ea7df7986d1c1c67b8d631a91dcd41
|
dog/fuego-20230215-094051-1a615e
|
[
"fuego",
"region:us"
] |
2023-02-15T08:40:52+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230215-094051-1a615e", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/actlearn-fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-15T08:42:38+00:00
|
|
c1bbe7aba13342ada22d029d37bded9f7c24adb5
|
# Dataset Card for XMediaSum
### Dataset Summary
We present XMediaSum, a cross-lingual dialogue summarization dataset with 40K English (dialogues) -> Chinese (summaries) and 40K English (dialogues) -> German (summaries) samples. XMediaSum is created by manually translating the English summaries of MediaSum (an English monolingual dialogue summarization dataset) into both Chinese and German.
- Paper: [ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization](https://aclanthology.org/2022.emnlp-main.526/) (EMNLP 2022)
- GitHub: https://github.com/krystalan/ClidSum
### Supported Tasks
- Cross-Lingual Summarization
- Cross-Lingual Dialogue Summarization
### Languages
- source language: English
- target language: Chinese and German
## Dataset Structure
### Data Instances
One example is given below in JSON format:
```json
{
"dialogue": "MADELELEINE BRAND, host: OK, here's some good news on the jobs front for both men and women. A new survey out today from the employment firm Manpower finds that about a quarter of employers will add jobs this summer. That's for adults, but for teenagers this summer's job market is shaping up to be the weakest in more than 50 years.\r\nALEX COHEN, host: So, how do you get your teenage kids not to spend the entire summer glued to the couch? You're about to get some tips from Michelle Singletary. She's Day to Day's personal finance contributor. Hi, Michelle!\r\nMICHELLE SINGLETARY: Hi!\r\nALEX COHEN, host: So why is the summer job market so hard for teens this year?\r\nMICHELLE SINGLETARY: Lot of things going on right now. We've got a tough economy. We've got a lot of college graduates going into the market. We have people who are losing their jobs and taking jobs that would traditionally go to teens, like in restaurants and retailers. And we have a lot of older people holding on to their jobs and not retiring because they can't afford to retire. And that puts teens at the end of the line when it comes to these types of jobs.\r\nALEX COHEN, host: So you've got a teenager at home, a little bit young for the working world just yet, but what would you say to a teenager who's out there hunting around for a job?\r\nMICHELLE SINGLETARY: If you absolutely need a job, keep looking. You know, obviously the types of jobs that teens tend to go for in retail, fast food, you know, they still need people. And oftentimes you know, listen, you may not get the job at the beginning of the summer, but hold on because in late summer, when some of those college students are going back and perhaps some of those people who lost their jobs are finding permanent positions with more pay, you might be able to still get that job. So don't give up, you may spend a month or month and a half without it, but go back to those retailers and those restaurants and those fast food places to see if they still need someone.\r\nALEX COHEN, host: And now I know parents like having the break from providing allowance. But, you know, is - are there reasons maybe not to push your teen towards taking a job?\r\nMICHELLE SINGLETARY: I think it absolutely is. In fact I think too many teens are working and they don't need to work. They're some who absolutely need, they're contributing to their household or they're putting money into their own college fund. But more often than not, what parents do is say you've got to get a job, and then the teens get the job and they spend all the money on clothes and you know videos and iPods and paying their cell phone bills because they don't need a cell phone anyway.\r\nALEX COHEN, host: So it's not going towards the college tuition at all.\r\nMICHELLE SINGLETARY: It is not. It's just disposable income that they're disposing of. And parents are not setting any limits and you know and then the kids get used to the fact that they're using all of their paycheck. That's another bad habit. Because they don't have to pay bills and all, all their income goes through you know this stuff.\r\nMICHELLE SINGLETARY: And when it comes time to get a real job, they're surprised they don't have enough money. And so you know what? You can wait to work. Instead, maybe they can spend the summer volunteering at a charitable organization or you know going back to school and boosting up their math skills or their English skills. 
We push the teens out into the market too soon, I think for some families.\r\nALEX COHEN, host: But now let's say your kid is working. What tips can parents provide in terms of holding on to that summer money?\r\nMICHELLE SINGLETARY: You know, before they get their job, they need to sit down with them and do a budget. So before they actually work and get that first paycheck I mean, you know, have them draw up a budge where the money is going. And you ought to have some requirements for some of their money. That's right, be a parent.\r\nMICHELLE SINGLETARY: So make them put some of it towards their college fund, if in fact they're headed for college. You know what? Make them put some away, I call it the tax fund, even though they may not have to pay taxes, but to pay for long-term things that they may want. You know, books once they get to college, or maybe they want to get a car, and they can actually pay cash for it, with some of these funds. Don't let them just go out and spend it on movies and stuff. You ought to set some guidelines - this is where you should put the money. And look at their budget.\r\nALEX COHEN, host: Day to Day's personal finance contributor Michelle Singletary. Thank you, Michelle!\r\nMICHELLE SINGLETARY: You're welcome.\r\nALEX COHEN, host: Stay with us. NPR's Day to Day continues.",
"summary": "The tight job market could be bad news for teens seeking summer work. If your teen does find a job, will he or she know how to manage those paychecks? Our personal finance contributor talks with Alex Cohen about ways to help teens find a job.",
"summary_de": "Der angespannte Arbeitsmarkt könnte für Jugendliche, die Sommerarbeit suchen, eine schlechte Nachricht sein. Wenn Ihr Teenager einen Job findet, wird er oder sie wissen, wie er mit diesen Gehaltsschecks umgeht? Unser Mitarbeiter für persönliche Finanzen spricht mit Alex Cohen darüber, wie Teenager bei der Jobsuche unterstützt werden können.",
"summary_zh": "紧张的就业市场对寻找暑期工作的青少年来说可能是个坏消息。如果你的孩子找到了一份工作,他/她懂得怎么管理这些薪水吗?我们的个人理财撰稿人与亚历克斯·科恩谈论如何帮助青少年找到工作。"
},
```
### Data Fields
- 'dialogue': An English dialogue
- 'summary': the original English summary of the corresponding dialogue (provided by MediaSum)
- 'summary_de': the human-translated German summary
- 'summary_zh': the human-translated Chinese summary
### Data Splits
- training set: 20K samples
- validation set: 10K samples
- testing set: 10K samples
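For a quick start, a minimal loading sketch (the repository id matches this card; the field names follow the schema above, and the split names are an assumption to be checked against the repo):
```python
from datasets import load_dataset

# Split names assumed to be train/validation/test; check the repo if they differ.
ds = load_dataset("Krystalan/xmediasum")
sample = ds["train"][0]
print(sample["dialogue"][:200])  # English source dialogue
print(sample["summary"])         # original English summary (from MediaSum)
print(sample["summary_de"])      # human-translated German summary
print(sample["summary_zh"])      # human-translated Chinese summary
```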
## Dataset Creation
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Considerations for Using the Data
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/krystalan/ClidSum)
### Licensing Information
License: CC BY-NC-SA 4.0
### Citation Information
```
@inproceedings{wang-etal-2022-clidsum,
title = "{C}lid{S}um: A Benchmark Dataset for Cross-Lingual Dialogue Summarization",
author = "Wang, Jiaan and
Meng, Fandong and
Lu, Ziyao and
Zheng, Duo and
Li, Zhixu and
Qu, Jianfeng and
Zhou, Jie",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.526",
pages = "7716--7729",
abstract = "We present ClidSum, a benchmark dataset towards building cross-lingual summarization systems on dialogue documents. It consists of 67k+ dialogue documents and 112k+ annotated summaries in different target languages. Based on the proposed ClidSum, we introduce two benchmark settings for supervised and semi-supervised scenarios, respectively. We then build various baseline systems in different paradigms (pipeline and end-to-end) and conduct extensive experiments on ClidSum to provide deeper analyses. Furthermore, we propose mDialBART which extends mBART via further pre-training, where the multiple objectives help the pre-trained model capture the structural characteristics as well as key content in dialogues and the transformation from source to the target language. Experimental results show the superiority of mDialBART, as an end-to-end model, outperforms strong pipeline models on ClidSum. Finally, we discuss specific challenges that current approaches faced with this task and give multiple promising directions for future research. We have released the dataset and code at https://github.com/krystalan/ClidSum.",
}
```
### Contributions
Thanks to [@krystalan](https://github.com/krystalan) for adding this dataset.
|
Krystalan/xmediasum
|
[
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:zh",
"language:de",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-02-15T08:50:38+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en", "zh", "de"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "xmediasum", "tags": []}
|
2023-02-15T13:58:33+00:00
|
0c74dd45842b2b7abc6762f48e7eed2a8f1295a4
|
SAGAY/Bert-distilbert
|
[
"license:other",
"region:us"
] |
2023-02-15T08:58:15+00:00
|
{"license": "other"}
|
2023-02-15T08:59:10+00:00
|
|
e438eb8c802a09296e2976b0df675fdb490dd9ca
|
dog/fuego-20230215-095845-3f00ed
|
[
"fuego",
"region:us"
] |
2023-02-15T08:58:46+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230215-095845-3f00ed", "status": "done", "script": "run.py", "requirements_file": "requirements.txt", "space_id": "dog/actlearn-fuego-runner", "space_hardware": "cpu-basic"}}
|
2023-02-15T09:00:43+00:00
|
|
e8bf7d00b79bab7d6b981ad6a6b2b1c71b76ad89
|
PeerNorback/fuego-20230215-092030-c202d8
|
[
"fuego",
"region:us"
] |
2023-02-15T09:20:31+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230215-092030-c202d8", "status": "done", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "PeerNorback/fuego-20230215-092030-c202d8", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "e4e8da8467d55d28920dbd137261d82255f68c71"}}
|
2023-02-15T09:24:42+00:00
|
|
1b30a0dacaaed9f63636c275246414713bf89283
|
# Dataset Card for "helpful-raw-anthropic"
This is a dataset derived from Anthropic's [HH-RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) of instructions and model-generated demonstrations. We combined training splits from the following two subsets:
* `helpful-base`
* `helpful-online`
To convert the multi-turn dialogues into `(instruction, demonstration)` pairs, only the first response from the Assistant was kept. This heuristic captures the most obvious answers, but overlooks more complex questions where multiple turns were required to reach a helpful response. Some additional filtering is likely required (e.g., enforcing a minimum length or computing ROUGE-L scores between the instruction and demonstration).
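As an illustration, here is a sketch of that heuristic applied to the raw HH-RLHF transcript format, which marks turns with `\n\nHuman:` and `\n\nAssistant:`; the function name and the minimum-length threshold are hypothetical:
```python
def to_pair(transcript: str, min_len: int = 10):
    """Extract an (instruction, demonstration) pair from an HH-RLHF
    transcript by keeping only the first Assistant response.
    Returns None when the heuristic (or the length filter) fails."""
    # HH-RLHF transcripts alternate "\n\nHuman:" and "\n\nAssistant:" turns.
    human_prefix, assistant_prefix = "\n\nHuman:", "\n\nAssistant:"
    first_assistant = transcript.find(assistant_prefix)
    if not transcript.startswith(human_prefix) or first_assistant == -1:
        return None
    instruction = transcript[len(human_prefix):first_assistant].strip()
    rest = transcript[first_assistant + len(assistant_prefix):]
    # Cut the demonstration at the next Human turn, if any.
    demonstration = rest.split(human_prefix)[0].strip()
    if len(demonstration) < min_len:  # crude length filter (hypothetical threshold)
        return None
    return instruction, demonstration
```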
|
HuggingFaceH4/helpful-anthropic-raw
|
[
"license:mit",
"human-feedback",
"region:us"
] |
2023-02-15T10:06:51+00:00
|
{"license": "mit", "pretty_name": "Helpful Raw Anthropic", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "demonstration", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 34540085.04363476, "num_examples": 65499}], "download_size": 0, "dataset_size": 34540085.04363476}, "tags": ["human-feedback"]}
|
2023-02-20T09:00:56+00:00
|
7717cb083fd90bec9479069823ee25be1ed263bc
|
eshanbhanura/chatslB
|
[
"license:unknown",
"region:us"
] |
2023-02-15T10:11:52+00:00
|
{"license": "unknown"}
|
2023-02-15T10:11:52+00:00
|
|
8040edc6f111547bf80befe756703a45ecd07d09
|
# Sovits base-model dataset for 岁己SUI
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
#### ForBaseModel.zip:
Data quality is not high; these clips are only meant for training a base model of 岁己's voice (i.e., for washing the timbre out of G_0.pth and D_0.pth).
The sampling rate is 44.1 kHz; mind the preprocessing before use.
Taken from 岁己's stream recordings of December 2022 and January 2023 (excluding radio streams; 211:13:21 in total), filtered and processed through the following steps:
1. Picked live-stream segments with relatively low BGM volume (20:39:21) [[LowBGM.zip]](https://huggingface.co/datasets/Miuzarte/SUISovitsDataForBaseModel/blob/main/%E6%9C%89%E7%9A%84%E6%B2%A1%E7%9A%84/LowBGM.zip)
2. Processed uniformly with [UVR5](https://github.com/Anjok07/ultimatevocalremovergui) VR Architecture 5_HP-Karaoke-UVR to remove vocals from the BGM as far as possible (20:39:20; yes, it really is just 1 s shorter) [[UVR-ed.zip]](https://huggingface.co/datasets/Miuzarte/SUISovitsDataForBaseModel/blob/main/%E6%9C%89%E7%9A%84%E6%B2%A1%E7%9A%84/UVR-ed.zip)
3. Sliced with [Audio Slicer](https://github.com/flutydeer/audio-slicer) (12:45:29) [[Slice-d.zip]](https://huggingface.co/datasets/Miuzarte/SUISovitsDataForBaseModel/blob/main/%E6%9C%89%E7%9A%84%E6%B2%A1%E7%9A%84/Slice-d.zip)
4. Loudness-normalized with [Fish Audio Preprocessor](https://github.com/fishaudio/audio-preprocess) and dropped clips that were too short or too long (11:24:06) [[LoudnessNorm-ed.zip]](https://huggingface.co/datasets/Miuzarte/SUISovitsDataForBaseModel/blob/main/%E6%9C%89%E7%9A%84%E6%B2%A1%E7%9A%84/LoudnessNorm-ed.zip)
5. Kept clips with stable voiceprint recognition using [Spliter Wav by IceKyrin](https://github.com/IceKyrin) (06:47:46) [[ForBaseModel.zip]](https://huggingface.co/datasets/Miuzarte/SUISovitsDataForBaseModel/blob/main/ForBaseModel.zip)
File structure:
```
ForBaseModel.zip
├── 25788785-20221201-195959-658_01_(Vocals)_1.wav
├── 25788785-20221201-195959-658_01_(Vocals)_3.wav
├── ......
├── 25788785-20230201-005152-235_03_(Vocals)_9.wav
└── 25788785-20230201-005152-235_03_(Vocals)_10.wav
```
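Because these clips are 44.1 kHz, they need resampling before use in a 48 kHz pipeline such as sovits3.0_48k; a minimal sketch, assuming `librosa` and `soundfile` are available (the paths are placeholders):
```python
import librosa
import soundfile as sf

# Resample one extracted clip from 44.1 kHz to 48 kHz (paths are placeholders).
src = "ForBaseModel/25788785-20221201-195959-658_01_(Vocals)_1.wav"
audio, sr = librosa.load(src, sr=None)  # keep the original 44.1 kHz rate
audio_48k = librosa.resample(audio, orig_sr=sr, target_sr=48000)
sf.write("clip_48k.wav", audio_48k, 48000)
```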
#### ForBaseModel_sovits3.0.zip:
The dataset obtained by preprocessing ForBaseModel.zip; it can be fed directly into sovits3.0_48k. Sampling rate: 48 kHz.
File structure:
```
ForBaseModel_sovits3.0.zip
├── configs
│ └── config.json
├── dataset
│ └── 48k
│ └── suijiSUI
│ ├── 25788785-20221201-195959-658_01_(Vocals)_1.wav
│ ├── 25788785-20221201-195959-658_01_(Vocals)_1.wav.f0.npy
│ ├── 25788785-20221201-195959-658_01_(Vocals)_1.wav.soft.pt
│ ├── ......
│ ├── 25788785-20230201-005152-235_03_(Vocals)_10.wav
│ ├── 25788785-20230201-005152-235_03_(Vocals)_10.wav.f0.npy
│ └── 25788785-20230201-005152-235_03_(Vocals)_10.wav.soft.pt
└── filelists
├── test.txt
├── train.txt
└── val.txt
```
#### ForBaseModel_sovits4.0.zip:
The dataset obtained by preprocessing ForBaseModel.zip; it can be fed directly into sovits4.0. Sampling rate: 44.1 kHz.
Note: since 4.0, batch_size in config.json defaults to 6; I changed it back to 12.
File structure:
```
ForBaseModel_sovits4.0.zip
├── configs
│ └── config.json
├── dataset
│ └── 44k
│ └── suijiSUI
│ ├── 25788785-20221201-195959-658_01_(Vocals)_1.wav
│ ├── 25788785-20221201-195959-658_01_(Vocals)_1.wav.f0.npy
│ ├── 25788785-20221201-195959-658_01_(Vocals)_1.wav.soft.pt
│ ├── ......
│ ├── 25788785-20230201-005152-235_03_(Vocals)_10.wav
│ ├── 25788785-20230201-005152-235_03_(Vocals)_10.wav.f0.npy
│ └── 25788785-20230201-005152-235_03_(Vocals)_10.wav.soft.pt
└── filelists
├── test.txt
├── train.txt
└── val.txt
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Chinese (98%)
English (1%)
Japanese (1%)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Miuzarte/SUISovitsDataForBaseModel
|
[
"language:zh",
"AIvtuber",
"VirtuaReal",
"region:us"
] |
2023-02-15T10:33:06+00:00
|
{"language": ["zh"], "tags": ["AIvtuber", "VirtuaReal"]}
|
2023-03-10T04:49:43+00:00
|
24d09ffda4313e126ce37ef5f3217399d0719c6c
|
# Dataset Card for "t5-Europarl-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/t5-Europarl-en
|
[
"region:us"
] |
2023-02-15T11:03:34+00:00
|
{"dataset_info": {"features": [{"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 417227830, "num_examples": 561067}, {"name": "test", "num_bytes": 61238209, "num_examples": 80606}, {"name": "valid", "num_bytes": 57930051, "num_examples": 76911}], "download_size": 125777513, "dataset_size": 536396090}}
|
2023-02-15T11:03:44+00:00
|
432cf5ada2ecb4791e81014f89a0757e6ba022b5
|
rroyc20/trainL
|
[
"license:afl-3.0",
"region:us"
] |
2023-02-15T11:32:08+00:00
|
{"license": "afl-3.0"}
|
2023-02-15T11:56:03+00:00
|
|
450271e87d7e7a39a03ec52324081485265008e1
|
yunfeicloudfly/lora
|
[
"license:openrail",
"region:us"
] |
2023-02-15T12:26:23+00:00
|
{"license": "openrail"}
|
2023-08-10T04:47:31+00:00
|
|
ea8e1b6f3a1d3b436d2e7f1e7adf17e738c25ec9
|
# Dataset Card for Fill50K
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is converted from the fill50k example dataset of [ControlNet](https://github.com/lllyasviel/ControlNet).
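A minimal loading sketch with the 🤗 `datasets` library (the `image`/`guide`/`text` columns follow this repository's declared features; the call assumes a default configuration):
```python
from datasets import load_dataset

dataset = load_dataset("HighCWu/fill50k", split="train")

example = dataset[0]
target = example["image"]   # PIL image: the target
guide = example["guide"]    # PIL image: the conditioning (guide) image
caption = example["text"]   # caption string
print(caption, target.size, guide.size)
```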
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[fill50k.zip](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
HighCWu/fill50k
|
[
"language:en",
"license:openrail",
"region:us"
] |
2023-02-15T12:48:42+00:00
|
{"language": ["en"], "license": "openrail", "pretty_name": "a", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "guide", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 454411979, "num_examples": 50000}], "download_size": 316021131, "dataset_size": 454411979}}
|
2023-02-15T15:45:27+00:00
|
8942cf2176383cd493738c23ca6a7c31b3409dda
|
ZJW666/KFC
|
[
"license:mit",
"region:us"
] |
2023-02-15T12:57:09+00:00
|
{"license": "mit"}
|
2023-02-15T13:00:35+00:00
|
|
9037f14e7c8313e3e9f8d3d0af747e1f841dd9c0
|
# Dataset Card for "helpful-anthropic-raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lewtun/helpful-anthropic-raw
|
[
"region:us"
] |
2023-02-15T13:42:37+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "demonstration", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26008407, "num_examples": 65842}], "download_size": 15735838, "dataset_size": 26008407}}
|
2023-02-15T13:42:56+00:00
|
15296bd1c7a8b69a5a9771e37f0f2686dbc0e9b1
|
# Dataset Card for XMediaSum
### Dataset Summary
We present XMediaSum, a cross-lingual dialogue summarization dataset with 40K English (dialogues) -> Chinese (summaries) and 40K English (dialogues) -> German (summaries) samples. XMediaSum is created by manually translating the English summaries of MediaSum (an English monolingual dialogue summarization dataset) into both Chinese and German.
- Paper: [ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization](https://aclanthology.org/2022.emnlp-main.526/) (EMNLP 2022)
- GitHub: https://github.com/krystalan/ClidSum
### Supported Tasks
- Cross-Lingual Summarization
- Cross-Lingual Dialogue Summarization
### Languages
- source language: English
- target language: Chinese and German
## Dataset Structure
### Data Instances
One example is given below in JSON format:
```json
{
"dialogue": "MADELELEINE BRAND, host: OK, here's some good news on the jobs front for both men and women. A new survey out today from the employment firm Manpower finds that about a quarter of employers will add jobs this summer. That's for adults, but for teenagers this summer's job market is shaping up to be the weakest in more than 50 years.\r\nALEX COHEN, host: So, how do you get your teenage kids not to spend the entire summer glued to the couch? You're about to get some tips from Michelle Singletary. She's Day to Day's personal finance contributor. Hi, Michelle!\r\nMICHELLE SINGLETARY: Hi!\r\nALEX COHEN, host: So why is the summer job market so hard for teens this year?\r\nMICHELLE SINGLETARY: Lot of things going on right now. We've got a tough economy. We've got a lot of college graduates going into the market. We have people who are losing their jobs and taking jobs that would traditionally go to teens, like in restaurants and retailers. And we have a lot of older people holding on to their jobs and not retiring because they can't afford to retire. And that puts teens at the end of the line when it comes to these types of jobs.\r\nALEX COHEN, host: So you've got a teenager at home, a little bit young for the working world just yet, but what would you say to a teenager who's out there hunting around for a job?\r\nMICHELLE SINGLETARY: If you absolutely need a job, keep looking. You know, obviously the types of jobs that teens tend to go for in retail, fast food, you know, they still need people. And oftentimes you know, listen, you may not get the job at the beginning of the summer, but hold on because in late summer, when some of those college students are going back and perhaps some of those people who lost their jobs are finding permanent positions with more pay, you might be able to still get that job. So don't give up, you may spend a month or month and a half without it, but go back to those retailers and those restaurants and those fast food places to see if they still need someone.\r\nALEX COHEN, host: And now I know parents like having the break from providing allowance. But, you know, is - are there reasons maybe not to push your teen towards taking a job?\r\nMICHELLE SINGLETARY: I think it absolutely is. In fact I think too many teens are working and they don't need to work. They're some who absolutely need, they're contributing to their household or they're putting money into their own college fund. But more often than not, what parents do is say you've got to get a job, and then the teens get the job and they spend all the money on clothes and you know videos and iPods and paying their cell phone bills because they don't need a cell phone anyway.\r\nALEX COHEN, host: So it's not going towards the college tuition at all.\r\nMICHELLE SINGLETARY: It is not. It's just disposable income that they're disposing of. And parents are not setting any limits and you know and then the kids get used to the fact that they're using all of their paycheck. That's another bad habit. Because they don't have to pay bills and all, all their income goes through you know this stuff.\r\nMICHELLE SINGLETARY: And when it comes time to get a real job, they're surprised they don't have enough money. And so you know what? You can wait to work. Instead, maybe they can spend the summer volunteering at a charitable organization or you know going back to school and boosting up their math skills or their English skills. 
We push the teens out into the market too soon, I think for some families.\r\nALEX COHEN, host: But now let's say your kid is working. What tips can parents provide in terms of holding on to that summer money?\r\nMICHELLE SINGLETARY: You know, before they get their job, they need to sit down with them and do a budget. So before they actually work and get that first paycheck I mean, you know, have them draw up a budge where the money is going. And you ought to have some requirements for some of their money. That's right, be a parent.\r\nMICHELLE SINGLETARY: So make them put some of it towards their college fund, if in fact they're headed for college. You know what? Make them put some away, I call it the tax fund, even though they may not have to pay taxes, but to pay for long-term things that they may want. You know, books once they get to college, or maybe they want to get a car, and they can actually pay cash for it, with some of these funds. Don't let them just go out and spend it on movies and stuff. You ought to set some guidelines - this is where you should put the money. And look at their budget.\r\nALEX COHEN, host: Day to Day's personal finance contributor Michelle Singletary. Thank you, Michelle!\r\nMICHELLE SINGLETARY: You're welcome.\r\nALEX COHEN, host: Stay with us. NPR's Day to Day continues.",
"summary": "The tight job market could be bad news for teens seeking summer work. If your teen does find a job, will he or she know how to manage those paychecks? Our personal finance contributor talks with Alex Cohen about ways to help teens find a job.",
"summary_de": "Der angespannte Arbeitsmarkt könnte für Jugendliche, die Sommerarbeit suchen, eine schlechte Nachricht sein. Wenn Ihr Teenager einen Job findet, wird er oder sie wissen, wie er mit diesen Gehaltsschecks umgeht? Unser Mitarbeiter für persönliche Finanzen spricht mit Alex Cohen darüber, wie Teenager bei der Jobsuche unterstützt werden können.",
"summary_zh": "紧张的就业市场对寻找暑期工作的青少年来说可能是个坏消息。如果你的孩子找到了一份工作,他/她懂得怎么管理这些薪水吗?我们的个人理财撰稿人与亚历克斯·科恩谈论如何帮助青少年找到工作。"
},
```
### Data Fields
- `dialogue`: an English dialogue
- `summary`: the original English summary of the corresponding dialogue (provided by MediaSum)
- `summary_de`: the human-translated German summary
- `summary_zh`: the human-translated Chinese summary
### Data Splits
- training set: 20K samples
- validation set: 10K samples
- testing set: 10K samples
## Dataset Creation
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Considerations for Using the Data
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/krystalan/ClidSum)
### Licensing Information
License: CC BY-NC-SA 4.0
### Citation Information
```
@inproceedings{wang-etal-2022-clidsum,
title = "{C}lid{S}um: A Benchmark Dataset for Cross-Lingual Dialogue Summarization",
author = "Wang, Jiaan and
Meng, Fandong and
Lu, Ziyao and
Zheng, Duo and
Li, Zhixu and
Qu, Jianfeng and
Zhou, Jie",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.526",
pages = "7716--7729",
abstract = "We present ClidSum, a benchmark dataset towards building cross-lingual summarization systems on dialogue documents. It consists of 67k+ dialogue documents and 112k+ annotated summaries in different target languages. Based on the proposed ClidSum, we introduce two benchmark settings for supervised and semi-supervised scenarios, respectively. We then build various baseline systems in different paradigms (pipeline and end-to-end) and conduct extensive experiments on ClidSum to provide deeper analyses. Furthermore, we propose mDialBART which extends mBART via further pre-training, where the multiple objectives help the pre-trained model capture the structural characteristics as well as key content in dialogues and the transformation from source to the target language. Experimental results show the superiority of mDialBART, as an end-to-end model, outperforms strong pipeline models on ClidSum. Finally, we discuss specific challenges that current approaches faced with this task and give multiple promising directions for future research. We have released the dataset and code at https://github.com/krystalan/ClidSum.",
}
```
### Contributions
Thanks to [@krystalan](https://github.com/krystalan) for adding this dataset.
|
GEM/xmediasum
|
[
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:zh",
"language:de",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-02-15T14:01:13+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en", "zh", "de"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "xmediasum", "tags": []}
|
2023-02-15T14:01:56+00:00
|
5fe98b0c7725c5974e3f296c87d2b969c088f5eb
|
# Dataset Card for "trainval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rroyc20/trainval
|
[
"region:us"
] |
2023-02-15T14:18:54+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "clean_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3933706, "num_examples": 42415}, {"name": "val", "num_bytes": 1691755, "num_examples": 18178}], "download_size": 3490856, "dataset_size": 5625461}}
|
2023-02-15T14:46:20+00:00
|
3651b36b6337f3f8fdfa6991d5095e1ecbefab69
|
# Dataset Card for "salvadoran-news-edh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
justinian336/salvadoran-news-edh
|
[
"region:us"
] |
2023-02-15T14:34:40+00:00
|
{"dataset_info": {"features": [{"name": "image_src", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": {"class_label": {"names": {"0": "fotogalerias", "1": "noticias", "2": "deportes/zona-mundialista", "3": "entretenimiento", "4": "vida", "5": "opinion", "7": "opinion/caricaturas", "8": "deportes", "9": "videos"}}}}, {"name": "link", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 196407515, "num_examples": 55345}], "download_size": 111585532, "dataset_size": 196407515}}
|
2024-02-12T00:59:15+00:00
|
6986f37a1f34467621dd78a5b0263a7400133897
|
# Dataset Card for "salvadoran-news-elmundo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
justinian336/salvadoran-news-elmundo
|
[
"region:us"
] |
2023-02-15T14:38:15+00:00
|
{"dataset_info": {"features": [{"name": "image_src", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": {"class_label": {"names": {"0": "Tecnomundo", "1": "Guia Mundialista", "2": "Economia", "3": "Confidencial", "4": "Editorial", "5": "Politica", "6": "El Mundo", "7": "Nacionales"}}}}, {"name": "date", "dtype": "string"}, {"name": "link", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 89785805, "num_examples": 45983}], "download_size": 49911188, "dataset_size": 89785805}}
|
2024-02-12T01:01:10+00:00
|
edd945d6e6ab10d339d7a847ee2f8bcd613d9d94
|
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
j-krzywdziak/test2
|
[
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:pl",
"license:mit",
"region:us"
] |
2023-02-15T14:53:13+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["pl"], "license": ["mit"], "multilinguality": ["monolingual"], "dataset_info": [{"config_name": "config", "features": [{"name": "audio_id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}]}]}
|
2023-02-17T13:13:40+00:00
|
8b289705d3be6e47c53b7859a9c0f15b1112dac5
|
# Dataset Card for "fer2013test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Piro17/fer2013test
|
[
"region:us"
] |
2023-02-15T15:02:15+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "angry", "1": "disgust", "2": "fear", "3": "happy", "4": "neutral", "5": "sad", "6": "surprise"}}}}], "splits": [{"name": "train", "num_bytes": 11521798.802, "num_examples": 7178}], "download_size": 10231842, "dataset_size": 11521798.802}}
|
2023-02-15T15:02:30+00:00
|
c9af165f20a7aa64d5baf1da1da5d792ad850075
|
madrylab/imagenet-star
|
[
"license:mit",
"region:us"
] |
2023-02-15T15:05:21+00:00
|
{"license": "mit"}
|
2023-03-07T16:41:15+00:00
|
|
4bccca2357674563484cd290ef21bfe7fe54ac1b
|
This dataset contains the tokens for ImageNet* from the paper [Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation](https://arxiv.org/abs/2302.07865).
Download the tokens from the files page, or run:
```
wget https://huggingface.co/datasets/madrylab/imagenet-star-tokens/resolve/main/tokens.zip
```
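Alternatively, a sketch using `huggingface_hub` (the filename is taken from the URL above):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="madrylab/imagenet-star-tokens",
    filename="tokens.zip",
    repo_type="dataset",
)
print(path)  # local path of the cached archive
```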
|
madrylab/imagenet-star-tokens
|
[
"license:mit",
"arxiv:2302.07865",
"region:us"
] |
2023-02-15T15:06:23+00:00
|
{"license": "mit"}
|
2023-02-16T02:45:47+00:00
|
314720d976222834b3fe09a215738299ba2abe3b
|
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain
## Table of Contents
- [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-for-medical-domain)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://deft2023.univ-avignon.fr/
- **Repository:** https://deft2023.univ-avignon.fr/
- **Paper:** [FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain](https://hal.science/hal-03824241/document)
- **Leaderboard:** Coming soon
- **Point of Contact:** [Yanis LABRAK](mailto:[email protected])
### Dataset Summary
This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for the medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers.
Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s).
We also propose first baseline models for this MCQA task, both to report current performance and to highlight the difficulty of the task. A detailed analysis of the results showed that representations adapted to the medical domain or to the MCQA task are necessary: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.
### Supported Tasks and Leaderboards
Multiple-Choice Question Answering (MCQA)
### Languages
The questions and answers are available in French.
## Dataset Structure
### Data Instances
```json
{
"id": "1863462668476003678",
"question": "Parmi les propositions suivantes, laquelle (lesquelles) est (sont) exacte(s) ? Les chylomicrons plasmatiques :",
"answers": {
"a": "Sont plus riches en cholestérol estérifié qu'en triglycérides",
"b": "Sont synthétisés par le foie",
"c": "Contiennent de l'apolipoprotéine B48",
"d": "Contiennent de l'apolipoprotéine E",
"e": "Sont transformés par action de la lipoprotéine lipase"
},
"correct_answers": [
"c",
"d",
"e"
],
"subject_name": "pharmacie",
"type": "multiple"
}
```
### Data Fields
- `id` : a string question identifier for each example
- `question` : question text (a string)
- `answer_a` : Option A
- `answer_b` : Option B
- `answer_c` : Option C
- `answer_d` : Option D
- `answer_e` : Option E
- `correct_answers`: Correct options, e.g., C, D and E
- `choice_type` ({"single", "multiple"}): Question choice type.
- "single": Single-choice question, where each choice contains a single option.
- "multiple": Multi-choice question, where each choice contains a combination of multiple options.
### Data Splits
| # Answers | Training | Validation | Test | Total |
|:---------:|:--------:|:----------:|:----:|:-----:|
| 1 | 595 | 164 | 321 | 1,080 |
| 2 | 528 | 45 | 97 | 670 |
| 3 | 718 | 71 | 141 | 930 |
| 4 | 296 | 30 | 56 | 382 |
| 5 | 34 | 2 | 7 | 43 |
| Total | 2,171 | 312 | 622 | 3,105 |
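A short access sketch following the nested instance layout shown above (the `load_dataset` call assumes a default configuration; split names follow the table):
```python
from datasets import load_dataset

dataset = load_dataset("DEFT-2023/DEFT2023")  # assumes a default configuration

example = dataset["train"][0]
options = example["answers"]               # dict keyed "a".."e", per the instance above
correct = set(example["correct_answers"])  # e.g. {"c", "d", "e"}
for letter in "abcde":
    marker = "*" if letter in correct else " "
    print(f"{marker} {letter}) {options[letter]}")
```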
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is of 13k words, of which 3.8k are estimated medical domain-specific words (i.e. a word related to the medical field). We find an average of 2.49 medical domain-specific words in each question (17 % of the words) and 2 in each answer (36 % of the words). On average, a medical domain-specific word is present in 2 questions and in 8 answers.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
The dataset was created by Labrak Yanis and Bazoge Adrien and Dufour Richard and Daille Béatrice and Gourraud Pierre-Antoine and Morin Emmanuel and Rouvier Mickael.
### Licensing Information
Apache 2.0
### Citation Information
If you find this useful in your research, please consider citing the dataset paper :
```latex
@inproceedings{labrak-etal-2022-frenchmedmcqa,
title = "{F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain",
author = "Labrak, Yanis and
Bazoge, Adrien and
Dufour, Richard and
Daille, Beatrice and
Gourraud, Pierre-Antoine and
Morin, Emmanuel and
Rouvier, Mickael",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.5",
pages = "41--46",
abstract = "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.",
}
```
### Contact
Please contact [Yanis LABRAK](https://github.com/qanastek) for more information about this dataset.
|
DEFT-2023/DEFT2023
|
[
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1k<n<10k",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"region:us"
] |
2023-02-15T15:23:14+00:00
|
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["fr"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1k<n<10k"], "source_datasets": ["original"], "task_categories": ["question-answering", "multiple-choice"], "task_ids": ["multiple-choice-qa", "open-domain-qa"], "paperswithcode_id": "frenchmedmcqa", "pretty_name": "FrenchMedMCQA"}
|
2023-05-28T16:37:23+00:00
|
2d1967b67f938d206926effa200d115b5d29a672
|
# Dataset Card for "model_cards_with_readmes_sections"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/model_cards_with_readmes_sections
|
[
"region:us"
] |
2023-02-15T15:32:03+00:00
|
{"dataset_info": {"features": [{"name": "license", "dtype": "string"}, {"name": "tags", "dtype": "string"}, {"name": "is_nc", "dtype": "bool"}, {"name": "readme_section", "dtype": "string"}, {"name": "hash", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28801782.8572217, "num_examples": 32124}], "download_size": 13668782, "dataset_size": 28801782.8572217}}
|
2023-02-15T15:32:28+00:00
|
30a30d2ae61878baac354c5205ee7aa2d4a9e08d
|
# Dataset Card for "helpful-self-instruct-raw"
This dataset is derived from the `finetuning` subset of [Self-Instruct](https://github.com/yizhongw/self-instruct), with some light formatting to remove trailing spaces and `<|endoftext|>` tokens.
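A sketch of that cleanup step (the record layout mirrors this dataset's `instruction`/`demonstration` columns):
```python
def clean(text: str) -> str:
    # Drop the <|endoftext|> sentinel and any trailing whitespace.
    return text.replace("<|endoftext|>", "").rstrip()

record = {
    "instruction": "List three primary colors. ",
    "demonstration": "Red, yellow, blue.<|endoftext|>",
}
record = {key: clean(value) for key, value in record.items()}
```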
|
HuggingFaceH4/helpful-self-instruct-raw
|
[
"license:apache-2.0",
"human-feedback",
"region:us"
] |
2023-02-15T15:32:48+00:00
|
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "demonstration", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20412870, "num_examples": 82612}], "download_size": 12532431, "dataset_size": 20412870}, "tags": ["human-feedback"]}
|
2023-02-15T16:04:31+00:00
|
f7cc5f972394c3e973e9844736853d9cf5e1a7ba
|
# Dataset Card for "GID"
## Dataset Description
- **Paper** [Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
### Licensing Information
Public domain.
## Citation Information
[Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
```
@article{GID2020,
title = {Land-cover classification with high-resolution remote sensing images using transferable deep models},
author = {Tong, Xin-Yi and Xia, Gui-Song and Lu, Qikai and Shen, Huanfeng and Li, Shengyang and You, Shucheng and Zhang, Liangpei},
year = 2020,
journal = {Remote Sensing of Environment},
volume = 237,
pages = 111322
}
```
|
jonathan-roberts1/GID
|
[
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] |
2023-02-15T16:42:03+00:00
|
{"license": "other", "task_categories": ["image-classification", "zero-shot-image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "arbor woodland", "1": "artificial grassland", "2": "dry cropland", "3": "garden plot", "4": "industrial land", "5": "irrigated land", "6": "lake", "7": "natural grassland", "8": "paddy field", "9": "pond", "10": "river", "11": "rural residential", "12": "shrub land", "13": "traffic land", "14": "urban residential"}}}}], "splits": [{"name": "train", "num_bytes": 1777210275, "num_examples": 30000}], "download_size": 1263253291, "dataset_size": 1777210275}}
|
2023-03-31T14:38:31+00:00
|
07773c7351ebfce04f666a9c9a165d1e0d59af36
|
# Dataset Card for "CLRS"
## Dataset Description
- **Paper** [CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification](https://www.mdpi.com/1424-8220/20/4/1226/pdf)
### Licensing Information
For academic purposes.
## Citation Information
[CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification](https://www.mdpi.com/1424-8220/20/4/1226/pdf)
```
@article{s20041226,
title = {CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification},
author = {Li, Haifeng and Jiang, Hao and Gu, Xin and Peng, Jian and Li, Wenbo and Hong, Liang and Tao, Chao},
year = 2020,
journal = {Sensors},
volume = 20,
number = 4,
doi = {10.3390/s20041226},
issn = {1424-8220},
url = {https://www.mdpi.com/1424-8220/20/4/1226},
article-number = 1226,
pubmedid = 32102294,
}
```
|
jonathan-roberts1/CLRS
|
[
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] |
2023-02-15T16:46:17+00:00
|
{"license": "other", "task_categories": ["image-classification", "zero-shot-image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "airport", "1": "bare land", "2": "beach", "3": "bridge", "4": "commercial", "5": "desert", "6": "farmland", "7": "forest", "8": "golf course", "9": "highway", "10": "industrial", "11": "meadow", "12": "mountain", "13": "overpass", "14": "park", "15": "parking", "16": "playground", "17": "port", "18": "railway", "19": "railway station", "20": "residential", "21": "river", "22": "runway", "23": "stadium", "24": "storage tank"}}}}], "splits": [{"name": "train", "num_bytes": 2969926932, "num_examples": 15000}], "download_size": 2327956775, "dataset_size": 2969926932}}
|
2023-03-31T14:35:22+00:00
|
a1657fd6bb5aab11fc2df492816f5f7f2e919025
|
# Dataset Card for "streamlit-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
andfanilo/streamlit-issues
|
[
"region:us"
] |
2023-02-15T16:50:20+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "dtype": "null"}, {"name": "comments", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "body", 
"dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 15843221, "num_examples": 5000}], "download_size": 3914406, "dataset_size": 15843221}}
|
2023-02-15T16:50:25+00:00
|
ac05b361866a336d733c666b48bcc63c21b4c5cd
|
# Dataset Card for "cppe-5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
comet-team/cppe-5
|
[
"region:us"
] |
2023-02-15T17:12:20+00:00
|
{"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": {"class_label": {"names": {"0": "Coverall", "1": "Face_Shield", "2": "Gloves", "3": "Goggles", "4": "Mask"}}}}]}], "splits": [{"name": "train", "num_bytes": 240463364.0, "num_examples": 1000}, {"name": "test", "num_bytes": 4172164.0, "num_examples": 29}], "download_size": 239989523, "dataset_size": 244635528.0}}
|
2023-02-15T17:18:06+00:00
|
8333b655cb9786ed6e68d58231ba348fc41c2e8a
|
# Dataset Card for "mastodon-instances"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
comet-team/mastodon-instances
|
[
"region:us"
] |
2023-02-15T17:17:09+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "short_description", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "uptime", "dtype": "float64"}, {"name": "up", "dtype": "bool"}, {"name": "https_score", "dtype": "int64"}, {"name": "https_rank", "dtype": "string"}, {"name": "ipv6", "dtype": "bool"}, {"name": "openRegistrations", "dtype": "bool"}, {"name": "users", "dtype": "int64"}, {"name": "statuses", "dtype": "string"}, {"name": "connections", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 816425, "num_examples": 1868}], "download_size": 536440, "dataset_size": 816425}}
|
2023-02-15T17:17:15+00:00
|
76dc6d02ccc4bdd30038ea55ddfd94971fd9887c
|
kimetsu/Timit
|
[
"license:other",
"region:us"
] |
2023-02-15T17:22:04+00:00
|
{"license": "other"}
|
2023-02-15T19:35:51+00:00
|
|
5ad1aa816cd428759634413f30a3b524f2ff2011
|
# Dataset Card for "bmfr-finetuning-dictmed-med"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ell-hol/bmfr-finetuning-dictmed-med
|
[
"region:us"
] |
2023-02-15T17:23:43+00:00
|
{"dataset_info": {"features": [{"name": "bambara", "dtype": "string"}, {"name": "french", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190972.0, "num_examples": 1539}, {"name": "test", "num_bytes": 52325.0, "num_examples": 192}], "download_size": 147976, "dataset_size": 243297.0}}
|
2023-02-15T18:52:22+00:00
|
fcf7de2af244520e06f424c6149235166a561642
|
# Dataset Card for incivility-arizona-daily-star-comments
This is a collection of more than 6,000 comments on Arizona Daily Star news articles from 2011 that have been manually annotated for various forms of incivility, including aspersion, namecalling, sarcasm, and vulgarity.
## Dataset Structure
Each instance in the dataset corresponds to a single comment from a single commenter.
An instance's `text` field contains the text of the comment with any quotes of other commenters removed.
The remaining fields in each instance provide binary labels for each type of incivility annotated:
`aspersion`, `hyperbole`, `lying`, `namecalling`, `noncooperation`, `offtopic`, `pejorative`, `sarcasm`, `vulgarity`, and `other_incivility`.
The dataset provides three standard splits: `train`, `validation`, and `test`.
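A minimal loading sketch (column names follow the label list above; labels are binary integers):
```python
from datasets import load_dataset

dataset = load_dataset("civility-lab/incivility-arizona-daily-star-comments")

train = dataset["train"]
rate = sum(train["namecalling"]) / len(train)  # binary 0/1 labels
print(f"{len(train)} training comments, {rate:.1%} labeled as namecalling")
```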
## Dataset Creation
The original annotation effort is described in:
- Kevin Coe, Kate Kenski, Stephen A. Rains.
[Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments](https://doi.org/10.1111/jcom.12104).
Journal of Communication, Volume 64, Issue 4, August 2014, Pages 658–679.
That dataset was converted to a computer-friendly form as described in section 4.2.1 of:
- Farig Sadeque.
[User behavior in social media: engagement, incivility, and depression](https://repository.arizona.edu/handle/10150/633192).
PhD thesis. The University of Arizona. 2019.
The current upload is a 2023 conversion of that form to a Hugging Face Dataset.
## Considerations for Using the Data
The data is intended for the study of incivility.
It should not be used to train models to generate incivility.
The human coders and their trainers were mostly [Western, educated, industrialized, rich and democratic (WEIRD)](https://www.nature.com/articles/466029a), which may have shaped how they evaluated incivility.
## Citation
```bibtex
@article{10.1111/jcom.12104,
author = {Coe, Kevin and Kenski, Kate and Rains, Stephen A.},
title = {Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments},
journal = {Journal of Communication},
volume = {64},
number = {4},
pages = {658-679},
year = {2014},
month = {06},
issn = {0021-9916},
doi = {10.1111/jcom.12104},
url = {https://doi.org/10.1111/jcom.12104},
}
```
|
civility-lab/incivility-arizona-daily-star-comments
|
[
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"social media",
"incivility",
"aspersion",
"hyperbole",
"lying",
"namecalling",
"noncooperation",
"pejorative",
"sarcasm",
"vulgarity",
"region:us"
] |
2023-02-15T18:25:12+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "pretty_name": "Incivility in Arizona Daily Star Comments", "tags": ["social media", "incivility", "aspersion", "hyperbole", "lying", "namecalling", "noncooperation", "pejorative", "sarcasm", "vulgarity"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "aspersion", "dtype": "int64"}, {"name": "hyperbole", "dtype": "int64"}, {"name": "lying", "dtype": "int64"}, {"name": "namecalling", "dtype": "int64"}, {"name": "noncooperation", "dtype": "int64"}, {"name": "offtopic", "dtype": "int64"}, {"name": "other_incivility", "dtype": "int64"}, {"name": "pejorative", "dtype": "int64"}, {"name": "sarcasm", "dtype": "int64"}, {"name": "vulgarity", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1568771, "num_examples": 3910}, {"name": "validation", "num_bytes": 398667, "num_examples": 976}, {"name": "test", "num_bytes": 486262, "num_examples": 1228}], "download_size": 1400753, "dataset_size": 2453700}}
|
2023-02-15T23:18:17+00:00
|
f93c62c5fdb80499a0a9e971ba6a4d689010b268
|
marmolpen3/sla_example
|
[
"region:us"
] |
2023-02-15T19:04:07+00:00
|
{"viewer": true}
|
2023-04-20T22:48:33+00:00
|
|
ff3f0ad352b0fd99ef7ba510c66f2e80a0356453
|
Eppinette/C.Net.Poses
|
[
"license:openrail",
"region:us"
] |
2023-02-15T19:04:29+00:00
|
{"license": "openrail"}
|
2023-02-15T19:28:11+00:00
|
|
106ae27e9d25e58900f98c46de5d6465156138db
|
joweyel/munzels
|
[
"license:unknown",
"region:us"
] |
2023-02-15T19:14:22+00:00
|
{"license": "unknown", "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3167640.0, "num_examples": 20}], "download_size": 3168629, "dataset_size": 3167640.0}}
|
2023-02-15T21:13:41+00:00
|
|
4ee3b371381bcccd14723b74001be3febc6f8179
|
Rahmaa/SciTLDR_ClEaN
|
[
"license:openrail",
"region:us"
] |
2023-02-15T19:23:53+00:00
|
{"license": "openrail"}
|
2023-02-19T17:57:31+00:00
|
|
091860c29d4d69c06bf41f15090e03c787424fda
|
Rahmaa/ElsevieR_ClEaN
|
[
"license:openrail",
"region:us"
] |
2023-02-15T19:47:05+00:00
|
{"license": "openrail"}
|
2023-02-19T17:57:46+00:00
|
|
78a5b42b2450e91b71e4c8bb7f4972f35237524d
|
# Dataset Card for "resd_annotated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Aniemore/resd_annotated
|
[
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:ru",
"license:mit",
"voice",
"emotions",
"annotated",
"classification",
"doi:10.57967/hf/1272",
"region:us"
] |
2023-02-15T20:00:40+00:00
|
{"language": "ru", "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification"], "pretty_name": "RESD", "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "speech", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "emotion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 398878916.336, "num_examples": 1116}, {"name": "test", "num_bytes": 96643276, "num_examples": 280}], "download_size": 485513605, "dataset_size": 495522192.336}, "tags": ["voice", "emotions", "annotated", "classification"]}
|
2023-07-14T06:59:51+00:00
|
3665be7a939e8bb32b39be9593681753e24c495e
|
ahmedyehia/fuego-20230215-215057-ed542f
|
[
"fuego",
"region:us"
] |
2023-02-15T20:50:58+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230215-215057-ed542f", "status": "done", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "ahmedyehia/fuego-20230215-215057-ed542f", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "e4e8da8467d55d28920dbd137261d82255f68c71"}}
|
2023-02-15T20:55:56+00:00
|
|
a921529120ea58fed1b13d257caac4956bd9b183
|
Haru6/Unicorn-vs-Horse
|
[
"license:openrail",
"region:us"
] |
2023-02-15T21:04:00+00:00
|
{"license": "openrail"}
|
2023-02-15T21:04:00+00:00
|
|
30340cd1cb5084bec7ffd7d622220676b88bc8a6
|
# Dataset Card for "VALUE_wikitext103_lexical"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_lexical
|
[
"region:us"
] |
2023-02-15T21:23:07+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1246241, "num_examples": 1871}, {"name": "train", "num_bytes": 522253807, "num_examples": 756745}, {"name": "validation", "num_bytes": 1085449, "num_examples": 1603}], "download_size": 311609435, "dataset_size": 524585497}}
|
2023-02-15T21:23:20+00:00
|
410184b25871c6fb918595b300f69c1ee21f8931
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
|
HuggingFaceH4/pmp-se-test-dataset
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-02-15T22:00:29+00:00
|
{"license": "cc-by-sa-4.0", "dataset_info": {"features": [{"name": "qid", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "answers", "list": [{"name": "AnswerID", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "pm_score", "dtype": "int64"}, {"name": "selected", "dtype": "bool"}, {"name": "Author", "dtype": "string"}, {"name": "AuthorID", "dtype": "int64"}, {"name": "AuthorProfile", "dtype": "string"}]}, {"name": "date", "dtype": "string"}, {"name": "metadata", "dtype": "string"}]}}
|
2023-02-16T00:06:13+00:00
|
9048b8414e4281b9a0c3c4d54eece3603996714c
|
# Dataset Card for Cocktail Recipes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
## Dataset Description
### Dataset Summary
Cocktail Recipes Dataset for Semi-Structured Text Generation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
```json
{"title": "Final Ward",
"ingredients": ["0.75 oz. Rye Whiskey",
"0.75 oz. Lemon Juice",
"0.75 oz. Maraschino Liqueur",
"0.75 oz. Green Chartreuse"],
"directions": ["shake on ice and strain"],
"misc":[],
"source": "Death & Co.",
"ner":["whiskey",
"chartreuse",
"maraschino liqueur"]}
```
### Data Fields
- `title` (`str`): Title of the recipe.
- `ingredients` (`list` of `str`): Ingredients.
- `directions` (`list` of `str`): Instruction steps.
- `misc` (`list` of `str`): Miscellaneous notes (may be empty).
- `source` (`str`): Origin of each recipe.
- `ner` (`list` of `str`): NER entities.
### Data Splits
The dataset contains a single `train` split.
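The split can be loaded with the Hugging Face `datasets` library; the snippet below is a minimal sketch using this repository's id:
```python
from datasets import load_dataset

# Load the single train split of this dataset from the Hugging Face Hub.
recipes = load_dataset("brianarbuckle/cocktail_recipes", split="train")

# Inspect one record; the fields match the Data Fields section above.
example = recipes[0]
print(example["title"])
print(example["ingredients"])
```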
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
|
brianarbuckle/cocktail_recipes
|
[
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_categories:summarization",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:explanation-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] |
2023-02-15T22:01:34+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "text-generation", "fill-mask", "text-retrieval", "summarization"], "task_ids": ["document-retrieval", "entity-linking-retrieval", "explanation-generation", "language-modeling", "masked-language-modeling"], "pretty_name": "Cocktail Recipes", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "ingredients", "sequence": "string"}, {"name": "directions", "sequence": "string"}, {"name": "misc", "sequence": "string"}, {"name": "source", "dtype": "string"}, {"name": "ner", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 301501, "num_examples": 875}], "download_size": 96915, "dataset_size": 301501}}
|
2023-02-28T04:14:39+00:00
|
01ff8b7e27a1e13206b9708161939569936beaaf
|
TheLastBen/RNPD
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-02-15T23:44:26+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-12-14T10:35:07+00:00
|
|
26c456a2d394d66a0415fa87d1df5a036bec8414
|
Initial example files to test an easy way to store and manage text and image data.
Created with the Python scripts available at https://github.com/mediocreatmybest/gaslightingeveryone/tree/main/tools
Creation script: https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/images2parq.py
Extraction script: https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/parq2folder.py
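As a rough sketch of reading such a parquet file back in Python (the file name and column names below are assumptions; the extraction script above is the authoritative reference):
```python
from io import BytesIO

import pandas as pd
from PIL import Image

# Read a parquet file produced by images2parq.py
# (the file name "example.parquet" is an assumption).
df = pd.read_parquet("example.parquet")
print(df.columns)

# Decode the first image, assuming raw image bytes are stored
# in an "image" column (column name is an assumption).
first_image = Image.open(BytesIO(df.iloc[0]["image"]))
first_image.show()
```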
|
Mediocreatmybest/Example
|
[
"license:cc0-1.0",
"region:us"
] |
2023-02-15T23:54:05+00:00
|
{"license": "cc0-1.0"}
|
2023-02-16T00:11:05+00:00
|
7d2be385f615d2791da434b97293aece089c9d94
|
# Dataset Card for "french-snli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sugam11/french-snli
|
[
"region:us"
] |
2023-02-16T01:16:49+00:00
|
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "translated_premise", "dtype": "string"}, {"name": "translated_hypothesis", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2296826, "num_examples": 10000}, {"name": "train", "num_bytes": 122642216, "num_examples": 550152}, {"name": "validation", "num_bytes": 2303892, "num_examples": 10000}], "download_size": 40422094, "dataset_size": 127242934}}
|
2023-02-16T01:17:44+00:00
|
b2b12c4340dedd7dbd57052caca7007eb230f8fb
|
# Dataset Card for "c4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lsb/c4
|
[
"region:us"
] |
2023-02-16T01:29:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 828588742863, "num_examples": 364868892}, {"name": "validation", "num_bytes": 825766822, "num_examples": 364608}], "download_size": 511302989842, "dataset_size": 829414509685}}
|
2023-02-16T19:40:43+00:00
|
3338b2d5da9e407fc5444ca90caf0fe3ff077da9
|
ctem049/diff_data
|
[
"license:cc-by-nc-nd-4.0",
"region:us"
] |
2023-02-16T02:55:32+00:00
|
{"license": "cc-by-nc-nd-4.0"}
|
2023-04-06T13:13:18+00:00
|
|
277382fd035cabc0b744c1c5e49e79761ad31420
|
# Dataset Card for "VALUE_wikitext103_uninflect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext103_uninflect
|
[
"region:us"
] |
2023-02-16T03:31:56+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 849818, "num_examples": 1118}, {"name": "train", "num_bytes": 368188365, "num_examples": 469979}, {"name": "validation", "num_bytes": 796548, "num_examples": 1053}], "download_size": 215340313, "dataset_size": 369834731}}
|
2023-02-16T03:32:05+00:00
|
743efbffe71951d3eef53dd2504505da4d9d4649
|
wooden-ufo/MyStorage
|
[
"license:other",
"region:us"
] |
2023-02-16T03:33:22+00:00
|
{"license": "other"}
|
2023-02-20T22:41:16+00:00
|
|
5b7cd23dc3363409b775c77e79d855d02c68b46e
|
# Dataset Card for "affectnethq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Piro17/affectnethq
|
[
"region:us"
] |
2023-02-16T06:47:30+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "anger", "1": "disgust", "2": "fear", "3": "happy", "4": "neutral", "5": "sad", "6": "surprise"}}}}], "splits": [{"name": "train", "num_bytes": 5858852632.634, "num_examples": 27823}], "download_size": 0, "dataset_size": 5858852632.634}}
|
2023-02-16T06:56:12+00:00
|
6a5f4275917093f9e8c5593120965ae065cbe459
|
# Dataset Card for SRSD-Feynman (Easy set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:[email protected])
### Dataset Summary
Our SRSD (Feynman) datasets are designed to evaluate the performance of symbolic regression for scientific discovery (SRSD).
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used to evaluate the potential of SRSD, e.g., whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Easy set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 30 different physics formulas:
[Problem table (PDF)](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain:
- **1 dummy variable**: I.12.1, I.12.4, I.12.5, I.18.12, I.25.13, I.47.23
- **2 dummy variables**: I.14.3, I.18.16, I.43.16, II.3.24, II.8.31, II.10.9, II.13.17, II.15.5, II.27.18, III.7.38, III.12.43
- **3 dummy variables**: I.14.4, I.26.2, I.27.6, I.30.5, II.2.42, II.4.23, II.15.4, II.27.16, II.34.11, II.34.29b, II.38.3, II.38.14, III.15.27
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: shape (num_samples, num_variables + 1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have the following files (a minimal loading sketch follows the list):
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
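A minimal sketch of how these files could be read, assuming the train split is named `train.txt` and the pickled equation is named `true_eq.pkl` (both file names are assumptions, not part of this card):
```python
import pickle

import numpy as np

# Whitespace-delimited tabular data of shape (num_samples, num_variables + 1);
# the last column is the target output (file name is an assumption).
data = np.loadtxt("train.txt")
X, y = data[:, :-1], data[:, -1]

# Ground-truth equation stored as a pickled sympy expression
# (file name is an assumption; sympy must be installed to unpickle it).
with open("true_eq.pkl", "rb") as f:
    true_equation = pickle.load(f)
print(true_equation)
```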
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., the speed of light, the gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that confirms the physical phenomenon behind each equation.
In cases where a specific experiment was difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, the ranges are sampled on a log scale spanning roughly two orders of magnitude (10^2), so that both large and small changes in value are captured as the order changes.
Variables such as angles, for which a linear distribution is expected, are sampled uniformly.
In addition, variables that take only a specific sign were sampled within that range.
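As an illustration only (not the authors' exact sampling code), the sampling strategies described above might look like this in numpy:
```python
import numpy as np

rng = np.random.default_rng(0)

# Log-uniform sampling spanning roughly two orders of magnitude,
# e.g. 1e0 to 1e2 (the bounds here are purely illustrative).
log_scaled = 10 ** rng.uniform(0.0, 2.0, size=1000)

# Uniform sampling for variables such as angles,
# for which a linear distribution is expected.
angles = rng.uniform(0.0, 2 * np.pi, size=1000)

# Sign-constrained variable: sample magnitudes and keep them positive.
positive_only = np.abs(rng.normal(loc=0.0, scale=1.0, size=1000))
```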
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset assuming typical physics experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods for data-driven scientific discovery.
### Discussion of Biases
Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which is focused on the field of physics.
### Other Known Limitations
Some variables in our datasets represent counts and should therefore be treated as integers.
Due to the limited capacity of 32-bit integers, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
|
yoshitomo-matsubara/srsd-feynman_easy_dummy
|
[
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2206.10540",
"doi:10.57967/hf/0760",
"region:us"
] |
2023-02-16T06:56:39+00:00
|
{"annotations_creators": ["expert"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended"], "task_categories": ["tabular-regression"], "task_ids": [], "pretty_name": "SRSD-Feynman (Easy w/ Dummy Variables)"}
|
2023-10-11T01:07:46+00:00
|
b7fcb12b7137ebbe5b7e4eb7c6c0f5c86174a83b
|
# Dataset Card for SRSD-Feynman (Medium set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:[email protected])
### Dataset Summary
Our SRSD (Feynman) datasets are designed to evaluate the performance of symbolic regression for scientific discovery (SRSD).
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used to evaluate the potential of SRSD, e.g., whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Medium set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 40 different physics formulas:
[Problem table (PDF)](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain:
- **1 dummy variable**: I.10.7, I.12.2, I.13.12, I.16.6, I.32.5, I.43.31, II.11.3, II.34.2, II.34.29a, III.14.14, III.15.14, B8
- **2 dummy variables**: I.11.19, I.12.11, I.13.4, I.15.10, I.18.4, I.24.6, I.34.8, I.38.12, I.39.11, I.43.43, I.48.2, II.6.11, II.21.32, II.34.2a, III.4.32, III.13.18, III.15.12, III.17.37
- **3 dummy variables**: I.8.14, I.29.4, I.34.10, I.34.27, I.39.10, II.8.7, II.37.1, III.8.54, III.19.51, B18
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: shape (num_samples, num_variables + 1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., the speed of light, the gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that confirms the physical phenomenon behind each equation.
In cases where a specific experiment was difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, the ranges are sampled on a log scale spanning roughly two orders of magnitude (10^2), so that both large and small changes in value are captured as the order changes.
Variables such as angles, for which a linear distribution is expected, are sampled uniformly.
In addition, variables that take only a specific sign were sampled within that range.
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset assuming typical physics experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods for data-driven scientific discovery.
### Discussion of Biases
Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which is focused on the field of physics.
### Other Known Limitations
Some variables in our datasets represent counts and should therefore be treated as integers.
Due to the limited capacity of 32-bit integers, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
|
yoshitomo-matsubara/srsd-feynman_medium_dummy
|
[
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2206.10540",
"doi:10.57967/hf/0759",
"region:us"
] |
2023-02-16T07:01:48+00:00
|
{"annotations_creators": ["expert"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended"], "task_categories": ["tabular-regression"], "task_ids": [], "pretty_name": "SRSD-Feynman (Medium w/ Dummy Variables)"}
|
2023-10-11T01:08:13+00:00
|
7a97bef50a6720c18e4b79ebd7f29e9c6b084b1b
|
# Dataset Card for SRSD-Feynman (Hard set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:[email protected])
### Dataset Summary
Our SRSD (Feynman) datasets are designed to evaluate the performance of symbolic regression for scientific discovery (SRSD).
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used to evaluate the potential of SRSD, e.g., whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Hard set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 50 different physics formulas:
[Problem table (PDF)](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain:
- **1 dummy variable**: I.15.3x, I.30.3, II.6.15a, II.11.17, II.11.28, II.13.23, II.13.34, II.24.17, B1, B6, B12, B16, B17
- **2 dummy variables**: I.6.20, I.6.20b, I.9.18, I.15.3t, I.29.16, I.34.14, I.39.22, I.44.4, II.11.20, II.11.27, II.35.18, III.9.52, III.10.19, III.21.20, B2, B3, B7, B9
- **3 dummy variables**: I.6.20a, I.32.17, I.37.4, I.40.1, I.41.16, I.50.26, II.6.15b, II.35.21, II.36.38, III.4.33, B4, B5, B10, B11, B13, B14, B15, B19, B20
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: shape (num_samples, num_variables + 1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., the speed of light, the gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that confirms the physical phenomenon behind each equation.
In cases where a specific experiment was difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, the ranges are sampled on a log scale spanning roughly two orders of magnitude (10^2), so that both large and small changes in value are captured as the order changes.
Variables such as angles, for which a linear distribution is expected, are sampled uniformly.
In addition, variables that take only a specific sign were sampled within that range.
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset assuming typical physics experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods for data-driven scientific discovery.
### Discussion of Biases
Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which is focused on the field of physics.
### Other Known Limitations
Some variables in our datasets represent counts and should therefore be treated as integers.
Due to the limited capacity of 32-bit integers, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
|
yoshitomo-matsubara/srsd-feynman_hard_dummy
|
[
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2206.10540",
"doi:10.57967/hf/0758",
"region:us"
] |
2023-02-16T07:05:02+00:00
|
{"annotations_creators": ["expert"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended"], "task_categories": ["tabular-regression"], "task_ids": [], "pretty_name": "SRSD-Feynman (Hard w/ Dummy Variables)"}
|
2024-02-10T22:46:01+00:00
|
8007cbafadb06a0bdfaa0ba534b2d7a5b2f1052b
|
# FB15k Dataset
Details of the dataset can be found in the following paper:
+ [Translating Embeddings for Modeling Multi-relational Data](http://dl.acm.org/doi/10.5555/2999792.2999923)
|
VLyb/FB15k
|
[
"size_categories:10K<n<100K",
"language:en",
"license:unlicense",
"link-prediction",
"region:us"
] |
2023-02-16T07:30:38+00:00
|
{"language": ["en"], "license": "unlicense", "size_categories": ["10K<n<100K"], "pretty_name": "FB15k", "tags": ["link-prediction"]}
|
2023-02-16T07:44:47+00:00
|
9648d68f54fe13d0372c3f89e365fec8e4c98856
|
## Dataset Description
- **Homepage:** https://github.com/gijswijnholds/sick_nl
- **Repository:** https://github.com/gijswijnholds/sick_nl
- **Paper:** https://aclanthology.org/2021.eacl-main.126/
- **Point of Contact:** [Gijs Wijnholds](mailto:[email protected])
### Dataset Summary
An automatically translated, manually corrected translation of the SICK dataset of [Marelli et al. 2014](https://www.aclweb.org/anthology/L14-1314), intended to boost research in Dutch NLP.
### Languages
The dataset is in Dutch.
## Dataset Structure
### Data Fields
- pair_ID: sentence pair ID
- sentence_A: sentence A
- sentence_B: sentence B
- label: textual entailment gold label: entailment (0), neutral (1) or contradiction (2)
- relatedness_score: semantic relatedness gold score (on a 1-5 continuous scale)
- entailment_AB: entailment for the A-B order (A_neutral_B, A_entails_B, or A_contradicts_B)
- entailment_BA: entailment for the B-A order (B_neutral_A, B_entails_A, or B_contradicts_A)
- sentence_A_original: original sentence from which sentence A is derived
- sentence_B_original: original sentence from which sentence B is derived
- sentence_A_dataset: dataset from which the original sentence A was extracted (FLICKR vs. SEMEVAL)
- sentence_B_dataset: dataset from which the original sentence B was extracted (FLICKR vs. SEMEVAL)
### Data Splits
| Train | Trial | Test |
|------:|------:|-----:|
| 4439  | 495   | 4906 |
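The dataset can be loaded with the Hugging Face `datasets` library; a minimal sketch (per this card's metadata, the `validation` split corresponds to the trial split above):
```python
from datasets import load_dataset

# Splits on the Hub: train / validation / test.
sick_nl = load_dataset("maximedb/sick_nl")
print(sick_nl)

# Inspect one sentence pair and its entailment label.
example = sick_nl["train"][0]
print(example["sentence_A"], "|", example["sentence_B"], "|", example["label"])
```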
## Dataset Creation
The dataset was created by first automatically translating all sentences and then manually correcting any translation errors. This guarantees the naturalness of the examples while keeping the relatedness scores and entailment labels aligned. Since the data IDs are preserved, the dataset is fully aligned with the original at the sentence level.
## Additional Information
### Licensing Information
This dataset falls under an MIT License.
### Citation Information
```
@inproceedings{wijnholds-etal-2021-sicknl,
title = "SICK-NL: A Dataset for Dutch Natural Language Inference",
author = "Wijnholds, Gijs and Moortgat, Michael",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-main.126/",
}
```
### Contributions
Thanks to [@maximedb](https://huggingface.co/maximedb) for adding this dataset.
|
maximedb/sick_nl
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:nl",
"license:mit",
"region:us"
] |
2023-02-16T07:44:25+00:00
|
{"language": ["nl"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "SICK-NL", "dataset_info": {"features": [{"name": "pair_ID", "dtype": "int64"}, {"name": "sentence_A", "dtype": "string"}, {"name": "sentence_B", "dtype": "string"}, {"name": "entailment_label", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float64"}, {"name": "entailment_AB", "dtype": "string"}, {"name": "entailment_BA", "dtype": "string"}, {"name": "sentence_A_original", "dtype": "string"}, {"name": "sentence_B_original", "dtype": "string"}, {"name": "sentence_A_dataset", "dtype": "string"}, {"name": "sentence_B_dataset", "dtype": "string"}, {"name": "SemEval_set", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_seq2seq", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1359887, "num_examples": 4439}, {"name": "validation", "num_bytes": 153417, "num_examples": 495}, {"name": "test", "num_bytes": 1496660, "num_examples": 4906}], "download_size": 822658, "dataset_size": 3009964}}
|
2023-04-25T09:19:43+00:00
|
23ad91b3ad96d719252315253876f3b6c7dc90d2
|
VLyb/WN18
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T07:51:24+00:00
|
{"license": "unlicense"}
|
2023-02-16T07:53:54+00:00
|