sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
ec612458a66d1228180cb2c8a3dc923fb546e0c7
|
# Zellic 2023 Smart Contract Source Index
Zellic is making publicly available a dataset of known Ethereum mainnet smart contract source code.
Our aim is to provide a contract source code dataset that is readily available to the public to download in bulk. We believe this dataset will help advance the frontier of smart contract security research. Applications include static analysis, machine learning, and more. This effort is part of Zellic’s mission to create a world with no smart contract hacks.
## Methodology
First, we accumulated a list of all deployed contracts on Ethereum mainnet as of block 16860349. This does not include contracts that have been `SELFDESTRUCT`ed. We progressively built up this index by performing a full sync from the genesis block using a modified Geth instance. Whenever a new contract was created, we added it to our index. When a contract `SELFDESTRUCT`ed, we removed it from the index. This list is available in this dataset as the file `address_bytecodehash_index`.
Next, we collected contract source code from publicly available online sources. All data was obtained from publicly accessible resources.
Finally, we calculated the Keccak256 hash of each contract's deployed runtime EVM bytecode. We deduplicated contract source code by bytecode hash; in other words, we organized the contract source code set by the bytecode hash of the corresponding verified contracts. For example, if source codes A and B are both verified against smart contracts X and Y with the same deployed EVM bytecode, we only include one of A or B in this dataset. The choice among duplicates was arbitrary.
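For illustration, here is a minimal sketch of recomputing such a bytecode hash with web3.py (assumed tooling, not part of this dataset's pipeline; the RPC URL is a placeholder and the address is an example from the index):
```python
from web3 import Web3

# Placeholder RPC endpoint; any Ethereum mainnet node works.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

address = Web3.to_checksum_address("0x00012e87fa9172d0c613f69d0abf752bb00310ec")
runtime_code = w3.eth.get_code(address)          # deployed (runtime) bytecode, not init code
bytecode_hash = Web3.keccak(runtime_code).hex()  # strip any "0x" prefix to match the index
print(bytecode_hash)
```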
## Dataset Statistics
**Number of unique source codes, by bytecode hash**: 149,386
**Contracts with code available**: 3,897,319 (This is more than the previous number, because MANY contracts share identical bytecode)
**Number of smart contracts in global index**: 30,586,657 (not all have source code available, see Methodology)
| **Chars (wc -c)** | **Words (wc -w)** | **LoC (code)** | **LoC (comments)** | **LoC (whitespace)** | **LoC (total)** |
|-------------------|-------------------|----------------|--------------------|----------------------|-----------------|
| 6,473,548,073 | 712,444,206 | 90,562,628 | 62,503,873 | 24,485,549 | 177,552,050 |
**Unique words**: 939,288
## Dataset Structure
### Index
The `address_bytecodehash_index` file contains a list of known smart contract addresses mapped to the Keccak256 hash of their EVM bytecode.
Look up a smart contract address in this file to find its source.
**Not all contracts in the index file will have source code available.** The index is a list of **all** deployed smart contracts as of block 16860349 (see Methodology).
Excerpt of data from the index for preview purposes:
```
...
00012e87fa9172d0c613f69d0abf752bb00310ec:4f5a5f6706dc853cb3ae2279729e0d7e24dda128a77358144e4c0fd3e5d60e98
00012c8ef0fef0a06e1644ab91107fe8584fb91e:a828ef7f5f6d2ebb1203de12878e16aa5ba6984c12ededff4e19876233533505
00012df38ea3a6dabefb8407a59219a0c7dd0bc8:c279544d07d9631b1e37d835cadfe7098d60e508cf8f18a89ddb8b176d56874d
00012d92a0e7ee1b19f8e018267c97a3a7e99aa7:0865cec1e9ac3048b12a85fc3b9fbc682c3831784e3396416635df4cb88c3fdd
00012f07e281c1d8a9d790358050b6015eef942c:ab7af4c77ed6371c7eda04ba317a134f0b06593c0dc2851bf4c709a367ea50ed
00012e198745e53293bf09ddec8da1284963fded:ce33220d5c7f0d09d75ceff76c05863c5e7d6e801c70dfe7d5d45d4c44e80654
00012ec2c9fc4a1692176da5202a44a4aea5e177:ce33220d5c7f0d09d75ceff76c05863c5e7d6e801c70dfe7d5d45d4c44e80654
...
```
### Contract Sources
Smart Contract sources are organized by folder in the `organized_contracts` directory.
For example, a contract with the bytecode hash `beef3d7d1884c4fee50548cfe762415fe494e3feb1e6ca181352ef023ba1ff7a` would be in the directory `organized_contracts/be/beef3d7d1884c4fee50548cfe762415fe494e3feb1e6ca181352ef023ba1ff7a/`.
Each folder for a smart contract contains the source files as well as a `metadata.json` that contains information about the contract such as the compiler version and optimizations used. These settings can be used to attempt to reproduce the build.
Example of metadata.json for preview purposes (unminified for ease of viewing):
```json
{
"ContractName": "MageSpace",
"CompilerVersion": "v0.8.10+commit.fc410830",
"Runs": 200,
"OptimizationUsed": false,
"BytecodeHash": "c2f8f4e79a9d7c23d8a398768e1476f03f0e11c44fc7441c021e098c71678d03"
}
```
#### Source Formats
Contracts may come in one of three source formats: single file, multiple files, and [Solidity Compiler JSON](https://docs.soliditylang.org/en/v0.8.19/using-the-compiler.html#compiler-api).
For multiple-file contracts, each `.sol` file will be included in the directory.
Single file contracts will be named `main.sol`. Some contracts are written in Vyper, not Solidity. These will be named `main.vy`.
For Solidity Compiler Input JSON, the compiler input will be stored in `contract.json`.
**Not all contract code is in Solidity. Some contract code is in Vyper, or other languages! Check metadata.json!**
As a quick-and-dirty way to extract all of the source code, you can use this bash script:
```bash
mkdir code
cd organized_contracts/
for f in * ; do
    echo "$f"
    # Solidity Compiler Input JSON contracts: pull each source file's content out with jq
    cat "$f"/*/contract.json | jq '.sources | to_entries[].value.content' -r > ../code/"$f".txt
    # plain .sol contracts: append (>>) so the jq output above is not overwritten
    cat "$f"/*/*.sol >> ../code/"$f".txt
done
```
### Other Fun Facts
Top 100 words:
<details>
<summary>Click to expand</summary>
<pre>
23189252 the
20816285 address
16207663 uint256
14793579 to
13746030 function
9952507 returns
9069124 0
8256548 a
8189582 of
6854095 is
6783298 dev
6363279 return
5555811 if
5497552 memory
5403232 from
5203839 amount
5146685 internal
4838549 value
4753195 be
4700814 external
4676440 owner
4535518 this
4477899 view
4463166 for
4205382 bool
3770805 contract
3732595 token
3719841 and
3578693 public
3447968 string
3422923 tokenid
3243596 require
3134425 1
3063929 in
2996585 bytes
2976900 data
2831472 by
2748878 transfer
2729742 account
2605117 that
2588692 param
2535414 private
2465042 an
2418190 solidity
2377723 uint
2333621 call
2326567 not
2319841 virtual
2295154 zero
2220201 sender
2118342 as
2113922 sol
2024428 target
1945888 event
1919425 s
1901005 or
1899022 pure
1884128 tokens
1859283 must
1850785 it
1796854 with
1783457 contracts
1760318 b
1742610 revert
1711696 spender
1698735 bytes32
1655261 recipient
1645305 i
1608529 indexed
1585283 true
1575421 2
1551352 when
1528254 can
1475879 length
1466789 override
1444666 will
1356364 approve
1355666 8
1314732 notice
1304351 implementation
1293963 are
1291253 import
1290551 on
1267019 balance
1257438 available
1253286 log
1232433 pragma
1211177 since
1193506 msgsender
1193496 result
1190481 liquidity
1185869 msg
1181724 operator
1178211 errormessage
1176497 slot
1156971 set
1154460 openzeppelin
1148764 cannot
1123141 erc20
1115019 abi
</pre>
</details>
## Notices
The smart contract source code in this dataset was obtained from publicly available sources. You should always abide by the appropriate code and software licenses, as well as all applicable copyright law.
THE DATASET/SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET/SOFTWARE OR THE USE OR OTHER DEALINGS IN THE DATASET/SOFTWARE.
|
Zellic/smart-contract-fiesta
|
[
"size_categories:100K<n<1M",
"language:en",
"solidity",
"blockchain",
"ethereum",
"smart-contract",
"region:us"
] |
2023-04-07T22:19:11+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "pretty_name": "Zellic Smart Contract Source Index", "tags": ["solidity", "blockchain", "ethereum", "smart-contract"]}
|
2023-04-23T07:19:14+00:00
|
0396e6db65104540397920be679caf65ffe9606b
|
WWRemix/paccc
|
[
"license:openrail",
"region:us"
] |
2023-04-07T22:26:40+00:00
|
{"license": "openrail"}
|
2023-04-07T22:32:42+00:00
|
|
a55b6379fd44cffed853ce66d27dba76ef36bff8
|
# Dataset Card for "chunk_74"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_74
|
[
"region:us"
] |
2023-04-07T22:33:36+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24036396192.25, "num_examples": 250254}], "download_size": 22100557621, "dataset_size": 24036396192.25}}
|
2023-04-07T23:05:37+00:00
|
d2a2402b78ab0d80a840affae274578be3caf46c
|
# Dataset Card for "chunk_75"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_75
|
[
"region:us"
] |
2023-04-07T22:45:44+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23301532944.625, "num_examples": 242603}], "download_size": 21413215455, "dataset_size": 23301532944.625}}
|
2023-04-07T23:18:14+00:00
|
a93b6075ef9c923e65b517419fed994d7e361173
|
# Pathfinder-X2
License: CC BY 4.0. Free to use for any purpose, including commercial, with attribution.
The Pathfinder and Pathfinder-X datasets have been crucial for training Large Language Models with Long-Range Dependencies.
In January of 2023, Meta's Mega LLM scored a 97% on the Pathfinder-X dataset, indicating a need for an even more challenging benchmark.
Pathfinder-X2 contains 200,000 512x512 images along with 200,000 segmentation masks for those images. Each image contains an assortment
of dashed-line "snakes" of varying length, and a model's task is to segment only the snake with a circle on one end.
Each image is meant to be fed in as a sequence, pixel-by-pixel, into a Large Language Model.
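As a hedged sketch of that sequence representation (file names are hypothetical, and the grayscale conversion is an assumption):
```python
import numpy as np
from PIL import Image

# Hypothetical file names: one image and its segmentation mask.
img = np.array(Image.open("pathfinder_x2_000001.png").convert("L"))    # (512, 512)
mask = np.array(Image.open("pathfinder_x2_000001_mask.png").convert("L"))

pixels = img.reshape(-1)                          # 262,144-step sequence, row-major order
labels = (mask.reshape(-1) > 0).astype(np.int64)  # per-pixel segmentation targets
```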
Explanation paper: https://www.overleaf.com/read/rpsmdnxbdfjt
Based on the Pathfinder dataset by Drew Linsley, Alekh K Ashok, Lakshmi N Govindarajan, Rex Liu, and Thomas Serre.


---
annotations_creators:
- Tyler Suard
tags:
- language
- nlp
- llm
- long-range
size_categories:
- 100K<n<1M
---
|
Tylersuard/PathfinderX2
|
[
"region:us"
] |
2023-04-07T23:05:34+00:00
|
{}
|
2023-04-14T21:29:10+00:00
|
68e64acbc4257c4dc08d79fd190f3d4e3ae05920
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Cats-Can/image_data
|
[
"region:us"
] |
2023-04-07T23:23:19+00:00
|
{}
|
2023-04-15T17:24:47+00:00
|
0b62533a5a97c097e57efec3a1b5e4916e64361b
|
# Dataset Card for "chunk_71"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_71
|
[
"region:us"
] |
2023-04-08T00:24:01+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 22275932400.375, "num_examples": 231925}], "download_size": 19440928019, "dataset_size": 22275932400.375}}
|
2023-04-08T01:01:55+00:00
|
5a2bc03494d5b68222c870ae85ba247c612f9f13
|
ID3/metadata_videos_canciones_youtube
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-08T00:51:07+00:00
|
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "duracion", "dtype": "int64"}, {"name": "keywords", "dtype": "string"}, {"name": "fecha_publicacion", "dtype": "string"}, {"name": "rating", "dtype": "string"}, {"name": "titulo", "dtype": "string"}, {"name": "vistas", "dtype": "int64"}, {"name": "artista", "dtype": "string"}, {"name": "titulo_cancion", "dtype": "string"}, {"name": "escuchas_lastfm", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 993152, "num_examples": 2853}, {"name": "validation", "num_bytes": 114234, "num_examples": 317}], "download_size": 0, "dataset_size": 1107386}}
|
2023-04-08T21:37:04+00:00
|
|
6b23b572562121e0f7bf3726e00bbed34fc707c9
|
ID3/letras_canciones_youtube
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-08T00:54:43+00:00
|
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "letra", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9069952, "num_examples": 2186}, {"name": "validation", "num_bytes": 1166569, "num_examples": 243}], "download_size": 3981299, "dataset_size": 10236521}}
|
2023-04-08T01:08:32+00:00
|
|
35aed8b44a112e133407e0d79f5ab496d14c5574
|
# Dataset Card for "comentarios_de_canciones_youtube"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ID3/comentarios_de_canciones_youtube
|
[
"region:us"
] |
2023-04-08T00:59:04+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "comentario", "dtype": "string"}, {"name": "likes", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 83381566, "num_examples": 695557}, {"name": "validation", "num_bytes": 9254738, "num_examples": 77285}], "download_size": 0, "dataset_size": 92636304}}
|
2023-04-08T01:09:08+00:00
|
0bdbef7503f64ad7b9333acf4a4864afdf657289
|
ID3/comentario_youtube_lorea
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-08T01:20:30+00:00
|
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4897908, "num_examples": 3538}], "download_size": 1607680, "dataset_size": 4897908}}
|
2023-04-09T23:39:46+00:00
|
|
a93a1a73e03c148ffee2d0fdbba62fd59cd9a669
|
# Dataset Card for "pokemon"
Added `colors` column.
Original: https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
|
offchan/pokemon
|
[
"region:us"
] |
2023-04-08T01:22:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "colors", "dtype": {"array2_d": {"shape": [5, 3], "dtype": "float32"}}}, {"name": "control_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 200309733.0, "num_examples": 833}], "download_size": 194602470, "dataset_size": 200309733.0}}
|
2023-04-16T00:33:13+00:00
|
30b3cb233552109569d8d2cbf66a30a3db047a22
|
# Dataset Card for "comentarios_youtube_validos_similitud"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ID3/comentarios_youtube_validos_similitud
|
[
"region:us"
] |
2023-04-08T01:40:03+00:00
|
{"dataset_info": {"features": [{"name": "comentario", "dtype": "string"}, {"name": "likes", "dtype": "int64"}, {"name": "similitud", "dtype": "float64"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 83527097, "num_examples": 652388}], "download_size": 54010613, "dataset_size": 83527097}}
|
2023-04-08T05:40:16+00:00
|
b2916dd15a7fb08e3996ac572a32ba569013e0be
|
# Dataset Card for "counterfact-simple"
Dataset from [ROME](https://rome.baulab.info/) by Meng et al., simplified to be just prompts, paraphrased prompts, and their true and false targets.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
azhx/counterfact-simple
|
[
"region:us"
] |
2023-04-08T01:47:23+00:00
|
{"dataset_info": {"features": [{"name": "subject", "dtype": "string"}, {"name": "proposition", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}, {"name": "case_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 12882614.735952066, "num_examples": 118363}, {"name": "test", "num_bytes": 1431353.264047934, "num_examples": 13151}], "download_size": 5496476, "dataset_size": 14313968.0}}
|
2023-04-08T03:38:17+00:00
|
5027010f6682b6e6d3a13bc3eb8430d9463f5baa
|
# Dataset Card for "self-critiquing-helpful-rate"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/self-critiquing-helpful-rate
|
[
"region:us"
] |
2023-04-08T01:54:42+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source_id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "time", "dtype": "float64"}, {"name": "labeler", "dtype": "string"}, {"name": "is_topic_based_summarization", "dtype": "bool"}, {"name": "prompt", "dtype": "string"}, {"name": "helpful", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 185274964, "num_examples": 33168}, {"name": "test", "num_bytes": 22721638, "num_examples": 4243}], "download_size": 30550709, "dataset_size": 207996602}}
|
2023-04-08T01:56:39+00:00
|
afbacaa9ea5eef7d4244c97816660f8173f273d4
|
# Dataset Card for "self-critiquing-critique-answer-ranking"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/self-critiquing-critique-answer-ranking
|
[
"region:us"
] |
2023-04-08T01:57:31+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source_id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "time", "dtype": "float64"}, {"name": "labeler", "dtype": "string"}, {"name": "is_topic_based_summarization", "dtype": "bool"}, {"name": "prompt", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "rankings", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 45224708, "num_examples": 9120}, {"name": "test", "num_bytes": 9935078, "num_examples": 1849}], "download_size": 17247273, "dataset_size": 55159786}}
|
2023-04-08T03:24:37+00:00
|
ec36674003af1c01b3758e3fdab0b054119bfa3f
|
## Contents
Contains about 2 million Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Sample
```
{
"instruction": "将以下三个句子组合成一个有意义的段落。\n狗是人类最好的朋友。它们非常聪明,可以进行各种活动。如果你喜欢散步,狗可以成为你一起散步的伙伴。",
"input": "",
"output": "狗是人类最好的朋友,它们非常聪明,可以进行各种活动。如果你喜欢散步,狗可以成为你一起散步的伙伴。出门散步是一种良好的锻炼方式,而有狗的陪伴会让散步变得更有趣,并且有狗在身边也能给你带来安全感。所以,拥有一只狗作为你的伙伴,可以帮助你变得更加积极主动和健康。"
}
```
### Fields
```
instruction: the instruction
input: the input (empty throughout this dataset)
output: the output
```
## Usage Restrictions
This dataset and any derivatives generated from it may be used only for research purposes; commercial use and any other use that could harm society are not permitted.
This dataset does not represent the position, interests, or views of any party and is unrelated to any kind of claim by any group. This project assumes no responsibility for any damage or dispute arising from the use of this dataset.
|
BelleGroup/train_2M_CN
|
[
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:gpl-3.0",
"region:us"
] |
2023-04-08T01:58:58+00:00
|
{"language": ["zh"], "license": "gpl-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["text2text-generation"]}
|
2023-04-08T02:33:28+00:00
|
4af4651d3267f39af74eab0d45c2a15d2880fc35
|
dgo7/embeddings_dataset
|
[
"license:mit",
"region:us"
] |
2023-04-08T02:04:28+00:00
|
{"license": "mit"}
|
2023-04-08T02:09:20+00:00
|
|
e4c17f9a45c5045c29ccec78636a213a9bdc093c
|
Filtered and modified version of Japanese/Chinese language pair data from [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php).
Process steps:
1. Basic regex based filtering / length checking to remove abnormal pairs.
2. Semantic similarity filtering with a threshold value of 0.6, based on [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
3. Convert all Traditional Chinese sentences into Simplified Chinese with [zhconv](https://github.com/gumblex/zhconv).
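A rough sketch of steps 2 and 3 above, assuming the named libraries (the sentence pair is made up):
```python
from sentence_transformers import SentenceTransformer, util
from zhconv import convert

model = SentenceTransformer("sentence-transformers/LaBSE")

ja, zh = "猫はかわいい。", "貓很可愛。"  # made-up pair for illustration
embeddings = model.encode([ja, zh], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

if similarity >= 0.6:              # step 2: keep only semantically close pairs
    zh = convert(zh, "zh-cn")      # step 3: Traditional -> Simplified
    print(similarity, ja, zh)
```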
------
经过过滤和修改的日语/中文语言对数据,来自[WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php)。
处理步骤:
1. 基本的基于正则表达式的过滤/长度检查,以删除异常对。
2. 基于[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)的语义相似性过滤,阈值为0.6。
3. 使用[zhconv](https://github.com/gumblex/zhconv)将所有繁体中文句子转换为简体中文。
------
以下はフィルタリングされ修正された日本語/中国語のペアデータです。データ元は[WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php)です。
処理手順:
1. 正規表現に基づくフィルタリング/長さのチェックを行い、異常なペアを削除します。
2. [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)に基づくセマンティック類似性フィルタリングを行い、閾値は0.6です。
3. [zhconv](https://github.com/gumblex/zhconv)を使って、すべての繁体字中国語の文を簡体字中国語に変換します。
|
larryvrh/WikiMatrix-v1-Ja_Zh-filtered
|
[
"task_categories:translation",
"size_categories:100K<n<1M",
"language:ja",
"language:zh",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-04-08T02:07:25+00:00
|
{"language": ["ja", "zh"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["translation"], "dataset_info": {"features": [{"name": "ja", "dtype": "string"}, {"name": "zh", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 149036235, "num_examples": 690095}], "download_size": 115870646, "dataset_size": 149036235}}
|
2023-04-08T04:16:37+00:00
|
ee65b6708272f40606d1a0351e8cad18c3e604a3
|
# Generated_Chat_0.4M
## 内容
包含约40万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的个性化角色对话数据,包含角色介绍。
注意:此数据集是由ChatGPT产生的,未经过严格校验,题目或解题过程可能包含错误。使用过程中请注意这一点。
## 样例
```
{
"instruction": "于以下角色信息完成一段对话\nAlice是一个年轻的律师,她正在为一个涉及濒危物种保护的案件做准备。Bob是一位当地的居民,他所在的社区经常受到该濒危物种的干扰。\n",
"input": "",
"output": "\nAlice: 你好Bob,我是濒危物种保护案的律师,很高兴见到你。我们前几天发给你的文件你看到了吗?\nBob: 看到了,你们说要禁止我们在该物种的常住地采集蘑菇,这很不公平。\nAlice: 保护濒危物种对整个生态系统都很重要,如果我们的举动损害了这个物种的生存,那么整个生态系统都将遭受损失。\nBob: 我理解您的立场,但是我们一直以来都依靠这个物种来维持我们的经济生活,现在我们要怎么办?\nAlice: 我理解这对当地社区的影响,但是现在我们必须找到一种既能保护这个物种,也同时使您们的经济生活不受影响的解决方案。你有任何想法吗?我们可以一起来想办法。\n"
}
```
### 字段
```
instruction: 指令
input: 输入(本数据集均为空)
output: 输出
```
## 局限性和使用限制
我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。
由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。
本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。
***
# Generated_Chat_0.4M
## Contents
Includes approximately 400k personalized character dialogues generated by BELLE, with character introductions.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The content might contain errors. Please keep this in mind when using this subset.
## Sample
```
{
"instruction": "于以下角色信息完成一段对话\nAlice是一个年轻的律师,她正在为一个涉及濒危物种保护的案件做准备。Bob是一位当地的居民,他所在的社区经常受到该濒危物种的干扰。\n",
"input": "",
"output": "\nAlice: 你好Bob,我是濒危物种保护案的律师,很高兴见到你。我们前几天发给你的文件你看到了吗?\nBob: 看到了,你们说要禁止我们在该物种的常住地采集蘑菇,这很不公平。\nAlice: 保护濒危物种对整个生态系统都很重要,如果我们的举动损害了这个物种的生存,那么整个生态系统都将遭受损失。\nBob: 我理解您的立场,但是我们一直以来都依靠这个物种来维持我们的经济生活,现在我们要怎么办?\nAlice: 我理解这对当地社区的影响,但是现在我们必须找到一种既能保护这个物种,也同时使您们的经济生活不受影响的解决方案。你有任何想法吗?我们可以一起来想办法。\n"
}
```
### Schema
```
instruction: the instruction
input: the input (empty throughout this dataset)
output: the output
```
## Limitation and Usage Limits
We require that developers use the open-sourced code, data, model, and any other artifacts generated via this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's position, interests, or thoughts, and is not related to any kind of claim by any group. The developers of this project do not assume any responsibility for potential harm caused by using this dataset and project.
|
BelleGroup/generated_chat_0.4M
|
[
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] |
2023-04-08T02:36:11+00:00
|
{"language": ["zh"], "license": "gpl-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["text2text-generation"]}
|
2023-04-08T02:55:12+00:00
|
903ed9692658f639e071683f061fa0b2c647a306
|
# A Sentiment Analysis Dataset for Finetuning Large Models in Chat-style
More details can be found at https://github.com/l294265421/chat-sentiment-analysis
## Supported Tasks
- Aspect Term Extraction (ATE)
- Opinion Term Extraction (OTE)
- Aspect Term-Opinion Term Pair Extraction (AOPE)
- Aspect term, Sentiment, Opinion term Triplet Extraction (ASOTE)
- Aspect Category Detection (ACD)
- Aspect Category-Sentiment Pair Extraction (ACSA)
- [Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction](https://github.com/NUSTM/ACOS)
- [Holder, Target, Opinion, Sentiment (HTOS) Quadruple Extraction](https://github.com/jerbarnes/semeval22_structured_sentiment)
|
yuncongli/chat-sentiment-analysis
|
[
"language:en",
"license:mit",
"sentiment",
"aspect-based sentiment analysis",
"Aspect Term Extraction",
"Opinion Term Extraction",
"Aspect Term-Opinion Term Pair Extraction",
"Aspect term, Sentiment, Opinion term Triplet Extraction",
"Aspect Category Detection",
"Aspect Category-Sentiment Pair Extraction",
"Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction",
"Holder, Target, Opinion, Sentiment (HTOS) Quadruple Extraction",
"sentiment analysis",
"region:us"
] |
2023-04-08T03:04:08+00:00
|
{"language": ["en"], "license": "mit", "tags": ["sentiment", "aspect-based sentiment analysis", "Aspect Term Extraction", "Opinion Term Extraction", "Aspect Term-Opinion Term Pair Extraction", "Aspect term, Sentiment, Opinion term Triplet Extraction", "Aspect Category Detection", "Aspect Category-Sentiment Pair Extraction", "Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction", "Holder, Target, Opinion, Sentiment (HTOS) Quadruple Extraction", "sentiment analysis"]}
|
2023-04-08T03:19:48+00:00
|
1244cef76b619126fb667a8c5682bcdb98505e89
|
# Dataset Card for "2000-python"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DavidMOBrien/2000-python
|
[
"region:us"
] |
2023-04-08T03:04:13+00:00
|
{"dataset_info": {"features": [{"name": "before", "dtype": "string"}, {"name": "after", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 673046734.9541036, "num_examples": 146486}, {"name": "test", "num_bytes": 84131990.5229482, "num_examples": 18311}, {"name": "valid", "num_bytes": 84131990.5229482, "num_examples": 18311}], "download_size": 311424235, "dataset_size": 841310716.0}}
|
2023-04-08T03:06:30+00:00
|
44c5dcaaa6d6eea9d92e0ce60929ff11227eee02
|
# Dataset Card for "gutenberg_spacy-ner-monitoring"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DavidFM43/gutenberg_spacy-ner-monitoring
|
[
"region:us"
] |
2023-04-08T03:07:25+00:00
|
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-CARDINAL", "2": "I-CARDINAL", "3": "B-PERSON", "4": "I-PERSON", "5": "B-TIME", "6": "I-TIME", "7": "B-WORK_OF_ART", "8": "I-WORK_OF_ART"}}}}], "splits": [{"name": "train", "num_bytes": 1697, "num_examples": 1}, {"name": "test", "num_bytes": 1531, "num_examples": 1}], "download_size": 5147, "dataset_size": 3228}}
|
2023-04-08T03:07:27+00:00
|
0c6d6629f048200ead6b9b7280c4ff008092c4b2
|
# Dataset Card for "fill50k"
Added `colors` column.
Original: https://huggingface.co/datasets/fusing/fill50k
|
offchan/fill50k
|
[
"region:us"
] |
2023-04-08T03:28:07+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "colors", "dtype": {"array2_d": {"shape": [2, 3], "dtype": "float32"}}}, {"name": "conditioning_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 455788831.0, "num_examples": 50000}], "download_size": 316357363, "dataset_size": 455788831.0}}
|
2023-04-16T00:33:36+00:00
|
e84e9febb9b1c3852e989eecac86066b4b1e4d9b
|
JTwoGames/LanaSang
|
[
"license:unknown",
"region:us"
] |
2023-04-08T03:43:11+00:00
|
{"license": "unknown"}
|
2023-04-08T03:43:11+00:00
|
|
ea85e3fd7efaa83f94f558bf1c7006b39b9806e4
|
This dataset contains audio recordings of 264 bird species singing, all of which were processed as follows:
1. Stereo to Mono
2. Resampled 16kHz
3. High Pass Filter (1500Hz and filter order of 16)
4. Normalized
The raw dataset was provided by the BirdCLEF 2023 challenge on Kaggle. You can access it at https://www.kaggle.com/competitions/birdclef-2023/data
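A hedged sketch of the four processing steps above (file names are placeholders; the exact filter design used is not specified beyond cutoff and order):
```python
import librosa
import soundfile as sf
from scipy.signal import butter, sosfilt

# Steps 1-2: load as mono at 16 kHz.
y, sr = librosa.load("bird_recording.ogg", sr=16000, mono=True)

# Step 3: high-pass filter, 1500 Hz cutoff, filter order 16.
sos = butter(16, 1500, btype="highpass", fs=sr, output="sos")
y = sosfilt(sos, y)

# Step 4: peak normalization.
y = y / max(abs(y).max(), 1e-9)
sf.write("bird_recording_processed.wav", y, sr)
```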
|
bernardocecchetto/BirdCLEF-Challenge2023-Kaggle
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-08T04:05:17+00:00
|
{"license": "apache-2.0"}
|
2023-04-08T21:45:46+00:00
|
5150803873cc4084a4d7ed5254de7c371f16835f
|
# Dataset Card for "CCMatrix-v1-Ja_Zh-filtered"
------
Filtered and modified version of Japanese/Chinese language pair data from [CCMatrix v1](https://opus.nlpl.eu/CCMatrix.php).
Process steps:
1. Basic regex based filtering / length checking to remove abnormal pairs.
2. Semantic similarity filtering with a threshold value of 0.6, based on [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
3. Convert all Traditional Chinese sentences into Simplified Chinese with [zhconv](https://github.com/gumblex/zhconv).
------
经过过滤和修改的日语/中文语言对数据,来自[CCMatrix v1](https://opus.nlpl.eu/CCMatrix.php)。
处理步骤:
1. 基本的基于正则表达式的过滤/长度检查,以删除异常对。
2. 基于[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)的语义相似性过滤,阈值为0.6。
3. 使用[zhconv](https://github.com/gumblex/zhconv)将所有繁体中文句子转换为简体中文。
------
以下はフィルタリングされ修正された日本語/中国語のペアデータです。データ元は[CCMatrix v1](https://opus.nlpl.eu/CCMatrix.php)です。
処理手順:
1. 正規表現に基づくフィルタリング/長さのチェックを行い、異常なペアを削除します。
2. [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)に基づくセマンティック類似性フィルタリングを行い、閾値は0.6です。
3. [zhconv](https://github.com/gumblex/zhconv)を使って、すべての繁体字中国語の文を簡体字中国語に変換します。
|
larryvrh/CCMatrix-v1-Ja_Zh-filtered
|
[
"task_categories:translation",
"language:zh",
"language:ja",
"region:us"
] |
2023-04-08T04:05:55+00:00
|
{"language": ["zh", "ja"], "task_categories": ["translation"], "pretty_name": "cc", "dataset_info": {"features": [{"name": "ja", "dtype": "string"}, {"name": "zh", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 847526347, "num_examples": 5686275}], "download_size": 651183008, "dataset_size": 847526347}}
|
2023-04-08T04:13:43+00:00
|
d5a7ccda84f7fb080b8aeae1e8d985d9238590bc
|
sushei/combined_dataset
|
[
"license:unknown",
"region:us"
] |
2023-04-08T04:12:49+00:00
|
{"license": "unknown"}
|
2023-04-08T04:12:49+00:00
|
|
0fb16512a881cfcbaf41ce3e3815e5db7e6b0a5b
|
# Dataset Card for "self-critiquing-critique"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/self-critiquing-critique
|
[
"region:us"
] |
2023-04-08T04:37:03+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source_id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "time", "dtype": "float64"}, {"name": "labeler", "dtype": "string"}, {"name": "is_topic_based_summarization", "dtype": "bool"}, {"name": "category", "dtype": "string"}, {"name": "severity", "dtype": "int64"}, {"name": "text_quotes", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}]}, {"name": "response_quotes", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}]}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 262218653, "num_examples": 61503}, {"name": "test", "num_bytes": 43153769, "num_examples": 9437}], "download_size": 36446351, "dataset_size": 305372422}}
|
2023-04-08T04:37:20+00:00
|
857b725815f3be77bbadb7d4c97a32f90f64076c
|
# Dataset Card for "self-critiquing-refine"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/self-critiquing-refine
|
[
"region:us"
] |
2023-04-08T04:37:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source_id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "time", "dtype": "float64"}, {"name": "labeler", "dtype": "string"}, {"name": "is_topic_based_summarization", "dtype": "bool"}, {"name": "category", "dtype": "string"}, {"name": "severity", "dtype": "int64"}, {"name": "text_quotes", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}]}, {"name": "response_quotes", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}]}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 170238231, "num_examples": 34069}, {"name": "test", "num_bytes": 26100872, "num_examples": 5119}], "download_size": 33958497, "dataset_size": 196339103}}
|
2023-04-08T04:37:37+00:00
|
f39ee3e1f4022a6e07eb0e68119bada36873d5e4
|
# Dataset Card for "self-critiquing-critique-and-refine"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/self-critiquing-critique-and-refine
|
[
"region:us"
] |
2023-04-08T04:37:38+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source_id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "time", "dtype": "float64"}, {"name": "labeler", "dtype": "string"}, {"name": "is_topic_based_summarization", "dtype": "bool"}, {"name": "category", "dtype": "string"}, {"name": "severity", "dtype": "int64"}, {"name": "text_quotes", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}]}, {"name": "response_quotes", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}]}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 170238231, "num_examples": 34069}, {"name": "test", "num_bytes": 26100872, "num_examples": 5119}], "download_size": 27410564, "dataset_size": 196339103}}
|
2023-04-08T04:37:51+00:00
|
6fd70320338a6c95afe50ce2ef9db98d35ee7ab1
|
# Dataset Card for "self-critiquing-base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/self-critiquing-base
|
[
"region:us"
] |
2023-04-08T04:37:59+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "time", "dtype": "float64"}, {"name": "labeler", "dtype": "string"}, {"name": "is_topic_based_summarization", "dtype": "bool"}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 229932964, "num_examples": 47017}, {"name": "test", "num_bytes": 73005699, "num_examples": 10647}], "download_size": 55618766, "dataset_size": 302938663}}
|
2023-04-08T04:38:27+00:00
|
42c8a974007f91e99d7212ff77615efe9a97e860
|
# Dataset Card for "self-critiquing-helpful-sft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/self-critiquing-helpful-sft
|
[
"region:us"
] |
2023-04-08T04:38:30+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source_id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "time", "dtype": "float64"}, {"name": "labeler", "dtype": "string"}, {"name": "is_topic_based_summarization", "dtype": "bool"}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68462243, "num_examples": 12270}, {"name": "test", "num_bytes": 8427723, "num_examples": 1580}], "download_size": 11327924, "dataset_size": 76889966}}
|
2023-04-08T04:38:38+00:00
|
a254b503a275e47da3d6eee0a9a88116dfcb7207
|
# Dataset Card for "mrpc_VALUE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/mrpc_VALUE
|
[
"region:us"
] |
2023-04-08T04:43:15+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 975850, "num_examples": 3668}, {"name": "validation", "num_bytes": 109315, "num_examples": 408}], "download_size": 733404, "dataset_size": 1085165}}
|
2023-04-08T04:43:18+00:00
|
a58ca3bcf853a496bdce3ed505a2b8783b43b9ea
|
# Dataset Card for "New_BioRED_Model"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SHS/New_BioRED_Model
|
[
"region:us"
] |
2023-04-08T04:46:32+00:00
|
{"dataset_info": {"features": [{"name": "pmid", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 752283, "num_examples": 148}, {"name": "val", "num_bytes": 171371, "num_examples": 33}, {"name": "test", "num_bytes": 160097, "num_examples": 30}], "download_size": 392630, "dataset_size": 1083751}}
|
2023-04-08T04:46:39+00:00
|
47c827f7e2de483a0f1859f0d9c7622e9326ff06
|
# Dataset Card for "biored_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SHS/biored_tokenized
|
[
"region:us"
] |
2023-04-08T04:51:12+00:00
|
{"dataset_info": {"features": [{"name": "pmid", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 160097, "num_examples": 30}, {"name": "train", "num_bytes": 752283, "num_examples": 148}, {"name": "val", "num_bytes": 171371, "num_examples": 33}], "download_size": 392627, "dataset_size": 1083751}}
|
2023-04-09T21:52:05+00:00
|
d141a8daf878d7deef0d6073ab6e7d3f7338cef3
|
# Dataset Card for "wnli_VALUE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/wnli_VALUE
|
[
"region:us"
] |
2023-04-08T05:05:48+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "value_score", "dtype": "int64"}], "splits": [{"name": "dev", "num_bytes": 12687, "num_examples": 71}, {"name": "test", "num_bytes": 38782, "num_examples": 146}, {"name": "train", "num_bytes": 112014, "num_examples": 635}], "download_size": 76773, "dataset_size": 163483}}
|
2023-04-08T05:05:52+00:00
|
4501d72bce7610428be8a11c0ae88a5c9cca3666
|
andrewsunanda/fast_food_image_classification
|
[
"task_categories:image-classification",
"language:en",
"region:us"
] |
2023-04-08T05:41:47+00:00
|
{"language": ["en"], "task_categories": ["image-classification"]}
|
2023-04-08T05:53:22+00:00
|
|
880a6f3d466c0eae24bdaab001cebd935696fea2
|
# Dataset Card for "chunk_81"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_81
|
[
"region:us"
] |
2023-04-08T06:26:59+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 27997511760.125, "num_examples": 291495}], "download_size": 25812383443, "dataset_size": 27997511760.125}}
|
2023-04-08T06:45:43+00:00
|
dc91f5f50beb391c9e7f8972b5c3d0cd830ea3df
|
Violeets/Violeet
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-04-08T06:33:41+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-04-08T06:37:25+00:00
|
|
0324c6497981bc548294671799afcd1f83529a2e
|
# Dataset Card for "CROHME_try_copy_png"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zhixiaoni/CROHME_try_copy_png
|
[
"region:us"
] |
2023-04-08T06:48:02+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5"}}}}], "splits": [{"name": "train", "num_bytes": 7689932.171, "num_examples": 2441}], "download_size": 6854574, "dataset_size": 7689932.171}}
|
2023-04-08T06:48:10+00:00
|
7372b3c04dd7a09e4ca5ae572557d843cb4b5482
|
# Dataset Card for "LaMini-Instruction"
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
<p align="center"> <a href="https://twitter.com/WuMinghao_nlp" target="_blank">Minghao Wu</a>, <a href="https://twitter.com/MacAbdul9" target="_blank">Abdul Waheed</a>, <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://afaji.github.io/" target="_blank">Alham Fikri Aji</a>,</p>
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Instances](#data-instances)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
## Dataset Description
We distill the knowledge from large language models by performing sentence/offline distillation (Kim and Rush, 2016). We generate a total of **2.58M** pairs of instructions and responses using [`gpt-3.5-turbo`](https://openai.com/api/) based on several existing resources of prompts, including [self-instruct](https://github.com/yizhongw/self-instruct) (Wang et al., 2022), [P3](https://huggingface.co/datasets/bigscience/P3) (Sanh et al., 2022), [FLAN](https://github.com/google-research/FLAN) (Longpre et al., 2023) and [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) (Taori et al., 2023). For more information about the process of generating our instruction dataset, please refer to [our paper](https://arxiv.org/abs/2304.14402).
We use LaMini-Instruction to fine-tune various small-sized language models, resulting in [LaMini-LM model series](https://github.com/mbzuai-nlp/laMini-LM#models).
Each sample contains the following fields:
* `instruction`: `string`, describes the task that the model is expected to perform.
* `response`: `string`, the response to the instruction as generated by `gpt-3.5-turbo`.
* `instruction_source`: `string`, the source of an instruction. The possible values are `{'alpaca', 'generated_flan', 'generated_p3', 'original_flan', 'original_p3', 'self_instruct_with_topic', 'self_instruct_without_topic'}`.
Two examples in our LaMini-instruction data:
```json
{
"instruction": "What is the capital of Australia?\n\nIs Canberra the only city chosen to serve as a capital due to a compromise between Sydney and Melbourne?",
"response": "The capital of Australia is Canberra. Yes, Canberra was chosen as the capital of Australia in 1908 due to a compromise between Sydney and Melbourne, as neither city wanted the other to become the capital.",
"instruction_source": "generated_flan"
}
```
```json
{
"instruction": "Create a poem that has the word \"ocean\" in every other line.",
"response": "The ocean's depth is a mystery,\nIts waves crashing with energy.\nSalt air fills the coastal scene,\nThe ocean's blue hues pure and clean.\n\nSeagulls cry out a melancholy song,\nThe ocean's rhythm steady and strong.\nSunsets paint the water gold,\nThe ocean's secrets forever untold.",
"instruction_source": "self_instruct_without_topic"
}
```
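A minimal usage sketch with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("MBZUAI/LaMini-instruction", split="train")

sample = dataset[0]
print(sample["instruction"])
print(sample["response"])
print(sample["instruction_source"])
```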
## Considerations for Using the Data
As the output is distilled from ChatGPT, this data contains errors and biases produced by ChatGPT. Models trained with this dataset will inherit the errors and biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.
### Citation Information
Please cite us if you use our data or models.
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
```
|
MBZUAI/LaMini-instruction
|
[
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2304.14402",
"region:us"
] |
2023-04-08T06:48:12+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["text2text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "instruction_source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1162632572, "num_examples": 2585615}], "download_size": 704293718, "dataset_size": 1162632572}}
|
2023-04-30T10:01:41+00:00
|
2440eff736502574ecf22796349a5efcd601d143
|
# Dataset Card for "chunk_84"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_84
|
[
"region:us"
] |
2023-04-08T06:56:18+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 20988697104.625, "num_examples": 218523}], "download_size": 18931779804, "dataset_size": 20988697104.625}}
|
2023-04-08T07:10:18+00:00
|
0226c330c26ebffa85de08b7cc277b4a0f99a2b5
|
# Dataset Card for "chunk_78"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_78
|
[
"region:us"
] |
2023-04-08T07:13:28+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23971371696.875, "num_examples": 249577}], "download_size": 22358615413, "dataset_size": 23971371696.875}}
|
2023-04-08T07:33:55+00:00
|
8a353bf435699d2cf92cdfcda165e93ca1b40389
|
# MegaInstruct
A large instruct dataset merging multiple datasets into the Alpaca format.
### Note:
Both the gpt4all and vicuna datasets have usernames appended to them, so hopefully username-aware chatbot datasets can be added on top of this!
|
Korakoe/MegaInstruct
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"large",
"instruct",
"usernames",
"region:us"
] |
2023-04-08T07:33:30+00:00
|
{"size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Mega Instruct", "tags": ["large", "instruct", "usernames"]}
|
2023-04-08T08:07:23+00:00
|
c7e4e8853342fceeb10cfecf7572aaf34d68a9a5
|
# Dataset Card for "chunk_83"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_83
|
[
"region:us"
] |
2023-04-08T07:39:21+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24688754208.25, "num_examples": 257046}], "download_size": 21660129475, "dataset_size": 24688754208.25}}
|
2023-04-08T07:59:54+00:00
|
4438ba21713ce67b57bba63aed659427f03567e8
|
# Dataset Card for "chunk_79"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_79
|
[
"region:us"
] |
2023-04-08T08:31:06+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24403395600.625, "num_examples": 254075}], "download_size": 22771478558, "dataset_size": 24403395600.625}}
|
2023-04-08T09:15:11+00:00
|
acb17d248d02da86da02de9dd2ad9caa0880db64
|
# Dataset Card for "chunk_86"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_86
|
[
"region:us"
] |
2023-04-08T08:32:19+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 25978678848.5, "num_examples": 270476}], "download_size": 23602836628, "dataset_size": 25978678848.5}}
|
2023-04-08T08:56:30+00:00
|
cebe58fb06a8298024624345005030e82644f830
|
# Dataset Card for "lane_master2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Efferbach/lane_master2
|
[
"region:us"
] |
2023-04-08T08:33:45+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "validation", "num_bytes": 89415329.0, "num_examples": 129}, {"name": "train", "num_bytes": 1930956250.8, "num_examples": 3075}], "download_size": 2053158573, "dataset_size": 2020371579.8}}
|
2023-04-08T08:36:56+00:00
|
8d60b9f056adba070c50249f1bc14ea47f692d1f
|
# Dataset Card for "chunk_80"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_80
|
[
"region:us"
] |
2023-04-08T08:40:49+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24182773344.75, "num_examples": 251778}], "download_size": 21979407663, "dataset_size": 24182773344.75}}
|
2023-04-08T09:23:26+00:00
|
11243562d9be95a7f046db36a721593a69398c66
|
# Dataset Card for "chunk_88"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_88
|
[
"region:us"
] |
2023-04-08T09:14:18+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 25025402448.125, "num_examples": 260551}], "download_size": 22932788696, "dataset_size": 25025402448.125}}
|
2023-04-08T09:35:05+00:00
|
027743ab03b688a3cf9c841246de70dade616d5c
|
# Dataset Card for "chunk_87"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_87
|
[
"region:us"
] |
2023-04-08T09:17:03+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 25394322816.0, "num_examples": 264392}], "download_size": 23210049393, "dataset_size": 25394322816.0}}
|
2023-04-08T09:40:49+00:00
|
bd07a8e58128e751809a2281acc07e47c5255bf6
|
# Dataset Card for "chunk_89"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_89
|
[
"region:us"
] |
2023-04-08T10:06:09+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24703929792.5, "num_examples": 257204}], "download_size": 22700301276, "dataset_size": 24703929792.5}}
|
2023-04-08T10:26:47+00:00
|
86e6b82b1b149b0f5727382db3eb6bb0e5ec72f2
|
# Dataset Card for "chunk_77"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_77
|
[
"region:us"
] |
2023-04-08T10:23:19+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24511641696.75, "num_examples": 255202}], "download_size": 22799458862, "dataset_size": 24511641696.75}}
|
2023-04-08T10:58:17+00:00
|
3d02d5db4299469905a417e4e693eb9278bc1ce5
|
# Dataset Card for BLiterature
*BLiterature is part of a bigger project that is not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** KaraKaraWitch
### Dataset Summary
BLiterature is a raw dataset dump consisting of text from at most 260,261,224 blog posts (excluding categories and date-grouped posts) from blog.fc2.com.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* Japanese
## Dataset Structure
All of the data is stored in JSONL files that have been compressed into 7z archives.
### Data Instances
```json
["http://1kimono.blog49.fc2.com/blog-entry-50.html",
"<!DOCTYPE HTML\n\tPUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\"\n\t\t\"http://www.w3.org/TR/html4/loose.dtd\">\n<!--\n<!DOCTYPE HTML\n\tPUBLIC \"-//W3C//DTD HTML 4.01//EN\"\n\t\t\"http://www.w3.org/T... (TRUNCATED)"]
```
### Data Fields
There are only two fields in each record: the URL and the retrieved content. The content field may contain values for which the scraper ran into issues; if so, they are marked in XML like so:
```<?xml version="1.0" encoding="utf-8"?><error>Specific Error</error>```
URLs may not match the final URL from which the page was retrieved, as redirects may have been present while scraping.
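A minimal parsing sketch for one record, assuming the two-field layout above (the sample line is abbreviated):
```python
import json

line = '["http://1kimono.blog49.fc2.com/blog-entry-50.html", "<!DOCTYPE HTML ..."]'
url, content = json.loads(line)

ERROR_PREFIX = '<?xml version="1.0" encoding="utf-8"?><error>'
if content.startswith(ERROR_PREFIX):
    print("scraper error for", url)
else:
    print(url, len(content))
```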
#### Q-Score Distribution
Not Applicable
### Data Splits
The JSONL files were split roughly every 2,500,000 posts; allow for a slight deviation of up to 5,000 additional posts due to how the files were saved.
## Dataset Creation
### Curation Rationale
fc2 is a Japanese blog hosting website that offers anyone a place to host their blog. As a result, the language used is more informal and relaxed than in more official sources, since anyone can post whatever they personally want.
### Source Data
#### Initial Data Collection and Normalization
None. No normalization is performed as this is a raw dump of the dataset.
#### Who are the source language producers?
The authors of each blog, which may include others to post on their blog domain as well.
### Annotations
#### Annotation process
No Annotations are present.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
As this dataset contains information from individuals, there is a higher chance of finding personally identifiable information. However, we believe that authors have pre-vetted their posts in good faith to avoid such occurrences.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset contains real-life references and revolves around Japanese culture, so there will be a bias towards it.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
KaraKaraWitch
### Licensing Information
Apache 2.0 for all parts of which KaraKaraWitch may be considered the author. All other material is distributed under fair use principles.
Ronsor Labs additionally is allowed to relicense the dataset as long as it has gone through processing.
### Citation Information
```
@misc{bliterature,
title = {BLiterature: fc2 blogs for the masses.},
author = {KaraKaraWitch},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/KaraKaraWitch/BLiterature}},
}
```
### Name Etymology
[Literature (リテラチュア) - Reina Ueda (上田麗奈)](https://www.youtube.com/watch?v=Xo1g5HWgaRA)
`Blogs` > `B` + `Literature` > `BLiterature`
### Contributions
- [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset.
- [neggles (Github)](https://github.com/neggles) for providing compute for gathering this dataset.
|
RyokoAI/BLiterature-260M
|
[
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:jp",
"license:apache-2.0",
"blogs",
"training",
"text",
"not-for-all-audiences",
"region:us"
] |
2023-04-08T10:24:21+00:00
|
{"language": ["jp"], "license": "apache-2.0", "size_categories": ["100M<n<1B"], "task_categories": ["text-classification", "text-generation"], "pretty_name": "BLiterature", "tags": ["blogs", "training", "text", "not-for-all-audiences"]}
|
2023-06-28T01:14:43+00:00
|
e5877785d1726fcaf5204d9b4a23500bdd281f95
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
0x7o/ad_detector
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-04-08T10:27:03+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "Ad Detector"}
|
2023-04-08T13:30:01+00:00
|
2a0c0f371591a2d234392f66f167a541a82fa6ed
|
An implementation of the "pairing news and commentary sentences by similarity" step from K-SportsSum: https://github.com/krystalan/k-sportssum. The original author described the idea but did not implement the concrete process; this dataset is an implementation of that step.
Method: traverse the news sentences, capturing each sentence's time information (if any) in a pointer-like fashion; the two pointers then define a range. Within the same time range, candidate commentary sentences are scored against each news sentence; the highest-scoring commentary is selected and then deleted to prevent reuse, yielding one news sentence paired with one commentary sentence.
Scores are computed as a 7:3 weighted combination of BERTScore and ROUGE.
*Recommendation*: the dataset includes the pairing score for each match; consider filtering out low-scoring, bad pairings, for example by using the mean score as a threshold.
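A minimal sketch of the 7:3 weighted score described above, assuming the `bert-score` package and a character-level ROUGE-L; the exact ROUGE variant and tokenization used for this dataset are not specified here, so treat the details as illustrative:
```python
from bert_score import score as bert_score  # pip install bert-score

def rouge_l_f(ref: str, hyp: str) -> float:
    # Character-level ROUGE-L (LCS-based F1), workable for unsegmented Chinese.
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ref[i] == hyp[j] else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / n, lcs / m
    return 2 * precision * recall / (precision + recall)

def pair_score(news_sentence: str, commentary_sentence: str) -> float:
    # BERTScore F1 between the candidate pair (Chinese text -> lang="zh"),
    # combined 7:3 with ROUGE-L as described above.
    _, _, f1 = bert_score([commentary_sentence], [news_sentence], lang="zh")
    return 0.7 * float(f1[0]) + 0.3 * rouge_l_f(news_sentence, commentary_sentence)
```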
|
CCCP-Admiral/K-SportsSum-BetterMapped-CN
|
[
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"region:us"
] |
2023-04-08T10:32:45+00:00
|
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["summarization"]}
|
2023-04-08T10:55:53+00:00
|
96fd2def2d9082fd6f66f54ed578daa0eb367273
|
Western Armenian Wikipedia, April 2023
4M tokens
10,785 articles
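A minimal loading sketch, assuming the `id`/`title`/`article` features listed in the dataset metadata:
```python
from datasets import load_dataset

# Western Armenian Wikipedia dump; features per the metadata:
# id (int64), title (string), article (string).
wiki = load_dataset("armvectores/hyw_wikipedia_2023", split="train")
print(wiki[0]["title"])
```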
|
armvectores/hyw_wikipedia_2023
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:hyw",
"wikipedia",
"western armenian",
"region:us"
] |
2023-04-08T10:46:27+00:00
|
{"language": ["hyw"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55910963, "num_examples": 10785}], "download_size": 26613923, "dataset_size": 55910963}, "tags": ["wikipedia", "western armenian"]}
|
2023-04-09T11:40:32+00:00
|
9680cbe79e3c1398b4ed00a662e3fa5133d499eb
|
# Dataset Card for "chunk_91"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_91
|
[
"region:us"
] |
2023-04-08T11:09:05+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23962151088.875, "num_examples": 249481}], "download_size": 21994351099, "dataset_size": 23962151088.875}}
|
2023-04-08T11:29:42+00:00
|
553e0bcdf44c70d61cc79aa304b8a77fe4789b7b
|
# Dataset Card for "chunk_94"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_94
|
[
"region:us"
] |
2023-04-08T11:13:33+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23300092224.5, "num_examples": 242588}], "download_size": 21582387763, "dataset_size": 23300092224.5}}
|
2023-04-08T11:26:07+00:00
|
cd302c6e2d7e9de52b7cc67b73cd6b32fd7d66ef
|
YunqiLI/test
|
[
"language:en",
"license:bigscience-openrail-m",
"finance",
"region:us"
] |
2023-04-08T11:27:34+00:00
|
{"language": ["en"], "license": "bigscience-openrail-m", "tags": ["finance"]}
|
2023-04-08T11:28:42+00:00
|
|
94cd3f7a5ccb1688b78c5ace0742c443a070dd42
|
# Dataset Card for "chunk_92"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_92
|
[
"region:us"
] |
2023-04-08T11:29:33+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23811451776.0, "num_examples": 247912}], "download_size": 22034658039, "dataset_size": 23811451776.0}}
|
2023-04-08T11:49:57+00:00
|
49536c0c339e4e73d1324bfd1f003a08864532bf
|
nikolaiharkov/tangkapan-ikan
|
[
"license:gfdl",
"region:us"
] |
2023-04-08T11:34:38+00:00
|
{"license": "gfdl"}
|
2023-04-08T17:49:14+00:00
|
|
0004b8ebcf2e6792da426565b0b9400e3d4d6071
|
# Dataset Card for "chunk_90"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_90
|
[
"region:us"
] |
2023-04-08T11:43:13+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24455933856.25, "num_examples": 254622}], "download_size": 22487639626, "dataset_size": 24455933856.25}}
|
2023-04-08T12:25:24+00:00
|
29551fe3febc446d973a1733ee4ac2fc1e3ad24a
|
# Dataset Card for "chunk_95"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_95
|
[
"region:us"
] |
2023-04-08T12:04:33+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23562015120.625, "num_examples": 245315}], "download_size": 21629616576, "dataset_size": 23562015120.625}}
|
2023-04-08T12:16:39+00:00
|
dfe1e312a837c6547de05bfa5cb7bb02a1a2bf61
|
# Dataset Card for "chunk_96"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_96
|
[
"region:us"
] |
2023-04-08T12:05:25+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23543862048.25, "num_examples": 245126}], "download_size": 21840778279, "dataset_size": 23543862048.25}}
|
2023-04-08T12:18:13+00:00
|
b325d0d502a24ea67471fb17d247e95f856ab540
|
waechor/tuto_foo_french2023
|
[
"license:cc-by-sa-3.0",
"region:us"
] |
2023-04-08T12:22:51+00:00
|
{"license": "cc-by-sa-3.0"}
|
2023-04-08T12:24:53+00:00
|
|
9d62b3e6ee89d21aa29bba934ec6061b050ba32f
|
# Dataset Card for "naively_captioned_CUB2002011_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
anjunhu/naively_captioned_CUB2002011_train
|
[
"region:us"
] |
2023-04-08T12:33:22+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 165992443.75, "num_examples": 5994}], "download_size": 165754928, "dataset_size": 165992443.75}}
|
2023-04-11T22:43:32+00:00
|
c25c4ce6d382173ff19d8740e9743a4c3b4c8bbe
|
# Dataset Card for "flint_images_300_300"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Kevin-M-Smith/flint_images_300_300
|
[
"region:us"
] |
2023-04-08T13:07:19+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "clutter", "1": "email", "2": "email-squished", "3": "handwritten-document", "4": "spreadsheet", "5": "typeset-document"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 176737820.0, "num_examples": 4965}, {"name": "test", "num_bytes": 44473375.0, "num_examples": 1242}], "download_size": 221048030, "dataset_size": 221211195.0}}
|
2023-04-08T13:08:46+00:00
|
3a506662d76858370d11c09903da7a0045084f26
|
# Dataset Card for "flint_images_600_600"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Kevin-M-Smith/flint_images_600_600
|
[
"region:us"
] |
2023-04-08T13:10:31+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "clutter", "1": "email", "2": "email-squished", "3": "handwritten-document", "4": "spreadsheet", "5": "typeset-document"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 648700686.0, "num_examples": 4965}, {"name": "test", "num_bytes": 159791287.0, "num_examples": 1242}], "download_size": 807442120, "dataset_size": 808491973.0}}
|
2023-04-08T13:15:38+00:00
|
4453c65f948961c99cfebcc599f06768f0116b63
|
# Dataset Card for "Pizza_Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CasperLD/Pizza_Dataset
|
[
"region:us"
] |
2023-04-08T13:25:59+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3787499.0, "num_examples": 80}], "download_size": 0, "dataset_size": 3787499.0}}
|
2023-04-08T13:26:55+00:00
|
2372f1820de4884f862d4ce50ed2135cb91e3c3e
|
# Dataset Card for "flint_images_900_900"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Kevin-M-Smith/flint_images_900_900
|
[
"region:us"
] |
2023-04-08T13:26:01+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "clutter", "1": "email", "2": "email-squished", "3": "handwritten-document", "4": "spreadsheet", "5": "typeset-document"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1326456197.0, "num_examples": 4965}, {"name": "test", "num_bytes": 327048562.0, "num_examples": 1242}], "download_size": 1650313094, "dataset_size": 1653504759.0}}
|
2023-04-08T13:36:28+00:00
|
05e05fd7852e32ea59b29f1fafe3b73e95e807c4
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
0x7o/spam_detector
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-04-08T13:27:11+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "Spam Detector"}
|
2023-04-09T03:09:42+00:00
|
7ba750921b2327accde491b6f81d356306a8f720
|
# Dataset Card for "Pizza_Dataset_Extra_Detailed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CasperLD/Pizza_Dataset_Extra_Detailed
|
[
"region:us"
] |
2023-04-08T13:32:32+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3791807.0, "num_examples": 80}], "download_size": 3782195, "dataset_size": 3791807.0}}
|
2023-04-08T13:33:00+00:00
|
e915d91394d55365f226aaa22e80f41ea8a6f5d1
|
Amirkid/milspotify
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-04-08T13:37:37+00:00
|
{"license": "creativeml-openrail-m", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75339255, "num_examples": 2427716}], "download_size": 38213804, "dataset_size": 75339255}}
|
2023-04-08T13:37:56+00:00
|
|
3dcda740781ab397bd8461c5f06763bd76c5d8cd
|
pleisto/tianpeng-dataset
|
[
"task_categories:text2text-generation",
"language:en",
"language:ch",
"language:zh",
"license:gpl-3.0",
"doi:10.57967/hf/0521",
"region:us"
] |
2023-04-08T14:19:54+00:00
|
{"language": ["en", "ch", "zh"], "license": "gpl-3.0", "task_categories": ["text2text-generation"]}
|
2023-04-09T07:40:12+00:00
|
|
1124b5debcc9553fe30b25e5a411e16b6b3614ed
|
# MedQuaAD-Italian-Fauno-Baize
This dataset is an Italian translation of the MedQuaAD dataset presented by Baize's authors.
## Dataset Description
- **Paper:** https://arxiv.org/abs/2304.01196
### Languages
Italian
## Dataset Structure
### Data Instances
- Sentences: 46,867
- Average number of turns: 3.8
- Average response length per turn: 35.8
### Data Fields
topic, input
### Data Splits
Train
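A minimal loading sketch, assuming the repo id shown on this card and the `topic`/`input` fields listed above:
```python
from datasets import load_dataset

# Italian translation of the MedQuaAD data from the Baize project; train split only.
medquad_it = load_dataset("andreabac3/MedQuaAD-Italian-Fauno-Baize", split="train")
print(medquad_it[0])
```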
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://github.com/project-baize/baize-chatbot
## Additional Information
### Dataset Curators
[Andrea Bacciu](https://andreabac3.github.io/), Dr. [Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and Professor [Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
### Licensing Information
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by Baize's creators.
### Citation Information
```bibtex
@misc{fauno,
author = {Andrea Bacciu and Giovanni Trappolini and Andrea Santilli and Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
```bibtex
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
|
andreabac3/MedQuaAD-Italian-Fauno-Baize
|
[
"license:gpl-3.0",
"arxiv:2304.01196",
"region:us"
] |
2023-04-08T14:26:59+00:00
|
{"license": "gpl-3.0"}
|
2023-04-08T14:44:46+00:00
|
cd055016e192f2b214079a741b52e798b4ca2372
|
# StackOverflow-Italian-Fauno-Baize
This dataset is an Italian translation of the StackOverflow dataset presented by Baize's authors.
## Dataset Description
- **Paper:** https://arxiv.org/abs/2304.01196
### Languages
Italian
## Dataset Structure
### Data Instances
- Sentences: 57,046
- Average number of turns: 3.6
- Average response length per turn: 36.0
### Data Fields
topic, input
### Data Splits
Train
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://github.com/project-baize/baize-chatbot
## Additional Information
### Dataset Curators
[Andrea Bacciu](https://andreabac3.github.io/), Dr. [Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and Professor [Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
### Licensing Information
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by Baize's creators.
### Citation Information
```bibtex
@misc{fauno,
author = {Andrea Bacciu and Giovanni Trappolini and Andrea Santilli and Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
```bibtex
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
|
andreabac3/StackOverflow-Italian-Fauno-Baize
|
[
"license:gpl-3.0",
"arxiv:2304.01196",
"region:us"
] |
2023-04-08T14:46:42+00:00
|
{"license": "gpl-3.0"}
|
2023-04-08T14:49:40+00:00
|
0e6fdb2a91936ad0477293d97638314629672fdb
|
# Quora-Italian-Fauno-Baize
This dataset is an Italian translation of the Quora dataset presented by Baize's authors.
## Dataset Description
- **Paper:** https://arxiv.org/abs/2304.01196
### Languages
Italian
## Dataset Structure
### Data Instances
- Sentences: 54,456
- Average number of turns: 3.9
- Average response length per turn: 35.9
### Data Fields
topic, input
### Data Splits
Train
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://github.com/project-baize/baize-chatbot
## Additional Information
### Dataset Curators
[Andrea Bacciu](https://andreabac3.github.io/), Dr. [Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and Professor [Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
### Licensing Information
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by Baize's creators.
### Citation Information
```bibtex
@misc{fauno,
author = {Andrea Bacciu and Giovanni Trappolini and Andrea Santilli and Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
```bibtex
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
|
andreabac3/Quora-Italian-Fauno-Baize
|
[
"license:gpl-3.0",
"arxiv:2304.01196",
"region:us"
] |
2023-04-08T14:52:17+00:00
|
{"license": "gpl-3.0"}
|
2023-04-08T14:54:40+00:00
|
6f27df527556f0c2774f45297cfca7780477ad75
|
# Dataset Card for JSICK
## Table of Contents
- [Dataset Card for JSICK](#dataset-card-for-jsick)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.](#japanese-sentences-involving-compositional-knowledge-jsick-dataset)
- [JSICK-stress Test set](#jsick-stress-test-set)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [base](#base)
- [stress](#stress)
- [Data Fields](#data-fields)
- [base](#base-1)
- [stress](#stress-1)
- [Data Splits](#data-splits)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/verypluming/JSICK
- **Repository:** https://github.com/verypluming/JSICK
- **Paper:** https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual
- **Paper:** https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_pdf/-char/ja
### Dataset Summary
From official [GitHub](https://github.com/verypluming/JSICK):
#### Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.
JSICK is a Japanese NLI and STS dataset built by manually translating the English dataset [SICK (Marelli et al., 2014)](https://aclanthology.org/L14-1314/) into Japanese.
We hope that our dataset will be useful in research toward more advanced models capable of appropriately performing multilingual compositional inference.
#### JSICK-stress Test set
The JSICK-stress test set investigates whether models capture word order and case particles in Japanese.
It is built by transforming the syntactic structures of sentence pairs in JSICK, so that we can analyze whether models attend to word order and case particles when predicting entailment labels and similarity scores.
The JSICK test set contains 1666, 797, and 1006 sentence pairs (A, B) whose premise sentences A (the column `sentence_A_Ja_origin`) include the basic word orders involving
ga-o (nominative-accusative), ga-ni (nominative-dative), and ga-de (nominative-instrumental/locative) relations, respectively.
We provide the JSICK-stress test set by transforming the syntactic structures of these pairs in the following three ways:
- `scrum_ga_o`: a scrambled pair, where the word order of premise sentence A is scrambled into o-ga, ni-ga, or de-ga order, respectively.
- `ex_ga_o`: a rephrased pair, where only the case particles (ga, o, ni, de) in premise A are swapped.
- `del_ga_o`: a rephrased pair, where only the case particles (ga, o, ni) in premise A are deleted.
### Languages
The language data in JSICK is in Japanese and English.
## Dataset Structure
### Data Instances
To load a specific configuration, users have to pass the configuration name:
```python
import datasets as ds
dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
# num_rows: 4500
# })
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
# num_rows: 4927
# })
# })
dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick", name="stress")
print(dataset)
# DatasetDict({
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'sentence_A_Ja_origin', 'entailment_label_origin', 'relatedness_score_Ja_origin', 'rephrase_type', 'case_particles'],
# num_rows: 900
# })
# })
```
#### base
An example of the base configuration looks as follows:
```json
{
'id': 1,
'premise': '子供たちのグループが庭で遊んでいて、後ろの方には年を取った男性が立っている',
'hypothesis': '庭にいる男の子たちのグループが遊んでいて、男性が後ろの方に立っている',
'label': 1, // (neutral)
'score': 3.700000047683716,
'premise_en': 'A group of kids is playing in a yard and an old man is standing in the background',
'hypothesis_en': 'A group of boys in a yard is playing and a man is standing in the background',
'label_en': 1, // (neutral)
'score_en': 4.5,
'corr_entailment_labelAB_En': 'nan',
'corr_entailment_labelBA_En': 'nan',
'image_ID': '3155657768_b83a7831e5.jpg',
'original_caption': 'A group of children playing in a yard , a man in the background .',
'semtag_short': 'nan',
'semtag_long': 'nan',
}
```
#### stress
An example of the stress configuration looks as follows:
```json
{
'id': '5818_de_d',
'premise': '女性火の近くダンスをしている',
'hypothesis': '火の近くでダンスをしている女性は一人もいない',
'label': 2, // (contradiction)
'score': 4.0,
'sentence_A_Ja_origin': '女性が火の近くでダンスをしている',
'entailment_label_origin': 2,
'relatedness_score_Ja_origin': 3.700000047683716,
'rephrase_type': 'd',
'case_particles': 'de'
}
```
### Data Fields
#### base
A version adopting the column names of a typical NLI dataset.
| Name | Description |
| -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| id | The ids (the same with original SICK). |
| premise | The first sentence in Japanese. |
| hypothesis | The second sentence in Japanese. |
| label | The entailment label in Japanese. |
| score | The relatedness score in the range [1-5] in Japanese. |
| premise_en | The first sentence in English. |
| hypothesis_en | The second sentence in English. |
| label_en | The original entailment label in English. |
| score_en | The original relatedness score in the range [1-5] in English. |
| semtag_short | The linguistic phenomena tags in Japanese. |
| semtag_long | The details of linguistic phenomena tags in Japanese. |
| image_ID | The original image in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). |
| original_caption | The original caption in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). |
| corr_entailment_labelAB_En | The corrected entailment label from A to B in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
| corr_entailment_labelBA_En | The corrected entailment label from B to A in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
#### stress
| Name | Description |
| --------------------------- | ------------------------------------------------------------------------------------------------- |
| id | Ids (the same with original SICK). |
| premise | The first sentence in Japanese. |
| hypothesis | The second sentence in Japanese. |
| label                       | The entailment label in Japanese.                                                                   |
| score | The relatedness score in the range [1-5] in Japanese. |
| sentence_A_Ja_origin | The original premise sentences A from the JSICK test set. |
| entailment_label_origin | The original entailment labels. |
| relatedness_score_Ja_origin | The original relatedness scores. |
| rephrase_type | The type of transformation applied to the syntactic structures of the sentence pairs. |
| case_particles | The grammatical particles in Japanese that indicate the function or role of a noun in a sentence. |
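A minimal usage sketch over the stress configuration, using only column values documented above (the `case_particles` value `de` appears in the example instance; other values are analogous):
```python
import datasets as ds

# Select stress pairs whose original premise involves the ga-de relation.
stress = ds.load_dataset("hpprc/jsick", name="stress", split="test")
ga_de_pairs = stress.filter(lambda ex: ex["case_particles"] == "de")
print(len(ga_de_pairs))
```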
### Data Splits
| name | train | validation | test |
| --------------- | ----: | ---------: | ----: |
| base | 4,500 | | 4,927 |
| original | 4,500 | | 4,927 |
| stress | | | 900 |
| stress-original | | | 900 |
### Annotations
To annotate the JSICK dataset, the authors used the crowdsourcing platform "Lancers" to re-annotate entailment labels and similarity scores.
They had six native Japanese speakers as annotators, randomly selected from the platform.
The annotators were asked to fully understand the guidelines and provide the same labels as gold labels for ten test questions.
For entailment labels, they adopted annotations that were agreed upon by a majority vote as gold labels and checked whether the majority judgment vote was semantically valid for each example.
For similarity scores, they used the average of the annotation results as gold scores.
The raw annotations with the JSICK dataset are [publicly available](https://github.com/verypluming/JSICK/blob/main/jsick/jsick-all-annotations.tsv).
The average annotation time was 1 minute per pair, and Krippendorff's alpha for the entailment labels was 0.65.
## Additional Information
- [verypluming/JSICK](https://github.com/verypluming/JSICK)
- [Compositional Evaluation on Japanese Textual Entailment and Similarity](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual)
- [JSICK: 日本語構成的推論・類似度データセットの構築](https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_article/-char/ja)
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@article{yanaka-mineshima-2022-compositional,
title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity",
author = "Yanaka, Hitomi and
Mineshima, Koji",
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.73",
doi = "10.1162/tacl_a_00518",
pages = "1266--1284",
}
@article{谷中 瞳2021,
title={JSICK: 日本語構成的推論・類似度データセットの構築},
author={谷中 瞳 and 峯島 宏次},
journal={人工知能学会全国大会論文集},
volume={JSAI2021},
number={ },
pages={4J3GS6f02-4J3GS6f02},
year={2021},
doi={10.11517/pjsai.JSAI2021.0_4J3GS6f02}
}
```
### Contributions
Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset.
|
hpprc/jsick
|
[
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:extended|sick",
"language:ja",
"language:en",
"license:cc-by-sa-4.0",
"semantic-textual-similarity",
"sts",
"region:us"
] |
2023-04-08T15:02:06+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ja", "en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|sick"], "task_categories": ["sentence-similarity", "text-classification"], "task_ids": ["natural-language-inference", "semantic-similarity-scoring"], "pretty_name": "JSICK", "tags": ["semantic-textual-similarity", "sts"]}
|
2023-04-11T05:18:09+00:00
|
893bb4803901febacdca86993e4d2aa61428f92c
|
# Dataset Card for "chunk_93"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_93
|
[
"region:us"
] |
2023-04-08T15:10:15+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23732884512.25, "num_examples": 247094}], "download_size": 21863521228, "dataset_size": 23732884512.25}}
|
2023-04-08T15:24:46+00:00
|
82678e8248a6aae681ee5334f4cc3d12aef87fe1
|
# Dataset Card for "chunk_97"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_97
|
[
"region:us"
] |
2023-04-08T15:13:28+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23263593984.0, "num_examples": 242208}], "download_size": 21528015709, "dataset_size": 23263593984.0}}
|
2023-04-08T15:27:25+00:00
|
272d7c67e8139275693a0565f13771a8cd6c66a3
|
# Dataset Card for "chunk_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_100
|
[
"region:us"
] |
2023-04-08T15:20:53+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24332512176.875, "num_examples": 253337}], "download_size": 22555624925, "dataset_size": 24332512176.875}}
|
2023-04-08T15:40:35+00:00
|
f921912b6f85d99c409da72f70d719257cf3ca81
|
# Dataset Card for RuGoEmotions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
### Dataset Summary
The RuGoEmotions dataset contains 34k Reddit comments labeled for 9 emotion categories (joy, interest, surprise, sadness, anger, disgust, fear, guilt, and neutral).
The dataset comes with predefined train/val/test splits.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class, multi-label emotion classification.
### Languages
The data is in Russian.
## Dataset Structure
### Data Instances
Each instance is a reddit comment with one or more emotion annotations (or neutral).
### Data Fields
The configuration includes:
- `text`: the reddit comment
- `labels`: the emotion annotations
### Data Splits
The simplified data includes a set of train/val/test splits with 26.9k, 3.29k, and 3.37k examples respectively.
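A minimal loading sketch, assuming the repo id shown on this card and the `text`/`labels` fields described above; the split name follows the predefined train/val/test splits:
```python
from datasets import load_dataset

ru_go = load_dataset("Djacon/ru_goemotions")
example = ru_go["train"][0]
print(example["text"], example["labels"])
```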
## Dataset Creation
### Curation Rationale
From the paper abstract:
> Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to
detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a
fine-grained typology, adaptable to multiple downstream tasks.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper.
#### Who are the source language producers?
English-speaking Reddit users.
### Annotations
#### Who are the annotators?
Annotations were produced by 3 English-speaking crowdworkers in India.
### Personal and Sensitive Information
This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames
are typically disassociated from personal real-world identities, this is not always the case. It may therefore be
possible to discover the identities of the individuals who created this content in some cases.
## Considerations for Using the Data
### Social Impact of Dataset
Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer
interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases
to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance
pricing, and student attentiveness (see
[this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)).
### Discussion of Biases
From the authors' github page:
> Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547).
### Licensing Information
The GitHub repository which houses this dataset has an
[Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE).
### Citation Information
@inproceedings{demszky2020goemotions,
author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
year = {2020}
}
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
|
Djacon/ru_goemotions
|
[
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"multilinguality:monolingual",
"language:ru",
"license:mit",
"emotion",
"arxiv:2005.00547",
"region:us"
] |
2023-04-08T15:27:02+00:00
|
{"language": ["ru"], "license": ["mit"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "RuGoEmotions", "tags": ["emotion"]}
|
2023-04-08T15:51:52+00:00
|
522dd944e5117af9ccc541ec3f6ae95dee5fa239
|
# Dataset Card for "chunk_98"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_98
|
[
"region:us"
] |
2023-04-08T15:31:14+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23567105664.0, "num_examples": 245368}], "download_size": 21584566260, "dataset_size": 23567105664.0}}
|
2023-04-08T15:50:57+00:00
|
09a4b2cad6326ce63dd4aa12b5e6c8dcab8097e9
|
# Dataset Card for "chunk_76"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_76
|
[
"region:us"
] |
2023-04-08T15:39:28+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 23957924976.375, "num_examples": 249437}], "download_size": 22055578504, "dataset_size": 23957924976.375}}
|
2023-04-08T16:12:37+00:00
|
1c084c7fda81c310c8fadc987685a0a23c89eb1a
|
# Dataset Description
* Example model using the dataset: https://huggingface.co/hackathon-somos-nlp-2023/roberta-base-bne-finetuned-suicide-es
* Example space using the dataset: https://huggingface.co/spaces/hackathon-somos-nlp-2023/suicide-comments-es
* Language: Spanish
## Dataset Summary
The dataset consists of comments on Reddit, Twitter, and inputs/outputs of the Alpaca dataset translated to Spanish language and classified as suicidal ideation/behavior and non-suicidal.
# Dataset Structure
The dataset has 10,050 rows (777 labeled as Suicidal Ideation/Behavior and 9,273 labeled Not Suicidal).
## Dataset fields
* `Text`: User comment.
* `Label`: 1 if suicidal ideation/behavior; 0 if not suicidal comment.
# Dataset Creation
## Suicidal Ideation/Behavior
* 90 rows from Columbia Suicide Severity Rating Scale (C-SSRS)
https://zenodo.org/record/2667859#.ZDGnX-xBxYi
C-SSRS is a gold-standard dataset for detecting suicidal comments on Reddit.
We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset (see the sketch after this list). We also split comments into paragraphs, filter messages shorter than 240 characters, and validate the positive ones against the [Moderation API of OpenAI](https://platform.openai.com/docs/guides/moderation).
* 519 rows from https://github.com/laxmimerit/twitter-suicidal-intention-dataset/tree/master
The dataset contains tweets with and without suicidal intention.
We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset and validate the positive ones against the [Moderation API of OpenAI](https://platform.openai.com/docs/guides/moderation).
* 168 rows added manually from public forums and public blogs.
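A minimal sketch of the translation step referenced above, via the `transformers` pipeline with the `Helsinki-NLP/opus-mt-en-es` model; batching and the moderation check are omitted:
```python
from transformers import pipeline

# English -> Spanish translation with the MarianMT model named on this card.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

def to_spanish(texts):
    return [out["translation_text"] for out in translator(texts)]

print(to_spanish(["I feel hopeless and alone."]))
```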
## Non Suicidal
* 5000 rows from instructions of https://huggingface.co/datasets/somosnlp/somos-clean-alpaca-es
* 2000 rows from output of https://huggingface.co/datasets/somosnlp/somos-clean-alpaca-es
* 2000 rows from Columbia Suicide Severity Rating Scale (C-SSRS)
* 100 rows from https://huggingface.co/datasets/ziq/depression_advice. We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset.
* 100 rows added manually from public forums, blogs and podcasts.
# Considerations for Using the Data
## Social Impact of Dataset
The dataset could contain some patterns to detect suicidal ideation/behavior.
## Discussion of Biases
No measures have been taken to estimate the bias and toxicity embedded in the dataset. However, most of the data is collected from Reddit, Twitter, and ChatGPT, so there is probably an age bias because [the Internet is used more by younger people](https://www.statista.com/statistics/272365/age-distribution-of-internet-users-worldwide).
# Additional Information
## Team
* [dariolopez](https://huggingface.co/dariolopez)
* [diegogd](https://huggingface.co/diegogd)
## Licensing
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
|
hackathon-somos-nlp-2023/suicide-comments-es
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:es",
"license:apache-2.0",
"region:us"
] |
2023-04-08T15:43:52+00:00
|
{"language": ["es"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
|
2023-04-10T08:26:54+00:00
|
e439c8f3b33e00f9247346a4ef6327a3f61046cc
|
# AutoTrain Dataset for project: pro
## Dataset Description
This dataset has been automatically processed by AutoTrain for project pro.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Dietitian",
"target": "As a dietitian, I would like to design a vegetarian recipe for 2 people that has approximate 500 calories per serving and has a low glycemic index. Can you please provide a suggestion?"
},
{
"text": "IT Architect",
"target": "I want you to act as an IT Architect. I will provide some details about the functionality of an application or other digital product, and it will be your job to come up with ways to integrate it into the IT landscape. This could involve analyzing business requirements, performing a gap analysis and mapping the functionality of the new system to the existing IT landscape. Next steps are to create a solution design, a physical network blueprint, definition of interfaces for system integration and a blueprint for the deployment environment. My first request is \"I need help to integrate a CMS system.\""
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 122 |
| valid | 31 |
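A minimal loading sketch, assuming the repo id shown on this card and the `text`/`target` fields above:
```python
from datasets import load_dataset

pro = load_dataset("qdi0/autotrain-data-pro")
sample = pro["train"][0]
print(sample["text"], "->", sample["target"])
```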
|
qdi0/autotrain-data-pro
|
[
"task_categories:summarization",
"region:us"
] |
2023-04-08T15:46:44+00:00
|
{"task_categories": ["summarization"]}
|
2023-04-08T15:47:21+00:00
|
ca857d981312404ad149fd81b0d603cdac6308e9
|
# Dataset Card for "comentarios_youtube_validos_timestamp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ID3/comentarios_youtube_validos_timestamp
|
[
"region:us"
] |
2023-04-08T15:50:06+00:00
|
{"dataset_info": {"features": [{"name": "comentario", "dtype": "string"}, {"name": "likes", "dtype": "int64"}, {"name": "similitud", "dtype": "float64"}, {"name": "timeStamp", "dtype": "int64"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1347724, "num_examples": 8181}], "download_size": 0, "dataset_size": 1347724}}
|
2023-04-08T16:48:31+00:00
|
49a799de82dc66dd4828b1b53307bafad187f085
|
pdulepet/small_squad
|
[
"license:mit",
"region:us"
] |
2023-04-08T15:58:55+00:00
|
{"license": "mit"}
|
2023-04-08T15:58:55+00:00
|
|
829b8eb628c1978808d1393ab0f72771d5f6e8b2
|
cognitivecomputations/oa_leet10k
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-08T16:06:59+00:00
|
{"license": "apache-2.0"}
|
2023-04-15T19:08:10+00:00
|
|
63f0776552b7ddfd3777b61e1377e5db916ed71a
|
oblivionchecke/calyfilm-zdarmo
|
[
"license:openrail",
"region:us"
] |
2023-04-08T16:31:38+00:00
|
{"license": "openrail"}
|
2023-04-08T16:31:38+00:00
|
|
a73ef7e2b021a5029ac87e8d826f8dd11c5e2c30
|
# Dataset Card for "chunk_99"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_99
|
[
"region:us"
] |
2023-04-08T16:31:54+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24252696288.75, "num_examples": 252506}], "download_size": 22892001720, "dataset_size": 24252696288.75}}
|
2023-04-08T17:06:37+00:00
|
76b9e02ec8f7e1de751235a17fa8c8a51e15755f
|
# Dataset Card for "chunk_101"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_101
|
[
"region:us"
] |
2023-04-08T16:41:01+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 25400181744.375, "num_examples": 264453}], "download_size": 24071815499, "dataset_size": 25400181744.375}}
|
2023-04-08T17:27:42+00:00
|
e0d1122705ebf9f63fa72dc68f83ae0296c693fc
|
# Dataset Card for "chunk_105"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_105
|
[
"region:us"
] |
2023-04-08T17:44:52+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 18310590720.0, "num_examples": 190640}], "download_size": 16453314083, "dataset_size": 18310590720.0}}
|
2023-04-08T18:01:33+00:00
|
0cbe012ad029d5944c4b22ea8d9b3a48eecc1d4f
|
# Dataset Card for "chunk_106"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_106
|
[
"region:us"
] |
2023-04-08T17:49:29+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 16748466048.0, "num_examples": 174376}], "download_size": 15134175223, "dataset_size": 16748466048.0}}
|
2023-04-08T18:04:39+00:00
|
6bd4fca8220d854ec48f082e78fd4e444b7aeffb
|
# Dataset Card for "chunk_102"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_102
|
[
"region:us"
] |
2023-04-08T18:06:30+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 24673578624.0, "num_examples": 256888}], "download_size": 23180534651, "dataset_size": 24673578624.0}}
|
2023-04-08T18:42:01+00:00
|
89132e51910501bb6cbb474a96025ab716f670ca
|
# Dataset Card for "chunk_103"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_103
|
[
"region:us"
] |
2023-04-08T18:16:49+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 26204103504.125, "num_examples": 272823}], "download_size": 23197549997, "dataset_size": 26204103504.125}}
|
2023-04-08T18:53:54+00:00
|
0f52735d32fe539fd790a480bcf1864421fa244e
|
# Dataset Card for "chunk_104"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_104
|
[
"region:us"
] |
2023-04-08T18:18:53+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 27678152160.75, "num_examples": 288170}], "download_size": 25306195521, "dataset_size": 27678152160.75}}
|
2023-04-08T18:56:56+00:00
|
8c9eb06b0b516e40a87c2d19b598fefe27795655
|
# Dataset Card for "chunk_107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_107
|
[
"region:us"
] |
2023-04-08T18:48:38+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 16759895760.125, "num_examples": 174495}], "download_size": 14947281130, "dataset_size": 16759895760.125}}
|
2023-04-08T19:02:53+00:00
|