autotrain-data-processor committed
Commit 9df0a3f · 1 Parent(s): 1f1756a
Processed data from AutoTrain data processor ([2023-08-23 14:17 ]

Files changed:
- README.md +58 -0
- processed/dataset_dict.json +1 -0
- processed/train/data-00000-of-00001.arrow +3 -0
- processed/train/dataset_info.json +28 -0
- processed/train/state.json +17 -0
- processed/valid/data-00000-of-00001.arrow +3 -0
- processed/valid/dataset_info.json +28 -0
- processed/valid/state.json +17 -0
README.md ADDED
@@ -0,0 +1,58 @@
---
language:
- en
task_categories:
- summarization

---
# AutoTrain Dataset for project: test-summarization

## Dataset Description

This dataset has been automatically processed by AutoTrain for project test-summarization.

### Languages

The BCP-47 code for the dataset's language is en.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "feat_id": "13829542",
    "text": "Kasia: When are u coming back?\r\nMatt: Back where?\r\nKasia: Oh come on\r\nKasia: you know what i mean\r\nMatt: I really don't \r\nKasia: When are you coming back to Warsaw\r\nMatt: I have no idea\r\nMatt: maybe around easter\r\nKasia: will you let me know\r\nMatt: sure if I know something then I will let you know asap\r\nKasia: ok \r\nMatt: are you mad?\r\nKasia: a bit\r\nMatt: oh come on\r\nMatt: this is not my fault \r\nMatt: there is no way that I can answer that question\r\nMatt: not now\r\nKasia: Fine",
    "target": "Matt doesn't know when he's coming back to Warsaw. He might come around Easter. When he knows more, he will let Kasia know. Kasia is a bit upset."
  },
  {
    "feat_id": "13862523",
    "text": "Oliver: Have you beaten the game yet?\nTom: Not yet\nOliver: Ok... what mission are you playing?\nTom: The one before the final one, it's pretty hard\nOliver: I didn't find it particularly hard\nTom: I mean, combat is easy at this point in the game but the puzzles are difficult\nOliver: Ok, I got it\nTom: It's fun how most horror action games let you get really powerful by the end of the story\nOliver: Well, you know, being a pussy from start to finish ain't my idea of fun even in a horror game\nTom: I know... but do you remember Bioshock? You turned into some sort of superhero and the final boss was pretty much a joke\nOliver: Don't you dare talk like that about my favorite game\nTom: I know, I love it too, but it had its flaws\nOliver: No it didn't XD\nTom: Lol\nOliver: Well, keep playing\nTom: I'll let you know when I finish this one and my overall impressions\nOliver: Ok",
    "target": "Tom will contact Oliver after finishing new horror action game."
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "feat_id": "Value(dtype='string', id=None)",
  "text": "Value(dtype='string', id=None)",
  "target": "Value(dtype='string', id=None)"
}
```
|
| 50 |
+
|
| 51 |
+
### Dataset Splits
|
| 52 |
+
|
| 53 |
+
This dataset is split into a train and validation split. The split sizes are as follow:
|
| 54 |
+
|
| 55 |
+
| Split name | Num samples |
|
| 56 |
+
| ------------ | ------------------- |
|
| 57 |
+
| train | 11785 |
|
| 58 |
+
| valid | 2947 |
|
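As a usage sketch (assuming the 🤗 `datasets` library is installed and the `processed/` directory added in this commit is available locally), the two splits can be loaded and checked like this:

```python
from datasets import load_from_disk

# processed/dataset_dict.json (added below) tells load_from_disk
# which split directories to pick up.
ds = load_from_disk("processed")

print(ds)                        # DatasetDict with "train" and "valid" splits
print(ds["train"].num_rows)      # expected: 11785
print(ds["valid"].num_rows)      # expected: 2947
print(ds["train"][0]["target"])  # one reference summary
```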
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
{"splits": ["train", "valid"]}
processed/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2ad82992a82f5eab18a673da7ea80c28705cf214b17fa26ee7a9ed278c3591a
size 7565056
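The `.arrow` files are tracked with Git LFS, so the diff only records a pointer (hash and size) rather than the binary data. A small sketch, assuming the actual Arrow file has been pulled locally, of verifying a download against this pointer:

```python
import hashlib
import os

path = "processed/train/data-00000-of-00001.arrow"

# A Git LFS pointer stores the SHA-256 of the real file ("oid") and its byte size.
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest)                 # should equal the oid above
print(os.path.getsize(path))  # should equal 7565056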
processed/train/dataset_info.json ADDED
@@ -0,0 +1,28 @@
{
  "citation": "",
  "description": "AutoTrain generated dataset",
  "features": {
    "feat_id": {
      "dtype": "string",
      "_type": "Value"
    },
    "text": {
      "dtype": "string",
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": "",
  "splits": {
    "train": {
      "name": "train",
      "num_bytes": 7560765,
      "num_examples": 11785,
      "dataset_name": null
    }
  }
}
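A quick way to inspect this metadata without loading the Arrow data itself is to read the file directly (a standard-library-only sketch):

```python
import json

# dataset_info.json records the feature schema and per-split statistics.
with open("processed/train/dataset_info.json") as f:
    info = json.load(f)

print(list(info["features"]))                   # ['feat_id', 'text', 'target']
print(info["splits"]["train"]["num_examples"])  # 11785
```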
processed/train/state.json ADDED
@@ -0,0 +1,17 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "898f75eb01f375c3",
  "_format_columns": [
    "feat_id",
    "target",
    "text"
  ],
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
processed/valid/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd535c0e33061dc55924cd101d051b875a442508295f5d51e44f32468ab12b6f
size 1919752
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,28 @@
{
  "citation": "",
  "description": "AutoTrain generated dataset",
  "features": {
    "feat_id": {
      "dtype": "string",
      "_type": "Value"
    },
    "text": {
      "dtype": "string",
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": "",
  "splits": {
    "valid": {
      "name": "valid",
      "num_bytes": 1918352,
      "num_examples": 2947,
      "dataset_name": null
    }
  }
}
processed/valid/state.json ADDED
@@ -0,0 +1,17 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "f2c5d880317cec18",
  "_format_columns": [
    "feat_id",
    "target",
    "text"
  ],
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
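Each `state.json` records how its split was serialized: the Arrow shard filenames, a content fingerprint, and any column formatting to restore. A closing sketch, under the same assumptions as above, of the corresponding runtime view after loading:

```python
from datasets import load_from_disk

ds = load_from_disk("processed")

# The formatting recorded in state.json is restored on load.
print(ds["valid"].column_names)  # ['feat_id', 'text', 'target']
print(ds["valid"].format)        # dict mirroring _format_type / _format_columns / _output_all_columns
```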