sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
e4ed6440e01f3b24ac3231578c84173ee41fe944
|
INCOMPLETE CHECK FILES
|
averageandyyy/imda_dataset_clean_2
|
[
"region:us"
] |
2023-05-23T08:36:03+00:00
|
{}
|
2023-05-25T11:06:44+00:00
|
3ae5baf2f47446c710978a042a9cf65cca36cbf1
|
```bibtex
@misc{clark2023seahorse,
title={SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation},
author={Elizabeth Clark and Shruti Rijhwani and Sebastian Gehrmann and Joshua Maynez and Roee Aharoni and Vitaly Nikolaev and Thibault Sellam and Aditya Siddhant and Dipanjan Das and Ankur P. Parikh},
year={2023},
eprint={2305.13194},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
tasksource/seahorse_summarization_evaluation
|
[
"task_categories:summarization",
"language:de",
"language:en",
"language:ru",
"language:tr",
"language:vi",
"license:cc",
"arxiv:2305.13194",
"region:us"
] |
2023-05-23T08:45:14+00:00
|
{"language": ["de", "en", "ru", "tr", "vi"], "license": "cc", "task_categories": ["summarization"]}
|
2023-05-23T08:46:41+00:00
|
d92eb598f04f97e2031169cd4f48299fb67e7d3d
|
hiltybosch/test
|
[
"license:gpl-2.0",
"region:us"
] |
2023-05-23T08:57:44+00:00
|
{"license": "gpl-2.0"}
|
2023-05-23T08:57:44+00:00
|
|
7927efac4b2e85086d094f69636cad5e296c1e50
|
tbomez/test
|
[
"license:openrail",
"region:us"
] |
2023-05-23T09:34:21+00:00
|
{"license": "openrail"}
|
2023-05-30T14:20:19+00:00
|
|
e42b4d738cfe1c6be893ed705acd9605dc888af7
|
# Dataset Card for PAWS-X MT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx)
- **Repository:** [PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx)
- **Paper:** [PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification](https://arxiv.org/abs/1908.11828)
- **Point of Contact:** [Yinfei Yang]([email protected])
### Dataset Summary
This dataset contains 23,659 **human** translated PAWS evaluation pairs and
296,406 **machine** translated training pairs in six typologically distinct
languages: French, Spanish, German, Chinese, Japanese, and Korean. All
translated pairs are sourced from examples in
[PAWS-Wiki](https://github.com/google-research-datasets/paws#paws-wiki).
For further details, see the accompanying paper:
[PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase
Identification](https://arxiv.org/abs/1908.11828)
This is a machine-translated version of the original dataset, translated into English from each language.
### Supported Tasks and Leaderboards
The dataset has mainly been used for paraphrase identification in English and six other languages, namely French, Spanish, German, Chinese, Japanese, and Korean.
### Languages
The dataset is in English, French, Spanish, German, Chinese, Japanese, and Korean.
## Dataset Structure
### Data Instances
For en:
```
id : 1
sentence1 : In Paris , in October 1560 , he secretly met the English ambassador , Nicolas Throckmorton , asking him for a passport to return to England through Scotland .
sentence2 : In October 1560 , he secretly met with the English ambassador , Nicolas Throckmorton , in Paris , and asked him for a passport to return to Scotland through England .
label : 0
```
For fr:
```
id : 1
sentence1 : À Paris, en octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, lui demandant un passeport pour retourner en Angleterre en passant par l'Écosse.
sentence2 : En octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, à Paris, et lui demanda un passeport pour retourner en Écosse par l'Angleterre.
label : 0
```
### Data Fields
All files are in tsv format with four columns:
Column Name | Data
:---------- | :--------------------------------------------------------
id | An ID that matches the ID of the source pair in PAWS-Wiki
sentence1 | The first sentence
sentence2 | The second sentence
label | Label for each pair
The source text of each translation can be retrieved by looking up the ID in the
corresponding file in PAWS-Wiki.
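As a sketch of how files in this four-column format can be consumed with only the standard library (the two-row sample below is illustrative, not taken from the dataset), each row can be parsed like this:

```python
import csv
import io

def read_pawsx_tsv(fileobj):
    """Parse a PAWS-X style TSV with columns id, sentence1, sentence2, label."""
    reader = csv.DictReader(fileobj, delimiter="\t")
    return [
        {
            "id": int(row["id"]),
            "sentence1": row["sentence1"],
            "sentence2": row["sentence2"],
            "label": int(row["label"]),
        }
        for row in reader
    ]

# Hypothetical one-pair sample in the documented format.
sample = (
    "id\tsentence1\tsentence2\tlabel\n"
    "1\tA first sentence .\tA second sentence .\t0\n"
)
pairs = read_pawsx_tsv(io.StringIO(sample))
print(pairs[0]["label"])  # 0
```

The `id` column can then be used to look up the English source pair in the corresponding PAWS-Wiki file.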
### Data Splits
The numbers of examples for each of the seven languages are shown below:
Language | Train | Dev | Test
:------- | ------: | -----: | -----:
en | 49,401 | 2,000 | 2,000
fr | 49,401 | 2,000 | 2,000
es | 49,401 | 2,000 | 2,000
de | 49,401 | 2,000 | 2,000
zh | 49,401 | 2,000 | 2,000
ja | 49,401 | 2,000 | 2,000
ko | 49,401 | 2,000 | 2,000
> **Caveat**: please note that the dev and test sets of PAWS-X are both sourced
> from the dev set of PAWS-Wiki. As a consequence, the same `sentence 1` may
> appear in both the dev and test sets. Nevertheless, our data split guarantees
> that there is no overlap on sentence pairs (`sentence 1` + `sentence 2`)
> between dev and test.
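The guarantee stated in the caveat above — `sentence1` values may recur across dev and test, but full sentence pairs never do — can be verified with a small sketch (the toy splits here are hypothetical):

```python
def pair_overlap(dev, test):
    """Return the set of (sentence1, sentence2) pairs shared by two splits."""
    dev_pairs = {(ex["sentence1"], ex["sentence2"]) for ex in dev}
    test_pairs = {(ex["sentence1"], ex["sentence2"]) for ex in test}
    return dev_pairs & test_pairs

# Toy example: the same sentence1 appears in both splits,
# but no full pair is duplicated, so the overlap is empty.
dev = [{"sentence1": "s1", "sentence2": "a"}]
test = [{"sentence1": "s1", "sentence2": "b"}]
print(pair_overlap(dev, test))  # set()
```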
## Dataset Creation
### Curation Rationale
Most existing work on adversarial data generation focuses on English. For example, PAWS (Paraphrase Adversaries from Word Scrambling) (Zhang et al., 2019) consists of challenging English paraphrase identification pairs from Wikipedia and Quora. The PAWS-X authors remedy this gap with a new dataset of 23,659 human-translated PAWS evaluation pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean.

They provide baseline numbers for three models with different capacities to capture non-local context and sentence structure, using different multilingual training and evaluation regimes. Multilingual BERT (Devlin et al., 2019) fine-tuned on PAWS English plus machine-translated data performs best, with accuracy ranging from 83.1 to 90.8 across the non-English languages and an average accuracy gain of 23% over the next best model. PAWS-X demonstrates the effectiveness of deep multilingual pre-training while leaving considerable headroom as a new challenge to drive multilingual research that better captures structure and contextual information.
### Source Data
PAWS (Paraphrase Adversaries from Word Scrambling)
#### Initial Data Collection and Normalization
All translated pairs are sourced from examples in [PAWS-Wiki](https://github.com/google-research-datasets/paws#paws-wiki)
#### Who are the source language producers?
This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The paper thanks the translation team, especially Mengmeng Niu, for their help with the annotations.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
### Citation Information
```
@InProceedings{pawsx2019emnlp,
title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},
author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},
booktitle = {Proc. of EMNLP},
year = {2019}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@gowtham1997](https://github.com/gowtham1997) for adding this dataset.
|
juletxara/pawsx_mt
|
[
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-paws",
"language:en",
"license:other",
"paraphrase-identification",
"arxiv:1908.11828",
"region:us"
] |
2023-05-23T09:39:03+00:00
|
{"annotations_creators": ["expert-generated", "machine-generated"], "language_creators": ["expert-generated", "machine-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-paws"], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-classification", "semantic-similarity-scoring", "text-scoring", "multi-input-text-classification"], "paperswithcode_id": "paws-x", "pretty_name": "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification", "tags": ["paraphrase-identification"], "dataset_info": [{"config_name": "nllb-200-distilled-600M", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 470424, "num_examples": 2000}, {"name": "es", "num_bytes": 477895, "num_examples": 2000}, {"name": "fr", "num_bytes": 478044, "num_examples": 2000}, {"name": "ja", "num_bytes": 461718, "num_examples": 2000}, {"name": "ko", "num_bytes": 467649, "num_examples": 2000}, {"name": "zh", "num_bytes": 481919, "num_examples": 2000}], "download_size": 2704143, "dataset_size": 2837649}, {"config_name": "nllb-200-distilled-1.3B", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 469810, "num_examples": 2000}, {"name": "es", "num_bytes": 477848, "num_examples": 2000}, {"name": "fr", "num_bytes": 476036, "num_examples": 2000}, {"name": "ja", "num_bytes": 465219, "num_examples": 2000}, {"name": "ko", "num_bytes": 469779, "num_examples": 2000}, {"name": "zh", "num_bytes": 481685, "num_examples": 2000}], "download_size": 2706871, "dataset_size": 2840377}, 
{"config_name": "nllb-200-1.3B", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 472562, "num_examples": 2000}, {"name": "es", "num_bytes": 480329, "num_examples": 2000}, {"name": "fr", "num_bytes": 479096, "num_examples": 2000}, {"name": "ja", "num_bytes": 465418, "num_examples": 2000}, {"name": "ko", "num_bytes": 468672, "num_examples": 2000}, {"name": "zh", "num_bytes": 480250, "num_examples": 2000}], "download_size": 2712821, "dataset_size": 2846327}, {"config_name": "nllb-200-3.3B", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 475185, "num_examples": 2000}, {"name": "es", "num_bytes": 482022, "num_examples": 2000}, {"name": "fr", "num_bytes": 480477, "num_examples": 2000}, {"name": "ja", "num_bytes": 468442, "num_examples": 2000}, {"name": "ko", "num_bytes": 475577, "num_examples": 2000}, {"name": "zh", "num_bytes": 483772, "num_examples": 2000}], "download_size": 2731969, "dataset_size": 2865475}, {"config_name": "xglm-564M", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 405887, "num_examples": 2000}, {"name": "es", "num_bytes": 433475, "num_examples": 2000}, {"name": "fr", "num_bytes": 451810, "num_examples": 2000}, {"name": "ja", "num_bytes": 480321, "num_examples": 2000}, {"name": "ko", "num_bytes": 430501, "num_examples": 2000}, {"name": "zh", "num_bytes": 536783, "num_examples": 2000}], "download_size": 2605271, "dataset_size": 2738777}, {"config_name": 
"xglm-1.7B", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 448117, "num_examples": 2000}, {"name": "es", "num_bytes": 470068, "num_examples": 2000}, {"name": "fr", "num_bytes": 478245, "num_examples": 2000}, {"name": "ja", "num_bytes": 462409, "num_examples": 2000}, {"name": "ko", "num_bytes": 410803, "num_examples": 2000}, {"name": "zh", "num_bytes": 455754, "num_examples": 2000}], "download_size": 2591890, "dataset_size": 2725396}, {"config_name": "xglm-2.9B", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 450076, "num_examples": 2000}, {"name": "es", "num_bytes": 471853, "num_examples": 2000}, {"name": "fr", "num_bytes": 475575, "num_examples": 2000}, {"name": "ja", "num_bytes": 435278, "num_examples": 2000}, {"name": "ko", "num_bytes": 407905, "num_examples": 2000}, {"name": "zh", "num_bytes": 437874, "num_examples": 2000}], "download_size": 2545055, "dataset_size": 2678561}, {"config_name": "xglm-4.5B", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 466986, "num_examples": 2000}, {"name": "es", "num_bytes": 483691, "num_examples": 2000}, {"name": "fr", "num_bytes": 485910, "num_examples": 2000}, {"name": "ja", "num_bytes": 485014, "num_examples": 2000}, {"name": "ko", "num_bytes": 459562, "num_examples": 2000}, {"name": "zh", "num_bytes": 502672, "num_examples": 2000}], "download_size": 2750329, "dataset_size": 2883835}, {"config_name": "xglm-7.5B", "features": 
[{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 457033, "num_examples": 2000}, {"name": "es", "num_bytes": 471085, "num_examples": 2000}, {"name": "fr", "num_bytes": 474534, "num_examples": 2000}, {"name": "ja", "num_bytes": 455080, "num_examples": 2000}, {"name": "ko", "num_bytes": 432714, "num_examples": 2000}, {"name": "zh", "num_bytes": 462024, "num_examples": 2000}], "download_size": 2618964, "dataset_size": 2752470}, {"config_name": "bloom-560m", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 422431, "num_examples": 2000}, {"name": "es", "num_bytes": 407925, "num_examples": 2000}, {"name": "fr", "num_bytes": 417238, "num_examples": 2000}, {"name": "ja", "num_bytes": 541097, "num_examples": 2000}, {"name": "ko", "num_bytes": 305526, "num_examples": 2000}, {"name": "zh", "num_bytes": 467990, "num_examples": 2000}], "download_size": 2428701, "dataset_size": 2562207}, {"config_name": "bloom-1b1", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 420950, "num_examples": 2000}, {"name": "es", "num_bytes": 440695, "num_examples": 2000}, {"name": "fr", "num_bytes": 444933, "num_examples": 2000}, {"name": "ja", "num_bytes": 383160, "num_examples": 2000}, {"name": "ko", "num_bytes": 309106, "num_examples": 2000}, {"name": "zh", "num_bytes": 427093, "num_examples": 2000}], "download_size": 2292431, "dataset_size": 2425937}, {"config_name": "bloom-1b7", "features": [{"name": "id", "dtype": 
"int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 441068, "num_examples": 2000}, {"name": "es", "num_bytes": 455189, "num_examples": 2000}, {"name": "fr", "num_bytes": 458970, "num_examples": 2000}, {"name": "ja", "num_bytes": 471554, "num_examples": 2000}, {"name": "ko", "num_bytes": 387729, "num_examples": 2000}, {"name": "zh", "num_bytes": 434684, "num_examples": 2000}], "download_size": 2515688, "dataset_size": 2649194}, {"config_name": "bloom-3b", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 452342, "num_examples": 2000}, {"name": "es", "num_bytes": 468924, "num_examples": 2000}, {"name": "fr", "num_bytes": 469477, "num_examples": 2000}, {"name": "ja", "num_bytes": 450059, "num_examples": 2000}, {"name": "ko", "num_bytes": 371349, "num_examples": 2000}, {"name": "zh", "num_bytes": 443763, "num_examples": 2000}], "download_size": 2522408, "dataset_size": 2655914}, {"config_name": "bloom-7b1", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 460868, "num_examples": 2000}, {"name": "es", "num_bytes": 476090, "num_examples": 2000}, {"name": "fr", "num_bytes": 477681, "num_examples": 2000}, {"name": "ja", "num_bytes": 462541, "num_examples": 2000}, {"name": "ko", "num_bytes": 410996, "num_examples": 2000}, {"name": "zh", "num_bytes": 452755, "num_examples": 2000}], "download_size": 2607425, "dataset_size": 2740931}, {"config_name": "llama-7B", "features": [{"name": "id", "dtype": "int32"}, {"name": 
"sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 467040, "num_examples": 2000}, {"name": "es", "num_bytes": 479857, "num_examples": 2000}, {"name": "fr", "num_bytes": 481692, "num_examples": 2000}, {"name": "ja", "num_bytes": 469209, "num_examples": 2000}, {"name": "ko", "num_bytes": 460027, "num_examples": 2000}, {"name": "zh", "num_bytes": 492611, "num_examples": 2000}], "download_size": 2716930, "dataset_size": 2850436}, {"config_name": "llama-13B", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 464622, "num_examples": 2000}, {"name": "es", "num_bytes": 475395, "num_examples": 2000}, {"name": "fr", "num_bytes": 475380, "num_examples": 2000}, {"name": "ja", "num_bytes": 455735, "num_examples": 2000}, {"name": "ko", "num_bytes": 446006, "num_examples": 2000}, {"name": "zh", "num_bytes": 477833, "num_examples": 2000}], "download_size": 2661465, "dataset_size": 2794971}, {"config_name": "llama-30B", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 471142, "num_examples": 2000}, {"name": "es", "num_bytes": 480239, "num_examples": 2000}, {"name": "fr", "num_bytes": 480078, "num_examples": 2000}, {"name": "ja", "num_bytes": 473976, "num_examples": 2000}, {"name": "ko", "num_bytes": 468087, "num_examples": 2000}, {"name": "zh", "num_bytes": 498795, "num_examples": 2000}], "download_size": 2738811, "dataset_size": 2872317}, {"config_name": "RedPajama-INCITE-Base-3B-v1", "features": [{"name": "id", "dtype": "int32"}, {"name": 
"sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 454468, "num_examples": 2000}, {"name": "es", "num_bytes": 474260, "num_examples": 2000}, {"name": "fr", "num_bytes": 477493, "num_examples": 2000}, {"name": "ja", "num_bytes": 463806, "num_examples": 2000}, {"name": "ko", "num_bytes": 455166, "num_examples": 2000}, {"name": "zh", "num_bytes": 520240, "num_examples": 2000}], "download_size": 2711927, "dataset_size": 2845433}, {"config_name": "RedPajama-INCITE-7B-Base", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 467209, "num_examples": 2000}, {"name": "es", "num_bytes": 482675, "num_examples": 2000}, {"name": "fr", "num_bytes": 479674, "num_examples": 2000}, {"name": "ja", "num_bytes": 469695, "num_examples": 2000}, {"name": "ko", "num_bytes": 427807, "num_examples": 2000}, {"name": "zh", "num_bytes": 475045, "num_examples": 2000}], "download_size": 2668599, "dataset_size": 2802105}, {"config_name": "open_llama_3b", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 459906, "num_examples": 2000}, {"name": "es", "num_bytes": 474097, "num_examples": 2000}, {"name": "fr", "num_bytes": 477589, "num_examples": 2000}, {"name": "ja", "num_bytes": 462664, "num_examples": 2000}, {"name": "ko", "num_bytes": 434739, "num_examples": 2000}, {"name": "zh", "num_bytes": 490475, "num_examples": 2000}], "download_size": 2665964, "dataset_size": 2799470}, {"config_name": "open_llama_7b", "features": [{"name": "id", "dtype": "int32"}, {"name": 
"sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 464258, "num_examples": 2000}, {"name": "es", "num_bytes": 476895, "num_examples": 2000}, {"name": "fr", "num_bytes": 475470, "num_examples": 2000}, {"name": "ja", "num_bytes": 467530, "num_examples": 2000}, {"name": "ko", "num_bytes": 420696, "num_examples": 2000}, {"name": "zh", "num_bytes": 471007, "num_examples": 2000}], "download_size": 2642350, "dataset_size": 2775856}, {"config_name": "open_llama_13b", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 466772, "num_examples": 2000}, {"name": "es", "num_bytes": 480354, "num_examples": 2000}, {"name": "fr", "num_bytes": 480221, "num_examples": 2000}, {"name": "ja", "num_bytes": 460154, "num_examples": 2000}, {"name": "ko", "num_bytes": 443434, "num_examples": 2000}, {"name": "zh", "num_bytes": 467898, "num_examples": 2000}], "download_size": 2665327, "dataset_size": 2798833}, {"config_name": "xgen-7b-4k-base", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 466109, "num_examples": 2000}, {"name": "es", "num_bytes": 480599, "num_examples": 2000}, {"name": "fr", "num_bytes": 481774, "num_examples": 2000}, {"name": "ja", "num_bytes": 455601, "num_examples": 2000}, {"name": "ko", "num_bytes": 441720, "num_examples": 2000}, {"name": "zh", "num_bytes": 473661, "num_examples": 2000}], "download_size": 2665958, "dataset_size": 2799464}, {"config_name": "xgen-7b-8k-base", "features": [{"name": "id", "dtype": "int32"}, {"name": 
"sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 464831, "num_examples": 2000}, {"name": "es", "num_bytes": 478903, "num_examples": 2000}, {"name": "fr", "num_bytes": 481199, "num_examples": 2000}, {"name": "ja", "num_bytes": 458928, "num_examples": 2000}, {"name": "ko", "num_bytes": 448148, "num_examples": 2000}, {"name": "zh", "num_bytes": 475878, "num_examples": 2000}], "download_size": 2674381, "dataset_size": 2807887}, {"config_name": "xgen-7b-8k-inst", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 472749, "num_examples": 2000}, {"name": "es", "num_bytes": 483956, "num_examples": 2000}, {"name": "fr", "num_bytes": 487250, "num_examples": 2000}, {"name": "ja", "num_bytes": 485563, "num_examples": 2000}, {"name": "ko", "num_bytes": 476502, "num_examples": 2000}, {"name": "zh", "num_bytes": 507723, "num_examples": 2000}], "download_size": 2780237, "dataset_size": 2913743}, {"config_name": "open_llama_7b_v2", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 464268, "num_examples": 2000}, {"name": "es", "num_bytes": 476576, "num_examples": 2000}, {"name": "fr", "num_bytes": 478153, "num_examples": 2000}, {"name": "ja", "num_bytes": 460932, "num_examples": 2000}, {"name": "ko", "num_bytes": 456955, "num_examples": 2000}, {"name": "zh", "num_bytes": 467587, "num_examples": 2000}], "download_size": 2670965, "dataset_size": 2804471}, {"config_name": "falcon-7b", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", 
"dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 456304, "num_examples": 2000}, {"name": "es", "num_bytes": 474821, "num_examples": 2000}, {"name": "fr", "num_bytes": 448537, "num_examples": 2000}, {"name": "ja", "num_bytes": 373442, "num_examples": 2000}, {"name": "ko", "num_bytes": 425657, "num_examples": 2000}, {"name": "zh", "num_bytes": 449866, "num_examples": 2000}], "download_size": 2495121, "dataset_size": 2628627}, {"config_name": "polylm-1.7b", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 459992, "num_examples": 2000}, {"name": "es", "num_bytes": 466048, "num_examples": 2000}, {"name": "fr", "num_bytes": 470826, "num_examples": 2000}, {"name": "ja", "num_bytes": 448180, "num_examples": 2000}, {"name": "ko", "num_bytes": 415816, "num_examples": 2000}, {"name": "zh", "num_bytes": 438679, "num_examples": 2000}], "download_size": 2566035, "dataset_size": 2699541}, {"config_name": "polylm-13b", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 473536, "num_examples": 2000}, {"name": "es", "num_bytes": 482328, "num_examples": 2000}, {"name": "fr", "num_bytes": 481341, "num_examples": 2000}, {"name": "ja", "num_bytes": 452146, "num_examples": 2000}, {"name": "ko", "num_bytes": 457546, "num_examples": 2000}, {"name": "zh", "num_bytes": 464947, "num_examples": 2000}], "download_size": 2678338, "dataset_size": 2811844}, {"config_name": "polylm-multialpaca-13b", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": 
"string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 472264, "num_examples": 2000}, {"name": "es", "num_bytes": 477291, "num_examples": 2000}, {"name": "fr", "num_bytes": 474987, "num_examples": 2000}, {"name": "ja", "num_bytes": 465751, "num_examples": 2000}, {"name": "ko", "num_bytes": 465889, "num_examples": 2000}, {"name": "zh", "num_bytes": 461985, "num_examples": 2000}], "download_size": 2684661, "dataset_size": 2818167}, {"config_name": "open_llama_3b_v2", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 454405, "num_examples": 2000}, {"name": "es", "num_bytes": 475689, "num_examples": 2000}, {"name": "fr", "num_bytes": 476410, "num_examples": 2000}, {"name": "ja", "num_bytes": 447704, "num_examples": 2000}, {"name": "ko", "num_bytes": 435675, "num_examples": 2000}, {"name": "zh", "num_bytes": 466981, "num_examples": 2000}], "download_size": 2623358, "dataset_size": 2756864}, {"config_name": "Llama-2-7b-hf", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 468952, "num_examples": 2000}, {"name": "es", "num_bytes": 481463, "num_examples": 2000}, {"name": "fr", "num_bytes": 481620, "num_examples": 2000}, {"name": "ja", "num_bytes": 452968, "num_examples": 2000}, {"name": "ko", "num_bytes": 448819, "num_examples": 2000}, {"name": "zh", "num_bytes": 476890, "num_examples": 2000}], "download_size": 2677206, "dataset_size": 2810712}, {"config_name": "Llama-2-13b-hf", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, 
{"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 471040, "num_examples": 2000}, {"name": "es", "num_bytes": 480439, "num_examples": 2000}, {"name": "fr", "num_bytes": 479753, "num_examples": 2000}, {"name": "ja", "num_bytes": 457856, "num_examples": 2000}, {"name": "ko", "num_bytes": 459972, "num_examples": 2000}, {"name": "zh", "num_bytes": 478780, "num_examples": 2000}], "download_size": 2694334, "dataset_size": 2827840}, {"config_name": "Llama-2-7b-chat-hf", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 429595, "num_examples": 2000}, {"name": "es", "num_bytes": 395137, "num_examples": 2000}, {"name": "fr", "num_bytes": 338615, "num_examples": 2000}, {"name": "ja", "num_bytes": 448313, "num_examples": 2000}, {"name": "ko", "num_bytes": 429424, "num_examples": 2000}, {"name": "zh", "num_bytes": 425094, "num_examples": 2000}], "download_size": 2332672, "dataset_size": 2466178}, {"config_name": "Llama-2-13b-chat-hf", "features": [{"name": "id", "dtype": "int32"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "de", "num_bytes": 476183, "num_examples": 2000}, {"name": "es", "num_bytes": 481248, "num_examples": 2000}, {"name": "fr", "num_bytes": 480349, "num_examples": 2000}, {"name": "ja", "num_bytes": 475454, "num_examples": 2000}, {"name": "ko", "num_bytes": 482906, "num_examples": 2000}, {"name": "zh", "num_bytes": 492532, "num_examples": 2000}], "download_size": 2755166, "dataset_size": 2888672}]}
|
2023-07-21T09:18:49+00:00
|
f66e758b72fd2f5996b260ecdb89aff6dfccfbbf
|
# Dataset Card for "xnli"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nyu.edu/projects/bowman/xnli/](https://www.nyu.edu/projects/bowman/xnli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7.74 GB
- **Size of the generated dataset:** 3.23 GB
- **Total amount of disk used:** 10.97 GB
### Dataset Summary
XNLI is a subset of a few thousand examples from MNLI which has been translated
into 14 different languages (some relatively low-resource). As with MNLI, the goal
is to predict textual entailment (does sentence A imply, contradict, or neither
imply nor contradict sentence B) as a classification task: given two sentences,
predict one of three labels.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all_languages
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 1.61 GB
- **Total amount of disk used:** 2.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "{\"language\": [\"ar\", \"bg\", \"de\", \"el\", \"en\", \"es\", \"fr\", \"hi\", \"ru\", \"sw\", \"th\", \"tr\", \"ur\", \"vi\", \"zh\"], \"translation\": [\"احد اع...",
"label": 0,
"premise": "{\"ar\": \"واحدة من رقابنا ستقوم بتنفيذ تعليماتك كلها بكل دقة\", \"bg\": \"един от нашите номера ще ви даде инструкции .\", \"de\": \"Eine ..."
}
```
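In the `all_languages` config, each example bundles all 15 translations into a single record: judging from the cropped example above, `hypothesis` holds parallel `language`/`translation` lists while `premise` maps language codes to strings (this layout is inferred from the example, not a documented schema). A minimal sketch that regroups one such record into per-language (premise, hypothesis) pairs, with toy values standing in for the real translations:

```python
# Sketch: regroup an `all_languages`-style record into per-language
# (premise, hypothesis) pairs. Field layout mirrors the cropped
# example above; the translations here are invented placeholders.
def pairs_by_language(record):
    # hypothesis: parallel lists of language codes and translations
    hyp = dict(zip(record["hypothesis"]["language"],
                   record["hypothesis"]["translation"]))
    # premise: already a language -> string mapping
    return {lang: (record["premise"][lang], hyp[lang])
            for lang in record["premise"] if lang in hyp}

example = {
    "premise": {"en": "One of our number will carry out your instructions.",
                "de": "Eine von uns wird Ihre Anweisungen ausführen."},
    "hypothesis": {"language": ["en", "de"],
                   "translation": ["A member of my team will execute your orders.",
                                   "Ein Mitglied meines Teams wird Ihre Befehle ausführen."]},
    "label": 0,
}
print(pairs_by_language(example)["en"])
```

Each language then yields an ordinary monolingual premise/hypothesis pair with the shared label.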
#### ar
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 109.32 MB
- **Total amount of disk used:** 593.29 MB
An example of 'validation' looks as follows.
```
{
"hypothesis": "اتصل بأمه حالما أوصلته حافلة المدرسية.",
"label": 1,
"premise": "وقال، ماما، لقد عدت للمنزل."
}
```
#### bg
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 128.32 MB
- **Total amount of disk used:** 612.28 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "\"губиш нещата на следното ниво , ако хората си припомнят .\"...",
"label": 0,
"premise": "\"по време на сезона и предполагам , че на твоето ниво ще ги загубиш на следващото ниво , ако те решат да си припомнят отбора на ..."
}
```
#### de
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 86.17 MB
- **Total amount of disk used:** 570.14 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "Man verliert die Dinge auf die folgende Ebene , wenn sich die Leute erinnern .",
"label": 0,
"premise": "\"Du weißt , während der Saison und ich schätze , auf deiner Ebene verlierst du sie auf die nächste Ebene , wenn sie sich entschl..."
}
```
#### el
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 142.30 MB
- **Total amount of disk used:** 626.26 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "\"Τηλεφώνησε στη μαμά του μόλις το σχολικό λεωφορείο τον άφησε.\"...",
"label": 1,
"premise": "Και είπε, Μαμά, έφτασα στο σπίτι."
}
```
### Data Fields
The data fields are the same among all splits.
#### all_languages
- `premise`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `hypothesis`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### ar
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### bg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### de
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### el
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
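All configs share the same three-way label schema. A minimal sketch of the integer encoding, using the class names listed above:

```python
# Three-way label schema shared by every XNLI config.
XNLI_LABELS = ["entailment", "neutral", "contradiction"]

def label_name(idx: int) -> str:
    """Map a stored integer label to its class name."""
    return XNLI_LABELS[idx]

def label_id(name: str) -> int:
    """Map a class name back to its integer index."""
    return XNLI_LABELS.index(name)

print(label_name(0), label_id("contradiction"))  # entailment 2
```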
### Data Splits
| name |train |validation|test|
|-------------|-----:|---------:|---:|
|all_languages|392702| 2490|5010|
|ar |392702| 2490|5010|
|bg |392702| 2490|5010|
|de |392702| 2490|5010|
|el |392702| 2490|5010|
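Split sizes are identical across the configs shown. A quick sketch totalling the examples per config from the numbers in the table:

```python
# Per-config split sizes taken from the table above.
SPLIT_SIZES = {"train": 392702, "validation": 2490, "test": 5010}
total = sum(SPLIT_SIZES.values())
print(total)  # 400202
```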
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross-lingual Sentence Representations},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing},
year = {2018},
publisher = {Association for Computational Linguistics},
location = {Brussels, Belgium},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
|
juletxara/xnli_mt
|
[
"language:en",
"region:us"
] |
2023-05-23T10:00:18+00:00
|
{"language": ["en"], "paperswithcode_id": "xnli", "pretty_name": "Cross-lingual Natural Language Inference", "dataset_info": [{"config_name": "nllb-200-distilled-600M", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 851225, "num_examples": 5010}, {"name": "bg", "num_bytes": 860275, "num_examples": 5010}, {"name": "de", "num_bytes": 852016, "num_examples": 5010}, {"name": "el", "num_bytes": 852043, "num_examples": 5010}, {"name": "es", "num_bytes": 862194, "num_examples": 5010}, {"name": "fr", "num_bytes": 861464, "num_examples": 5010}, {"name": "hi", "num_bytes": 839337, "num_examples": 5010}, {"name": "ru", "num_bytes": 860117, "num_examples": 5010}, {"name": "sw", "num_bytes": 829257, "num_examples": 5010}, {"name": "th", "num_bytes": 845834, "num_examples": 5010}, {"name": "tr", "num_bytes": 840611, "num_examples": 5010}, {"name": "ur", "num_bytes": 829009, "num_examples": 5010}, {"name": "vi", "num_bytes": 846643, "num_examples": 5010}, {"name": "zh", "num_bytes": 851646, "num_examples": 5010}], "download_size": 11040341, "dataset_size": 11881671}, {"config_name": "nllb-200-distilled-1.3B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 851205, "num_examples": 5010}, {"name": "bg", "num_bytes": 857938, "num_examples": 5010}, {"name": "de", "num_bytes": 849800, "num_examples": 5010}, {"name": "el", "num_bytes": 849820, "num_examples": 5010}, {"name": "es", "num_bytes": 860984, "num_examples": 5010}, {"name": "fr", "num_bytes": 862545, "num_examples": 5010}, {"name": "hi", "num_bytes": 848151, "num_examples": 5010}, {"name": "ru", "num_bytes": 858069, 
"num_examples": 5010}, {"name": "sw", "num_bytes": 830347, "num_examples": 5010}, {"name": "th", "num_bytes": 841814, "num_examples": 5010}, {"name": "tr", "num_bytes": 840738, "num_examples": 5010}, {"name": "ur", "num_bytes": 828996, "num_examples": 5010}, {"name": "vi", "num_bytes": 848990, "num_examples": 5010}, {"name": "zh", "num_bytes": 855461, "num_examples": 5010}], "download_size": 11043528, "dataset_size": 11884858}, {"config_name": "nllb-200-1.3B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 855256, "num_examples": 5010}, {"name": "bg", "num_bytes": 861195, "num_examples": 5010}, {"name": "de", "num_bytes": 854679, "num_examples": 5010}, {"name": "el", "num_bytes": 852766, "num_examples": 5010}, {"name": "es", "num_bytes": 863689, "num_examples": 5010}, {"name": "fr", "num_bytes": 868360, "num_examples": 5010}, {"name": "hi", "num_bytes": 846414, "num_examples": 5010}, {"name": "ru", "num_bytes": 865308, "num_examples": 5010}, {"name": "sw", "num_bytes": 830998, "num_examples": 5010}, {"name": "th", "num_bytes": 846171, "num_examples": 5010}, {"name": "tr", "num_bytes": 845907, "num_examples": 5010}, {"name": "ur", "num_bytes": 838279, "num_examples": 5010}, {"name": "vi", "num_bytes": 848249, "num_examples": 5010}, {"name": "zh", "num_bytes": 846116, "num_examples": 5010}], "download_size": 11082057, "dataset_size": 11923387}, {"config_name": "nllb-200-3.3B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 863302, "num_examples": 5010}, {"name": "bg", "num_bytes": 863677, "num_examples": 5010}, {"name": "de", "num_bytes": 857147, "num_examples": 
5010}, {"name": "el", "num_bytes": 856383, "num_examples": 5010}, {"name": "es", "num_bytes": 866137, "num_examples": 5010}, {"name": "fr", "num_bytes": 871853, "num_examples": 5010}, {"name": "hi", "num_bytes": 857305, "num_examples": 5010}, {"name": "ru", "num_bytes": 869523, "num_examples": 5010}, {"name": "sw", "num_bytes": 839567, "num_examples": 5010}, {"name": "th", "num_bytes": 850312, "num_examples": 5010}, {"name": "tr", "num_bytes": 851657, "num_examples": 5010}, {"name": "ur", "num_bytes": 832903, "num_examples": 5010}, {"name": "vi", "num_bytes": 856479, "num_examples": 5010}, {"name": "zh", "num_bytes": 853093, "num_examples": 5010}], "download_size": 11148008, "dataset_size": 11989338}, {"config_name": "xglm-564M", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 789329, "num_examples": 5010}, {"name": "bg", "num_bytes": 846003, "num_examples": 5010}, {"name": "de", "num_bytes": 781577, "num_examples": 5010}, {"name": "el", "num_bytes": 1069000, "num_examples": 5010}, {"name": "es", "num_bytes": 852488, "num_examples": 5010}, {"name": "fr", "num_bytes": 860951, "num_examples": 5010}, {"name": "hi", "num_bytes": 849698, "num_examples": 5010}, {"name": "ru", "num_bytes": 898706, "num_examples": 5010}, {"name": "sw", "num_bytes": 842743, "num_examples": 5010}, {"name": "th", "num_bytes": 1098847, "num_examples": 5010}, {"name": "tr", "num_bytes": 788523, "num_examples": 5010}, {"name": "ur", "num_bytes": 786383, "num_examples": 5010}, {"name": "vi", "num_bytes": 827304, "num_examples": 5010}, {"name": "zh", "num_bytes": 1083312, "num_examples": 5010}], "download_size": 11533534, "dataset_size": 12374864}, {"config_name": "xglm-1.7B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": 
{"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 788487, "num_examples": 5010}, {"name": "bg", "num_bytes": 863627, "num_examples": 5010}, {"name": "de", "num_bytes": 824591, "num_examples": 5010}, {"name": "el", "num_bytes": 870729, "num_examples": 5010}, {"name": "es", "num_bytes": 856025, "num_examples": 5010}, {"name": "fr", "num_bytes": 877381, "num_examples": 5010}, {"name": "hi", "num_bytes": 973947, "num_examples": 5010}, {"name": "ru", "num_bytes": 840252, "num_examples": 5010}, {"name": "sw", "num_bytes": 784472, "num_examples": 5010}, {"name": "th", "num_bytes": 821323, "num_examples": 5010}, {"name": "tr", "num_bytes": 747863, "num_examples": 5010}, {"name": "ur", "num_bytes": 855280, "num_examples": 5010}, {"name": "vi", "num_bytes": 807745, "num_examples": 5010}, {"name": "zh", "num_bytes": 801384, "num_examples": 5010}], "download_size": 10871776, "dataset_size": 11713106}, {"config_name": "xglm-2.9B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 791983, "num_examples": 5010}, {"name": "bg", "num_bytes": 856898, "num_examples": 5010}, {"name": "de", "num_bytes": 833316, "num_examples": 5010}, {"name": "el", "num_bytes": 859152, "num_examples": 5010}, {"name": "es", "num_bytes": 875232, "num_examples": 5010}, {"name": "fr", "num_bytes": 880335, "num_examples": 5010}, {"name": "hi", "num_bytes": 754460, "num_examples": 5010}, {"name": "ru", "num_bytes": 839486, "num_examples": 5010}, {"name": "sw", "num_bytes": 807832, "num_examples": 5010}, {"name": "th", "num_bytes": 792237, "num_examples": 5010}, {"name": "tr", "num_bytes": 744151, "num_examples": 5010}, {"name": "ur", "num_bytes": 763715, "num_examples": 5010}, {"name": "vi", "num_bytes": 825575, "num_examples": 5010}, 
{"name": "zh", "num_bytes": 803580, "num_examples": 5010}], "download_size": 10586622, "dataset_size": 11427952}, {"config_name": "xglm-4.5B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 825461, "num_examples": 5010}, {"name": "bg", "num_bytes": 861124, "num_examples": 5010}, {"name": "de", "num_bytes": 847007, "num_examples": 5010}, {"name": "el", "num_bytes": 875762, "num_examples": 5010}, {"name": "es", "num_bytes": 871840, "num_examples": 5010}, {"name": "fr", "num_bytes": 882720, "num_examples": 5010}, {"name": "hi", "num_bytes": 826770, "num_examples": 5010}, {"name": "ru", "num_bytes": 865706, "num_examples": 5010}, {"name": "sw", "num_bytes": 807688, "num_examples": 5010}, {"name": "th", "num_bytes": 827077, "num_examples": 5010}, {"name": "tr", "num_bytes": 836039, "num_examples": 5010}, {"name": "ur", "num_bytes": 799881, "num_examples": 5010}, {"name": "vi", "num_bytes": 846648, "num_examples": 5010}, {"name": "zh", "num_bytes": 836279, "num_examples": 5010}], "download_size": 10968672, "dataset_size": 11810002}, {"config_name": "xglm-7.5B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 818748, "num_examples": 5010}, {"name": "bg", "num_bytes": 853616, "num_examples": 5010}, {"name": "de", "num_bytes": 833462, "num_examples": 5010}, {"name": "el", "num_bytes": 860997, "num_examples": 5010}, {"name": "es", "num_bytes": 855814, "num_examples": 5010}, {"name": "fr", "num_bytes": 859597, "num_examples": 5010}, {"name": "hi", "num_bytes": 788540, "num_examples": 5010}, {"name": "ru", "num_bytes": 846308, "num_examples": 5010}, {"name": "sw", "num_bytes": 
813638, "num_examples": 5010}, {"name": "th", "num_bytes": 793438, "num_examples": 5010}, {"name": "tr", "num_bytes": 753138, "num_examples": 5010}, {"name": "ur", "num_bytes": 811513, "num_examples": 5010}, {"name": "vi", "num_bytes": 829040, "num_examples": 5010}, {"name": "zh", "num_bytes": 823480, "num_examples": 5010}], "download_size": 10699999, "dataset_size": 11541329}, {"config_name": "bloom-560m", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 793192, "num_examples": 5010}, {"name": "bg", "num_bytes": 1293032, "num_examples": 5026}, {"name": "de", "num_bytes": 853267, "num_examples": 5011}, {"name": "el", "num_bytes": 853650, "num_examples": 5028}, {"name": "es", "num_bytes": 790401, "num_examples": 5019}, {"name": "fr", "num_bytes": 785706, "num_examples": 5022}, {"name": "hi", "num_bytes": 815413, "num_examples": 5020}, {"name": "ru", "num_bytes": 1119100, "num_examples": 5035}, {"name": "sw", "num_bytes": 1283629, "num_examples": 5010}, {"name": "th", "num_bytes": 1927388, "num_examples": 5010}, {"name": "tr", "num_bytes": 1136397, "num_examples": 5010}, {"name": "ur", "num_bytes": 806534, "num_examples": 5050}, {"name": "vi", "num_bytes": 810195, "num_examples": 5033}, {"name": "zh", "num_bytes": 895087, "num_examples": 5013}], "download_size": 13312268, "dataset_size": 14162991}, {"config_name": "bloom-1b1", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 772035, "num_examples": 5010}, {"name": "bg", "num_bytes": 838287, "num_examples": 5010}, {"name": "de", "num_bytes": 816688, "num_examples": 5010}, {"name": "el", "num_bytes": 757902, 
"num_examples": 5010}, {"name": "es", "num_bytes": 811192, "num_examples": 5010}, {"name": "fr", "num_bytes": 823552, "num_examples": 5010}, {"name": "hi", "num_bytes": 755051, "num_examples": 5010}, {"name": "ru", "num_bytes": 802154, "num_examples": 5010}, {"name": "sw", "num_bytes": 769220, "num_examples": 5010}, {"name": "th", "num_bytes": 855265, "num_examples": 5010}, {"name": "tr", "num_bytes": 1009235, "num_examples": 5010}, {"name": "ur", "num_bytes": 784984, "num_examples": 5010}, {"name": "vi", "num_bytes": 798443, "num_examples": 5010}, {"name": "zh", "num_bytes": 795561, "num_examples": 5010}], "download_size": 10548239, "dataset_size": 11389569}, {"config_name": "bloom-1b7", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 817013, "num_examples": 5010}, {"name": "bg", "num_bytes": 803575, "num_examples": 5010}, {"name": "de", "num_bytes": 811977, "num_examples": 5010}, {"name": "el", "num_bytes": 768757, "num_examples": 5010}, {"name": "es", "num_bytes": 834218, "num_examples": 5010}, {"name": "fr", "num_bytes": 844544, "num_examples": 5010}, {"name": "hi", "num_bytes": 780516, "num_examples": 5010}, {"name": "ru", "num_bytes": 856927, "num_examples": 5010}, {"name": "sw", "num_bytes": 745814, "num_examples": 5010}, {"name": "th", "num_bytes": 930774, "num_examples": 5010}, {"name": "tr", "num_bytes": 871417, "num_examples": 5010}, {"name": "ur", "num_bytes": 751069, "num_examples": 5010}, {"name": "vi", "num_bytes": 814194, "num_examples": 5010}, {"name": "zh", "num_bytes": 790631, "num_examples": 5010}], "download_size": 10580096, "dataset_size": 11421426}, {"config_name": "bloom-3b", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", 
"1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 819238, "num_examples": 5010}, {"name": "bg", "num_bytes": 822686, "num_examples": 5010}, {"name": "de", "num_bytes": 850318, "num_examples": 5010}, {"name": "el", "num_bytes": 809037, "num_examples": 5010}, {"name": "es", "num_bytes": 850349, "num_examples": 5010}, {"name": "fr", "num_bytes": 855581, "num_examples": 5010}, {"name": "hi", "num_bytes": 797905, "num_examples": 5010}, {"name": "ru", "num_bytes": 861096, "num_examples": 5010}, {"name": "sw", "num_bytes": 767209, "num_examples": 5010}, {"name": "th", "num_bytes": 820321, "num_examples": 5010}, {"name": "tr", "num_bytes": 881668, "num_examples": 5010}, {"name": "ur", "num_bytes": 810843, "num_examples": 5010}, {"name": "vi", "num_bytes": 828926, "num_examples": 5010}, {"name": "zh", "num_bytes": 793476, "num_examples": 5010}], "download_size": 10727323, "dataset_size": 11568653}, {"config_name": "bloom-7b1", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 834767, "num_examples": 5010}, {"name": "bg", "num_bytes": 848921, "num_examples": 5010}, {"name": "de", "num_bytes": 827646, "num_examples": 5010}, {"name": "el", "num_bytes": 886001, "num_examples": 5010}, {"name": "es", "num_bytes": 859775, "num_examples": 5010}, {"name": "fr", "num_bytes": 863548, "num_examples": 5010}, {"name": "hi", "num_bytes": 814484, "num_examples": 5010}, {"name": "ru", "num_bytes": 860392, "num_examples": 5010}, {"name": "sw", "num_bytes": 811380, "num_examples": 5010}, {"name": "th", "num_bytes": 775738, "num_examples": 5010}, {"name": "tr", "num_bytes": 747961, "num_examples": 5010}, {"name": "ur", "num_bytes": 836727, "num_examples": 5010}, {"name": "vi", "num_bytes": 836042, "num_examples": 5010}, {"name": "zh", "num_bytes": 814866, 
"num_examples": 5010}], "download_size": 10776918, "dataset_size": 11618248}, {"config_name": "llama-7B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 792437, "num_examples": 5010}, {"name": "bg", "num_bytes": 855365, "num_examples": 5010}, {"name": "de", "num_bytes": 844453, "num_examples": 5010}, {"name": "el", "num_bytes": 864748, "num_examples": 5010}, {"name": "es", "num_bytes": 871358, "num_examples": 5010}, {"name": "fr", "num_bytes": 882671, "num_examples": 5010}, {"name": "hi", "num_bytes": 791631, "num_examples": 5010}, {"name": "ru", "num_bytes": 853745, "num_examples": 5010}, {"name": "sw", "num_bytes": 753655, "num_examples": 5010}, {"name": "th", "num_bytes": 787365, "num_examples": 5010}, {"name": "tr", "num_bytes": 814193, "num_examples": 5010}, {"name": "ur", "num_bytes": 811987, "num_examples": 5010}, {"name": "vi", "num_bytes": 807334, "num_examples": 5010}, {"name": "zh", "num_bytes": 841441, "num_examples": 5010}], "download_size": 10731053, "dataset_size": 11572383}, {"config_name": "llama-13B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 833799, "num_examples": 5010}, {"name": "bg", "num_bytes": 850755, "num_examples": 5010}, {"name": "de", "num_bytes": 842498, "num_examples": 5010}, {"name": "el", "num_bytes": 853859, "num_examples": 5010}, {"name": "es", "num_bytes": 865884, "num_examples": 5010}, {"name": "fr", "num_bytes": 872326, "num_examples": 5010}, {"name": "hi", "num_bytes": 803350, "num_examples": 5010}, {"name": "ru", "num_bytes": 850066, "num_examples": 5010}, {"name": "sw", "num_bytes": 785595, "num_examples": 5010}, 
{"name": "th", "num_bytes": 794461, "num_examples": 5010}, {"name": "tr", "num_bytes": 789769, "num_examples": 5010}, {"name": "ur", "num_bytes": 813459, "num_examples": 5010}, {"name": "vi", "num_bytes": 783219, "num_examples": 5010}, {"name": "zh", "num_bytes": 828885, "num_examples": 5010}], "download_size": 10726595, "dataset_size": 11567925}, {"config_name": "RedPajama-INCITE-Base-3B-v1", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 815395, "num_examples": 5010}, {"name": "bg", "num_bytes": 870568, "num_examples": 5010}, {"name": "de", "num_bytes": 830593, "num_examples": 5010}, {"name": "el", "num_bytes": 887938, "num_examples": 5010}, {"name": "es", "num_bytes": 866523, "num_examples": 5010}, {"name": "fr", "num_bytes": 880668, "num_examples": 5010}, {"name": "hi", "num_bytes": 871126, "num_examples": 5010}, {"name": "ru", "num_bytes": 875379, "num_examples": 5010}, {"name": "sw", "num_bytes": 775459, "num_examples": 5010}, {"name": "th", "num_bytes": 829562, "num_examples": 5010}, {"name": "tr", "num_bytes": 813161, "num_examples": 5010}, {"name": "ur", "num_bytes": 812296, "num_examples": 5010}, {"name": "vi", "num_bytes": 824340, "num_examples": 5010}, {"name": "zh", "num_bytes": 892427, "num_examples": 5010}], "download_size": 11004105, "dataset_size": 11845435}, {"config_name": "RedPajama-INCITE-7B-Base", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 789074, "num_examples": 5010}, {"name": "bg", "num_bytes": 870916, "num_examples": 5010}, {"name": "de", "num_bytes": 845436, "num_examples": 5010}, {"name": "el", "num_bytes": 850780, "num_examples": 
5010}, {"name": "es", "num_bytes": 875677, "num_examples": 5010}, {"name": "fr", "num_bytes": 880989, "num_examples": 5010}, {"name": "hi", "num_bytes": 751526, "num_examples": 5010}, {"name": "ru", "num_bytes": 881090, "num_examples": 5010}, {"name": "sw", "num_bytes": 746100, "num_examples": 5010}, {"name": "th", "num_bytes": 685496, "num_examples": 5010}, {"name": "tr", "num_bytes": 770359, "num_examples": 5010}, {"name": "ur", "num_bytes": 708810, "num_examples": 5010}, {"name": "vi", "num_bytes": 735197, "num_examples": 5010}, {"name": "zh", "num_bytes": 848461, "num_examples": 5010}], "download_size": 10398581, "dataset_size": 11239911}, {"config_name": "llama-30B", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 860301, "num_examples": 5010}, {"name": "bg", "num_bytes": 863946, "num_examples": 5010}, {"name": "de", "num_bytes": 858009, "num_examples": 5010}, {"name": "el", "num_bytes": 874347, "num_examples": 5010}, {"name": "es", "num_bytes": 875007, "num_examples": 5010}, {"name": "fr", "num_bytes": 884764, "num_examples": 5010}, {"name": "hi", "num_bytes": 846950, "num_examples": 5010}, {"name": "ru", "num_bytes": 869708, "num_examples": 5010}, {"name": "sw", "num_bytes": 857197, "num_examples": 5010}, {"name": "th", "num_bytes": 847402, "num_examples": 5010}, {"name": "tr", "num_bytes": 825879, "num_examples": 5010}, {"name": "ur", "num_bytes": 860074, "num_examples": 5010}, {"name": "vi", "num_bytes": 862456, "num_examples": 5010}, {"name": "zh", "num_bytes": 849263, "num_examples": 5010}], "download_size": 11193973, "dataset_size": 12035303}, {"config_name": "open_llama_3b", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": 
"neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 705142, "num_examples": 5010}, {"name": "bg", "num_bytes": 875604, "num_examples": 5010}, {"name": "de", "num_bytes": 851525, "num_examples": 5010}, {"name": "el", "num_bytes": 739635, "num_examples": 5010}, {"name": "es", "num_bytes": 866291, "num_examples": 5010}, {"name": "fr", "num_bytes": 880556, "num_examples": 5010}, {"name": "hi", "num_bytes": 392659, "num_examples": 5010}, {"name": "ru", "num_bytes": 876933, "num_examples": 5010}, {"name": "sw", "num_bytes": 738299, "num_examples": 5010}, {"name": "th", "num_bytes": 1273724, "num_examples": 5010}, {"name": "tr", "num_bytes": 769184, "num_examples": 5010}, {"name": "ur", "num_bytes": 739162, "num_examples": 5010}, {"name": "vi", "num_bytes": 701661, "num_examples": 5010}, {"name": "zh", "num_bytes": 878129, "num_examples": 5010}], "download_size": 10447174, "dataset_size": 11288504}, {"config_name": "open_llama_7b", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 765568, "num_examples": 5010}, {"name": "bg", "num_bytes": 860978, "num_examples": 5010}, {"name": "de", "num_bytes": 839878, "num_examples": 5010}, {"name": "el", "num_bytes": 790038, "num_examples": 5010}, {"name": "es", "num_bytes": 862624, "num_examples": 5010}, {"name": "fr", "num_bytes": 871243, "num_examples": 5010}, {"name": "hi", "num_bytes": 328421, "num_examples": 5010}, {"name": "ru", "num_bytes": 867424, "num_examples": 5010}, {"name": "sw", "num_bytes": 784318, "num_examples": 5010}, {"name": "th", "num_bytes": 1133537, "num_examples": 5010}, {"name": "tr", "num_bytes": 770420, "num_examples": 5010}, {"name": "ur", "num_bytes": 739842, "num_examples": 5010}, {"name": "vi", "num_bytes": 767095, "num_examples": 5010}, {"name": "zh", "num_bytes": 840369, 
"num_examples": 5010}], "download_size": 10380425, "dataset_size": 11221755}, {"config_name": "open_llama_13b", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 855506, "num_examples": 5010}, {"name": "bg", "num_bytes": 860868, "num_examples": 5010}, {"name": "de", "num_bytes": 845896, "num_examples": 5010}, {"name": "el", "num_bytes": 789495, "num_examples": 5010}, {"name": "es", "num_bytes": 874595, "num_examples": 5010}, {"name": "fr", "num_bytes": 883531, "num_examples": 5010}, {"name": "hi", "num_bytes": 349430, "num_examples": 5010}, {"name": "ru", "num_bytes": 860441, "num_examples": 5010}, {"name": "sw", "num_bytes": 819611, "num_examples": 5010}, {"name": "th", "num_bytes": 1249012, "num_examples": 5010}, {"name": "tr", "num_bytes": 813974, "num_examples": 5010}, {"name": "ur", "num_bytes": 775914, "num_examples": 5010}, {"name": "vi", "num_bytes": 826589, "num_examples": 5010}, {"name": "zh", "num_bytes": 828483, "num_examples": 5010}], "download_size": 10792015, "dataset_size": 11633345}, {"config_name": "xgen-7b-4k-base", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 815916, "num_examples": 5010}, {"name": "bg", "num_bytes": 866698, "num_examples": 5010}, {"name": "de", "num_bytes": 845296, "num_examples": 5010}, {"name": "el", "num_bytes": 873279, "num_examples": 5010}, {"name": "es", "num_bytes": 867614, "num_examples": 5010}, {"name": "fr", "num_bytes": 878177, "num_examples": 5010}, {"name": "hi", "num_bytes": 795679, "num_examples": 5010}, {"name": "ru", "num_bytes": 870241, "num_examples": 5010}, {"name": "sw", "num_bytes": 815925, "num_examples": 
5010}, {"name": "th", "num_bytes": 680865, "num_examples": 5010}, {"name": "tr", "num_bytes": 808508, "num_examples": 5010}, {"name": "ur", "num_bytes": 755658, "num_examples": 5010}, {"name": "vi", "num_bytes": 798616, "num_examples": 5010}, {"name": "zh", "num_bytes": 839810, "num_examples": 5010}], "download_size": 10670952, "dataset_size": 11512282}, {"config_name": "xgen-7b-8k-base", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 822039, "num_examples": 5010}, {"name": "bg", "num_bytes": 866105, "num_examples": 5010}, {"name": "de", "num_bytes": 834487, "num_examples": 5010}, {"name": "el", "num_bytes": 871714, "num_examples": 5010}, {"name": "es", "num_bytes": 863765, "num_examples": 5010}, {"name": "fr", "num_bytes": 874570, "num_examples": 5010}, {"name": "hi", "num_bytes": 811916, "num_examples": 5010}, {"name": "ru", "num_bytes": 863980, "num_examples": 5010}, {"name": "sw", "num_bytes": 801837, "num_examples": 5010}, {"name": "th", "num_bytes": 773394, "num_examples": 5010}, {"name": "tr", "num_bytes": 812359, "num_examples": 5010}, {"name": "ur", "num_bytes": 762615, "num_examples": 5010}, {"name": "vi", "num_bytes": 845558, "num_examples": 5010}, {"name": "zh", "num_bytes": 840984, "num_examples": 5010}], "download_size": 10803993, "dataset_size": 11645323}, {"config_name": "xgen-7b-8k-inst", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 852293, "num_examples": 5010}, {"name": "bg", "num_bytes": 877290, "num_examples": 5010}, {"name": "de", "num_bytes": 843890, "num_examples": 5010}, {"name": "el", "num_bytes": 900388, "num_examples": 5010}, {"name": 
"es", "num_bytes": 871938, "num_examples": 5010}, {"name": "fr", "num_bytes": 883776, "num_examples": 5010}, {"name": "hi", "num_bytes": 819611, "num_examples": 5010}, {"name": "ru", "num_bytes": 871868, "num_examples": 5010}, {"name": "sw", "num_bytes": 903297, "num_examples": 5010}, {"name": "th", "num_bytes": 781456, "num_examples": 5010}, {"name": "tr", "num_bytes": 888386, "num_examples": 5010}, {"name": "ur", "num_bytes": 835512, "num_examples": 5010}, {"name": "vi", "num_bytes": 881933, "num_examples": 5010}, {"name": "zh", "num_bytes": 886819, "num_examples": 5010}], "download_size": 11257127, "dataset_size": 12098457}, {"config_name": "open_llama_7b_v2", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 799618, "num_examples": 5010}, {"name": "bg", "num_bytes": 864517, "num_examples": 5010}, {"name": "de", "num_bytes": 844605, "num_examples": 5010}, {"name": "el", "num_bytes": 867881, "num_examples": 5010}, {"name": "es", "num_bytes": 872871, "num_examples": 5010}, {"name": "fr", "num_bytes": 883623, "num_examples": 5010}, {"name": "hi", "num_bytes": 821085, "num_examples": 5010}, {"name": "ru", "num_bytes": 875313, "num_examples": 5010}, {"name": "sw", "num_bytes": 810855, "num_examples": 5010}, {"name": "th", "num_bytes": 756931, "num_examples": 5010}, {"name": "tr", "num_bytes": 832938, "num_examples": 5010}, {"name": "ur", "num_bytes": 776355, "num_examples": 5010}, {"name": "vi", "num_bytes": 841205, "num_examples": 5010}, {"name": "zh", "num_bytes": 836994, "num_examples": 5010}], "download_size": 10843461, "dataset_size": 11684791}, {"config_name": "polylm-1.7b", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": 
"contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 840312, "num_examples": 5010}, {"name": "bg", "num_bytes": 766907, "num_examples": 5010}, {"name": "de", "num_bytes": 846775, "num_examples": 5010}, {"name": "el", "num_bytes": 985392, "num_examples": 5010}, {"name": "es", "num_bytes": 850661, "num_examples": 5010}, {"name": "fr", "num_bytes": 872488, "num_examples": 5010}, {"name": "hi", "num_bytes": 947295, "num_examples": 5010}, {"name": "ru", "num_bytes": 823812, "num_examples": 5010}, {"name": "sw", "num_bytes": 639344, "num_examples": 5010}, {"name": "th", "num_bytes": 873714, "num_examples": 5010}, {"name": "tr", "num_bytes": 882916, "num_examples": 5010}, {"name": "ur", "num_bytes": 707398, "num_examples": 5010}, {"name": "vi", "num_bytes": 837592, "num_examples": 5010}, {"name": "zh", "num_bytes": 811983, "num_examples": 5010}], "download_size": 10845259, "dataset_size": 11686589}, {"config_name": "polylm-13b", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 856622, "num_examples": 5010}, {"name": "bg", "num_bytes": 872936, "num_examples": 5010}, {"name": "de", "num_bytes": 853814, "num_examples": 5010}, {"name": "el", "num_bytes": 792171, "num_examples": 5010}, {"name": "es", "num_bytes": 867823, "num_examples": 5010}, {"name": "fr", "num_bytes": 876800, "num_examples": 5010}, {"name": "hi", "num_bytes": 825863, "num_examples": 5010}, {"name": "ru", "num_bytes": 876390, "num_examples": 5010}, {"name": "sw", "num_bytes": 659651, "num_examples": 5010}, {"name": "th", "num_bytes": 848574, "num_examples": 5010}, {"name": "tr", "num_bytes": 801914, "num_examples": 5010}, {"name": "ur", "num_bytes": 750495, "num_examples": 5010}, {"name": "vi", "num_bytes": 847699, "num_examples": 5010}, {"name": "zh", "num_bytes": 823542, "num_examples": 5010}], 
"download_size": 10712964, "dataset_size": 11554294}, {"config_name": "polylm-multialpaca-13b", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 832229, "num_examples": 5010}, {"name": "bg", "num_bytes": 873130, "num_examples": 5010}, {"name": "de", "num_bytes": 846302, "num_examples": 5010}, {"name": "el", "num_bytes": 846617, "num_examples": 5010}, {"name": "es", "num_bytes": 861183, "num_examples": 5010}, {"name": "fr", "num_bytes": 863929, "num_examples": 5010}, {"name": "hi", "num_bytes": 938018, "num_examples": 5010}, {"name": "ru", "num_bytes": 866081, "num_examples": 5010}, {"name": "sw", "num_bytes": 802054, "num_examples": 5010}, {"name": "th", "num_bytes": 836126, "num_examples": 5010}, {"name": "tr", "num_bytes": 799768, "num_examples": 5010}, {"name": "ur", "num_bytes": 909124, "num_examples": 5010}, {"name": "vi", "num_bytes": 842588, "num_examples": 5010}, {"name": "zh", "num_bytes": 823529, "num_examples": 5010}], "download_size": 11099348, "dataset_size": 11940678}, {"config_name": "open_llama_3b_v2", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 692849, "num_examples": 5010}, {"name": "bg", "num_bytes": 852675, "num_examples": 5010}, {"name": "de", "num_bytes": 835619, "num_examples": 5010}, {"name": "el", "num_bytes": 834201, "num_examples": 5010}, {"name": "es", "num_bytes": 873160, "num_examples": 5010}, {"name": "fr", "num_bytes": 881098, "num_examples": 5010}, {"name": "hi", "num_bytes": 726395, "num_examples": 5010}, {"name": "ru", "num_bytes": 853657, "num_examples": 5010}, {"name": "sw", "num_bytes": 690930, "num_examples": 5010}, {"name": 
"th", "num_bytes": 724712, "num_examples": 5010}, {"name": "tr", "num_bytes": 755625, "num_examples": 5010}, {"name": "ur", "num_bytes": 753648, "num_examples": 5010}, {"name": "vi", "num_bytes": 795981, "num_examples": 5010}, {"name": "zh", "num_bytes": 844200, "num_examples": 5010}], "download_size": 10273420, "dataset_size": 11114750}, {"config_name": "Llama-2-7b-hf", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 833964, "num_examples": 5010}, {"name": "bg", "num_bytes": 867408, "num_examples": 5010}, {"name": "de", "num_bytes": 852305, "num_examples": 5010}, {"name": "el", "num_bytes": 859363, "num_examples": 5010}, {"name": "es", "num_bytes": 880162, "num_examples": 5010}, {"name": "fr", "num_bytes": 886400, "num_examples": 5010}, {"name": "hi", "num_bytes": 802665, "num_examples": 5010}, {"name": "ru", "num_bytes": 868568, "num_examples": 5010}, {"name": "sw", "num_bytes": 775118, "num_examples": 5010}, {"name": "th", "num_bytes": 774722, "num_examples": 5010}, {"name": "tr", "num_bytes": 810268, "num_examples": 5010}, {"name": "ur", "num_bytes": 786428, "num_examples": 5010}, {"name": "vi", "num_bytes": 841904, "num_examples": 5010}, {"name": "zh", "num_bytes": 837126, "num_examples": 5010}], "download_size": 10835071, "dataset_size": 11676401}, {"config_name": "Llama-2-13b-hf", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 838926, "num_examples": 5010}, {"name": "bg", "num_bytes": 864619, "num_examples": 5010}, {"name": "de", "num_bytes": 847106, "num_examples": 5010}, {"name": "el", "num_bytes": 858400, "num_examples": 5010}, {"name": "es", "num_bytes": 
873274, "num_examples": 5010}, {"name": "fr", "num_bytes": 878414, "num_examples": 5010}, {"name": "hi", "num_bytes": 819446, "num_examples": 5010}, {"name": "ru", "num_bytes": 864307, "num_examples": 5010}, {"name": "sw", "num_bytes": 821998, "num_examples": 5010}, {"name": "th", "num_bytes": 812673, "num_examples": 5010}, {"name": "tr", "num_bytes": 812102, "num_examples": 5010}, {"name": "ur", "num_bytes": 831111, "num_examples": 5010}, {"name": "vi", "num_bytes": 838971, "num_examples": 5010}, {"name": "zh", "num_bytes": 835539, "num_examples": 5010}], "download_size": 10955556, "dataset_size": 11796886}, {"config_name": "Llama-2-7b-chat-hf", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 948578, "num_examples": 5010}, {"name": "bg", "num_bytes": 776309, "num_examples": 5010}, {"name": "de", "num_bytes": 725534, "num_examples": 5010}, {"name": "el", "num_bytes": 956805, "num_examples": 5010}, {"name": "es", "num_bytes": 631915, "num_examples": 5010}, {"name": "fr", "num_bytes": 534372, "num_examples": 5010}, {"name": "hi", "num_bytes": 960220, "num_examples": 5010}, {"name": "ru", "num_bytes": 535448, "num_examples": 5010}, {"name": "sw", "num_bytes": 1001740, "num_examples": 5010}, {"name": "th", "num_bytes": 995206, "num_examples": 5010}, {"name": "tr", "num_bytes": 865992, "num_examples": 5010}, {"name": "ur", "num_bytes": 864017, "num_examples": 5010}, {"name": "vi", "num_bytes": 246890, "num_examples": 5010}, {"name": "zh", "num_bytes": 538232, "num_examples": 5010}], "download_size": 9739928, "dataset_size": 10581258}, {"config_name": "Llama-2-13b-chat-hf", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": 
"contradiction"}}}}], "splits": [{"name": "ar", "num_bytes": 932439, "num_examples": 5010}, {"name": "bg", "num_bytes": 877857, "num_examples": 5010}, {"name": "de", "num_bytes": 859893, "num_examples": 5010}, {"name": "el", "num_bytes": 910487, "num_examples": 5010}, {"name": "es", "num_bytes": 872553, "num_examples": 5010}, {"name": "fr", "num_bytes": 879291, "num_examples": 5010}, {"name": "hi", "num_bytes": 987002, "num_examples": 5010}, {"name": "ru", "num_bytes": 887918, "num_examples": 5010}, {"name": "sw", "num_bytes": 1021074, "num_examples": 5010}, {"name": "th", "num_bytes": 1054387, "num_examples": 5010}, {"name": "tr", "num_bytes": 900761, "num_examples": 5010}, {"name": "ur", "num_bytes": 1099374, "num_examples": 5010}, {"name": "vi", "num_bytes": 884472, "num_examples": 5010}, {"name": "zh", "num_bytes": 882394, "num_examples": 5010}], "download_size": 12208572, "dataset_size": 13049902}]}
|
2023-07-21T09:21:37+00:00
|
88e351a457e9a233677aac0a191c13337d9ea7b3
|
# Dataset Card for "flyte-slack-data-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Samhita/flyte-slack-data-new
|
[
"region:us"
] |
2023-05-23T10:33:28+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2391980, "num_examples": 3708}], "download_size": 1274797, "dataset_size": 2391980}}
|
2023-05-23T11:24:54+00:00
|
0c1ceb6b3c91387078df478bf1c6a67993e0d837
|
# Dataset Card for "456d4d56"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/456d4d56
|
[
"region:us"
] |
2023-05-23T10:34:40+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1338, "dataset_size": 186}}
|
2023-05-23T10:34:41+00:00
|
782915ace00ca8935a0ea55a4f67d4dd184577d2
|
# Dataset Card for "3eeea607"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/3eeea607
|
[
"region:us"
] |
2023-05-23T10:39:32+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1338, "dataset_size": 182}}
|
2023-05-23T10:39:35+00:00
|
8b1bd8f4ea2a31d39710039a6236475b2028e09a
|
# Dataset Card for "1d21656f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1d21656f
|
[
"region:us"
] |
2023-05-23T10:51:27+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1329, "dataset_size": 186}}
|
2023-05-23T10:51:28+00:00
|
a7b700ba0569e28ad248d676b34ea7e76b34a7a8
|
maconphillips/Warren-VT-Info
|
[
"license:mit",
"region:us"
] |
2023-05-23T11:17:09+00:00
|
{"license": "mit"}
|
2023-05-23T11:18:25+00:00
|
|
235ccb0af0ea17ebba1b9d1967a1125ede598bbb
|
# Dataset Card for "cd2a9a0a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/cd2a9a0a
|
[
"region:us"
] |
2023-05-23T11:27:43+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1336, "dataset_size": 186}}
|
2023-05-23T11:27:45+00:00
|
54a7f098e9d1c3afd1fdb0b8f79af343625c0aca
|
# Dataset Card for "card_with_first_commit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
librarian-bots/card_with_first_commit
|
[
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:fill-mask",
"size_categories:10K<n<100K",
"language:en",
"model cards",
"region:us"
] |
2023-05-23T11:33:46+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "feature-extraction", "fill-mask"], "pretty_name": "Model card READMEs with first commit information", "dataset_info": {"features": [{"name": "modelId", "dtype": "string"}, {"name": "tags", "sequence": "string"}, {"name": "pipeline_tag", "dtype": "string"}, {"name": "config", "struct": [{"name": "architectures", "sequence": "string"}, {"name": "model_type", "dtype": "string"}, {"name": "task_specific_params", "struct": [{"name": "conversational", "struct": [{"name": "max_length", "dtype": "float64"}]}, {"name": "summarization", "struct": [{"name": "early_stopping", "dtype": "bool"}, {"name": "length_penalty", "dtype": "float64"}, {"name": "max_length", "dtype": "float64"}, {"name": "min_length", "dtype": "float64"}, {"name": "no_repeat_ngram_size", "dtype": "float64"}, {"name": "num_beams", "dtype": "float64"}, {"name": "prefix", "dtype": "string"}]}, {"name": "text-generation", "struct": [{"name": "do_sample", "dtype": "bool"}, {"name": "max_length", "dtype": "float64"}]}, {"name": "translation_en_to_de", "struct": [{"name": "early_stopping", "dtype": "bool"}, {"name": "max_length", "dtype": "float64"}, {"name": "num_beams", "dtype": "float64"}, {"name": "prefix", "dtype": "string"}]}, {"name": "translation_en_to_fr", "struct": [{"name": "early_stopping", "dtype": "bool"}, {"name": "max_length", "dtype": "float64"}, {"name": "num_beams", "dtype": "float64"}, {"name": "prefix", "dtype": "string"}]}, {"name": "translation_en_to_ro", "struct": [{"name": "early_stopping", "dtype": "bool"}, {"name": "max_length", "dtype": "float64"}, {"name": "num_beams", "dtype": "float64"}, {"name": "prefix", "dtype": "string"}]}]}]}, {"name": "downloads", "dtype": "int64"}, {"name": "first_commit", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "card", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20198907.41971414, "num_examples": 30344}], "download_size": 25260494, 
"dataset_size": 20198907.41971414}, "tags": ["model cards"]}
|
2023-06-27T13:17:14+00:00
|
9e8132b5e7cfb6e08a0ef938b2c6de3efb4c64c5
|
# Dataset Card for "c9d65b85"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/c9d65b85
|
[
"region:us"
] |
2023-05-23T11:41:48+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 190, "num_examples": 10}], "download_size": 1329, "dataset_size": 190}}
|
2023-05-23T11:42:03+00:00
|
04866681065afd84ae64b022703216c47dcc76f0
|
# Dataset Card for "0e33ea6d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0e33ea6d
|
[
"region:us"
] |
2023-05-23T11:42:06+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 190, "num_examples": 10}], "download_size": 1329, "dataset_size": 190}}
|
2023-05-23T11:42:08+00:00
|
564635b1bfd733052d7df5073254cb74a9166be7
|
# Dataset Card for "db67d073"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/db67d073
|
[
"region:us"
] |
2023-05-23T11:42:08+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 190, "num_examples": 10}], "download_size": 1329, "dataset_size": 190}}
|
2023-05-23T11:42:10+00:00
|
ad3f4c315a577d5223a0f70a94d3b5ea5fa15567
|
# Dataset Card for "rotated_x"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aravind-selvam/rotated_x
|
[
"region:us"
] |
2023-05-23T11:51:38+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 112837730.0, "num_examples": 4000}, {"name": "validation", "num_bytes": 28685240.0, "num_examples": 1000}], "download_size": 140758572, "dataset_size": 141522970.0}}
|
2023-05-23T11:52:03+00:00
|
e7e69cd0af26092c9f69b4a6b4f64e39d804b3d1
|
# a1111-sd-webui-locon
An extension for loading LyCORIS models in sd-webui (including LoCon and LoHa).
# THIS EXTENSION IS NOT FOR ADDITIONAL NETWORK
### LyCORIS
https://github.com/KohakuBlueleaf/LyCORIS
### usage
Install the extension and use LoCon models the same way as LoRA models. <br>
Make sure your sd-webui has built-in LoRA support.

|
ngocuong/a1111-sd-webui-locon
|
[
"region:us"
] |
2023-05-23T11:51:49+00:00
|
{}
|
2023-05-23T12:13:08+00:00
|
91b3f6f333ced78ac721a907ecc6d73592f345e4
|
# Dataset Card for "c63c8cae"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/c63c8cae
|
[
"region:us"
] |
2023-05-23T11:55:34+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1338, "dataset_size": 186}}
|
2023-05-23T11:55:41+00:00
|
d46f601e6db5196c47f1c39c630a63f8cb92dc14
|
# Dataset Card for "1e690292"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1e690292
|
[
"region:us"
] |
2023-05-23T11:55:44+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1338, "dataset_size": 186}}
|
2023-05-23T11:55:46+00:00
|
acf0e7fd4890d71349d94a4d421ff26d03ed810c
|
# Dataset Card for "processed_cdi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
deetsadi/processed_cdi
|
[
"region:us"
] |
2023-05-23T12:14:47+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "conditioning_image", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11007178.0, "num_examples": 200}], "download_size": 10989469, "dataset_size": 11007178.0}}
|
2023-05-24T13:40:53+00:00
|
5d57c58223ef44bec03955333d8aeef66376bb55
|
# Dataset Card for "b602fbf2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/b602fbf2
|
[
"region:us"
] |
2023-05-23T12:16:11+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1335, "dataset_size": 184}}
|
2023-05-23T12:16:12+00:00
|
630b73ba0acb1e107894db3d718009edc0404fe6
|
# Dataset Card for "8ebe0fb3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/8ebe0fb3
|
[
"region:us"
] |
2023-05-23T12:16:29+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1339, "dataset_size": 186}}
|
2023-05-23T12:16:31+00:00
|
dcf411c5655ac7152364589209e521b9a131ae74
|
# Dataset Card for "5570368b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/5570368b
|
[
"region:us"
] |
2023-05-23T12:21:06+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1336, "dataset_size": 186}}
|
2023-05-23T12:21:07+00:00
|
037fffdba0c8fc3fa519d2ffed47f183667656b5
|
# Specifications of Dataset Download in Geom3D
We provide both the raw and processed data at [this HuggingFace link](https://huggingface.co/datasets/chao1224/Geom3D_data).
## PCQM4Mv2
```
mkdir -p pcqm4mv2/raw
cd pcqm4mv2/raw
wget http://ogb-data.stanford.edu/data/lsc/pcqm4m-v2-train.sdf.tar.gz
tar -xf pcqm4m-v2-train.sdf.tar.gz
wget http://ogb-data.stanford.edu/data/lsc/pcqm4m-v2.zip
unzip pcqm4m-v2.zip
mv pcqm4m-v2/raw/data.csv.gz .
rm pcqm4m-v2.zip
rm -rf pcqm4m-v2
```
## GEOM
```
wget https://dataverse.harvard.edu/api/access/datafile/4327252
mv 4327252 rdkit_folder.tar.gz
tar -xvf rdkit_folder.tar.gz
```
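If you prefer to stay in Python, the extraction step above (`tar -xvf rdkit_folder.tar.gz`) can be sketched with only the standard library; the helper name is ours, not part of GEOM:

```python
import tarfile
from pathlib import Path


def extract_tar_gz(archive: str, out_dir: str = ".") -> list:
    """Extract a .tar.gz archive (e.g. rdkit_folder.tar.gz above) into
    out_dir and return the names of the extracted members."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tf:
        tf.extractall(out)
        return tf.getnames()
```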
## Molecule3D
Download it via the Google Drive link [here](https://github.com/divelab/MoleculeX/tree/molx/Molecule3D).
## QM9
Automatically downloaded under folder `./QM9/raw`.
## MD17
Automatically downloaded under folder `./MD17`.
Around March 2023 (possibly earlier), the MD17 FTP site was updated and the previous dataset files went missing. We may need to keep a copy and upload a version to the website ourselves.
## rMD17
Download the dataset from [this link](https://figshare.com/articles/dataset/Revised_MD17_dataset_rMD17_/12672038?file=24013628), and put the file `12672038.zip` under `./rMD17` folder.
- `unzip 12672038.zip`
- `tar xjf rmd17.tar.bz2`
- `mv rmd17/npz_data .`
- `mv rmd17/splits .`
## COLL
We use this repo: `[email protected]:TUM-DAML/gemnet_pytorch.git`.
## LBA/PDBBind
```
mkdir -p lba/raw
mkdir -p lba/processed
cd lba/raw
# wget http://www.pdbbind.org.cn/download/pdbbind_v2015_refined_set.tar.gz
# wget http://www.pdbbind.org.cn/download/pdbbind_v2018_refined.tar.gz
# wget http://www.pdbbind.org.cn/download/pdbbind_v2019_refined.tar.gz
# wget https://zenodo.org/record/4914718/files/LBA-split-by-sequence-identity-30-indices.tar.gz
wget http://www.pdbbind.org.cn/download/PDBbind_v2020_refined.tar.gz
tar -xzvf PDBbind_v2020_refined.tar.gz
wget https://zenodo.org/record/4914718/files/LBA-split-by-sequence-identity-30.tar.gz
tar -xzvf LBA-split-by-sequence-identity-30.tar.gz
mv split-by-sequence-identity-30/indices ../processed/
mv split-by-sequence-identity-30/targets ../processed/
```
## LEP
```
mkdir -p lep/raw
mkdir -p lep/processed
cd lep/raw
wget https://zenodo.org/record/4914734/files/LEP-raw.tar.gz
tar -xzvf LEP-raw.tar.gz
wget https://zenodo.org/record/4914734/files/LEP-split-by-protein.tar.gz
tar -xzvf LEP-split-by-protein.tar.gz
```
## MoleculeNet dataset
```
wget http://snap.stanford.edu/gnn-pretrain/data/chem_dataset.zip
unzip chem_dataset.zip
dataset_list=(tox21 toxcast clintox bbbp sider muv hiv bace)
for dataset in "${dataset_list[@]}"; do
    mkdir -p molecule_datasets/"$dataset"/raw
    cp dataset/"$dataset"/raw/* molecule_datasets/"$dataset"/raw/
done
rm -rf dataset
wget -O malaria-processed.csv https://raw.githubusercontent.com/HIPS/neural-fingerprint/master/data/2015-06-03-malaria/malaria-processed.csv
mkdir -p ./molecule_datasets/malaria/raw
mv malaria-processed.csv ./molecule_datasets/malaria/raw/malaria.csv
wget -O cep-processed.csv https://raw.githubusercontent.com/HIPS/neural-fingerprint/master/data/2015-06-02-cep-pce/cep-processed.csv
mkdir -p ./molecule_datasets/cep/raw
mv cep-processed.csv ./molecule_datasets/cep/raw/cep.csv
```
## EC & FOLD
Check this [link](https://github.com/phermosilla/IEConv_proteins#download-the-preprocessed-datasets).
- `ProtFunct` is for task `EC`
- `HomologyTAPE` is for task `FOLD`
Or run:
- `cd EC; python download.py`
- `cd FOLD; python download.py`
## MatBench
```
mkdir MatBench
cd MatBench
wget https://figshare.com/ndownloader/files/17494820
mv 17494820 expt_is_metal.json.gz
gzip -d expt_is_metal.json.gz
wget https://figshare.com/ndownloader/files/17494814
mv 17494814 expt_gap.json.gz
gzip -d expt_gap.json.gz
wget https://figshare.com/ndownloader/files/17494637
mv 17494637 glass.json.gz
gzip -d glass.json.gz
wget https://figshare.com/ndownloader/articles/9755486/versions/2
mv 2 perovskites.zip
unzip perovskites.zip
rm perovskites.zip
rm 17494805_perovskites.json.gz
gzip -d 17494808_perovskites.json.gz
mv 17494808_perovskites.json perovskites.json
wget https://figshare.com/ndownloader/files/17476067
mv 17476067 dielectric.json.gz
gzip -d dielectric.json.gz
wget https://figshare.com/ndownloader/files/17476064
mv 17476064 log_gvrh.json.gz
gzip -d log_gvrh.json.gz
wget https://figshare.com/ndownloader/files/17476061
mv 17476061 log_kvrh.json.gz
gzip -d log_kvrh.json.gz
wget https://figshare.com/ndownloader/files/17476046
mv 17476046 jdft2d.json.gz
gzip -d jdft2d.json.gz
wget https://figshare.com/ndownloader/files/17476040
mv 17476040 steels.json.gz
gzip -d steels.json.gz
wget https://figshare.com/ndownloader/files/17476037
mv 17476037 phonons.json.gz
gzip -d phonons.json.gz
wget https://figshare.com/ndownloader/files/17476034
mv 17476034 mp_is_metal.json.gz
gzip -d mp_is_metal.json.gz
wget https://figshare.com/ndownloader/files/17476028
mv 17476028 mp_e_form.json.gz
gzip -d mp_e_form.json.gz
wget https://figshare.com/ndownloader/files/17084741
mv 17084741 mp_gap.json.gz
gzip -d mp_gap.json.gz
```
The dataset sizes can be checked against [MatBench v0.1](https://github.com/materialsproject/matbench/blob/main/matbench/matbench_v0.1_dataset_metadata.json).
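As a sketch for that size check (assuming the downloaded files follow the matminer/pandas "split" JSON layout with an `index`/`columns`/`data` triple — inspect one file first if your copy differs), the number of samples per task can be counted like this:

```python
import gzip
import json

def count_entries(path):
    """Return the number of rows in a MatBench-style JSON file.

    Assumes the pandas 'split' layout:
    {"index": [...], "columns": [...], "data": [...]}.
    Works for both plain .json and gzipped .json.gz files.
    """
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", encoding="utf-8") as f:
        obj = json.load(f)
    return len(obj["data"])
```

For example, `count_entries("expt_gap.json")` can be compared against the per-task sample counts in the metadata file linked above.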
## QMOF
```
mkdir QMOF
cd QMOF
wget https://figshare.com/ndownloader/articles/13147324/versions/13
mv 13 qmof_database_v13.zip
unzip qmof_database_v13.zip
unzip qmof_database.zip
cd qmof_database
python xyz_to_cifs.py
cd ../..
```
Or follow [this link](https://github.com/arosen93/QMOF/blob/main/benchmarks.md) for prediction on QMOF DB v13.
|
chao1224/Geom3D_data
|
[
"region:us"
] |
2023-05-23T12:22:13+00:00
|
{}
|
2023-08-10T18:52:17+00:00
|
0f45faf2f25efd519732179d9395250e13171b9b
|
# Dataset Card for "caricature-portraits-blip-captions-512"
## The 2D Caricature Dataset from [3D-CariGAN](https://github.com/qq775193759/3D-CariGAN) cropped to 512x512 and blip captioned
```
@article{ye2021caricature,
author = {Ye, Zipeng and Xia, Mengfei and Sun, Yanan and Yi, Ran and Yu, Minjing and Zhang, Juyong and Lai, Yu-Kun and Liu, Yong-Jin},
title = {3D-CariGAN: An End-to-End Solution to 3D Caricature Generation from Normal Face Photos},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2021},
doi={10.1109/TVCG.2021.3126659},
}
```
|
Norod78/caricature-portraits-blip-captions-512
|
[
"size_categories:1K<n<10K",
"license:cc-by-nc-sa-4.0",
"text-to-image",
"region:us"
] |
2023-05-23T12:30:51+00:00
|
{"license": "cc-by-nc-sa-4.0", "size_categories": ["1K<n<10K"], "pretty_name": "Caricature portraits - Blip captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1663841422.423, "num_examples": 5619}], "download_size": 1662924830, "dataset_size": 1663841422.423}, "tags": ["text-to-image"]}
|
2023-05-23T12:40:26+00:00
|
390e43722c4bf2a2e1a3b2c2078b1c9925b7edb4
|
# Dataset Card for "hotel_reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
coeuslearning/hotel_reviews
|
[
"task_categories:text-generation",
"task_categories:question-answering",
"language:en",
"license:openrail",
"art",
"region:us"
] |
2023-05-23T12:31:20+00:00
|
{"language": ["en"], "license": "openrail", "task_categories": ["text-generation", "question-answering"], "pretty_name": "Hotel Reviews", "dataset_info": {"features": [{"name": "hotel", "dtype": "string"}, {"name": "city", "dtype": "string"}, {"name": "review", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 105}], "download_size": 21186, "dataset_size": 53555}, "tags": ["art"]}
|
2023-05-23T15:37:45+00:00
|
ed11bfa6aa5955342e1829e84d8cd0b47ed0340d
|
# Dataset Card for "61d0c3df"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/61d0c3df
|
[
"region:us"
] |
2023-05-23T12:37:01+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1325, "dataset_size": 186}}
|
2023-05-23T12:37:02+00:00
|
74536929806d85a4b519cae5c6277bb64c664452
|
mC4 - Thai (Clean) - Size(ss) 187M Tokens (6.67% of mC4-th-clean, ~2.8B Tokens)
---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 1881070925
num_examples: 494539
download_size: 739477768
dataset_size: 1881070925
---
|
Wiritpol/mC4-th-clean
|
[
"size_categories:100K<n<1M",
"language:th",
"license:apache-2.0",
"region:us"
] |
2023-05-23T12:41:21+00:00
|
{"language": ["th"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "pretty_name": "mC4-th-size(ss)"}
|
2023-05-23T15:55:56+00:00
|
1dbb3d7dd610f0e0283ea234220f5cb7b5cc17d1
|
# Dataset Card for "all_datasets_wikis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
StivenLancheros/all_datasets_wikis
|
[
"region:us"
] |
2023-05-23T12:42:31+00:00
|
{"dataset_info": {"features": [{"name": "src_title", "dtype": "string"}, {"name": "tgt_title", "dtype": "string"}, {"name": "src_summary", "dtype": "string"}, {"name": "tgt_summary", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gem_id", "dtype": "string"}, {"name": "gem_parent_id", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "src_document", "sequence": [{"name": "title", "dtype": "string"}, {"name": "section_level", "dtype": "string"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 6735593897, "num_examples": 440000}], "download_size": 2531579730, "dataset_size": 6735593897}}
|
2023-05-23T14:02:26+00:00
|
e0005b73a81ec297cf360d0d632a1c96ff32fff0
|
# Dataset Card for "717aa708"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/717aa708
|
[
"region:us"
] |
2023-05-23T12:43:14+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1337, "dataset_size": 184}}
|
2023-05-23T12:43:15+00:00
|
25793a6bf77db37c68c7b4b1668a4b17d1c9022a
|
# Dataset Card for "0c620523"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0c620523
|
[
"region:us"
] |
2023-05-23T12:48:12+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1344, "dataset_size": 182}}
|
2023-05-23T12:48:15+00:00
|
6e3c5e3d55e0ea7fb4cdbe3a06235e6ca0215f81
|
# Dataset Card for "6493f099"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/6493f099
|
[
"region:us"
] |
2023-05-23T12:49:02+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1337, "dataset_size": 186}}
|
2023-05-23T12:49:04+00:00
|
3fc104a31173273d5c236f2964df2d0baf84577a
|
# Dataset Card for "b5c4c9cc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/b5c4c9cc
|
[
"region:us"
] |
2023-05-23T13:04:08+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1338, "dataset_size": 182}}
|
2023-05-23T13:04:09+00:00
|
6235df6e2bf6409610314615dba967549f709871
|
# Dataset Card for "a9fc8709"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/a9fc8709
|
[
"region:us"
] |
2023-05-23T13:04:10+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1338, "dataset_size": 182}}
|
2023-05-23T13:04:12+00:00
|
422f88614e88a9691bcbaf51bc247ce94a624ca7
|
# Dataset Card for "ff300641-e4f6-4f20-ba81-75560448759e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
burtenshaw/ff300641-e4f6-4f20-ba81-75560448759e
|
[
"region:us"
] |
2023-05-23T13:07:26+00:00
|
{"dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 393401, "num_examples": 4815}], "download_size": 0, "dataset_size": 393401}}
|
2023-05-24T06:40:30+00:00
|
3d5ecf171812d8b08c24f866ccb7683511e237f9
|
# Dataset Card for "11b118ffaa2b01fba512f522e56d01dc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
winglian/11b118ffaa2b01fba512f522e56d01dc_full
|
[
"region:us"
] |
2023-05-23T13:13:51+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2248973632, "num_examples": 112311}], "download_size": 523726957, "dataset_size": 2248973632}}
|
2023-05-23T13:14:25+00:00
|
25d98706c6a2473dc020b7b1cb0ae7393f61bac8
|
# Dataset Card for "card_with_first_commit_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/card_with_first_commit_embedded
|
[
"region:us"
] |
2023-05-23T13:19:35+00:00
|
{"dataset_info": {"features": [{"name": "modelId", "dtype": "string"}, {"name": "tags", "sequence": "string"}, {"name": "pipeline_tag", "dtype": "string"}, {"name": "config", "struct": [{"name": "architectures", "sequence": "string"}, {"name": "model_type", "dtype": "string"}, {"name": "task_specific_params", "struct": [{"name": "conversational", "struct": [{"name": "max_length", "dtype": "float64"}]}, {"name": "summarization", "struct": [{"name": "early_stopping", "dtype": "bool"}, {"name": "length_penalty", "dtype": "float64"}, {"name": "max_length", "dtype": "float64"}, {"name": "min_length", "dtype": "float64"}, {"name": "no_repeat_ngram_size", "dtype": "float64"}, {"name": "num_beams", "dtype": "float64"}, {"name": "prefix", "dtype": "string"}]}, {"name": "text-generation", "struct": [{"name": "do_sample", "dtype": "bool"}, {"name": "max_length", "dtype": "float64"}]}, {"name": "translation_en_to_de", "struct": [{"name": "early_stopping", "dtype": "bool"}, {"name": "max_length", "dtype": "float64"}, {"name": "num_beams", "dtype": "float64"}, {"name": "prefix", "dtype": "string"}]}, {"name": "translation_en_to_fr", "struct": [{"name": "early_stopping", "dtype": "bool"}, {"name": "max_length", "dtype": "float64"}, {"name": "num_beams", "dtype": "float64"}, {"name": "prefix", "dtype": "string"}]}, {"name": "translation_en_to_ro", "struct": [{"name": "early_stopping", "dtype": "bool"}, {"name": "max_length", "dtype": "float64"}, {"name": "num_beams", "dtype": "float64"}, {"name": "prefix", "dtype": "string"}]}]}]}, {"name": "downloads", "dtype": "int64"}, {"name": "first_commit", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "card", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 177783576, "num_examples": 30344}], "download_size": 137071859, "dataset_size": 177783576}}
|
2023-05-23T13:19:43+00:00
|
969c16bc69df6f019ccc1104716bea59a01ef34d
|
# AutoTrain Dataset for project: analytics-intent-reasoning
## Dataset Description
This dataset has been automatically processed by AutoTrain for project analytics-intent-reasoning.
### Languages
The BCP-47 code for the dataset's language is zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u9500\u552e\u91d1\u989d\u7684\u540c\u6bd4",
"target": 1
},
{
"text": "\u676d\u5dde\u54ea\u4e2a\u533a\u7684\u9500\u552e\u91d1\u989d\u6700\u9ad8",
"target": 1
}
]
```
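The `text` values above are stored with JSON `\uXXXX` escapes; decoding one sample back to readable Chinese takes only the standard library (the sample below is the first record from the snippet above):

```python
import json

# First sample from the snippet above, kept as raw JSON with \uXXXX escapes.
raw = '{"text": "\\u9500\\u552e\\u91d1\\u989d\\u7684\\u540c\\u6bd4", "target": 1}'

sample = json.loads(raw)  # json.loads resolves the \uXXXX escapes
print(sample["text"])     # -> 销售金额的同比
print(sample["target"])   # -> 1
```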
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['\u62a5\u8868\u6784\u5efa', '\u67e5\u8be2\u7c7b', '\u67e5\u8be2\u7c7b\u67e5\u8be2\u7c7b'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 72 |
| valid | 20 |
|
paiyun-huang/autotrain-data-analytics-intent-reasoning
|
[
"task_categories:text-classification",
"language:zh",
"region:us"
] |
2023-05-23T13:31:34+00:00
|
{"language": ["zh"], "task_categories": ["text-classification"]}
|
2023-05-24T08:42:08+00:00
|
f08a7da554cd69d5fa4febd614cc867ae8ba7724
|
# Dataset Card for "modeling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Gae8J/modeling
|
[
"task_categories:audio-classification",
"size_categories:n<1K",
"region:us"
] |
2023-05-23T13:40:21+00:00
|
{"size_categories": ["n<1K"], "task_categories": ["audio-classification"], "dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Bark", "1": "Bow-wow", "2": "Growling", "3": "Howl", "4": "Whimper", "5": "Yip"}}}}, {"name": "is_unknown", "dtype": "bool"}, {"name": "youtube_id", "dtype": "string"}, {"name": "youtube_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 360959501, "num_examples": 516}, {"name": "validation", "num_bytes": 44245407, "num_examples": 65}, {"name": "test", "num_bytes": 44926668, "num_examples": 61}], "download_size": 368397025, "dataset_size": 450131576}}
|
2023-05-26T10:55:55+00:00
|
4a1a7938f32aca7b1ec54d08ebbb5f6bddb0debe
|
# llm-japanese-dataset-vanilla
A Japanese chat dataset for building LLMs.
This is [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset) with the Japanese-English translation datasets and similar subsets removed.
It can mainly be used to tune Japanese LLMs and similar models for chat (instruction) response tasks, e.g. with LoRA.
Note: this dataset draws on a variety of publicly available language resources. We take this opportunity to thank everyone involved.
## Dataset details
For details on the data, see the following papers about [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset):
- Japanese: [https://jxiv.jst.go.jp/index.php/jxiv/preprint/view/383](https://jxiv.jst.go.jp/index.php/jxiv/preprint/view/383)
- English: [https://arxiv.org/abs/2305.12720](https://arxiv.org/abs/2305.12720)
- GitHub: [https://github.com/masanorihirano/llm-japanese-dataset](https://github.com/masanorihirano/llm-japanese-dataset)
- Latest information: [llm.msuzuki.me](https://llm.msuzuki.me).
If you would like to cite this dataset, please use the following:
```
@inproceedings{Suzuki2023-llm-japanese-vanilla,
title={{From Base to Conversational: Japanese Instruction Dataset and Tuning Large Language Models}},
author={Masahiro Suzuki and Masanori Hirano and Hiroki Sakaji},
booktitle={2023 IEEE International Conference on Big Data (BigData)},
year={2023},
pages={5684-5693},
doi={10.1109/BigData59044.2023.10386605}
}
```
For joint research, data provision, support, or other inquiries, contact [email protected].
## How to use
```python
from datasets import load_dataset
# latest version
dataset = load_dataset("izumi-lab/llm-japanese-dataset-vanilla")
# v0.1.0
dataset = load_dataset("izumi-lab/llm-japanese-dataset-vanilla", revision="0.1.0")
print(dataset.num_rows)
# {'train': 1811964}
# v1.0.0
dataset = load_dataset("izumi-lab/llm-japanese-dataset-vanilla", revision="1.0.0")
print(dataset.num_rows)
# {'train': 2515626}
```
v0.1.0 contains 1,811,964 records
v1.0.0 contains 2,515,626 records
v1.0.2 contains 2,492,588 records
For more details, see: https://github.com/masanorihirano/llm-japanese-dataset/tree/vanilla
## LICENSE
CC-BY-SA 4.0
(For more details, see: LICENSE, NOTICE.md, NOTICE2.md)
## Note
For the latest information, please visit [llm.msuzuki.me](https://llm.msuzuki.me).
|
izumi-lab/llm-japanese-dataset-vanilla
|
[
"size_categories:1M<n<10M",
"language:ja",
"license:cc-by-sa-4.0",
"arxiv:2305.12720",
"region:us"
] |
2023-05-23T13:45:27+00:00
|
{"language": ["ja"], "license": "cc-by-sa-4.0", "size_categories": ["1M<n<10M"]}
|
2024-02-17T16:17:18+00:00
|
7cecbbb3d2eb8c75c8571c53e5a5270cfd0c5a9e
|
# Dataset Card for "Cross_ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
P3ps/Cross_ner
|
[
"region:us"
] |
2023-05-23T13:55:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-academicjournal", "2": "I-academicjournal", "3": "B-album", "4": "I-album", "5": "B-algorithm", "6": "I-algorithm", "7": "B-astronomicalobject", "8": "I-astronomicalobject", "9": "B-award", "10": "I-award", "11": "B-band", "12": "I-band", "13": "B-book", "14": "I-book", "15": "B-chemicalcompound", "16": "I-chemicalcompound", "17": "B-chemicalelement", "18": "I-chemicalelement", "19": "B-conference", "20": "I-conference", "21": "B-country", "22": "I-country", "23": "B-discipline", "24": "I-discipline", "25": "B-election", "26": "I-election", "27": "B-enzyme", "28": "I-enzyme", "29": "B-event", "30": "I-event", "31": "B-field", "32": "I-field", "33": "B-literarygenre", "34": "I-literarygenre", "35": "B-location", "36": "I-location", "37": "B-magazine", "38": "I-magazine", "39": "B-metrics", "40": "I-metrics", "41": "B-misc", "42": "I-misc", "43": "B-musicalartist", "44": "I-musicalartist", "45": "B-musicalinstrument", "46": "I-musicalinstrument", "47": "B-musicgenre", "48": "I-musicgenre", "49": "B-organisation", "50": "I-organisation", "51": "B-person", "52": "I-person", "53": "B-poem", "54": "I-poem", "55": "B-politicalparty", "56": "I-politicalparty", "57": "B-politician", "58": "I-politician", "59": "B-product", "60": "I-product", "61": "B-programlang", "62": "I-programlang", "63": "B-protein", "64": "I-protein", "65": "B-researcher", "66": "I-researcher", "67": "B-scientist", "68": "I-scientist", "69": "B-song", "70": "I-song", "71": "B-task", "72": "I-task", "73": "B-theory", "74": "I-theory", "75": "B-university", "76": "I-university", "77": "B-writer", "78": "I-writer"}}}}], "splits": [{"name": "train", "num_bytes": 6995502.064669556, "num_examples": 20856}, {"name": "test", "num_bytes": 1749210.9353304438, "num_examples": 5215}], "download_size": 2609946, "dataset_size": 
8744713.0}}
|
2023-05-23T13:55:32+00:00
|
948dea517572d2dc5e4c47ac811dc1921d17d403
|
# Dataset Card for "12b9f855"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/12b9f855
|
[
"region:us"
] |
2023-05-23T14:56:58+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1337, "dataset_size": 186}}
|
2023-05-23T14:57:03+00:00
|
f8fcd3e73c4dae2b4d897dcec0dc4a9e1cf6809a
|
# Dataset Card for "1f482462"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1f482462
|
[
"region:us"
] |
2023-05-23T14:57:05+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1337, "dataset_size": 186}}
|
2023-05-23T14:57:09+00:00
|
c1531d853af85a553f099dfa43dcaa0aff4155b5
|
HEN10/doc_train
|
[
"license:openrail",
"region:us"
] |
2023-05-23T15:13:40+00:00
|
{"license": "openrail"}
|
2023-05-23T15:13:41+00:00
|
|
4182c1f7a15b974151f132bb69d7b59b887e047d
|
# Dataset Card for "dirty_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Veweew/dirty_small
|
[
"region:us"
] |
2023-05-23T15:17:15+00:00
|
{"dataset_info": {"features": [{"name": "identifier", "dtype": "string"}, {"name": "jsonl", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4210160114, "num_examples": 1668544}, {"name": "test", "num_bytes": 456326883, "num_examples": 203876}, {"name": "dev", "num_bytes": 463679193, "num_examples": 203342}], "download_size": 940114201, "dataset_size": 5130166190}}
|
2023-05-24T03:47:05+00:00
|
957ab7208bd96f7ec299a63edf7065c202c8d1d6
|
The dataset contains 20,703 records. It was created by removing from the original 27k dataset all items with a BLEU score of 0 or above 0.3388.
|
skupina-7/nlp-paraphrases-20k-pruned
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:sl",
"region:us"
] |
2023-05-23T15:23:09+00:00
|
{"language": ["sl"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "NLP Paraphrases 20k (pruned)"}
|
2023-05-23T15:34:15+00:00
|
5a76fbb20808d7c53ee64e2d467e9983fbd740b2
|
# ShareGPT-ko-74k
ShareGPT 90k의 cleaned 버전을 구글 번역기를 이용하여 번역하였습니다.\
원본 데이터셋은 [여기](https://github.com/lm-sys/FastChat/issues/90)에서 확인하실 수 있습니다.
Korean-translated version of ShareGPT-90k, translated with Google Translate.\
You can check the original dataset [here](https://github.com/lm-sys/FastChat/issues/90).
## Dataset Description
json 파일의 구조는 원본 데이터셋과 동일합니다.\
`*_uncleaned.json`은 원본 데이터셋을 번역하고 따로 후처리하지 않은 데이터셋입니다. (총 74k)\
`*_cleaned.json`은 위의 데이터에서 코드가 포함된 데이터를 러프하게 제거한 데이터셋입니다. (총 55k)\
**주의**: 코드는 번역되었을 수 있으므로 cleaned를 쓰시는 걸 추천합니다.
The structure of the dataset is the same with the original dataset.\
`*_uncleaned.json` are Korean-translated data, without any post-processing. (total 74k dialogues)\
`*_cleaned.json` are the post-processed version, from which dialogues containing code snippets have been removed. (total 55k dialogues)\
**WARNING**: Code snippets might have been translated into Korean. I recommend you use cleaned files.
## Licensing Information
GPT를 이용한 데이터셋이므로 OPENAI의 [약관](https://openai.com/policies/terms-of-use)을 따릅니다.\
그 외의 경우 [CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/)을 따릅니다.
The licensing status of the datasets follows [OPENAI Licence](https://openai.com/policies/terms-of-use) as it contains GPT-generated sentences.\
For all the other cases, the licensing status follows [CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/).
## Code
번역에 사용한 코드는 아래 리포지토리에서 확인 가능합니다. Check out the following repository to see the translation code used.\
https://github.com/dubuduru/ShareGPT-translation
You can use the repository to translate ShareGPT-like datasets into your preferred language.
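The "rough" removal of code-containing dialogues described above is not specified in detail; a minimal sketch of one plausible heuristic (dropping any dialogue in which a turn contains a Markdown code fence — the exact filter used to produce the cleaned files may differ) could look like:

```python
FENCE = "`" * 3  # Markdown code-fence marker (built up to avoid nesting in this card)

def drop_code_dialogues(dialogues):
    """Keep only dialogues whose turns contain no Markdown code fence.

    `dialogues` follows the ShareGPT layout:
    {"id": ..., "conversations": [{"from": ..., "value": ...}, ...]}.
    This is an illustrative heuristic, not the exact filter used for the dataset.
    """
    def has_code(d):
        return any(FENCE in turn.get("value", "")
                   for turn in d.get("conversations", []))
    return [d for d in dialogues if not has_code(d)]

sample = [
    {"id": "a", "conversations": [{"from": "human", "value": "안녕하세요"}]},
    {"id": "b", "conversations": [{"from": "gpt", "value": FENCE + "python\nprint(1)\n" + FENCE}]},
]
print([d["id"] for d in drop_code_dialogues(sample)])  # -> ['a']
```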
|
dbdu/ShareGPT-74k-ko
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:cc-by-2.0",
"conversation",
"chatgpt",
"gpt-3.5",
"region:us"
] |
2023-05-23T15:30:43+00:00
|
{"language": ["ko"], "license": "cc-by-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "ShareGPT-74k-ko", "tags": ["conversation", "chatgpt", "gpt-3.5"]}
|
2023-08-19T06:00:39+00:00
|
bc3256ec83363fbb6240ac819433c235a93b2993
|
# zh-tw-pythia-ta8000-v1-e1-tr_wiki_sg-001-c1024
This dataset is a part of the `zh-tw-llm` project.
* Tokenizer: `zh-tw-pythia-tokenizer-a8000-v1`
* Built with: `translations`, `wikipedia`, `sharegpt`
* Rows: `train` `305956`, `test` `225`
* Max length: `1024`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "sharegpt"], "preview_length": 128, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "use_template": "random", "rows_limit": 200000, "test_size": 100, "test_split_seed": 42}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.4}, "zh_Hant"], "rows_limit": 8000, "test_size": 0.02, "test_split_seed": 42, "test_rows_limit": 100}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia", "exclude": [{"content_length_longer_than": 1024}, {"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": ",是中國", "in": "markdown", "in_range": [0, 20]}, {"match": "中華人民共和國", "in": "markdown", "in_range": [0, 20]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 40]}], "rows_limit": 100000, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 30}}
```
|
zh-tw-llm-dv/zh-tw-pythia-ta8000-v1-e1-tr_wiki_sg-001-c1024
|
[
"region:us"
] |
2023-05-23T15:36:45+00:00
|
{"dataset_info": {"dataset_size": 1639035396.6266758, "download_size": 549430210, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}, {"dtype": "int64", "name": "length"}, {"dtype": "int64", "name": "messages_count"}], "splits": [{"name": "train", "num_bytes": 1637688841.0831976, "num_examples": 305956}, {"name": "test", "num_bytes": 1346555.543478261, "num_examples": 225}]}}
|
2023-05-23T15:42:28+00:00
|
9a18e2cc56d52040bb1a5b163856305033229f4d
|
coeuslearning/hotel_reviews_with_files
|
[
"task_categories:text-generation",
"language:en",
"license:openrail",
"region:us"
] |
2023-05-23T15:55:04+00:00
|
{"language": ["en"], "license": "openrail", "task_categories": ["text-generation"]}
|
2023-05-23T16:01:15+00:00
|
|
71568e9e1f158153b323d32a7ffa8e7047723cbf
|
# Dataset Card for "seahorse"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mrm8488/seahorse
|
[
"region:us"
] |
2023-05-23T16:00:51+00:00
|
{"dataset_info": {"features": [{"name": "gem_id", "dtype": "string"}, {"name": "worker_lang", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "question1", "dtype": "string"}, {"name": "question2", "dtype": "string"}, {"name": "question3", "dtype": "string"}, {"name": "question4", "dtype": "string"}, {"name": "question5", "dtype": "string"}, {"name": "question6", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26336341, "num_examples": 60980}], "download_size": 9335923, "dataset_size": 26336341}}
|
2023-05-23T16:00:56+00:00
|
1afca901cca09e2eede909aa7449ee6cce317afe
|
# Dataset Card for "VQAv2_minival_validation_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_minival_validation_sample
|
[
"region:us"
] |
2023-05-23T16:08:30+00:00
|
{"dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14", "sequence": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "id", "dtype": "int64"}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module", "list": [{"name": "attribute", 
"dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_without_filtering", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "captions_module_filter", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "clip_tags_LAION_ViT_H_14_2B", "sequence": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B", "sequence": "string"}, {"name": "Attributes_ViT_L_14_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_wo_openai", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_wo_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_bigG_14_2B_wo_openai", "sequence": "string"}, {"name": 
"clip_tags_LAION_ViT_bigG_14_2B_with_openai", "sequence": "string"}, {"name": "Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "clip_tags_ViT_B_16_with_openai", "sequence": "string"}, {"name": "blip_caption_beam_5_Salesforce_blip2_flan_t5_xxl", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "captions_all_patches", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}], "splits": [{"name": "validation", "num_bytes": 32906198.0, "num_examples": 100}], "download_size": 8017526, "dataset_size": 32906198.0}}
|
2023-05-23T16:25:07+00:00
|
bc7102068377f40b833129586a181c88dc1c93c3
|
# Dataset Card for "f936b644"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/f936b644
|
[
"region:us"
] |
2023-05-23T16:12:54+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1339, "dataset_size": 182}}
|
2023-05-23T16:12:56+00:00
|
5ca5d0a5b806bb86f3058a6a89341160849aa850
|
# VoxCeleb 1
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
## Verification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 1211 | 1211 | 40 |
| # of samples | 133777 | 14865 | 4874 |
## References
- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html
|
yangwang825/vox1-veri-full
|
[
"task_categories:audio-classification",
"audio",
"VoxCeleb",
"verification",
"region:us"
] |
2023-05-23T16:20:23+00:00
|
{"task_categories": ["audio-classification"], "tags": ["audio", "VoxCeleb", "verification"]}
|
2023-05-23T21:24:28+00:00
|
cf68f3a828d76d287443dd4e30b3e06bb648aa85
|
# Dataset Card for "medical_qa_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
stoddur/medical_qa_tokenized
|
[
"region:us"
] |
2023-05-23T16:20:25+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1487793528, "num_examples": 241839}], "download_size": 0, "dataset_size": 1487793528}}
|
2023-05-23T17:07:00+00:00
|
bb4743e12d5a9e857ff508dad5bd534d6f31a9e2
|
# Dataset Card for SF Nexus Extracted Features: Chapters Only
## Dataset Description
- **Homepage: https://sfnexus.io/**
- **Repository: https://github.com/SF-Nexus/extracted-features-notebooks**
- **Point of Contact: Alex Wermer-Colan**
### Dataset Summary
The SF Nexus Extracted Features Chapters and Chunks dataset contains text and metadata from a subset of 306 texts from our corpus of 403 mid/late-twentieth century science fiction books, originally digitized from Temple University Libraries' Paskow Science Fiction Collection.
After digitization, the books were cleaned using Abbyy FineReader.
Because this is a collection of copyrighted fiction, the books have been disaggregated.
To improve performance of topic modeling and other NLP tasks, each book has also been split into chapters. This dataset includes the subset of our corpus in which chapters were present.
Each row of this dataset contains one "chapter" of text as well as metadata about that text's title, author and publication.
### About the SF Nexus Corpus
The Paskow Science Fiction collection contains primarily materials from post-WWII, especially mass-market works of the New Wave era (often dated to 1964-1980).
The digitized texts have also been ingested into HathiTrust's repository for preservation and data curation; they are now viewable on HathiTrust's [Temple page](https://babel.hathitrust.org/cgi/ls?field1=ocr;q1=%2A;a=srchls;facet=htsource%3A%22Temple%20University%22;pn=4) for non-consumptive research.
For more information on the project to digitize and curate a corpus of "New Wave" science fiction, see Alex Wermer-Colan's post on the Temple University Scholars Studio blog, ["Building a New Wave Science Fiction Corpus."](https://sites.temple.edu/tudsc/2017/12/20/building-new-wave-science-fiction-corpus/).
### Languages
English
## Dataset Structure
This dataset contains disaggregated "chunks" of text from mid-twentieth century science fiction books and associated metadata. For example:
```
{'Unnamed': 1,
'Title': 'THEEARTHISNEAR',
'Author': 'PESEK',
'Pub Year': '1973',
'Chapter': '1',
'Text': '. . . A But Cadiz. Cape Cape Elijah, Elmo’s Gone Horn, I I Islands, Life, Lisbon, No, Or Palos Prophet So So St The Then Those Verde What Wonderful, a a a a a a a a about above against ago ago! all all along and and and and and and and and and and are aroused as as at at away becalmed beyond beyond beyond black bobbing bows breath breath, breeze broke broke. burning but but but calm came chewed continent continent, corks. crashing dark days days days did down drive dry else, end endless enough even ever far fire fire, flowed foaming for for fresh from from from gave got grew had—a hardly horizon horizon, horizon, hot hungry in in in in in in in it it it it, its itself knew. know lands lay leather, licking life life! life-giving like like little lives. long long longed loose madness masts men middle mist more motionless mouths, nameless nameless nameless names. night. no noon noonday not nothing ocean ocean, ocean, oceans of of of of of of of of of on on on one or our our our our our our our our our out over perhaps price quicksilver remember rigging, roared, roaring, rocks. rose rousing sails sails. salty say? sea sea sea sea shining slack slight smelling some somewhere somewhere spices spices. stayed storm strange strips struck sun. swell swollen tackle tasted teeth terrible, than that that the the the the the the the the the the the the the the the the the the the the the the the the the the their them them, then then then there thin thing, thirst thirst thirst those those though throats. time to to to to tongues tune unknown unknown up, us us wanted was was was was was water water water, water, waves way we we we we weeks, when when when while whistled who who wind wind, without yardarm yards.'
'Clean Text': ' a but cadiz cape cape elijah elmo s gone horn i i islands life lisbon no or palos prophet so so st the then those verde what wonderful a a a a a a a a about above against ago ago all all along and and and and and and and and and and are aroused as as at at away becalmed beyond beyond beyond black bobbing bows breath breath breeze broke broke burning but but but calm came chewed continent continent corks crashing dark days days days did down drive dry else end endless enough even ever far fire fire flowed foaming for for fresh from from from gave got grew had a hardly horizon horizon horizon hot hungry in in in in in in in it it it it its itself knew know lands lay leather licking life life life giving like like little lives long long longed loose madness masts men middle mist more motionless mouths nameless nameless nameless names night no noon noonday not nothing ocean ocean ocean oceans of of of of of of of of of on on on one or our our our our our our our our our out over perhaps price quicksilver remember rigging roared roaring rocks rose rousing sails sails salty say sea sea sea sea shining slack slight smelling some somewhere somewhere spices spices stayed storm strange strips struck sun swell swollen tackle tasted teeth terrible than that that the the the the the the the the the the the the the the the the the the the the the the the the the the their them them then then then there thin thing thirst thirst thirst those those though throats time to to to to tongues tune unknown unknown up us us wanted was was was was was water water water water waves way we we we we weeks when when when while whistled who who wind wind without yardarm yards'
'Chapter Word Count': '343',
}
```
### Data Fields
- **Unnamed: int** A unique id for the text
- **Title: str** The title of the book from which the text has been extracted
- **Author: str** The author of the book from which the text has been extracted
- **Pub Year: str** The date on which the book was published (first printing)
- **Chapter: int** The chapter in the book from which the text has been extracted
- **Text: str** The chunk of text extracted from the book
- **Clean Text: str** The chunk of text extracted from the book with lowercasing performed and punctuation, numbers and extra spaces removed
- **Chapter Word Count: int** The number of words the chunk of text contains
To Be Added:
- **summary: str** A brief summary of the book, if extracted from library records
- **pub_date: int** The date on which the book was published (first printing)
- **pub_city: int** The city in which the book was published (first printing)
- **lcgft_category: str** Information from the Library of Congress Genre/Form Terms for Library and Archival Materials, if known
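The normalisation described for **Clean Text** can be approximated in a few lines. This is an illustrative sketch of the transformation (lowercasing, stripping punctuation and digits, collapsing whitespace), not the exact script used to produce the field:

```python
import re

def clean_text(text: str) -> str:
    # Approximates the "Clean Text" field: lowercase, replace anything
    # that is not a letter or whitespace with a space, then collapse runs
    # of whitespace into single spaces.
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

clean_text("Cape Elijah, Elmo’s Gone!")  # -> "cape elijah elmo s gone"
```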
### Loading the Dataset
Use the following code to load the dataset in a Python environment (note: does not work with repo set to private)
```
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("SF-Corpus/EF_Chapters_Only")
```
Or just clone the dataset repo
```
git lfs install
git clone https://huggingface.co/datasets/SF-Corpus/EF_Chapters_Only
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
```
## Dataset Creation
### Curation Rationale
For an overview of our approach to data curation of literary texts, see Alex Wermer-Colan’s and James Kopaczewski’s article, “The New Wave of Digital Collections: Speculating on the Future of Library Curation” (2022).
### Source Data
The Loretta C. Duckworth Scholars Studio has partnered with Temple University Libraries’ Special Collections Research Center (SCRC) and Digital Library Initiatives (DLI) to build a digitized corpus of copyrighted science fiction literature. Besides its voluminous Urban Archives, the SCRC also houses a significant collection of science-fiction literature. The Paskow Science Fiction Collection was originally established in 1972, when Temple acquired 5,000 science fiction paperbacks from a Temple alumnus, the late David C. Paskow. Subsequent donations, including troves of fanzines and the papers of such sci-fi writers as John Varley and Stanley G. Weinbaum, expanded the collection over the last few decades, both in size and in the range of genres. SCRC staff and undergraduate student workers recently performed the usual comparison of gift titles against cataloged books, removing science fiction items that were exact duplicates of existing holdings. A refocusing of the SCRC’s collection development policy for science fiction de-emphasized fantasy and horror titles, so some titles in those genres were removed as well.
## Considerations for Using the Data
This data card only exhibits extracted features for copyrighted fiction; no copyrighted work is being made available for consumption. These digitized files are made accessible for purposes of education and research. Temple University Libraries have given attribution to rights holders when possible. If you hold the rights to materials in our digitized collections that are unattributed, please let us know so that we may maintain accurate information about these materials.
If you are a rights holder and are concerned that you have found material on this website for which you have not granted permission (or is not covered by a copyright exception under US copyright laws), you may request the removal of the material from our site by writing to [email protected].
For more information on non-consumptive research, check out HathiTrust Research Center’s Non-Consumptive Use Research Policy.
## Additional Information
### Dataset Curators
For a full list of contributors to the SF Nexus project, visit [https://sfnexus.io/people/](https://sfnexus.io/people/).
|
SF-Corpus/EF_Chapters_Only
|
[
"language:en",
"region:us"
] |
2023-05-23T16:29:27+00:00
|
{"language": ["en"], "pretty_name": "sf-nexus-ef-chapters-and-chunks"}
|
2023-12-05T23:31:52+00:00
|
c03d6c0404ce0e1ef69c54c3d6866d712ae91117
|
# Dataset Card for "diffusion_db_5k_train_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
myradeng/diffusion_db_5k_train_v1
|
[
"region:us"
] |
2023-05-23T16:37:14+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "uint32"}, {"name": "step", "dtype": "uint16"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "user_name", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us, tz=UTC]"}, {"name": "image_nsfw", "dtype": "float32"}, {"name": "prompt_nsfw", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 2078675154.4, "num_examples": 4000}], "download_size": 2057841600, "dataset_size": 2078675154.4}}
|
2023-05-23T16:38:35+00:00
|
9dd2ac600c96994780012a468612170a529cbad6
|
# Dataset Card for "diffusion_db_5k_val_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
myradeng/diffusion_db_5k_val_v1
|
[
"region:us"
] |
2023-05-23T16:38:37+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "uint32"}, {"name": "step", "dtype": "uint16"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "user_name", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us, tz=UTC]"}, {"name": "image_nsfw", "dtype": "float32"}, {"name": "prompt_nsfw", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 519559992.6, "num_examples": 1000}], "download_size": 519441334, "dataset_size": 519559992.6}}
|
2023-05-23T16:38:59+00:00
|
3bce726f59a23774cdaf759bbfa04bb2269cfbe1
|
henrydz/financial_dataset
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-23T16:41:04+00:00
|
{"license": "apache-2.0"}
|
2023-05-23T16:43:06+00:00
|
|
c2bcf5e25963f0fc2de4491957adb938c7e45956
|
allenai/layout_distribution_shift
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-23T16:58:24+00:00
|
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "words", "sequence": "string"}, {"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "labels", "sequence": "int64"}, {"name": "block_ids", "sequence": "int64"}, {"name": "line_ids", "sequence": "int64"}, {"name": "files", "dtype": "string"}], "splits": [{"name": "remapped_Acta_dev.json", "num_bytes": 9101699, "num_examples": 491}, {"name": "remapped_Acta_fewshot_finetune_10_pubs_dev_episode_0.json", "num_bytes": 27958, "num_examples": 2}, {"name": "remapped_Acta_fewshot_finetune_10_pubs_dev_episode_1.json", "num_bytes": 18241, "num_examples": 2}, {"name": "remapped_Acta_fewshot_finetune_10_pubs_dev_episode_2.json", "num_bytes": 45036, "num_examples": 2}, {"name": "remapped_Acta_fewshot_finetune_10_pubs_train_episode_0.json", "num_bytes": 2269140, "num_examples": 117}, {"name": "remapped_Acta_fewshot_finetune_10_pubs_train_episode_1.json", "num_bytes": 2011417, "num_examples": 102}, {"name": "remapped_Acta_fewshot_finetune_10_pubs_train_episode_2.json", "num_bytes": 2236354, "num_examples": 116}, {"name": "remapped_Acta_test.json", "num_bytes": 9450719, "num_examples": 495}, {"name": "remapped_Acta_train.json", "num_bytes": 71764609, "num_examples": 3848}, {"name": "remapped_BMC_dev.json", "num_bytes": 23369323, "num_examples": 503}, {"name": "remapped_BMC_fewshot_finetune_10_pubs_dev_episode_0.json", "num_bytes": 108560, "num_examples": 2}, {"name": "remapped_BMC_fewshot_finetune_10_pubs_dev_episode_1.json", "num_bytes": 67630, "num_examples": 2}, {"name": "remapped_BMC_fewshot_finetune_10_pubs_dev_episode_2.json", "num_bytes": 74671, "num_examples": 2}, {"name": "remapped_BMC_fewshot_finetune_10_pubs_train_episode_0.json", "num_bytes": 3696565, "num_examples": 82}, {"name": "remapped_BMC_fewshot_finetune_10_pubs_train_episode_1.json", "num_bytes": 3831159, "num_examples": 77}, {"name": "remapped_BMC_fewshot_finetune_10_pubs_train_episode_2.json", "num_bytes": 4578916, "num_examples": 96}, 
{"name": "remapped_BMC_test.json", "num_bytes": 25850198, "num_examples": 535}, {"name": "remapped_BMC_train.json", "num_bytes": 216531051, "num_examples": 4628}, {"name": "remapped_PLoS_dev.json", "num_bytes": 78334040, "num_examples": 1499}, {"name": "remapped_PLoS_fewshot_finetune_10_pubs_dev_episode_0.json", "num_bytes": 93335, "num_examples": 2}, {"name": "remapped_PLoS_fewshot_finetune_10_pubs_dev_episode_1.json", "num_bytes": 125366, "num_examples": 2}, {"name": "remapped_PLoS_fewshot_finetune_10_pubs_dev_episode_2.json", "num_bytes": 126234, "num_examples": 2}, {"name": "remapped_PLoS_fewshot_finetune_10_pubs_train_episode_0.json", "num_bytes": 6190119, "num_examples": 120}, {"name": "remapped_PLoS_fewshot_finetune_10_pubs_train_episode_1.json", "num_bytes": 5238068, "num_examples": 98}, {"name": "remapped_PLoS_fewshot_finetune_10_pubs_train_episode_2.json", "num_bytes": 5662127, "num_examples": 121}, {"name": "remapped_PLoS_test.json", "num_bytes": 77843621, "num_examples": 1480}, {"name": "remapped_PLoS_train.json", "num_bytes": 622303242, "num_examples": 11937}, {"name": "remapped_RU_dev.json", "num_bytes": 37618273, "num_examples": 689}, {"name": "remapped_RU_fewshot_finetune_10_pubs_dev_episode_0.json", "num_bytes": 140245, "num_examples": 2}, {"name": "remapped_RU_fewshot_finetune_10_pubs_dev_episode_1.json", "num_bytes": 135845, "num_examples": 2}, {"name": "remapped_RU_fewshot_finetune_10_pubs_dev_episode_2.json", "num_bytes": 153598, "num_examples": 2}, {"name": "remapped_RU_fewshot_finetune_10_pubs_train_episode_0.json", "num_bytes": 6575257, "num_examples": 116}, {"name": "remapped_RU_fewshot_finetune_10_pubs_train_episode_1.json", "num_bytes": 5998010, "num_examples": 105}, {"name": "remapped_RU_fewshot_finetune_10_pubs_train_episode_2.json", "num_bytes": 5014176, "num_examples": 99}, {"name": "remapped_RU_test.json", "num_bytes": 36500742, "num_examples": 665}, {"name": "remapped_RU_train.json", "num_bytes": 297906664, "num_examples": 5452}, 
{"name": "remapped_diverse_publications_125_publishers_dev.json", "num_bytes": 26129574, "num_examples": 493}, {"name": "remapped_diverse_publications_125_publishers_train.json", "num_bytes": 628804969, "num_examples": 13002}, {"name": "remapped_diverse_publications_25_publishers_dev.json", "num_bytes": 30070714, "num_examples": 606}, {"name": "remapped_diverse_publications_25_publishers_train.json", "num_bytes": 675457461, "num_examples": 13538}], "download_size": 442657892, "dataset_size": 2921454926}}
|
2023-05-24T02:28:08+00:00
|
|
67072d04b89c60dfa0aa59b6ce6ee0baea1b9919
|
# Dataset Card for "demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
santhosh97/demo
|
[
"region:us"
] |
2023-05-23T17:01:16+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input_image", "dtype": "image"}, {"name": "ground_truth_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 83633000.0, "num_examples": 80}], "download_size": 41819821, "dataset_size": 83633000.0}}
|
2023-05-23T17:01:18+00:00
|
256e269fdc80559957226d16a1f779ff99b76c2d
|
# Dataset Card for "testing-ABBA-HTR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sivan22/testing-ABBA-HTR
|
[
"region:us"
] |
2023-05-23T17:03:53+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 169628.0, "num_examples": 9}], "download_size": 168372, "dataset_size": 169628.0}}
|
2023-05-23T17:04:35+00:00
|
3e025a17471e82d96190d3a9a18b49842f0bf4e6
|
BisratWorku/model
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-23T17:11:51+00:00
|
{"license": "apache-2.0"}
|
2023-05-23T17:31:01+00:00
|
|
7feccda70e35f64d672d89816537f906e7a34394
|
# Dataset Card for "diffusion_db_5k_train_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
myradeng/diffusion_db_5k_train_v2
|
[
"region:us"
] |
2023-05-23T17:14:47+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "uint32"}, {"name": "step", "dtype": "uint16"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "user_name", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us, tz=UTC]"}, {"name": "image_nsfw", "dtype": "float32"}, {"name": "prompt_nsfw", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1817027835.2, "num_examples": 4000}], "download_size": 1800963916, "dataset_size": 1817027835.2}}
|
2023-05-23T17:15:44+00:00
|
3abcb1447a4bd9e0d9066967156425db1ebaf5bc
|
# Dataset Card for "diffusion_db_5k_val_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
myradeng/diffusion_db_5k_val_v2
|
[
"region:us"
] |
2023-05-23T17:15:49+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "uint32"}, {"name": "step", "dtype": "uint16"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "user_name", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us, tz=UTC]"}, {"name": "image_nsfw", "dtype": "float32"}, {"name": "prompt_nsfw", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 439111662.8, "num_examples": 1000}], "download_size": 438396506, "dataset_size": 439111662.8}}
|
2023-05-23T17:16:07+00:00
|
e2674ba7bd38d93680f5e8f230bd78842ccd4d30
|
# Dataset Card for "2477d43f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/2477d43f
|
[
"region:us"
] |
2023-05-23T17:34:18+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1325, "dataset_size": 182}}
|
2023-05-23T17:34:20+00:00
|
0d0e3ee0f8515a51d4447b3b27b68fb5ba3a9f3e
|
# Dataset Card for "LaMini-Hallucination"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Citation
```
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
```
|
MBZUAI/LaMini-Hallucination
|
[
"arxiv:2304.14402",
"region:us"
] |
2023-05-23T17:39:01+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2785, "num_examples": 40}], "download_size": 3220, "dataset_size": 2785}}
|
2024-02-17T10:17:57+00:00
|
336814591f7a1641c784ae93fbbbbe1026a71c42
|
# Dataset Card for "diffusion_db_5k_train_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
myradeng/diffusion_db_5k_train_v3
|
[
"region:us"
] |
2023-05-23T17:43:03+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "uint32"}, {"name": "step", "dtype": "uint16"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "user_name", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us, tz=UTC]"}, {"name": "image_nsfw", "dtype": "float32"}, {"name": "prompt_nsfw", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1802043115.2, "num_examples": 4000}], "download_size": 1779430118, "dataset_size": 1802043115.2}}
|
2023-05-23T17:43:51+00:00
|
3fcbc44a22437820f97b2c855950459990f92629
|
## "Say It Again, Kid!" (SIAK) Speech data collection##
## Training data for pronunciation quality classifiers for childred learning English ##
Train set and test set in flac format.
File ID key, with fields separated by underscores (example: train001fifi05_609_t10892805_living-room.flac):
* Speaker key indicates train or test set, and a running number for speaker. _speaker key is train001_
* Native language: "fifi" for Finnish, "enuk" for UK English, "othr" for other. _Native language fifi_
* Age of speaker in years (if known). _This speaker was 05 years old at the start of the recording period_
* Sample number. _This is the 609th sample spoken by the speaker. (Some kids really enjoyed contributing!)_
* Seconds from first sample given. _10892805 seconds since first recording. This speaker contributed the samples over a 4 month period_
* Target utterance text with spaces etc. replaced by dashes. _The utterance to be spoken was "living room"_
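The file-ID layout above can be parsed mechanically. The following is a minimal sketch; the regex and fixed field widths are our own assumptions inferred from the example file ID, not part of the official release, and may need adjusting (e.g. if age is missing for some speakers):

```python
import re

# Hypothetical parser for SIAK file IDs, based on the field layout
# described above, e.g. "train001fifi05_609_t10892805_living-room.flac".
FILE_ID = re.compile(
    r"(?P<split>train|test)(?P<spk>\d{3})"  # speaker key, e.g. train001
    r"(?P<lang>[a-z]{4})"                   # native language: fifi/enuk/othr
    r"(?P<age>\d{2})"                       # age of speaker in years
    r"_(?P<sample>\d+)"                     # running sample number
    r"_t(?P<seconds>\d+)"                   # seconds since first sample
    r"_(?P<text>[^.]+)\.flac"               # target text, dashes for spaces
)

def parse_siak_id(filename: str) -> dict:
    m = FILE_ID.fullmatch(filename)
    if m is None:
        raise ValueError(f"unrecognised SIAK file id: {filename}")
    fields = m.groupdict()
    fields["text"] = fields["text"].replace("-", " ")
    return fields

parse_siak_id("train001fifi05_609_t10892805_living-room.flac")
# -> {'split': 'train', 'spk': '001', 'lang': 'fifi', 'age': '05',
#     'sample': '609', 'seconds': '10892805', 'text': 'living room'}
```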
## Release history ##
This data is derived from the data collected in the SIAK project (2014-2018).
Participants agreed that their data can be published anonymously. Unfortunately, the General Data Protection Regulation (GDPR)
became effective before the data was ready for release, and the publication effort halted.
However, the data was leased to an ill-fated startup that started operations a few weeks before the COVID-19 lockdowns.
This collection is a derivation of the SIAK data with any strongly identifying metadata removed for use by the now bankrupt startup.
We were involved in collecting, storing and processing the data in the SIAK project and have gone through the speech samples
in enough detail to be assured that the data can be regarded as non-personal and thus exempt from GDPR, as it consists of only single words or very short utterance repetitions, making it next to impossible to identify a speaker.
Reima Karhila and Anna Smolander
SIAK project researchers and unlucky startup founders
We emphasize that by "no derivatives" we mean you cannot use the audio samples as part of any work that is not directly related to describing the dataset in a speech technology or scientific language-learning context. You may include them in a scientific presentation when the context is clearly to present the original data, not to use the data in another fashion.
Commercial use of speech samples for building and evaluation of speech technology models is _not_ prohibited.
If you publish work based on this dataset, please cite _Karhila & al.: Pronunciation Scoring System Embedded into Children’s Foreign
Language Learning Games with Experimental Verification of Learning Benefits, SLATE 2023_.
|
rkarhila/SIAK
|
[
"task_categories:automatic-speech-recognition",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nd-4.0",
"region:us"
] |
2023-05-23T17:43:37+00:00
|
{"language": ["en"], "license": "cc-by-nd-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "\"Say It Again, Kid!\" Native and Finnish accented Children's English with pronunciation scores"}
|
2023-08-16T18:45:11+00:00
|
c7c3fe37fd19d06edff43d55e04038d2b145bdc2
|
# Dataset Card for "diffusion_db_5k_val_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
myradeng/diffusion_db_5k_val_v3
|
[
"region:us"
] |
2023-05-23T17:43:51+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "uint32"}, {"name": "step", "dtype": "uint16"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "user_name", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us, tz=UTC]"}, {"name": "image_nsfw", "dtype": "float32"}, {"name": "prompt_nsfw", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 458258338.8, "num_examples": 1000}], "download_size": 458124179, "dataset_size": 458258338.8}}
|
2023-05-23T17:44:03+00:00
|
628803d8055d947439f381e9877e9b4b69fb0d35
|
# 🚢 Stanford Human Preferences Dataset v2 (SHP-2)
## Summary
SHP-2 is a dataset of **4.8M collective human preferences** over responses to questions/instructions in 129 different subject areas, from cooking to legal advice. It is an extended version of the original 385K [SHP dataset](https://huggingface.co/datasets/stanfordnlp/SHP).
The preferences are meant to reflect the helpfulness of one response over another, and are intended to be used for training RLHF reward models and NLG evaluation models (e.g., [SteamSHP](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-xl)).
Each example is a Reddit or StackExchange post with a question/instruction and a pair of top-level comments for that post, where one comment is more preferred by Reddit / StackExchange users (collectively).
SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is ostensibly preferred over B.
If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility.
We chose data where the preference label is intended to reflect which response is more *helpful* rather than which is less *harmful*, the latter being the focus of much past work.
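The timestamp rule above can be stated as a simple predicate. This is our own formalisation for illustration, not the authors' released filtering code:

```python
# Keep a pair (A, B) as a preference "A is preferred over B" only when A was
# posted later than B yet still scored higher, ruling out the visibility
# advantage of earlier comments.
def is_valid_preference(created_a: int, created_b: int,
                        score_a: int, score_b: int) -> bool:
    return created_a > created_b and score_a > score_b

# Timestamps and scores taken from the askculinary record shown later
# in this card:
is_valid_preference(1636822112, 1636822110, 340, 166)  # -> True
```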
How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) and [Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1)?
| Dataset | Size | Input | Label | Domains | Data Format | Length |
| -------------------- | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- |
| SHP-2 | 4.8M | Naturally occurring human-written responses | Collective Human Preference | 129 (labelled) | Question/Instruction + Response (Single-turn) | up to 10.1K T5 tokens |
| HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference | not labelled | Live Chat (Multi-turn) | up to 1.5K T5 tokens |
| OASST | 161K | Dialogue with LLM | K Individual Preferences, Aggregated | not labelled | Live Chat (Multi-Turn) | up to 1.5K T5 tokens |
How is SHP different from other datasets that have scraped Reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
SHP uses the timestamp information to infer preferences, while ELI5 only provides comments and scores -- the latter are not enough to infer preferences since comments made earlier tend to get higher scores from more visibility.
It also contains data from more domains:
| Dataset | Size | Comments + Scores | Preferences | Number of Domains |
| -------------------- | ---- | ------------------ | -------------| ------------------ |
| SHP-2 | 4.8M | Yes | Yes | 129 (70 from Reddit, 59 from StackExchange) |
| SHP | 385K | Yes | Yes | 18 (from Reddit) |
| ELI5 | 270K | Yes | No | 3 |
## Data Structure
There are 2 directories, one for Reddit and one for StackExchange. There are 70 subdirectories under `reddit/`, one for each subreddit, and 59 subdirectories under `stackexchange/`, one for each stackexchange site.
Each subdirectory contains a JSONL file for the training, validation, and test data.
Here's how to get the data using Huggingface's `datasets` library:
```python
from datasets import load_dataset
# Load all the data
dataset = load_dataset("stanfordnlp/shp-2")
# Load one of the subreddits
dataset = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary")
# Load one of the StackExchange sites
dataset = load_dataset("stanfordnlp/shp-2", data_dir="stackexchange/stack_academia")
```
Here's an example from `reddit/askculinary/train.json`:
```
{
    "post_id": "qt3nxl",
    "domain": "askculinary_train",
    "upvote_ratio": 0.98,
    "history": "What's the best way to disassemble raspberries? Like this, but down to the individual seeds: https:\/\/i.imgur.com\/Z0c6ZKE.jpg I've been pulling them apart with tweezers and it's really time consuming. I have about 10 pounds to get through this weekend.",
    "c_root_id_A": "hkh25sc",
    "c_root_id_B": "hkh25lp",
    "created_at_utc_A": 1636822112,
    "created_at_utc_B": 1636822110,
    "score_A": 340,
    "score_B": 166,
    "human_ref_A": "Pectinex, perhaps? It's an enzyme that breaks down cellulose. With citrus, you let it sit in a dilute solution of pectinex overnight to break down the connective tissues. You end up with perfect citrus supremes. If you let the raspberries sit for a shorter time, I wonder if it would separate the seeds the same way...? Here's an example: https:\/\/www.chefsteps.com\/activities\/perfect-citrus-supreme",
    "human_ref_B": "Raspberry juice will make a bright stain at first, but in a matter of weeks it will start to fade away to almost nothing. It is what is known in the natural dye world as a fugitive dye, it will fade even without washing or exposure to light. I hope she gets lots of nice photos of these stains on her dress, because soon that will be all she has left of them!",
    "labels": 1,
    "metadata_A": "",
    "metadata_B": "",
    "seconds_difference": 2.0,
    "score_ratio": 2.0481927711
}
```
Here's an example from `stackexchange/stack_academia/validation.json`:
```
{
    "post_id": "87393",
    "domain": "academia_validation",
    "history": "What to answer an author asking me if I reviewed his/her paper? <sep> Suppose I review someone's paper anonymously, the paper gets accepted, and a year or two later we meet e.g. in a social event and he/she asks me \"did you review my paper?\". What should I answer? There are several sub-questions here: Suppose the review was a good one, and the paper eventualy got accepted, so I do not mind telling that I was the reviewer. Is there any rule/norm prohibiting me from telling the truth? Suppose the review was not so good, so I do not want to reveal. What can I answer? If I just say \"I am not allowed to tell you\", this immediately reveals me... On the other hand, I do not want to lie. What options do I have?",
    "c_root_id_A": "87434",
    "c_root_id_B": "87453",
    "created_at_utc_A": 1490989560,
    "created_at_utc_B": 1491012608,
    "score_A": 2,
    "score_B": 5,
    "human_ref_A": "I am aware of at least one paper where a referee went out of cover (after the review process of course) and was explicitly mentioned in a later paper: <blockquote> X and Y thank Z, who as the anonymous referee was kind enough to point out the error (and later became non-anonymous). </blockquote> so it is sure fine to answer truthfully that yes you did review, but only if you wish of course (and most likely if you have been helpful and the authors of the paper responsive).",
    "human_ref_B": "Perhaps you should follow the example of Howard Percy Robertson (known as the 'R' in the famous FLRW, or Friedmann-Lemaître-Robertson-Walker metric used in physical cosmology.) He was the referee of the famous Einstein-Rosen paper, which was rejected by Physical Review, prompting Einstein never to publish in Physical Review again. Einstein ignored the referee report, but months later, it seems, Robertson had a chance to talk to Einstein and may have helped convince him of the error of his ways. However, as far as we know, he never revealed to Einstein that he was the anonymous referee for Physical Review. It was not until 2005 I believe, long after the death of all participants, that Physical Review chose to disclose the referee's identity (http://physicstoday.scitation.org/doi/full/10.1063/1.2117822).",
    "labels": "0",
    "metadata_A": "Post URL: https://academia.stackexchange.com/questions/87393, Response URL: https://academia.stackexchange.com/questions/87434, Post author username: Erel Segal-Halevi, Post author profile: https://academia.stackexchange.com/users/787, Response author username: mts, Response author profile: https://academia.stackexchange.com/users/49583",
    "metadata_B": "Post URL: https://academia.stackexchange.com/questions/87393, Response URL: https://academia.stackexchange.com/questions/87453, Post author username: Erel Segal-Halevi, Post author profile: https://academia.stackexchange.com/users/787, Response author username: Viktor Toth, Response author profile: https://academia.stackexchange.com/users/7938",
    "seconds_difference": 23048.0,
    "score_ratio": 2.5
}
```
where the fields are:
- ```post_id```: the ID of the Reddit post (string)
- ```domain```: the subreddit and split the example is drawn from, separated by an underscore (string)
- ```upvote_ratio```: the percent of votes received by the post that were positive (aka upvotes), -1.0 for stackexchange as there is no such data (float)
- ```history```: the post title concatenated to the post body (string)
- ```c_root_id_A```: the ID of comment A (string)
- ```c_root_id_B```: the ID of comment B (string)
- ```created_at_utc_A```: utc timestamp of when comment A was created (integer)
- ```created_at_utc_B```: utc timestamp of when comment B was created (integer)
- ```score_A```: (# positive votes - # negative votes + 1) received by comment A (integer)
- ```score_B```: (# positive votes - # negative votes + 1) received by comment B (integer)
- ```human_ref_A```: text of comment A (string)
- ```human_ref_B```: text of comment B (string)
- ```labels```: the preference label -- it is 1 if A is preferred to B; 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer)
- ```metadata_A```: metadata for stackexchange post and comment A (string)
- ```metadata_B```: metadata for stackexchange post and comment B (string)
- ```seconds_difference```: how many seconds after the less preferred comment the more preferred one was created (will always be >= 0) (float)
- ```score_ratio```: the ratio of the more preferred comment's score to the less preferred comment's score (will be >= 1) (float)
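These fields are internally consistent: the label always points at the higher-scoring comment, `seconds_difference` is non-negative, and `score_ratio` equals the preferred score over the other. A minimal sanity-check sketch derived from the field definitions above (plain Python, not part of the official release):

```python
def check_preference(ex):
    """Sanity-check that a record's derived fields match its raw scores."""
    a_preferred = int(ex["labels"]) == 1
    preferred = ex["score_A"] if a_preferred else ex["score_B"]
    other = ex["score_B"] if a_preferred else ex["score_A"]
    assert preferred > other                 # the label marks the higher-scoring comment
    assert ex["seconds_difference"] >= 0     # the preferred comment was written no later
    assert abs(ex["score_ratio"] - preferred / other) < 1e-6

# values from the askculinary example above
check_preference({
    "score_A": 340, "score_B": 166, "labels": 1,
    "seconds_difference": 2.0, "score_ratio": 2.0481927711,
})
```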
## Dataset Design
### Domain Selection
The data is sourced from Reddit and StackExchange, which are both public forums organized into different domains.
SHP-2 contains a train, validation, and test split for comments scraped from each domain. We chose domains based on:
1. whether they were well-known (>= 100K subscribers for Reddit and >= 50K for StackExchange)
2. whether posts were expected to pose a question or instruction
3. whether responses were valued based on how *helpful* they were
4. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)
The train/validation/test splits were created by splitting the post IDs of a domain in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%.
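Splitting at the post level (rather than the preference level) is what guarantees no post leaks across splits. A sketch of that scheme over a list of post IDs (the actual seed and shuffling procedure used for SHP-2 are not published, so these are illustrative):

```python
import random

def split_post_ids(post_ids, seed=0):
    """Partition post IDs into 90/5/5 train/validation/test groups."""
    ids = sorted(post_ids)              # deterministic base order
    random.Random(seed).shuffle(ids)
    n_train = int(0.9 * len(ids))
    n_val = int(0.05 * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_post_ids([f"post{i}" for i in range(1000)])
assert len(train) == 900 and len(val) == 50 and len(test) == 50
assert set(train).isdisjoint(val) and set(val).isdisjoint(test)
```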
See below for a list of all domains:
Reddit: \
techsupport, asklinguistics, askscience, catadvice, campingandhiking, askphysics, espresso, botany, asksocialscience, askbaking, ultralight, legaladvice, hiking, webdev, askengineers, screenwriting, askhistorians, vegetarian, writing, diy, musictheory, camping, moviesuggestions, askeconomics, stocks, frugal, outoftheloop, booksuggestions, gamedev, linuxquestions, asknetsec, aviation, askacademia, asksciencefiction, askhr, explainlikeimfive, etymology, entrepreneur, cooking, puppy101, keto, crochet, smallbusiness, architecture, artfundamentals, sewing, zerowaste, changemyview, mechanicadvice, iwanttolearn, eatcheapandhealthy, askanthropology, askculinary, askphilosophy, tea, running, excel, homebrewing, solotravel, fishing, cookingforbeginners, homeautomation, ifyoulikeblank, travel, suggestmeabook, televisionsuggestions, sysadmin, askcarguys, askdocs, askvet
StackExchange: \
stack_unix, stack_android, stack_academia, stack_superuser, stack_tex, stack_photo, stack_datascience, stack_mechanics, stack_english, stack_askubuntu, stack_sharepoint, stack_workplace, stack_blender, stack_ethereum, stack_stats, stack_bitcoin, stack_gamedev, stack_raspberrypi, stack_arduino, stack_magento, stack_physics, stack_mathoverflow, stack_dsp, stack_movies, stack_crypto, stack_apple, stack_mathematica, stack_philosophy, stack_wordpress, stack_ux, stack_webmasters, stack_cs, stack_travel, stack_bicycles, stack_softwarerecs, stack_money, stack_ell, stack_scifi, stack_aviation, stack_math, stack_biology, stack_drupal, stack_diy, stack_security, stack_salesforce, stack_graphicdesign, stack_stackoverflow, stack_webapps, stack_cooking, stack_networkengineering, stack_dba, stack_puzzling, stack_serverfault, stack_codereview, stack_music, stack_codegolf, stack_electronics, stack_chemistry, stack_gis
### Data Selection
For Reddit, the score of a post/comment is 1 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets.
For Stackexchange, the score of a post/comment is 0 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets.
The value of a score is relative; domains (and posts) with more traffic will have more high-scoring posts (and comments).
Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure, which is why using timestamp information is essential when inferring preferences.
Given a post P and two comments (A,B) we only included the preference A > B in the dataset if
1. A was written *no later than* B and A has a higher score than B.
2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18). For Stackexchange, edited posts were permitted as long as they were edited prior to the writing of the comments.
3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
4. For Reddit, the post has a score >= 10 and each comment has a score >= 2 (upvoted at least once). For Stackexchange, the post has a score >= 5 and each comment has a non-zero score.
The conditions are laxer for StackExchange because it is more strictly moderated than Reddit, allowing us to hit the same data quality with lower thresholds.
In particular, we allow negative-score comments from StackExchange because the negative scores are likely due to being inaccurate or misinformed rather than toxic, and this provides a useful signal.
A post with `n` comments could have up to (`n` choose `2`) preferences in the data.
Since the number of comments per post is Pareto-distributed, to prevent a relatively small number of posts from dominating the Reddit data, we limited the scraping to 50 comments per post.
This means that each post could have up to (`50` choose `2`) preferences in the dataset, though this is a much smaller number in practice, since all the criteria above need to be met.
No such limit is imposed for StackExchange, since there are fewer comments per post.
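The Reddit-side thresholds above can be summarized in code. This is a simplified sketch (the field names are illustrative, not the scraper's actual schema, and it assumes the self-post, pre-2023, and unedited checks were already applied to the post):

```python
def reddit_preference_ok(post, comment_a, comment_b):
    """Return True if 'A preferred over B' meets the Reddit thresholds above.

    Assumes A is the candidate preferred comment, and that self-post,
    pre-2023, and unedited checks have already been applied to the post.
    """
    if post["score"] < 10 or post["nsfw"] or post["deleted_author"]:
        return False
    for c in (comment_a, comment_b):
        if (c["score"] < 2 or c["deleted_author"] or c["is_moderator"]
                or c["author"] == post["author"]):
            return False
    # A must be written no later than B, yet carry the higher score.
    return (comment_a["created_utc"] <= comment_b["created_utc"]
            and comment_a["score"] > comment_b["score"])
```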
### Reddit Preprocessing
We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded (e.g., "CMV" to "Change my view that").
In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).
### Finetuning
If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:
1. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned it on inputs over 512 tokens.
To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s) however).
If this is still over 512 tokens, simply skip the example.
2. **Use a sufficiently large model.**
Finetuning a single FLAN-T5-xl model across [the original 385K SHP training data](https://huggingface.co/datasets/stanfordnlp/SHP) should give you a test accuracy between 72-73% (across all domains on examples where the entire input fits within the token limit), ranging from 65-80% on individual subreddits.
3. **Do in-domain prediction.** Out-of-domain performance will be poor if the domains are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
4. **Train for fewer epochs.** The InstructGPT paper suggests training a reward model for only 1 epoch.
Since the same comment appears in multiple preferences, it is easy to overfit to the data.
5. **Training on less data may help.**
Preferences with a large `score_ratio` (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post.
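The truncation in tip 1 can be sketched as follows. For brevity this uses whitespace word counts as a stand-in for a real tokenizer (in practice you would count T5 tokens with a library such as `transformers`), so the 512 limit here is only approximate:

```python
def build_input(history, comment_a, comment_b, max_len=512,
                count=lambda s: len(s.split())):
    """Truncate only the post text so the full input fits within max_len.

    Returns None when the two comments alone exceed the budget,
    in which case the example should simply be skipped.
    """
    # Reserve a few tokens for the template markers below.
    budget = max_len - count(comment_a) - count(comment_b) - 8
    if budget <= 0:
        return None
    truncated = " ".join(history.split()[:budget])
    return f"POST: {truncated} RESPONSE A: {comment_a} RESPONSE B: {comment_b}"
```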
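Both heuristics from tip 5 can be applied in one pass over the training split. A sketch assuming the fields documented above (the 2.0 ratio threshold and 5-per-post cap are example values, not recommendations from the SHP-2 authors):

```python
from collections import defaultdict

def subsample(examples, min_ratio=2.0, max_per_post=5):
    """Keep only strong preferences and cap how many come from any one post."""
    kept, per_post = [], defaultdict(int)
    # Visit the strongest preferences first so the per-post cap keeps them.
    for ex in sorted(examples, key=lambda e: -e["score_ratio"]):
        if ex["score_ratio"] >= min_ratio and per_post[ex["post_id"]] < max_per_post:
            kept.append(ex)
            per_post[ex["post_id"]] += 1
    return kept
```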
## Biases and Limitations
### Biases
Although we filtered out posts with NSFW (over 18) content, chose domains that were well-moderated and had policies against harassment and bigotry, some of the data may contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.
Reddit and StackExchange users are also not representative of the broader population.
Although subreddit-specific demographic information is not available, Reddit users overall are disproportionately male and from developed, Western, and English-speaking countries ([Pew Research](https://www.pewresearch.org/internet/2013/07/03/6-of-online-adults-are-reddit-users/)).
This is likely also true of StackExchange users.
Please keep this in mind before using any models trained on this data.
### Limitations
The preference label in SHP is intended to reflect how *helpful* one response is relative to another, given an instruction/question.
SHP is not intended for use in harm-minimization, as it was not designed to include the toxic content that would be necessary to learn a good toxicity detector.
If you are looking for data where the preference label denotes less harm, we would recommend the harmfulness split of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf).
Another limitation is that the more preferred response in SHP is not necessarily the more factual one.
Though some comments do provide citations to justify their response, most do not.
There are exceptions to this, such as the `askhistorians` subreddit, which is heavily moderated and answers are expected to provide citations.
Note that the collective preference label in SHP is not necessarily what we would get if we asked users to independently vote on each comment before taking an unweighted sum.
This is because comment scores on Reddit are public and are known to influence user preferences; a high score increases the likelihood of getting more positive votes [(Muchnik et al., 2013)](https://pubmed.ncbi.nlm.nih.gov/23929980/).
Whether this "herding effect" temporarily or permanently shifts a user's preference is unclear.
Therefore, while SHP does reflect collective human preferences, models trained on SHP may not generalize to settings where individual preferences are aggregated differently (e.g., users vote independently without ever seeing the current comment score, users vote after conferring, etc.).
Thanks to Greg Stoddard for pointing this out.
## License
Last updated: 07/16/2023
### Reddit
The data was made by scraping publicly available data in accordance with a historical version of the [Reddit API Terms of Use](https://docs.google.com/a/reddit.com/forms/d/e/1FAIpQLSezNdDNK1-P8mspSbmtC2r86Ee9ZRbC66u929cG2GX0T9UMyw/viewform), without any direct communication or written agreements with Reddit.
According to the Terms of Use, "User Content" is owned by the users themselves -- not by Reddit -- and Reddit grants a "non-exclusive, non-transferable, non-sublicensable, and revocable license to copy and display the User Content".
At the time of writing, Reddit's terms state that "no other rights or licenses are granted or implied, including any right to use User Content for other purposes, such as for training a machine learning or artificial intelligence model, without the express permission of rightsholders in the applicable User Content."
However, the legality of training on publicly available data will depend on your jurisdiction (legal in Japan, for example).
Datasets made by scraping Reddit are widely used in the research community: for example, Facebook AI Research used data scraped from Reddit to make the [ELI5](https://huggingface.co/datasets/eli5#source-data) dataset in 2019, which was made available without a license.
Anthropic AI has also [attested to scraping Reddit](https://arxiv.org/pdf/2112.00861.pdf) for preferences using a different methodology, though this data was not made public.
We take no responsibility for and we do not expressly or implicitly endorse any downstream use of this dataset.
We reserve the right to modify the SHP dataset and this license at any point in the future.
### StackExchange
StackExchange data is made available under a [CC by-SA license](https://creativecommons.org/licenses/by-sa/4.0/).
## Contact
Please contact [email protected] if you have any questions about the data.
This dataset was created by Kawin Ethayarajh, Heidi (Chenyu) Zhang, and Shabnam Behzad with advice from Dan Jurafsky and Yizhong Wang.
Kawin and Heidi prepared the Reddit datasets and trained the SteamSHP models.
Kawin and Shabnam prepared the StackExchange data.
Dan and Yizhong provided advice on dataset construction.
## Citation
We will have a paper out soon, but until then, please cite:
```
@InProceedings{pmlr-v162-ethayarajh22a,
title = {Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information},
author = {Ethayarajh, Kawin and Choi, Yejin and Swayamdipta, Swabha},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {5988--6008},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
}
```
# Dataset Card for SF Nexus Extracted Features: Full Texts
## Dataset Description
- **Homepage:** https://sfnexus.io/
- **Repository:** https://github.com/SF-Nexus/extracted-features-notebooks
- **Point of Contact:** Alex Wermer-Colan
### Dataset Summary
The SF Nexus Extracted Features Full Texts dataset contains text and metadata from 403 mid-twentieth century science fiction books, originally digitized from Temple University Libraries' Paskow Science Fiction Collection.
After digitization, the books were cleaned using Abbyy FineReader.
Because this is a collection of copyrighted fiction, the books have been disaggregated.
Each row of this dataset contains one text as well as metadata about that text's title, author and publication.
### About the SF Nexus Corpus
The Paskow Science Fiction collection contains primarily materials from post-WWII, especially mass-market works of the New Wave era (often dated to 1964-1980).
The digitized texts have also been ingested into HathiTrust's repository for preservation and data curation; they are now viewable on HathiTrust's [Temple page](https://babel.hathitrust.org/cgi/ls?field1=ocr;q1=%2A;a=srchls;facet=htsource%3A%22Temple%20University%22;pn=4) for non-consumptive research.
For more information on the project to digitize and curate a corpus of "New Wave" science fiction, see Alex Wermer-Colan's post on the Temple University Scholars Studio blog, ["Building a New Wave Science Fiction Corpus."](https://sites.temple.edu/tudsc/2017/12/20/building-new-wave-science-fiction-corpus/).
### Languages
English
## Dataset Structure
This dataset contains disaggregated "chunks" of text from mid-twentieth century science fiction books and associated metadata. For example:
```
{'Unnamed': 7299,
'Title': 'MILLENNIUM',
'Author': 'VARLEY',
'Pub Year': '1983',
'Text': '"'All "A "All "As "As "Let's "Me, "Poor "The "The "When $15.86 '“All (And (Do (Flight (I (It’s (I’m (Made (See (This (Time (Well, (Who (blonde, (but (by (deceased) (flight (he (him) (if (in (not (one (or (sorry; (that’s (we’d (which (with (“What’re * * * * * * * * * * * * * * * , . . . . . . .. .. .. .. ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...” ...” ...” .1.. .38 .45 .ah, .can’t .hell, .it .take .that .that’s .the .transponder .uh, .uh, .uh, .well, .we’re .what / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / /MILLENNIUM /MILLENNIUM 0-425-06250-3 1 1 1 1 1 10 10 10,000? 10. 10/MILLENNIUM 100/MILLENNIUM 10016. 101 102/MILLENNIUM 103 104/MILLENNIUM 105 106/MILLENNIUM 107 108/MILLENNIUM 109 10:30. 10:45 10th, 11 110/MILLENNIUM 111 112/MILLENNIUM 113 114/MILLENNIUM 115 116/MILLENNIUM 117 118/MILLENNIUM 119 11:30 11:30. 11:30. 11____________ 11th, 11th. 12, 12/MILLENNIUM 120/MILLENNIUM 121 122/MILLENNIUM 123 124/MILLENNIUM 125 126/MILLENNIUM 127 128 129 12:56.” 12_______________ 12th 12th, 12th. 13 13 13. 130/MILLENNIUM 131 132/MILLENNIUM 133 134 135 136/MILLENNIUM 137 138/MILLENNIUM 139 13th 13th. 14/MILLENNIUM 140/MILLENNIUM 141 142/ 143 144/MILLENNIUM 145 146 147 148 149 14_________________ 14th 15 150/MILLENNIUM 151 152/MILLENNIUM 153 154/MILLENNIUM 155 156 157 158/MILLENNIUM 159 15___________ 16 16-year-old 160/MILLENNIUM 161 162/MILLENNIUM 163 164/MILLENNIUM 165 166/MILLENNIUM 167 168 169 16______ 17 170/MILLENNIUM 171 172/MILLENNIUM 173 174/MILLENNIUM 175 176 177 178/MILLENNIUM 179 17______________ 18/MILLENNIUM 180/MILLENNIUM 181 182/MILLENNIUM 183 184 185 186/MILLENNIUM 187 188/MILLENNIUM 189 18_________ 19 190/MILLENNIUM 191 192. 
192/MILLENNIUM 193 1930s, 194/MILLENNIUM 1941, 1942.” 195 1955 1955 1955 1955 1955 1955 1955, 1955, 1955, 1955, 1955,” 1955. 1955. 1955. 1955. 1955. 1955.” 1955.” 1955? 1958 196/MILLENNIUM 1968 1968. 1968. 197 1976. 1978, 1978. 198/MILLENNIUM 1980 1980s 1980s 1980s. 1980s. 1983 1983 1983 1983 1983, 199 1996, 1996—so 19_____________ 1:45. 20 20 200 200/MILLENNIUM 202 203 2034. 204 205 206 207 208 209 20th 20th 20th 20th 20th, 20th. 20ths 20ths 20ths, 20ths. 210/MILLENNIUM 212/MILLENNIUM 214/MILLENNIUM 216/MILLENNIUM 218/MILLENNIUM 22 220 222 223 224 225 226 227 228 229 23 230 2300 232 233 234 235 236 237 238 239 24 240 242 243 244 245 246 247 248 249 25 26 27 28 29 2_______________ 3 30 3000 3000 31 32 33 34 35 35 35 35 35 35 35 35 35 35 35. 35“ 36 37 38 39 3:13, 3:14, 3_________________ 4 40 40,000 42 43 44 45 46 47 48 49 4:30 4_____________ 5 50 5000 51 52/MILLENNIUM 53 54 55 55time, 56 57 58/MILLENNIUM 580, 580, 59 5_____________ 6 60 62 63 637 64 65 66 67 68 69 6___________ 7 7 7, 70 72 727 727’s 74 747 747 747 747 747 747 747 747 747 747 747 747 747 747 747 747 747 747 747 747, 747, 747, 747. 747. 747. 747. 747. 747. 747; 747? 747—I 75 757,375 76 77 78 79 7:15. 8 80 81 82 83 84 85 86 87 88 880 880 880, 880’s 89 8__________________ 9 9 9 9 9 90 92 93 94 95 96 97 98 99 9:11 9:11 9:11,” 9:30. 9:56, 9___________ A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A&P? A&R A, A, A. ALPO?” AMERICA ARTCC ARTCC, ATC ATC ATC ATC’s ATC’s ATC’s, AUTHOR’S Abayta Abayta, Abayta. Abayta: Abayta: About About About About About About According Across Across Across Actually, Actually, Add Adelaide, Administration. Advise Africa After After After After After After After After After After After After Again Again Again, Again, Again, Against Age Age Age Age. Age. Agency. Agent Agent Age—or Ah, Ah... 
Air Air Air Air Air Air Air Air Air Air Air-Line Aircraft Airlines Airlines, Airlines, Airport Airport Airport, Airport, Airport. Airport. Airways. Alameda Alameda Alarms Albert Aldiss; Ali Ali Alka-Seltzer All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All All-Seeing Allison Almost Almost Along Alpha Alpha Also Also, Always Am Am Am Am Am Am Am Am Am Am Am Am, Am.” Am?” Ambler, Ambrose Amelia Amerenglish Amerenglish. Amerenglish; America. America. American American American American Amniocentesis. Among Among Amsterdam, Am—but Am’s An An An An An An An Anarchy And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And And, And, And: And: And: And: Anderson; Anderson; Andromeda Anet Angeles Angeles Angeles, Angeles, Angeles, Angeles. Angeles. Angels. Another Another Another Another Another Another Antarctica. Antennas Antoine's, Any Any Any Any Any Any Any Any Anybody Anything Anything. Anything.” Anyway, Apaches, Apparently Apparently Apparently Apparently Apparently Apparently, Apple, Aquarius). Arabia Archibald Archibald Are Are Are Are Area Area Area. Arizona Arizona Arizona Arizona Arizona Arizona, Arizona, Arizona, Arizona. Arizona. Ark Ark. Armenia Army Army Army Army Army. Arnold Arnold Arnold Arnold Arnold Arnold Arnold, Arnold, Arnold’s Arthur Arthur, As As As As As As As As As As As As As As As As As As As As As As As As As As As Aside Asimov. 
Assuming Astronauts At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At At Atlantic Atomic Attorney Author Avenue Avenue, Avenue, Aviation Aw, B B B, B. B. B. B. B. BALTIMORE, BART?” BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC BC, BC, BC. BC. BC. BC. BC. BC. BC’s BERKLEY BERKLEY BOOKS, BOOK® Babylonian Back Bad Baker. Bakersfield. Bali, Ball Ball Ball, Ball, Ball,” Ball. Ball.” Balls, Baltimore Baltimore Baltimore Baltimore Baltimore Baltimore Baltimore Baltimore Baltimore Baltimore Baltimore Baltimore Baltimore,’’ Baltimore. Baltimore. Baltimore... Baltimore?” Bamum Bangladesh. Bannister Bannister, Barbara. Barbara. Bars Batavia, Battleship Bauhaus Bay Bay Bay Bay Bay Bay Bay. Bay. Ba—” Be Because Because Because Because, Bee Before Before Before Before Before Before, Behind Behind Behold Behold Being Being, Believe Belli, Beltway Ben Bengalis Berkeley Berkley Berkley Berkley Berkley Berkley Berkley Bermuda Bermuda Berry Beside Better Better Better Beverly Bible Bierce Big Big Big Big Big Big Big Big Big Big Big Big Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill Bill, Bill, Bill,’’ Bill,’’ Bill,” Bill,” Bill,” Bill. Bill. Bill. Bill. Bill. Bill. Bill. Bill. Bill. Bill.” Bill.” Bill.” Bill; Bill? Bill’s Bill’s Bill’s Bill’s Biological, Bird Birds Birmingham Bite Bizarre? Black Board Board Board Board Board Board Board Board Board Board Board Board Board Board Board Board, Board. Board. Board. Board. Board.” Board.” Board?” Boeing Boeing Boeing Boeing Boeing Boeing Boers, Bogart Book Book Boston Boston, Boston. Both Both Both Both Both Both Both Bova, Bova’s Bradbury; Braille. 
The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The Their Their Their Their Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then Then, Theoretically Theories Theory. There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There There’d There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s There’s. 
These These These These These These They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They They’d They’d They’d They’d They’d They’d They’d They’d They’ll They’ll They’ll They’ll They’re They’re They’re They’re They’re They’re They’re They’re They’re They’re They’re They’re They’re They’re They’re They’re They’re They’ve They’ve Things Things Things Things Things Things Things Things Thinking Thinking Thinking Thinning Thirty, This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This This, Thomas Thomkins, Thor Those Those Those Those Those Those Those Those Those, Though Though Though Three Three-quarters Thunder" Thunder,” Thunderhilton Thursday Thus, Thus, Time Time Time Time Time Time Time Time Time Time Time Time Time Time Time Time Time Time Time Time Time, Time, Time. Time. Time. Time? Times Times." Timex Tiny Titanic Titanic, Titanic. Titanic: To To To To To To To To To To To To To To Today, Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom Tom, Tom, Tom. Tom. Tom. Tom. Tom. Tom. Tom. Tom. Tom. Tom. Tom. Tom. Tom. Tom.’’ Tom.” Tom.” Tom: Tom: Tomorrowland). Tomorrow’s Tom’s Tom’s Tonolea, Tony Tony Tony Tony Tony Tony Tony Tony Tony Tony Tony Tony. 
Toss Tost Traffic Traffic Traffic Traffic Traffic Traffic Traffic Traffic Trans Transit.” Transportation Transportation Transportation Transportation Transportation Transportation Travel Travel...) Travels, Treasury Treat Triangle. Triangle. Tricky Trouble Trouble Try Trying Turn Twelve Twenty Twenty Twenty-six Twice Two Two Two Two Two Two Two Two Two Two, Two, Two. Two: Two: Twonky" Twonky,” Tyson, U.S. U.S. U.S. UFO’s UNITED UNIVAC UNTIL Ultimately, Under Understand Union?” Union— United United United United United United United United United United United United United United United United United United United United United, United, United. United. United. United?” Universe, University. Unless Until Until Uptime, Uri Usually Usually Usually Utah Utopia! VISION, Valhalla. Valium Various Varley Varley Varley Varley Varley Vem Vem Vem.” Very Very Vicks Vicks Vicks Vicks Victor Viet-Nam Viet-Nam—and View Viking Virginia Virginia Virginia Virginia Virginia. Vision,” Voice Voice Voice Voice W. WIZARD WIZARD, Wagner. Wake Wallace Walt Wanna Want War War War War.” Warm Warrior!" Warrior!” Wars, Was Was Was Was" Was,” Washington Washington Washington Washington Washington Washington Washington Washington Washington Washington, Washington. Washington. Washington. 
Washington.” Washington.” Washingtonians, Watches Watches Way Wayne Wayne Wayne We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We We Weapons Weather Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Well, Wells Wells; Wells’ Went Went Were West West West West We’d We’d We’d We’d We’d We’d We’d We’d We’d We’d We’d We’d We’ll We’ll We’ll We’re We’re We’re We’re We’re We’ve We’ve We’ve We’ve We’ve We’ve What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What What Whatever Whatever Whatever Whatever Whatever Whatever What’s What’s What’s What’s What’s What’s What’s What’s What’s When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When When Where Where Whereas Where’s Where’s Where’s Where’s Where’s Whether Whether Which Which Which Which Which Which Which While While While While While While While While White.) 
Whitmore, Who Who Who Who Who Who Who Who Who Who Who Who Who Who Whoever Whose Who’s Why Why Why Why Why Why Why Why Why Why Why Why, Wilbur Wilhelm Will Will Will Will William William William William William William William Willis Willis Willis, Window Window Window Window Window Window Window Window Window Window Window Window Window Window With With With With With With With With With With Without Without Without Witnesses. Wizard Wolfe Woman, Word Words Words Words Words Words,” Words” Working World World World World World World World World World" World" World,” World,” Worse Would Would Would Would Wright X Xerox Xerox YORK Yeager, Year. Years Years Year’s Yesterday Yet Yet Yet Yet Yet Yet Yet Yokohama Yokohama Yokohama Yokohama, York York York York York York York York York, York, York, York,” York. You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You You, Your Your Your Your Your Your Your Your You’d You’d You’d You’d You’d You’ll You’ll You’ll You’re You’re You’re You’re You’re You’re You’re You’re You’re You’re You’re You’re You’re You’ve You’ve You’ve You’ve You’ve You’ve Zombies—' Zombies—’ Z’s [vital], a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a 
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a 
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a.. a.d. a.m. a.m. a.m., aback abacus abandoned abashed. ability ability ability able able able able able able able able able able able able able able able able able able aboard aboard aboard aboard. aboard. aboard; aboard? 
aboard?” aborted abound, about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about 
about, about, about, about, about, about, about, about,” about. about. about. about. about. about. about. about. about... about.” about: about; about? about?” about—” about—” above above above above above abreast abruptly absence absence absence. absolute absolute absolute.’’ absolutely absolutely absolutely absolutely absolve absorbed abundance abuse acceleration accelerator. accelerators accelerators.” accent, accent. accent—something accept accept accept accept accept accept acceptable accepted accepted accepted accepting access access access access access accessed, accessible accident accident accident accident accident accident accident, accident,” accident. accidentally accidents.” accommodate accomplishments. according according according account account accounts accounts accounts. accumulates accurate accurate accurate accurate, accurately accusing ache. achieved achieved. achieved. aching achingly acknowledge acknowledge acknowledge acknowledge acknowledgement: acquaintances. acquainted,” acquired acre, acres. across across across across across across across across across across across across across across, act act act act act, acted action action action, action, action. action?” actions actions actions actions actions actions?” activity activity activity actress, actual actual actual actual actuality, actuality. actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually, actually,” actually. add add add add add added added added added, addict adding addition addition addition addition, address address: addressing add— adjective adjust adjusted adjusted, adjustment adjusts admired admission, admit admit admit admit admit admit admit admit admit admit admitted admitted admitted admitted. 
admitting admitting adolescent’s adrenalin, advanced advanced advancement advantage advantage advantage advantage advantage, advantages advertising advice, advice. advise advise advised advise—” advising aerial aeronautical affairs affairs affect affect affect affect afflict afflicted afford affront afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid. afraid. afraid. after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after, after, afternoon afternoon afternoon, afternoon, afternoon. afternoon. afternoon. afternoon. afterward, afterward. again again again again again again again again again again again again again again again again again again again again again again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again, again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again. again.” again.” again.” again: again?” again?” against against against against against against against against against against against again— again—it again—more age age age age age ...'
'Clean Text:' 'all a all as as let s me poor the the when all and do flight i it s i m made see this time well who blonde but by deceased flight he him if in not one or sorry that s we d which with what re ah can t hell it take that that s the transponder uh uh uh well we re what millennium millennium millennium millennium millennium millennium millennium millennium th millennium millennium millennium millennium millennium th th millennium millennium millennium millennium millennium th th th millennium millennium millennium millennium th th millennium millennium millennium th millennium millennium millennium millennium year old millennium millennium millennium millennium millennium millennium millennium millennium millennium millennium millennium millennium millennium millennium millennium s millennium millennium millennium s s s s so millennium th th th th th th ths ths ths ths millennium millennium millennium millennium millennium millennium time millennium s i s a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a p a r a a a alpo america artcc artcc atc atc atc atc s atc s atc s author s abayta abayta abayta abayta abayta about about about about about about according across across across actually actually add adelaide administration advise africa after after after after after after after after after after after after again again again again again against age age age age age agency agent agent age or ah ah air air air air air air air air air air air line aircraft airlines airlines airlines airport airport airport airport airport airport airways alameda alameda alarms albert aldiss ali ali alka seltzer all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all seeing allison almost almost along alpha alpha also also always am am am am am am am am am am am am am am ambler ambrose 
amelia amerenglish amerenglish amerenglish america america american american american american amniocentesis among among amsterdam am but am s an an an an an an an anarchy and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and and anderson anderson andromeda anet angeles angeles angeles angeles angeles angeles angeles angels another another another another another another antarctica antennas antoine s any any any any any any any any anybody anything anything anything anyway apaches apparently apparently apparently apparently apparently apparently apple aquarius arabia archibald archibald are are are are area area area arizona arizona arizona arizona arizona arizona arizona arizona arizona arizona ark ark armenia army army army army army arnold arnold arnold arnold arnold arnold arnold arnold arnold s arthur arthur as as as as as as as as as as as as as as as as as as as as as as as as as as as aside asimov assuming astronauts at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at atlantic atomic attorney author avenue avenue avenue aviation aw b b b b b b b b baltimore bart bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc bc s berkley berkley books book babylonian back bad baker bakersfield bali ball ball ball ball ball ball ball balls baltimore baltimore baltimore baltimore 
baltimore baltimore baltimore baltimore baltimore baltimore baltimore baltimore baltimore baltimore baltimore baltimore baltimore bamum bangladesh bannister bannister barbara barbara bars batavia battleship bauhaus bay bay bay bay bay bay bay bay ba be because because because because bee before before before before before before behind behind behold behold being being believe belli beltway ben bengalis berkeley berkley berkley berkley berkley berkley berkley bermuda bermuda berry beside better better better beverly bible bierce big big big big big big big big big big big big bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill bill s bill s bill s bill s biological bird birds birmingham bite bizarre black board board board board board board board board board board board board board board board board board board board board board board board boeing boeing boeing boeing boeing boeing boers bogart book book boston boston boston both both both both both both both bova bova s bradbury braille breaking breathe brest brest brest brian bridge briley briley briley briley briley briley briley briley briley briley briley briley briley briley briley briley brilliant brindle brindle brindle brindle brindle broken brooklyn brooklyn brunner buddha budweiser building building bulova bureau bureau which burt business but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but 
but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but but butch by by by by by by by by by by by by by by b was c c c c c c c c c c c c c c c c c c c c c o cbs cb s chp chp chp chp cia cvr cvr cvr cvr cvr cvr cvr cvr cvr cvr s cvr s caesarean calcutta calcutta benares california california california california california california california california california california california california california california california california california california call call call call calm came came camels camels camp can can canal canary canary canary candidate canyon can t capetown captain captain captain captain captain s carnegie carole carole carole carole carole carole carpenter carpenter carpenter carpenter carpenter carpenter carpenter carpenter carpenter carpenter carpenter carpenter carpenter s carpenter s carson carson cary cary casablanca cassidy catheters causes cecily censorship centauri centauri center center central century certain cessna chairman chairman s chalk chamber chamber chamber changes charge charge charity charlie charlie s chemical cheops cherokees cherokees chicago chicago chicago chicago chief chief chief chief china china chinese chinese chinese christ christ christ christ christmas christmas christmas christmas christmas christmas christmas christmas chuck chuck cirocco civilization civilizations clarke claus cleaver of heads cleaver of heads a clorets clorets close close coal coast coast coast cockpit cockpit cockpit cockpit coconino coffee coffee cold colt come come come come company compared compared computer computer computer computer computer computer computer computer computer computer computer computers computers congeniality congress congress congress critters congress congruency connecticut connecticut constables 
when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when when where where whereas where s where s where s where s where s whether whether which which which which which which which while while while while while while while while white whitmore who who who who who who who who who who who who who who whoever whose who s why why why why why why why why why why why why wilbur wilhelm will will will will william william william william william william william willis willis willis window window window window window window window window window window window window window window with with with with with with with with with with without without without witnesses wizard wolfe woman word words words words words words words working world world world world world world world world world world world world worse would would would would wright x xerox xerox york yeager year years years year s yesterday yet yet yet yet yet yet yet yokohama yokohama yokohama yokohama york york york york york york york york york york york york york you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you your your your your your your your your you d you d you d you d you d you ll you ll you ll you re you re you re you re you re you re you re you re you re you re you re you re you re you ve you ve you ve you ve you ve you ve zombies zombies z s vital a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a 
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a 
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a d a m a m a m aback abacus abandoned 
abashed ability ability ability able able able able able able able able able able able able able able able able able able aboard aboard aboard aboard aboard aboard aboard aboard aborted abound about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about 
about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about about above above above above above abreast abruptly absence absence absence absolute absolute absolute absolutely absolutely absolutely absolutely absolve absorbed abundance abuse acceleration accelerator accelerators accelerators accent accent accent something accept accept accept accept accept accept acceptable accepted accepted accepted accepting access access access access access accessed accessible accident accident accident accident accident accident accident accident accident accidentally accidents accommodate accomplishments according according according account account accounts accounts accounts accumulates accurate accurate accurate accurate accurately accusing ache achieved achieved achieved aching achingly acknowledge acknowledge acknowledge acknowledge acknowledgement acquaintances acquainted acquired acre acres across across across across across across across across across across across across across across act act act act act acted action action action action action action actions actions actions actions actions actions activity activity activity actress actual actual actual actual actuality actuality actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually actually add add add add add added added added added addict adding addition addition addition addition address address addressing add adjective adjust adjusted adjusted adjustment adjusts admired admission admit admit admit admit admit admit admit admit admit admit admitted 
admitted admitted admitted admitting admitting adolescent s adrenalin advanced advanced advancement advantage advantage advantage advantage advantage advantages advertising advice advice advise advise advised advise advising aerial aeronautical affairs affairs affect affect affect affect afflict afflicted afford affront afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid afraid after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after after afternoon afternoon afternoon afternoon afternoon afternoon afternoon afternoon afterward afterward again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again again against against against against against against against against against against against again again it again more age age age age age age age age age age age agencies agency agency agency agency agenda agent agent agent s aggressive agitated ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago ago agony agoraphobia ago or agree agree agree agree agree agree agreed agreed agreed agreed agreed agreed agreement ah ahead ahead ahead ahead ahead ahead ahead ahead ahead ahead ahead ahead aid aid aileron ailerons aim aimed aimed 
aimed aimed aimed aiming ain t ain t ain t ain t ain t ain t air air air air air air air air air air air air air air air air air air air air air air air air air air air raid air traffic air air air air air air aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft aircraft airfields airframe airframe airframe airframe airframes airline airline airline airline airline airline airline airline airline airline airline airline airline airliner airliners airplane airplane airplane airplane airplane airplanes airplanes airplanes airplanes airplanes airplane at airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airport airports airports airports airspeed airworthy aisle aisle aisle aisle aisle aisle aisle aisle aisle aisle aisles aisles alarm alarm alarm alarm alarm alarm alarm alarm alarm alarm alarm alarm alarm alarm alarmed alarmed alarming alarms alcoholism alert alert alert alert alerted alien alien alive alive alive alive alive alive alive alive alive alive alive alive alive alive all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all 
all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all all important all time all all all all all all all all all all all all all all all all all all all all all all allergic alligator alliteration allow allowed allowed allowed allowed allows allusions allusions all how almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost almost alone alone alone alone alone alone alone alone alone alone alone alone alone along along along along along along along along along along along along along along along aloud already already already already already already already already already already already already already already already ...'
'Word Count': '85004
',
}
```
### Data Fields
- **Unnamed: int** A unique id for the text
- **Title: str** The title of the book from which the text has been extracted
- **Author: str** The author of the book from which the text has been extracted
- **Pub Year: str** The year in which the book was published (first printing)
- **Text: str** The text extracted from the book
- **Clean Text: str** The text extracted from the book with lowercasing performed and punctuation, numbers and extra spaces removed
- **Word Count: int** The number of words the text contains
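The Clean Text transformation described above can be sketched as follows (a minimal illustration, not necessarily the exact pipeline used to build the corpus; the function names are ours):

```python
import re

def clean_text(text: str) -> str:
    """Lowercase, strip punctuation and numbers, collapse extra spaces."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)      # drop punctuation and digits
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

def word_count(text: str) -> int:
    """Word Count as documented: whitespace-separated tokens in the clean text."""
    return len(clean_text(text).split())

print(clean_text("It was the year 2001 -- Tom's ship, the 'Titanic', sailed."))
# it was the year tom s ship the titanic sailed
```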
To Be Added:
- **summary: str** A brief summary of the book, if extracted from library records
- **pub_date: int** The date on which the book was published (first printing)
- **pub_city: str** The city in which the book was published (first printing)
- **lcgft_category: str** Information from the Library of Congress Genre/Form Terms for Library and Archival Materials, if known
### Loading the Dataset
Use the following code to load the dataset in a Python environment (note: this will not work while the repo is set to private):
```
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("SF-Corpus/EF_Full_Texts")
```
Or just clone the dataset repo
```
git lfs install
git clone https://huggingface.co/datasets/SF-Corpus/EF_Full_Texts
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
```
## Dataset Creation
### Curation Rationale
For an overview of our approach to data curation of literary texts, see Alex Wermer-Colan’s and James Kopaczewski’s article, “The New Wave of Digital Collections: Speculating on the Future of Library Curation” (2022).
### Source Data
The Loretta C. Duckworth Scholars Studio has partnered with Temple University Libraries’ Special Collections Research Center (SCRC) and Digital Library Initiatives (DLI) to build a digitized corpus of copyrighted science fiction literature. Besides its voluminous Urban Archives, the SCRC also houses a significant collection of science-fiction literature. The Paskow Science Fiction Collection was originally established in 1972, when Temple acquired 5,000 science fiction paperbacks from a Temple alumnus, the late David C. Paskow. Subsequent donations, including troves of fanzines and the papers of such sci-fi writers as John Varley and Stanley G. Weinbaum, expanded the collection over the last few decades, both in size and in the range of genres. SCRC staff and undergraduate student workers recently performed the usual comparison of gift titles against cataloged books, removing science fiction items that were exact duplicates of existing holdings. A refocusing of the SCRC’s collection development policy for science fiction de-emphasized fantasy and horror titles, so some titles in those genres were removed as well.
## Considerations for Using the Data
This data card only exhibits extracted features for copyrighted fiction; no copyrighted work is being made available for consumption. These digitized files are made accessible for purposes of education and research. Temple University Libraries have given attribution to rights holders when possible. If you hold the rights to materials in our digitized collections that are unattributed, please let us know so that we may maintain accurate information about these materials.
If you are a rights holder and are concerned that you have found material on this website for which you have not granted permission (or is not covered by a copyright exception under US copyright laws), you may request the removal of the material from our site by writing to [email protected].
For more information on non-consumptive research, check out HathiTrust Research Center’s Non-Consumptive Use Research Policy.
## Additional Information
### Dataset Curators
For a full list of contributors to the SF Nexus project, visit [https://sfnexus.io/people/](https://sfnexus.io/people/).
|
SF-Corpus/EF_Full_Texts
|
[
"language:en",
"region:us"
] |
2023-05-23T18:08:00+00:00
|
{"language": ["en"], "pretty_name": "sf-nexus-ef-chapters-and-chunks"}
|
2023-05-24T13:38:41+00:00
|
e7b3d35707a131ae4942562b2d9d6ccaef6e5e3e
|
# VoxCeleb 1
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
## Identification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 1251 | 1251 | 1251 |
| # of samples | 138361 | 6904 | 8251 |
## References
- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html
|
yangwang825/vox1-iden-full
|
[
"task_categories:audio-classification",
"audio",
"VoxCeleb",
"identification",
"region:us"
] |
2023-05-23T18:10:34+00:00
|
{"task_categories": ["audio-classification"], "tags": ["audio", "VoxCeleb", "identification"]}
|
2023-05-23T21:24:09+00:00
|
3197e264c66e1ce40219f24135f74dea26c41e99
|
# Dataset Card for Dataset Name
## Dataset Description
- **Author:** Rubén Darío Jaramillo
- **Email:** [email protected]
- **WhatsApp:** +593 93 979 6676
### Dataset Summary
This dataset has been generated using [Prompt Generator for OpenAI's DALL-E](https://huggingface.co/spaces/rubend18/Prompt-Generator-for-OpenAI-DALL-E).
### Languages
English
## Dataset Structure
1,000,000 prompts
|
rubend18/DALL-E-Prompts-OpenAI-ChatGPT
|
[
"task_categories:text-generation",
"task_categories:feature-extraction",
"task_categories:zero-shot-classification",
"size_categories:1B<n<10B",
"language:en",
"DALL-E",
"Prompt",
"Dataset",
"Compilation",
"OpenAI",
"image",
"images",
"region:us"
] |
2023-05-23T18:17:22+00:00
|
{"language": ["en"], "size_categories": ["1B<n<10B"], "task_categories": ["text-generation", "feature-extraction", "zero-shot-classification"], "pretty_name": "DALL-E Prompt Dataset Compilation", "tags": ["DALL-E", "Prompt", "Dataset", "Compilation", "OpenAI", "image", "images"]}
|
2023-05-26T15:00:22+00:00
|
f4a4232a169deabc6c73a7dd34be525eab777a6e
|
# VoxCeleb 1
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
## Verification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 1211 | 1211 | 40 |
| # of samples | 299246 | 33672 | 4874 |
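A trial pair from the verification test split above is typically scored by comparing two utterance embeddings, e.g. with cosine similarity. A minimal sketch (the toy embeddings and the 0.5 threshold are illustrative assumptions, not part of VoxCeleb itself):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_speaker(emb1, emb2, threshold=0.5):
    """Accept a trial pair if similarity exceeds an (illustrative) threshold."""
    return cosine_similarity(emb1, emb2) >= threshold

# Toy vectors standing in for speaker-encoder outputs.
enroll = [0.9, 0.1, 0.2]
test_utt = [0.8, 0.2, 0.1]
print(same_speaker(enroll, test_utt))  # True for these toy vectors
```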
## References
- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html
|
yangwang825/vox1-veri-3s
|
[
"task_categories:audio-classification",
"audio",
"VoxCeleb",
"verification",
"region:us"
] |
2023-05-23T18:19:58+00:00
|
{"task_categories": ["audio-classification"], "tags": ["audio", "VoxCeleb", "verification"]}
|
2023-05-23T19:33:41+00:00
|
fe115013a80e2dd8c1332c7ae7ac85c1b0f08647
|
zahraa/AgriNet
|
[
"license:cc-by-nc-nd-3.0",
"region:us"
] |
2023-05-23T18:23:03+00:00
|
{"license": "cc-by-nc-nd-3.0"}
|
2023-05-23T18:23:03+00:00
|
|
08de0a5cf92669e21d807c8dc458cefb43779f68
|
# VoxCeleb 1
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
## Identification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 1251 | 1251 | 1251 |
| # of samples | 306208 | 14479 | 4874 |
## References
- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html
|
yangwang825/vox1-iden-3s
|
[
"task_categories:audio-classification",
"audio",
"VoxCeleb",
"identification",
"region:us"
] |
2023-05-23T18:29:25+00:00
|
{"task_categories": ["audio-classification"], "tags": ["audio", "VoxCeleb", "identification"]}
|
2023-05-23T19:33:11+00:00
|
3e9afaec3756ed11f8e227184e2ce12f54520561
|
# Dataset Card for "4ab74c14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/4ab74c14
|
[
"region:us"
] |
2023-05-23T18:30:51+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1336, "dataset_size": 188}}
|
2023-05-23T18:30:52+00:00
|
ba53ab622cc23e9f3470ad8e0ae1b429ab7ee5f6
|
d3f4ult/angmgz
|
[
"region:us"
] |
2023-05-23T18:48:24+00:00
|
{}
|
2023-05-23T18:51:55+00:00
|
|
60bd9f12daebe9877cbb1100ee8e93702daa9457
|
# AutoTrain Dataset for project: hhhh
## Dataset Description
This dataset has been automatically processed by AutoTrain for project hhhh.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x256 RGBA PIL image>",
"target": 0
},
{
"image": "<256x256 RGBA PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['lion', 'tiger'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 360 |
| valid | 40 |
|
anasalashqar/autotrain-data-hhhh
|
[
"task_categories:image-classification",
"region:us"
] |
2023-05-23T19:06:46+00:00
|
{"task_categories": ["image-classification"]}
|
2023-05-23T19:13:54+00:00
|
ba0f3a2a0f318dbc902fd2a556f36ad28a03f577
|
b2ktortechnik/productdata
|
[
"license:unknown",
"region:us"
] |
2023-05-23T19:07:11+00:00
|
{"license": "unknown"}
|
2023-05-23T19:07:11+00:00
|
|
8275dc0dd071a20173ff8d475f33cb33ceda08c3
|
Akarsh/Website_Data
|
[
"license:bsd-3-clause",
"region:us"
] |
2023-05-23T19:10:41+00:00
|
{"license": "bsd-3-clause"}
|
2023-05-23T19:11:11+00:00
|
|
dbc64e89e822b662c6b2f905272e2d1795db2e27
|
Akarsh/autotrain-data-Test
|
[
"license:bsd-3-clause",
"region:us"
] |
2023-05-23T19:12:46+00:00
|
{"license": "bsd-3-clause"}
|
2023-05-23T19:13:02+00:00
|
|
e7bb8140b3b9454f418340924d1aebf60f659c7d
|
# Dataset Card for "0a67a744"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0a67a744
|
[
"region:us"
] |
2023-05-23T19:15:32+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1341, "dataset_size": 186}}
|
2023-05-23T19:15:33+00:00
|
7c0103fb59488299a8c3c500e9573fc8820502ec
|
# ES2Bash
This dataset contains a collection of natural language requests (in Spanish) and their corresponding bash commands. The purpose of this dataset is to provide examples of requests and their associated bash commands to facilitate machine learning and the development of natural language processing systems related to command-line operations.
# Features
The dataset consists of two main features:
* Natural Language Request (ES): This feature contains natural language requests written in Spanish. The requests represent tasks or actions to be performed using command-line commands.
* Bash Command: This feature contains the bash commands associated with each natural language request. The bash commands represent the way to execute the requested task or action using the command line.
# Initial Commands
The dataset initially contains requests related to the following commands:
* cat: Requests involving reading text files.
* ls: Requests related to obtaining information about files and directories at a specific location.
* cd: Requests to change the current directory.
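To illustrate the two features, hypothetical request/command pairs for each of the initial commands might look like the following (these exact examples are ours, not necessarily rows from the dataset):

```python
# Illustrative (hypothetical) request/command pairs mirroring the dataset schema.
examples = [
    {"request": "muestra el contenido del archivo notas.txt",  # "show the contents of notas.txt"
     "command": "cat notas.txt"},
    {"request": "lista los archivos del directorio actual",    # "list the files in the current directory"
     "command": "ls"},
    {"request": "cambia al directorio proyectos",              # "change to the proyectos directory"
     "command": "cd proyectos"},
]

for ex in examples:
    print(f"{ex['request']!r} -> {ex['command']}")
```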
# Dataset Expansion
In addition to the initial commands mentioned above, there are plans to expand this dataset to include more common command-line commands. The expansion will cover a broader range of tasks and actions that can be performed using command-line operations.
Efforts will also be made to improve the existing examples and ensure that they are clear, accurate, and representative of typical requests that users may have when working with command lines.
# Request Statistics
In the future, statistical data will be provided on the requests present in this dataset. This data may include information about the distribution of requests in different categories, the frequency of use of different commands, and any other relevant analysis to better understand the usage and needs of command-line users.
# Request Collection Process
This dataset is the result of a combination of requests generated by language models and manually added requests. The requests generated by language models were based on existing examples and prior knowledge related to the usage of command lines. A manual review was then conducted to ensure the quality and relevance of the requests.
|
dev2bit/es2bash
|
[
"task_categories:text-generation",
"language:es",
"license:apache-2.0",
"code",
"region:us"
] |
2023-05-23T19:25:37+00:00
|
{"language": ["es"], "license": "apache-2.0", "task_categories": ["text-generation"], "tags": ["code"]}
|
2023-05-23T20:11:43+00:00
|
362cd37f6002d77ad1486367d91ddb1472121f0d
|
Trickshotblaster/my_awesome_dataset
|
[
"license:mit",
"region:us"
] |
2023-05-23T19:40:34+00:00
|
{"license": "mit"}
|
2023-05-23T19:40:34+00:00
|
|
17e4f02712f73b3d2ed7e5c856f087358125a461
|
# Dataset Card for "dataset_for_p7_train_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
laurenmit/dataset_for_p7_train_test
|
[
"region:us"
] |
2023-05-23T19:46:51+00:00
|
{"dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 542185.3459637562, "num_examples": 910}, {"name": "test", "num_bytes": 181125.65403624382, "num_examples": 304}], "download_size": 437165, "dataset_size": 723311.0}}
|
2023-05-23T19:46:59+00:00
|
69032448dc5f08edf3ffee52e47e073b7c44df1d
|
Pixartist/Data-collection
|
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
2023-05-23T19:47:50+00:00
|
{"license": "bigscience-bloom-rail-1.0"}
|
2023-07-13T18:49:06+00:00
|
|
40723000162b278c2dfca9724f2c789cf57fe13b
|
# Dataset Card for "Caltech101_with_background_test_google_flan_t5_xl_mode_A_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/Caltech101_with_background_test_google_flan_t5_xl_mode_A_ns_6084
|
[
"region:us"
] |
2023-05-23T19:58:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 2289693, "num_examples": 6084}], "download_size": 404193, "dataset_size": 2289693}}
|
2023-05-23T19:58:14+00:00
|
1a794812be95edeb9d187fc82d96b70ef92d2bb6
|
# Dataset Card for "ts-eval"
This is a dataset of 744 high-quality TypeScript files, meant for evaluation of type prediction systems. The dataset is derived from the TypeScript portion of [The Stack (dedup)](https://huggingface.co/datasets/bigcode/the-stack-dedup), version 1.1, and then filtered.
## Filtering steps
We remove files that:
1. Depend on external modules and do not type check.
2. Have 0 type annotation locations.
3. Have 50 or fewer lines of code.
4. Have 0 functions.
5. Average fewer than 5 lines of code per function.
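A minimal sketch of the size and shape filters (steps 2 to 5; the type-check and external-module check in step 1 are omitted) might look like the following. Field names follow the data fields documented below, but counting annotation locations as the sum of parameter, variable, and property declarations is an assumption of this sketch:

```python
# Sketch of filtering steps 2-5; step 1 (type checking against external
# modules) is omitted. Treating annotation locations as the sum of the three
# declaration counts is an assumption, not the pipeline's exact definition.
def passes_filters(f):
    annotation_locations = (
        f["function_parameters"]
        + f["variable_declarations"]
        + f["property_declarations"]
    )
    return (
        annotation_locations > 0          # step 2: some type annotation locations
        and f["loc"] > 50                 # step 3: more than 50 lines of code
        and f["functions"] > 0            # step 4: at least one function
        and f["loc"] / f["functions"] >= 5  # step 5: >= 5 LoC per function on average
    )

example = {
    "function_parameters": 4,
    "variable_declarations": 3,
    "property_declarations": 1,
    "loc": 120,
    "functions": 6,
}
```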
Next, we compute a weighted quality score, based on the following factors:
* function annotation density
* variable annotation density
* type definition density
* trivial types density
* predefined types density
* lines of code per function
* number of function usages
Files with a score 1 or more standard deviations below the mean are removed.
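The cutoff step can be sketched as a one-standard-deviation filter over the computed scores. The scores below are made up for illustration; the real weighting of the factors above is not reproduced here:

```python
import statistics

# Hypothetical quality scores for a batch of files; in the real pipeline each
# score is a weighted combination of the density metrics listed above.
scores = [0.9, 1.1, 1.0, 0.2, 1.05, 0.95]

mean = statistics.mean(scores)
stdev = statistics.stdev(scores)
cutoff = mean - stdev

# Keep files whose score is within one standard deviation below the mean.
kept = [s for s in scores if s >= cutoff]
```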
Next, we apply the training cutoff (December 31, 2021).
Finally, we remove type annotations. Some files become invalid during this process, so we exclude them from the dataset.
## Data fields
Most of the data fields come from [The Stack](https://huggingface.co/datasets/bigcode/the-stack/blob/main/README.md#data-fields).
We add the following fields:
* `loc` (integer): lines of code, excluding comments and blanks
* `functions` (integer): number of functions that have function bodies; includes function expressions, arrow functions, and methods
* `function_signatures` (integer): number of function signatures, i.e., function declarations with no bodies
* `function_parameters` (integer): number of function parameters
* `variable_declarations` (integer): number of variable declarations
* `property_declarations` (integer): number of property declarations, i.e., within classes and interfaces
* `function_usages` (integer): number of functions (defined within the file) that are called
* `trivial_types` (integer): number of trivial type annotations, e.g. `any`, `Function`
* `predefined_types` (integer): number of predefined type annotations, e.g. `boolean`, `number`, `string`
* `type_definitions` (integer): number of type definitions, e.g. classes, interfaces, type aliases
* `dynamism_heuristic` (integer): a count of dynamic language features, e.g. `eval`, run-time type tests
* `loc_per_function` (float): average lines of code per function
* `estimated_tokens` (integer): estimated token count, using the SantaCoder tokenizer
* `fun_ann_density` (float): a measure of function annotation density
* `var_ann_density` (float): a measure of variable annotation density
* `prop_ann_density` (float): a measure of property annotation density
* `typedef_density` (float): a measure of type definition density
* `dynamism_density` (float): a measure of dynamic feature usage density
* `trivial_density` (float): a measure of trivial type annotation density
* `predefined_density` (float): a measure of predefined type annotation density
* `metric` (float): the quality score
* `content_without_annotations` (string): content of the file, with type annotations removed
## Versions
The default version (`main`) is `v1.1`.
|Version|Description|
|-|-|
|`v1.1` | Original version of the evaluation dataset, based on v1.1 of the Stack. Applies the training cutoff (December 31, 2021). |
|`v1.1full` | Evaluation dataset, based on v1.1 of the Stack. Does not apply the training cutoff. |
|`v1.1subset` | A subset of v1.1 containing only 50 files. |
|
nuprl/ts-eval
|
[
"region:us"
] |
2023-05-23T20:04:30+00:00
|
{"dataset_info": {"features": [{"name": "hexsha", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "ext", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_repo_head_hexsha", "dtype": "string"}, {"name": "max_stars_repo_licenses", "sequence": "string"}, {"name": "max_stars_count", "dtype": "float64"}, {"name": "max_stars_repo_stars_event_min_datetime", "dtype": "string"}, {"name": "max_stars_repo_stars_event_max_datetime", "dtype": "string"}, {"name": "max_issues_repo_path", "dtype": "string"}, {"name": "max_issues_repo_name", "dtype": "string"}, {"name": "max_issues_repo_head_hexsha", "dtype": "string"}, {"name": "max_issues_repo_licenses", "sequence": "string"}, {"name": "max_issues_count", "dtype": "float64"}, {"name": "max_issues_repo_issues_event_min_datetime", "dtype": "string"}, {"name": "max_issues_repo_issues_event_max_datetime", "dtype": "string"}, {"name": "max_forks_repo_path", "dtype": "string"}, {"name": "max_forks_repo_name", "dtype": "string"}, {"name": "max_forks_repo_head_hexsha", "dtype": "string"}, {"name": "max_forks_repo_licenses", "sequence": "string"}, {"name": "max_forks_count", "dtype": "float64"}, {"name": "max_forks_repo_forks_event_min_datetime", "dtype": "string"}, {"name": "max_forks_repo_forks_event_max_datetime", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "avg_line_length", "dtype": "float64"}, {"name": "max_line_length", "dtype": "int64"}, {"name": "alphanum_fraction", "dtype": "float64"}, {"name": "loc", "dtype": "int64"}, {"name": "functions", "dtype": "int64"}, {"name": "function_signatures", "dtype": "int64"}, {"name": "function_parameters", "dtype": "int64"}, {"name": "variable_declarations", "dtype": "int64"}, {"name": "property_declarations", "dtype": "int64"}, {"name": "function_usages", "dtype": "int64"}, {"name": "trivial_types", "dtype": "int64"}, 
{"name": "predefined_types", "dtype": "int64"}, {"name": "type_definitions", "dtype": "int64"}, {"name": "dynamism_heuristic", "dtype": "int64"}, {"name": "loc_per_function", "dtype": "float64"}, {"name": "estimated_tokens", "dtype": "int64"}, {"name": "fun_ann_density", "dtype": "float64"}, {"name": "var_ann_density", "dtype": "float64"}, {"name": "prop_ann_density", "dtype": "float64"}, {"name": "typedef_density", "dtype": "float64"}, {"name": "dynamism_density", "dtype": "float64"}, {"name": "trivial_density", "dtype": "float64"}, {"name": "predefined_density", "dtype": "float64"}, {"name": "metric", "dtype": "float64"}, {"name": "content_without_annotations", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 5737779, "num_examples": 744}], "download_size": 2384891, "dataset_size": 5737779}, "extra_gated_prompt": "## Terms of Use for The Stack\nThe Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:\n1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.\n2. The Stack is regularly updated to enact validated data removal requests. By clicking on \"Access repository\", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset\u2019s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.\n3. 
To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.\n\nBy clicking on \"Access repository\" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.", "extra_gated_fields": {"Email": "text", "I have read the License and agree with its terms": "checkbox"}}
|
2023-05-23T23:01:28+00:00
|
4efabbf6edec4828312339e77f1cd412f935ed1e
|
# Dataset Card for "OxfordFlowers_test_google_flan_t5_xl_mode_A_ns_6149"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/OxfordFlowers_test_google_flan_t5_xl_mode_A_ns_6149
|
[
"region:us"
] |
2023-05-23T20:06:40+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 2470439, "num_examples": 6149}], "download_size": 269782, "dataset_size": 2470439}}
|
2023-05-23T20:06:43+00:00
|
80c4d6c455e3aa6e28118c4fdf0eed6b418fdb22
|
# Dataset Card for "StanfordCars_test_google_flan_t5_xl_mode_A_ns_8041"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/StanfordCars_test_google_flan_t5_xl_mode_A_ns_8041
|
[
"region:us"
] |
2023-05-23T20:42:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 3873420, "num_examples": 8041}], "download_size": 859959, "dataset_size": 3873420}}
|
2023-05-23T20:42:14+00:00
|
8a61b9f984ba6c5cf24940271e0b2cdfb9d0d4dd
|
---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
|
Leventk/yasa_son
|
[
"license:other",
"region:us"
] |
2023-05-23T21:02:33+00:00
|
{"license": "other"}
|
2023-05-23T21:36:17+00:00
|
c53094c7929b8af3e8eca6913b921db31e5c728c
|
# Dataset Card for "samolet_frames"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ummagumm-a/samolet_frames
|
[
"region:us"
] |
2023-05-23T21:03:23+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "2023-02-14 15-03-46", "3": "2023-04-19 09-56-23", "4": "2023-05-10 14-44-10", "5": "3_Trim", "6": "4", "7": "5", "8": "6", "9": "7", "10": "IMG_9920", "11": "IMG_9921", "12": "IMG_9922", "13": "IMG_9923", "14": "IMG_9924"}}}}], "splits": [{"name": "train", "num_bytes": 334183336.777, "num_examples": 6541}, {"name": "test", "num_bytes": 68858108.0, "num_examples": 915}], "download_size": 456039861, "dataset_size": 403041444.777}}
|
2023-05-23T21:06:48+00:00
|
f624b4c03c9556ad8fdf5634a241a6be750028f2
|
# Dataset Card for "Food101_test_google_flan_t5_xl_mode_A_ns_25250"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/Food101_test_google_flan_t5_xl_mode_A_ns_25250
|
[
"region:us"
] |
2023-05-23T21:16:32+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 10610673, "num_examples": 25250}], "download_size": 1146498, "dataset_size": 10610673}}
|
2023-06-03T16:07:34+00:00
|
5bb459780ba9b1d0598eec61b5d6771914e47bfe
|
# Dataset Card for "FGVC_Aircraft_test_google_flan_t5_xl_mode_A_ns_3333"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/FGVC_Aircraft_test_google_flan_t5_xl_mode_A_ns_3333
|
[
"region:us"
] |
2023-05-23T21:21:34+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 1362914, "num_examples": 3333}], "download_size": 201454, "dataset_size": 1362914}}
|
2023-05-23T21:21:36+00:00
|
3dac15bd23bb26ca6cad29be4db32af7b004be74
|
# Dataset Card for "DTD_parition1_test_google_flan_t5_xl_mode_A_ns_1880"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/DTD_parition1_test_google_flan_t5_xl_mode_A_ns_1880
|
[
"region:us"
] |
2023-05-23T21:23:47+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 773321, "num_examples": 1880}], "download_size": 174097, "dataset_size": 773321}}
|
2023-05-23T21:23:49+00:00
|
165f57858558277d17c025b746ece5adace179bb
|
# Dataset Card for "OxfordPets_test_google_flan_t5_xl_mode_A_ns_3669"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/OxfordPets_test_google_flan_t5_xl_mode_A_ns_3669
|
[
"region:us"
] |
2023-05-23T21:26:40+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 1370580, "num_examples": 3669}], "download_size": 180974, "dataset_size": 1370580}}
|
2023-05-23T21:26:41+00:00
|
29c445b457b865e95e6170fb30d2e4c0606a91f1
|
akumoth/ygo_card_img
|
[
"license:mit",
"region:us"
] |
2023-05-23T21:30:40+00:00
|
{"license": "mit"}
|
2023-05-26T20:27:57+00:00
|
|
4adc9a4758a4571e82eab5c04e050ce81e451aad
|
# Dataset Card for "TinyImagenet_2k_validation_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_2000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/TinyImagenet_2k_validation_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_2000
|
[
"region:us"
] |
2023-05-23T21:32:51+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 839095, "num_examples": 2000}], "download_size": 216830, "dataset_size": 839095}}
|
2023-05-23T21:32:52+00:00
|