sha (string, 40) | text (string, 1-13.4M) | id (string, 2-117) | tags (list, 1-7.91k) | created_at (string, 25) | metadata (string, 2-875k) | last_modified (string, 25) | arxiv (list, 0-25) | languages (list, 0-7.91k) | tags_str (string, 17-159k) | text_str (string, 1-447k) | text_lists (list, 0-352) | processed_texts (list, 1-353) | tokens_length (list, 1-353) | input_texts (list, 1-40)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1262a6f31e118e7f7ed498c09d51faf4d022fd0b
|
# Dataset Card for "en-vi-opus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aiface/en-vi-opus
|
[
"region:us"
] |
2023-10-22T05:23:01+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 666533, "num_examples": 2000}, {"name": "train", "num_bytes": 293544945, "num_examples": 1000000}, {"name": "validation", "num_bytes": 675786, "num_examples": 2000}], "download_size": 54167292, "dataset_size": 294887264}}
|
2023-10-22T05:23:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "en-vi-opus"
More Information needed
|
[
"# Dataset Card for \"en-vi-opus\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"en-vi-opus\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"en-vi-opus\"\n\nMore Information needed"
] |
be4078975f6b0fa936c2856056d1af4cbe62dd7d
|
# Dataset Card for "must-c-en-es-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
maxolotl/must-c-en-es-01
|
[
"region:us"
] |
2023-10-22T05:27:36+00:00
|
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "es", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59876087, "num_examples": 259892}, {"name": "test", "num_bytes": 658233, "num_examples": 3035}, {"name": "validation", "num_bytes": 310169, "num_examples": 1309}], "download_size": 37505201, "dataset_size": 60844489}}
|
2023-10-22T05:27:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "must-c-en-es-01"
More Information needed
|
[
"# Dataset Card for \"must-c-en-es-01\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"must-c-en-es-01\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-es-01\"\n\nMore Information needed"
] |
e14dd83269f2f60135f352df8d642352aaf758e2
|
# Dataset Card for "must-c-en-es-wait3-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
maxolotl/must-c-en-es-wait3-01
|
[
"region:us"
] |
2023-10-22T05:40:15+00:00
|
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 995393073, "num_examples": 5241096}, {"name": "test", "num_bytes": 9963278, "num_examples": 57200}, {"name": "validation", "num_bytes": 5434544, "num_examples": 27561}], "download_size": 184391223, "dataset_size": 1010790895}}
|
2023-10-22T05:40:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "must-c-en-es-wait3-01"
More Information needed
|
[
"# Dataset Card for \"must-c-en-es-wait3-01\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"must-c-en-es-wait3-01\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-es-wait3-01\"\n\nMore Information needed"
] |
b7574d1e4b9888738bb3bad2d4df23568c6030fe
|
# Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hosnasn/dreambooth-hackathon-images
|
[
"region:us"
] |
2023-10-22T05:52:44+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1078145.0, "num_examples": 24}], "download_size": 839878, "dataset_size": 1078145.0}}
|
2023-10-22T05:52:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dreambooth-hackathon-images"
More Information needed
|
[
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] |
b71e0f6705d4285c49f8860d59f3a4445fb9150e
|
# Dataset Card for Evaluation run of TFLai/bloomz-1b7-4bit-alpaca
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TFLai/bloomz-1b7-4bit-alpaca
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TFLai/bloomz-1b7-4bit-alpaca](https://huggingface.co/TFLai/bloomz-1b7-4bit-alpaca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TFLai__bloomz-1b7-4bit-alpaca",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T07:20:32.245905](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__bloomz-1b7-4bit-alpaca/blob/main/results_2023-10-22T07-20-32.245905.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.15845218120805368,
"em_stderr": 0.0037396259228482218,
"f1": 0.19149119127516798,
"f1_stderr": 0.0038329285857522646,
"acc": 0.2691397000789266,
"acc_stderr": 0.007005621297482063
},
"harness|drop|3": {
"em": 0.15845218120805368,
"em_stderr": 0.0037396259228482218,
"f1": 0.19149119127516798,
"f1_stderr": 0.0038329285857522646
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5382794001578532,
"acc_stderr": 0.014011242594964127
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TFLai__bloomz-1b7-4bit-alpaca
|
[
"region:us"
] |
2023-10-22T06:20:35+00:00
|
{"pretty_name": "Evaluation run of TFLai/bloomz-1b7-4bit-alpaca", "dataset_summary": "Dataset automatically created during the evaluation run of model [TFLai/bloomz-1b7-4bit-alpaca](https://huggingface.co/TFLai/bloomz-1b7-4bit-alpaca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TFLai__bloomz-1b7-4bit-alpaca\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T07:20:32.245905](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__bloomz-1b7-4bit-alpaca/blob/main/results_2023-10-22T07-20-32.245905.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.15845218120805368,\n \"em_stderr\": 0.0037396259228482218,\n \"f1\": 0.19149119127516798,\n \"f1_stderr\": 0.0038329285857522646,\n \"acc\": 0.2691397000789266,\n \"acc_stderr\": 0.007005621297482063\n },\n \"harness|drop|3\": {\n \"em\": 0.15845218120805368,\n \"em_stderr\": 0.0037396259228482218,\n \"f1\": 0.19149119127516798,\n \"f1_stderr\": 0.0038329285857522646\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5382794001578532,\n \"acc_stderr\": 0.014011242594964127\n }\n}\n```", "repo_url": "https://huggingface.co/TFLai/bloomz-1b7-4bit-alpaca", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T07_20_32.245905", "path": ["**/details_harness|drop|3_2023-10-22T07-20-32.245905.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T07-20-32.245905.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T07_20_32.245905", "path": ["**/details_harness|gsm8k|5_2023-10-22T07-20-32.245905.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T07-20-32.245905.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T07_20_32.245905", "path": ["**/details_harness|winogrande|5_2023-10-22T07-20-32.245905.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T07-20-32.245905.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T07_20_32.245905", "path": ["results_2023-10-22T07-20-32.245905.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T07-20-32.245905.parquet"]}]}]}
|
2023-10-22T06:20:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TFLai/bloomz-1b7-4bit-alpaca
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TFLai/bloomz-1b7-4bit-alpaca on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-22T07:20:32.245905 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TFLai/bloomz-1b7-4bit-alpaca",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TFLai/bloomz-1b7-4bit-alpaca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T07:20:32.245905(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TFLai/bloomz-1b7-4bit-alpaca",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TFLai/bloomz-1b7-4bit-alpaca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T07:20:32.245905(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TFLai/bloomz-1b7-4bit-alpaca## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TFLai/bloomz-1b7-4bit-alpaca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T07:20:32.245905(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
89a34c1cc6bcbc8accd3e57bac88ccd86ae4680d
|
# Dataset Card for "imdb_prefix20_forDPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english"
# 1. Purpose of creating the dataset
For reproducing the experiments of the DPO (direct preference optimization) paper
(https://arxiv.org/abs/2305.18290)
# 2. How data is produced
To reproduce the paper's experimental results, we need (x, chosen, rejected) data.
However, the imdb data only contains good or bad reviews, so the data must be restructured.
## 2.1 prepare imdb data
First, download the imdb data and truncate each review to its first 20 tokens using the gpt2-large tokenizer.
(https://huggingface.co/datasets/imdb)
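
As a rough illustration, the truncation step might look like the sketch below, which assumes the standard `datasets` and `transformers` APIs (this is not the exact script used to build the dataset):
```python
# Illustrative sketch only: load imdb and keep the first 20 gpt2-large tokens of each review.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
imdb = load_dataset("imdb")

def truncate_to_prefix(example, max_tokens=20):
    ids = tokenizer(example["text"], truncation=True, max_length=max_tokens)["input_ids"]
    example["text"] = tokenizer.decode(ids)
    return example

imdb = imdb.map(truncate_to_prefix)
```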
## 2.2 generate sentence
The gpt2-large model fine-tuned on imdb generates two continuations of the input prefix (text).
(https://github.com/eric-mitchell/direct-preference-optimization/issues/28)
(https://drive.google.com/file/d/1ZPlfmfkCindqJfD8eNrl8kwtMJ2f1Nqv/view)
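
A hedged sketch of the generation step, assuming a gpt2-large checkpoint fine-tuned on imdb is available locally (the model path below is a placeholder, not the actual checkpoint linked above):
```python
# Illustrative sketch: sample two continuations per 20-token prefix.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
model = AutoModelForCausalLM.from_pretrained("path/to/gpt2-large-imdb-ft")  # placeholder path

def generate_pair(prefix, max_new_tokens=60):
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            max_new_tokens=max_new_tokens,
            num_return_sequences=2,
            pad_token_id=tokenizer.eos_token_id,
        )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```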
## 2.3 labeling method
Use the sentiment RoBERTa model to label the positive and negative continuations as (chosen, rejected).
(https://github.com/eric-mitchell/direct-preference-optimization/issues/27)
(https://huggingface.co/siebert/sentiment-roberta-large-english)
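
A minimal sketch of the labeling step with the sentiment model named above (assuming its POSITIVE/NEGATIVE output labels); the helper below is illustrative rather than the authors' actual code:
```python
# Illustrative sketch: keep the more positive continuation as "chosen".
from transformers import pipeline

sentiment = pipeline("sentiment-analysis", model="siebert/sentiment-roberta-large-english")

def positivity(text):
    result = sentiment(text, truncation=True)[0]
    return result["score"] if result["label"] == "POSITIVE" else 1.0 - result["score"]

def label_pair(prefix, continuation_a, continuation_b):
    if positivity(continuation_a) >= positivity(continuation_b):
        chosen, rejected = continuation_a, continuation_b
    else:
        chosen, rejected = continuation_b, continuation_a
    return {"text": prefix, "chosen": chosen, "rejected": rejected}
```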
|
insub/imdb_prefix20_forDPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english
|
[
"arxiv:2305.18290",
"region:us"
] |
2023-10-22T06:33:43+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23573801, "num_examples": 25000}, {"name": "test", "num_bytes": 23551578, "num_examples": 25000}], "download_size": 28260315, "dataset_size": 47125379}}
|
2023-10-22T07:02:45+00:00
|
[
"2305.18290"
] |
[] |
TAGS
#arxiv-2305.18290 #region-us
|
# Dataset Card for "imdb_prefix20_forDPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english"
# 1. Purpose of creating the dataset
For reproduction of DPO (direct preference optimization) thesis experiments
(URL
# 2. How data is produced
To reproduce the paper's experimental results, we need (x, chosen, rejected) data.
However, imdb data only contains good or bad reviews, so the data must be readjusted.
## 2.1 prepare imdb data
First, download the imdb data and then remove words after 20 tokens using the gpt2-large tokenizer.
(URL
## 2.2 generate sentence
The gpt2-large model fine-tuned by imdb generates two sentences after input (text).
(URL
(URL
## 2.3 labeling method
Use sentiment bert to label good and bad sentences as (chosen, rejected).
(URL
(URL
|
[
"# Dataset Card for \"imdb_prefix20_forDPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english\"",
"# 1. Purpose of creating the dataset\nFor reproduction of DPO (direct preference optimization) thesis experiments \n(URL",
"# 2. How data is produced\nTo reproduce the paper's experimental results, we need (x, chosen, rejected) data. \nHowever, imdb data only contains good or bad reviews, so the data must be readjusted.",
"## 2.1 prepare imdb data\nFirst, download the imdb data and then remove words after 20 tokens using the gpt2-large tokenizer. \n(URL",
"## 2.2 generate sentence\nThe gpt2-large model fine-tuned by imdb generates two sentences after input (text).\n(URL \n(URL",
"## 2.3 labeling method\nUse sentiment bert to label good and bad sentences as (chosen, rejected). \n(URL \n(URL"
] |
[
"TAGS\n#arxiv-2305.18290 #region-us \n",
"# Dataset Card for \"imdb_prefix20_forDPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english\"",
"# 1. Purpose of creating the dataset\nFor reproduction of DPO (direct preference optimization) thesis experiments \n(URL",
"# 2. How data is produced\nTo reproduce the paper's experimental results, we need (x, chosen, rejected) data. \nHowever, imdb data only contains good or bad reviews, so the data must be readjusted.",
"## 2.1 prepare imdb data\nFirst, download the imdb data and then remove words after 20 tokens using the gpt2-large tokenizer. \n(URL",
"## 2.2 generate sentence\nThe gpt2-large model fine-tuned by imdb generates two sentences after input (text).\n(URL \n(URL",
"## 2.3 labeling method\nUse sentiment bert to label good and bad sentences as (chosen, rejected). \n(URL \n(URL"
] |
[
15,
43,
26,
52,
34,
32,
29
] |
[
"passage: TAGS\n#arxiv-2305.18290 #region-us \n# Dataset Card for \"imdb_prefix20_forDPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english\"# 1. Purpose of creating the dataset\nFor reproduction of DPO (direct preference optimization) thesis experiments \n(URL# 2. How data is produced\nTo reproduce the paper's experimental results, we need (x, chosen, rejected) data. \nHowever, imdb data only contains good or bad reviews, so the data must be readjusted.## 2.1 prepare imdb data\nFirst, download the imdb data and then remove words after 20 tokens using the gpt2-large tokenizer. \n(URL## 2.2 generate sentence\nThe gpt2-large model fine-tuned by imdb generates two sentences after input (text).\n(URL \n(URL## 2.3 labeling method\nUse sentiment bert to label good and bad sentences as (chosen, rejected). \n(URL \n(URL"
] |
4685b3b115b699b52d0d489e2a1db739e7464bab
|
# Dataset Card for "must-c-en-de-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
maxolotl/must-c-en-de-01
|
[
"region:us"
] |
2023-10-22T06:42:50+00:00
|
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "de", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55588148, "num_examples": 249032}, {"name": "test", "num_bytes": 683511, "num_examples": 3159}, {"name": "validation", "num_bytes": 320578, "num_examples": 1410}], "download_size": 35050288, "dataset_size": 56592237}}
|
2023-10-22T06:42:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "must-c-en-de-01"
More Information needed
|
[
"# Dataset Card for \"must-c-en-de-01\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"must-c-en-de-01\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-de-01\"\n\nMore Information needed"
] |
db1efe94a596d6c788f62ba0d6909cd6edbf3380
|
# Dataset Card for "must-c-en-es-02"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
maxolotl/must-c-en-es-02
|
[
"region:us"
] |
2023-10-22T06:47:13+00:00
|
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "es", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59874575, "num_examples": 259892}, {"name": "test", "num_bytes": 658214, "num_examples": 3035}, {"name": "validation", "num_bytes": 310157, "num_examples": 1309}], "download_size": 37502474, "dataset_size": 60842946}}
|
2023-10-22T06:47:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "must-c-en-es-02"
More Information needed
|
[
"# Dataset Card for \"must-c-en-es-02\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"must-c-en-es-02\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-es-02\"\n\nMore Information needed"
] |
d3261dc341cf7a0cc4f89216e34c8b39c1a6e06b
|
# Dataset Card for "must-c-en-es-wait3-02"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
maxolotl/must-c-en-es-wait3-02
|
[
"region:us"
] |
2023-10-22T06:48:05+00:00
|
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 995120593, "num_examples": 5240243}, {"name": "test", "num_bytes": 9960448, "num_examples": 57187}, {"name": "validation", "num_bytes": 5429701, "num_examples": 27549}], "download_size": 184348036, "dataset_size": 1010510742}}
|
2023-10-22T06:48:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "must-c-en-es-wait3-02"
More Information needed
|
[
"# Dataset Card for \"must-c-en-es-wait3-02\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"must-c-en-es-wait3-02\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-es-wait3-02\"\n\nMore Information needed"
] |
e07f9333e9670f8f2724ac104f273f5081c00b36
|
# Dataset Card for "DONUT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DLI-Lab/DONUT
|
[
"region:us"
] |
2023-10-22T06:54:21+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "context_id", "dtype": "int64"}, {"name": "candidate_id", "dtype": "int64"}, {"name": "context", "sequence": "string"}, {"name": "target", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 319463974, "num_examples": 367337}], "download_size": 51456522, "dataset_size": 319463974}}
|
2023-10-22T07:18:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "DONUT"
More Information needed
|
[
"# Dataset Card for \"DONUT\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"DONUT\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"DONUT\"\n\nMore Information needed"
] |
8fb4b91fe7a2ee7de713bb7cf33fd6fa53f3b8d0
|
# Dataset Card for "002953b6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/002953b6
|
[
"region:us"
] |
2023-10-22T06:58:34+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1369, "dataset_size": 186}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T06:58:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "002953b6"
More Information needed
|
[
"# Dataset Card for \"002953b6\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"002953b6\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"002953b6\"\n\nMore Information needed"
] |
499b12df3848acf9236d8ae47f81091f3b67f0ec
|
# Dataset Card for "606de66e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/606de66e
|
[
"region:us"
] |
2023-10-22T06:58:37+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1369, "dataset_size": 186}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T06:58:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "606de66e"
More Information needed
|
[
"# Dataset Card for \"606de66e\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"606de66e\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"606de66e\"\n\nMore Information needed"
] |
053f0e588384078da98824714c58533a1b0fc89e
|
# Dataset Card for "cvt1_GS3_test2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fun1021183/cvt1_GS3_test2
|
[
"region:us"
] |
2023-10-22T07:00:45+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1329050970.3, "num_examples": 8100}, {"name": "test", "num_bytes": 3266176.0, "num_examples": 20}], "download_size": 1254891219, "dataset_size": 1332317146.3}}
|
2023-10-22T07:02:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cvt1_GS3_test2"
More Information needed
|
[
"# Dataset Card for \"cvt1_GS3_test2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cvt1_GS3_test2\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cvt1_GS3_test2\"\n\nMore Information needed"
] |
4ba2c63fe4bac0d2c54664ae7a537a70c63db59e
|
This <span style="color:teal;">parallel corpus</span> contains <span style="color:teal;">14,478</span> aligned <span style="color:teal;">Nande-French</span> sentence pairs in a <span style="color:teal;">90:10</span> train/test split. It has mainly been used to fine-tune the <span style="color:teal;">t5-base</span> pretrained model for the development of <a href="https://huggingface.co/SalomonMetre13/nnd_fr_mt" style="color:green;">this translation model</a>.
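
If you want to try the corpus, it can presumably be loaded with the `datasets` library in the usual way (split names are assumed to be the default train/test):
```python
from datasets import load_dataset

# Assumed default split layout; adjust if the repo uses different split names.
ds = load_dataset("SalomonMetre13/nnd_fr_14k")
print(ds)
```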
|
SalomonMetre13/nnd_fr_14k
|
[
"task_categories:translation",
"size_categories:10K<n<100K",
"language:nnd",
"license:mit",
"region:us"
] |
2023-10-22T07:12:12+00:00
|
{"language": ["nnd"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["translation"]}
|
2023-10-27T08:10:40+00:00
|
[] |
[
"nnd"
] |
TAGS
#task_categories-translation #size_categories-10K<n<100K #language-West Ambae #license-mit #region-us
|
This <span style="color:teal;">parallel corpus </span> contains <span style="color:teal;">14,478</span> aligned sentence pairs <span style="color:teal;">Nande-French</span> in a <span style="color:teal;">90:10</span> split for the train and the test sets. It has been mainly used to fine-tune the <span style="color:teal;"> t5-base </span> pretrained model for the development of <a href="URL style="color:green;">this translation model </a>
|
[] |
[
"TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-West Ambae #license-mit #region-us \n"
] |
[
39
] |
[
"passage: TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-West Ambae #license-mit #region-us \n"
] |
ba5caa5012fe63c1f1a5293f291f372f5eb2ab2e
|
# Dataset Card for "cvt1_GS3_test3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fun1021183/cvt1_GS3_test3
|
[
"region:us"
] |
2023-10-22T07:38:16+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 633806034.0, "num_examples": 3900}, {"name": "test", "num_bytes": 385612653.92, "num_examples": 2480}], "download_size": 918457935, "dataset_size": 1019418687.9200001}}
|
2023-10-22T07:39:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cvt1_GS3_test3"
More Information needed
|
[
"# Dataset Card for \"cvt1_GS3_test3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cvt1_GS3_test3\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cvt1_GS3_test3\"\n\nMore Information needed"
] |
64540c7ddce7cef337b811113804aea8a0db1939
|
# Dataset Card for "cvt1_GS3_test4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fun1021183/cvt1_GS3_test4
|
[
"region:us"
] |
2023-10-22T07:44:21+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 194977374.302, "num_examples": 1257}, {"name": "test", "num_bytes": 354176115.15, "num_examples": 2221}], "download_size": 548038875, "dataset_size": 549153489.4519999}}
|
2023-10-22T07:45:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cvt1_GS3_test4"
More Information needed
|
[
"# Dataset Card for \"cvt1_GS3_test4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cvt1_GS3_test4\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cvt1_GS3_test4\"\n\nMore Information needed"
] |
cfd373c42db0195503afd579f18b0934639e21d7
|
# Dataset Card for "covidqa_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pbaoo2705/covidqa_processed
|
[
"region:us"
] |
2023-10-22T08:01:28+00:00
|
{"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6915408, "num_examples": 1960}], "download_size": 1791787, "dataset_size": 6915408}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T08:01:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "covidqa_processed"
More Information needed
|
[
"# Dataset Card for \"covidqa_processed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"covidqa_processed\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"covidqa_processed\"\n\nMore Information needed"
] |
94dbe7465f220f45bb8138865599eead6d131f83
|
# Dataset Card for "covidqa_processed_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pbaoo2705/covidqa_processed_eval
|
[
"region:us"
] |
2023-10-22T08:01:30+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "context_chunks", "sequence": "string"}, {"name": "document_id", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "context", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2643073, "num_examples": 50}], "download_size": 730327, "dataset_size": 2643073}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T08:01:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "covidqa_processed_eval"
More Information needed
|
[
"# Dataset Card for \"covidqa_processed_eval\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"covidqa_processed_eval\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"covidqa_processed_eval\"\n\nMore Information needed"
] |
233787b2ece1c04990ca44c83b4e93a42bf7abd2
|
# Dataset Card for "cvt1_GS3_test_f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fun1021183/cvt1_GS3_test_f
|
[
"region:us"
] |
2023-10-22T08:17:52+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2139623378.875, "num_examples": 13257}, {"name": "test", "num_bytes": 745774671.875, "num_examples": 4721}], "download_size": 2721265703, "dataset_size": 2885398050.75}}
|
2023-10-22T08:22:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cvt1_GS3_test_f"
More Information needed
|
[
"# Dataset Card for \"cvt1_GS3_test_f\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cvt1_GS3_test_f\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cvt1_GS3_test_f\"\n\nMore Information needed"
] |
153ae8210f81a2e5e681652e6f4f5add8b1bc44d
|
# Dataset Card for "eegimage"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
wbxlala/eegimage
|
[
"region:us"
] |
2023-10-22T08:23:49+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 301026150.48, "num_examples": 7360}, {"name": "test", "num_bytes": 94069909.4, "num_examples": 2300}], "download_size": 396817353, "dataset_size": 395096059.88}}
|
2023-10-22T08:24:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eegimage"
More Information needed
|
[
"# Dataset Card for \"eegimage\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eegimage\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eegimage\"\n\nMore Information needed"
] |
9337c4f387afd58b5e3dfca992b1836c4b949c81
|
# Dataset Card for "cvt1_GS3_1"
* Histogram equalization applied to GraySpectrogram3
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mb23/cvt1_GS3_1
|
[
"region:us"
] |
2023-10-22T08:25:44+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2139623378.875, "num_examples": 13257}, {"name": "test", "num_bytes": 745774671.875, "num_examples": 4721}], "download_size": 2721265703, "dataset_size": 2885398050.75}}
|
2023-10-25T04:57:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cvt1_GS3_1"
* Histogram equalization applied to GraySpectrogram3
More Information needed
|
[
"# Dataset Card for \"cvt1_GS3_1\"\n* ヒストグラム平坦化をGraySpectrogram3に適用\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cvt1_GS3_1\"\n* ヒストグラム平坦化をGraySpectrogram3に適用\n\nMore Information needed"
] |
[
6,
36
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cvt1_GS3_1\"\n* ヒストグラム平坦化をGraySpectrogram3に適用\n\nMore Information needed"
] |
2231ed0dce056b76e7cc3865b7432a8572604548
|
Dataset transformed to the image-caption format from https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K
|
Phando/llava-filtered-cc3m-595k
|
[
"region:us"
] |
2023-10-22T08:31:05+00:00
|
{}
|
2023-10-29T02:15:17+00:00
|
[] |
[] |
TAGS
#region-us
|
Dataset transformed to the image-caption format from URL
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
1a6c00bb5e9fea0f91ea52524b03076fe67fbc0c
|
sample
|
rohan3998/book
|
[
"region:us"
] |
2023-10-22T08:34:13+00:00
|
{}
|
2023-10-22T08:35:18+00:00
|
[] |
[] |
TAGS
#region-us
|
sample
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
4dccac080eeaa5c10683cadbb39af3649f21a57f
|
This is a dataset containing the *United States Presidential State of the Union Addresses* through 2020; derived from the `sotu` R package.
|
textminr/sotu-paragraphs
|
[
"size_categories:n<1K",
"language:en",
"license:gpl-2.0",
"sotu",
"region:us"
] |
2023-10-22T10:07:29+00:00
|
{"language": ["en"], "license": "gpl-2.0", "size_categories": ["n<1K"], "tags": ["sotu"]}
|
2023-10-22T11:39:31+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-n<1K #language-English #license-gpl-2.0 #sotu #region-us
|
This is a dataset containing the *United States Presidential State of the Union Addresses* through 2020; derived from the 'sotu' R package.
|
[] |
[
"TAGS\n#size_categories-n<1K #language-English #license-gpl-2.0 #sotu #region-us \n"
] |
[
31
] |
[
"passage: TAGS\n#size_categories-n<1K #language-English #license-gpl-2.0 #sotu #region-us \n"
] |
e60187029cc2653f3cd2bd49bdafcd6b06384c6f
|
# Dataset Card for "eegimage2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
wbxlala/eegimage2
|
[
"region:us"
] |
2023-10-22T10:25:49+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 314788629.2, "num_examples": 7360}, {"name": "test", "num_bytes": 98370684.0, "num_examples": 2300}], "download_size": 414779791, "dataset_size": 413159313.2}}
|
2023-10-22T10:26:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eegimage2"
More Information needed
|
[
"# Dataset Card for \"eegimage2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eegimage2\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eegimage2\"\n\nMore Information needed"
] |
cac2745e9158d5b893e0bd0479354488f3c875d1
|
# Dataset Card for "commonsense-dialogues4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chrisgru/commonsense-dialogues4
|
[
"region:us"
] |
2023-10-22T10:34:18+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23345091, "num_examples": 12597}, {"name": "test", "num_bytes": 1057813, "num_examples": 1159}], "download_size": 13076849, "dataset_size": 24402904}}
|
2023-10-22T10:34:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "commonsense-dialogues4"
More Information needed
|
[
"# Dataset Card for \"commonsense-dialogues4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"commonsense-dialogues4\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"commonsense-dialogues4\"\n\nMore Information needed"
] |
58b1f81dfa4f797e0e6f8297c6e0f550f375132f
|
# Dataset Card for "child-10k_for-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
haseong8012/child-10k_for-test
|
[
"region:us"
] |
2023-10-22T10:46:42+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "audio", "sequence": "float32"}], "splits": [{"name": "test", "num_bytes": 1828164269, "num_examples": 10000}], "download_size": 1591443773, "dataset_size": 1828164269}}
|
2023-10-22T10:58:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "child-10k_for-test"
More Information needed
|
[
"# Dataset Card for \"child-10k_for-test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"child-10k_for-test\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"child-10k_for-test\"\n\nMore Information needed"
] |
45ab531ef322dd7893176d2518b847c1279f6e3f
|
# Dataset Card for Evaluation run of yeontaek/Platypus2xOpenOrca-13B-IA3-v3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yeontaek/Platypus2xOpenOrca-13B-IA3-v3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [yeontaek/Platypus2xOpenOrca-13B-IA3-v3](https://huggingface.co/yeontaek/Platypus2xOpenOrca-13B-IA3-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yeontaek__Platypus2xOpenOrca-13B-IA3-v3",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T11:48:46.198205](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__Platypus2xOpenOrca-13B-IA3-v3/blob/main/results_2023-10-22T11-48-46.198205.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.004404362416107382,
"em_stderr": 0.0006781451620479675,
"f1": 0.07597000838926182,
"f1_stderr": 0.001647112822339397,
"acc": 0.45089736370800626,
"acc_stderr": 0.010370579775637361
},
"harness|drop|3": {
"em": 0.004404362416107382,
"em_stderr": 0.0006781451620479675,
"f1": 0.07597000838926182,
"f1_stderr": 0.001647112822339397
},
"harness|gsm8k|5": {
"acc": 0.12357846853677028,
"acc_stderr": 0.009065050306776916
},
"harness|winogrande|5": {
"acc": 0.7782162588792423,
"acc_stderr": 0.011676109244497808
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_yeontaek__Platypus2xOpenOrca-13B-IA3-v3
|
[
"region:us"
] |
2023-10-22T10:48:50+00:00
|
{"pretty_name": "Evaluation run of yeontaek/Platypus2xOpenOrca-13B-IA3-v3", "dataset_summary": "Dataset automatically created during the evaluation run of model [yeontaek/Platypus2xOpenOrca-13B-IA3-v3](https://huggingface.co/yeontaek/Platypus2xOpenOrca-13B-IA3-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yeontaek__Platypus2xOpenOrca-13B-IA3-v3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T11:48:46.198205](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__Platypus2xOpenOrca-13B-IA3-v3/blob/main/results_2023-10-22T11-48-46.198205.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004404362416107382,\n \"em_stderr\": 0.0006781451620479675,\n \"f1\": 0.07597000838926182,\n \"f1_stderr\": 0.001647112822339397,\n \"acc\": 0.45089736370800626,\n \"acc_stderr\": 0.010370579775637361\n },\n \"harness|drop|3\": {\n \"em\": 0.004404362416107382,\n \"em_stderr\": 0.0006781451620479675,\n \"f1\": 0.07597000838926182,\n \"f1_stderr\": 0.001647112822339397\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12357846853677028,\n \"acc_stderr\": 0.009065050306776916\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7782162588792423,\n \"acc_stderr\": 0.011676109244497808\n }\n}\n```", "repo_url": "https://huggingface.co/yeontaek/Platypus2xOpenOrca-13B-IA3-v3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T11_48_46.198205", "path": ["**/details_harness|drop|3_2023-10-22T11-48-46.198205.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T11-48-46.198205.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T11_48_46.198205", "path": ["**/details_harness|gsm8k|5_2023-10-22T11-48-46.198205.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T11-48-46.198205.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T11_48_46.198205", "path": ["**/details_harness|winogrande|5_2023-10-22T11-48-46.198205.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T11-48-46.198205.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T11_48_46.198205", "path": ["results_2023-10-22T11-48-46.198205.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T11-48-46.198205.parquet"]}]}]}
|
2023-10-22T10:48:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of yeontaek/Platypus2xOpenOrca-13B-IA3-v3
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model yeontaek/Platypus2xOpenOrca-13B-IA3-v3 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-22T11:48:46.198205 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of yeontaek/Platypus2xOpenOrca-13B-IA3-v3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/Platypus2xOpenOrca-13B-IA3-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T11:48:46.198205(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of yeontaek/Platypus2xOpenOrca-13B-IA3-v3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/Platypus2xOpenOrca-13B-IA3-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T11:48:46.198205(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
28,
31,
176,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of yeontaek/Platypus2xOpenOrca-13B-IA3-v3## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/Platypus2xOpenOrca-13B-IA3-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T11:48:46.198205(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
325c65b9424ed5cf639a56302e1fe71cb2fec26b
|
# Dataset Card for "xview_captions_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Braddy/xview_captions_v2
|
[
"region:us"
] |
2023-10-22T10:55:23+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "sequence": "string"}, {"name": "file_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 715798376.168, "num_examples": 7092}], "download_size": 693617401, "dataset_size": 715798376.168}}
|
2023-10-22T10:55:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xview_captions_v2"
More Information needed
|
[
"# Dataset Card for \"xview_captions_v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xview_captions_v2\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xview_captions_v2\"\n\nMore Information needed"
] |
0d992af4a272a3f7d77a7b509de998ad5c6262c6
|
# Dataset Card for "gita_supersite_dump"
Extracted from: [gitasupersite.iitk](https://www.gitasupersite.iitk.ac.in/)
To recreate it, check out [this notebook](./dump.ipynb)
Translation column names:
- `htrskd` - Hindi Translation By Swami Ramsukhdas
- `httyn` - Hindi Translation By Swami Tejomayananda
- `htshg` - Hindi Translation Of Sri Shankaracharya's Sanskrit Commentary By Sri Harikrishnadas Goenka
- `scsh` - Sanskrit Commentary By Sri Shankaracharya
- `hcchi` - Hindi Commentary By Swami Chinmayananda
- `hcrskd` - Hindi Commentary By Swami Ramsukhdas
- `scang` - Sanskrit Commentary By Sri Abhinavgupta
- `scram` - Sanskrit Commentary By Sri Ramanujacharya
- `scanand` - Sanskrit Commentary By Sri Anandgiri
- `scjaya` - Sanskrit Commentary By Sri Jayatirtha
- `scmad` - Sanskrit Commentary By Sri Madhvacharya
- `scval` - Sanskrit Commentary By Sri Vallabhacharya
- `scms` - Sanskrit Commentary By Sri Madhusudan Saraswati
- `scsri` - Sanskrit Commentary By Sri Sridhara Swami
- `scvv` - Sanskrit Commentary By Sri Vedantadeshikacharya Venkatanatha
- `scpur` - Sanskrit Commentary By Sri Purushottamji
- `scneel` - Sanskrit Commentary By Sri Neelkanth
- `scdhan` - Sanskrit Commentary By Sri Dhanpati
- `ecsiva` - English Commentary By Swami Sivananda
- `etsiva` - English Translation By Swami Sivananda
- `etpurohit` - English Translation By Purohit Swami
- `etgb` - English Translation By Swami Gambirananda
- `setgb` - English Translation Of Sri Shankaracharya By Swami Gambirananda
- `etssa` - English Translation By Dr. S. Sankaranarayan
- `etassa` - English Translation of Abhinavgupta's Sanskrit Commentary By Dr. S. Sankaranarayan
- `etradi` - English Translation of Ramanujacharya's Sanskrit Commentary By Swami Adidevananda
- `etadi` - English Translation By Swami Adidevananda
Script column names:
- `dv` - "Devanagari"
- `as` - "Assamese"
- `bn` - "Bengali"
- `gu` - "Gujarati"
- `pa` - "Gurmukhi"
- `kn` - "Kannada"
- `ml` - "Malayalam"
- `or` - "Odia"
- `ro` - "Roman"
- `ta` - "Tamil"
- `te` - "Telugu"
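For quick experimentation, a minimal loading sketch (the repo id and the single `train` split are taken from this dataset's metadata) could look like:

```python
from datasets import load_dataset

# Load the 701-row train split; each row carries shloka_id, chapter, sutra,
# the trans-* translation/commentary columns and the script-* transliterations.
ds = load_dataset("yashnbx/gita_supersite_dump", split="train")
row = ds[0]
print(row["shloka_id"], row["script-dv"][:80])
```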
|
yashnbx/gita_supersite_dump
|
[
"size_categories:n<1K",
"region:us"
] |
2023-10-22T11:18:24+00:00
|
{"size_categories": ["n<1K"], "dataset_info": {"features": [{"name": "shloka_id", "dtype": "string"}, {"name": "chapter", "dtype": "string"}, {"name": "sutra", "dtype": "string"}, {"name": "trans-htrskd", "dtype": "string", "description": "Hindi Translation By Swami Ramsukhdas"}, {"name": "trans-httyn", "dtype": "string", "description": "Hindi Translation By Swami Tejomayananda"}, {"name": "trans-hcchi", "dtype": "string", "description": "Hindi Commentary By Swami Chinmayananda"}, {"name": "trans-hcrskd", "dtype": "string", "description": "Hindi Commentary By Swami Ramsukhdas"}, {"name": "trans-scang", "dtype": "string", "description": "Sanskrit Commentary By Sri Abhinavgupta"}, {"name": "trans-scram", "dtype": "string", "description": "Sanskrit Commentary By Sri Ramanujacharya"}, {"name": "trans-scanand", "dtype": "string", "description": "Sanskrit Commentary By Sri Anandgiri"}, {"name": "trans-scval", "dtype": "string", "description": "Sanskrit Commentary By Sri Vallabhacharya"}, {"name": "trans-scms", "dtype": "string", "description": "Sanskrit Commentary By Sri Madhusudan Saraswati"}, {"name": "trans-scsri", "dtype": "string", "description": "Sanskrit Commentary By Sri Sridhara Swami"}, {"name": "trans-scvv", "dtype": "string", "description": "Sanskrit Commentary By Sri Vedantadeshikacharya Venkatanatha"}, {"name": "trans-scpur", "dtype": "string", "description": "Sanskrit Commentary By Sri Purushottamji"}, {"name": "trans-scneel", "dtype": "string", "description": "Sanskrit Commentary By Sri Neelkanth"}, {"name": "trans-scdhan", "dtype": "string", "description": "Sanskrit Commentary By Sri Dhanpati"}, {"name": "trans-ecsiva", "dtype": "string", "description": "English Commentary By Swami Sivananda"}, {"name": "trans-etsiva", "dtype": "string", "description": "English Translation By Swami Sivananda"}, {"name": "trans-etpurohit", "dtype": "string", "description": "English Translation By Purohit Swami"}, {"name": "trans-etgb", "dtype": "string", "description": "English Translation By Swami Gambirananda"}, {"name": "trans-setgb", "dtype": "string", "description": "English Translation Of Sri Shankaracharya By Swami Gambirananda"}, {"name": "trans-etssa", "dtype": "string", "description": "English Translation By Dr. S. Sankaranarayan"}, {"name": "trans-etassa", "dtype": "string", "description": "English Translation of Abhinavgupta's Sanskrit Commentary By Dr. S. 
Sankaranarayan"}, {"name": "trans-etradi", "dtype": "string", "description": "English Translation of Ramanujacharya's Sanskrit Commentary By Swami Adidevananda"}, {"name": "trans-etadi", "dtype": "string", "description": "English Translation By Swami Adidevananda"}, {"name": "trans-htshg", "dtype": "string", "description": "Hindi Translation Of Sri Shankaracharya's Sanskrit Commentary By Sri Harikrishnadas Goenka"}, {"name": "trans-scsh", "dtype": "string", "description": "Sanskrit Commentary By Sri Shankaracharya"}, {"name": "trans-scjaya", "dtype": "string", "description": "Sanskrit Commentary By Sri Jayatirtha"}, {"name": "trans-scmad", "dtype": "string", "description": "Sanskrit Commentary By Sri Madhvacharya"}, {"name": "script-dv", "dtype": "string", "description": "Devanagari"}, {"name": "script-as", "dtype": "string", "description": "Assamese"}, {"name": "script-bn", "dtype": "string", "description": "Bengali"}, {"name": "script-gu", "dtype": "string", "description": "Gujarati"}, {"name": "script-pa", "dtype": "string", "description": "Gurmukhi"}, {"name": "script-kn", "dtype": "string", "description": "Kannada"}, {"name": "script-ml", "dtype": "string", "description": "Malayalam"}, {"name": "script-or", "dtype": "string", "description": "Odia"}, {"name": "script-ro", "dtype": "string", "description": "Roman"}, {"name": "script-ta", "dtype": "string", "description": "Tamil"}, {"name": "script-te", "dtype": "string", "description": "Telugu"}], "splits": [{"name": "train", "num_bytes": 31628579, "num_examples": 701}], "download_size": 11660830, "dataset_size": 31628579}}
|
2023-10-22T12:39:17+00:00
|
[] |
[] |
TAGS
#size_categories-n<1K #region-us
|
# Dataset Card for "gita_supersite_dump"
Extracted from: URL
To recreate it, check out this notebook
Translation column names:
- 'htrskd' - Hindi Translation By Swami Ramsukhdas
- 'httyn' - Hindi Translation By Swami Tejomayananda
- 'htshg' - Hindi Translation Of Sri Shankaracharya's Sanskrit Commentary By Sri Harikrishnadas Goenka
- 'scsh' - Sanskrit Commentary By Sri Shankaracharya
- 'hcchi' - Hindi Commentary By Swami Chinmayananda
- 'hcrskd' - Hindi Commentary By Swami Ramsukhdas
- 'scang' - Sanskrit Commentary By Sri Abhinavgupta
- 'scram' - Sanskrit Commentary By Sri Ramanujacharya
- 'scanand' - Sanskrit Commentary By Sri Anandgiri
- 'scjaya' - Sanskrit Commentary By Sri Jayatirtha
- 'scmad' - Sanskrit Commentary By Sri Madhvacharya
- 'scval' - Sanskrit Commentary By Sri Vallabhacharya
- 'scms' - Sanskrit Commentary By Sri Madhusudan Saraswati
- 'scsri' - Sanskrit Commentary By Sri Sridhara Swami
- 'scvv' - Sanskrit Commentary By Sri Vedantadeshikacharya Venkatanatha
- 'scpur' - Sanskrit Commentary By Sri Purushottamji
- 'scneel' - Sanskrit Commentary By Sri Neelkanth
- 'scdhan' - Sanskrit Commentary By Sri Dhanpati
- 'ecsiva' - English Commentary By Swami Sivananda
- 'etsiva' - English Translation By Swami Sivananda
- 'etpurohit' - English Translation By Purohit Swami
- 'etgb' - English Translation By Swami Gambirananda
- 'setgb' - English Translation Of Sri Shankaracharya By Swami Gambirananda
- 'etssa' - English Translation By Dr. S. Sankaranarayan
- 'etassa' - English Translation of Abhinavgupta's Sanskrit Commentary By Dr. S. Sankaranarayan
- 'etradi' - English Translation of Ramanujacharya's Sanskrit Commentary By Swami Adidevananda
- 'etadi' - English Translation By Swami Adidevananda
Script column names:
- 'dv' - "Devanagari"
- 'as' - "Assamese"
- 'bn' - "Bengali"
- 'gu' - "Gujarati"
- 'pa' - "Gurmukhi"
- 'kn' - "Kannada"
- 'ml' - "Malayalam"
- 'or' - "Odia"
- 'ro' - "Roman"
- 'ta' - "Tamil"
- 'te' - "Telugu"
|
[
"# Dataset Card for \"gita_supersite_dump\"\n\nExtracted from: URL\n\nTo recreate checkout this notebook\n\nTranslation column names:\n- 'htrskd' - Hindi Translation By Swami Ramsukhdas\n- 'httyn' - Hindi Translation By Swami Tejomayananda\n- 'htshg' - Hindi Translation Of Sri Shankaracharya's Sanskrit Commentary By Sri Harikrishnadas Goenka\n- 'scsh' - Sanskrit Commentary By Sri Shankaracharya\n- 'hcchi' - Hindi Commentary By Swami Chinmayananda\n- 'hcrskd' - Hindi Commentary By Swami Ramsukhdas\n- 'scang' - Sanskrit Commentary By Sri Abhinavgupta\n- 'scram' - Sanskrit Commentary By Sri Ramanujacharya\n- 'scanand' - Sanskrit Commentary By Sri Anandgiri\n- 'scjaya' - Sanskrit Commentary By Sri Jayatirtha\n- 'scmad' - Sanskrit Commentary By Sri Madhvacharya\n- 'scval' - Sanskrit Commentary By Sri Vallabhacharya\n- 'scms' - Sanskrit Commentary By Sri Madhusudan Saraswati\n- 'scsri' - Sanskrit Commentary By Sri Sridhara Swami\n- 'scvv' - Sanskrit Commentary By Sri Vedantadeshikacharya Venkatanatha\n- 'scpur' - Sanskrit Commentary By Sri Purushottamji\n- 'scneel' - Sanskrit Commentary By Sri Neelkanth\n- 'scdhan' - Sanskrit Commentary By Sri Dhanpati\n- 'ecsiva' - English Commentary By Swami Sivananda\n- 'etsiva' - English Translation By Swami Sivananda\n- 'etpurohit' - English Translation By Purohit Swami\n- 'etgb' - English Translation By Swami Gambirananda\n- 'setgb' - English Translation Of Sri Shankaracharya By Swami Gambirananda\n- 'etssa' - English Translation By Dr. S. Sankaranarayan\n- 'etassa' - English Translation of Abhinavgupta's Sanskrit Commentary By Dr. S. Sankaranarayan\n- 'etradi' - English Translation of Ramanujacharya's Sanskrit Commentary By Swami Adidevananda\n- 'etadi' - English Translation By Swami Adidevananda\n\nScript column names:\n- 'dv' - \"Devanagari\"\n- 'as' - \"Assamese\"\n- 'bn' - \"Bengali\"\n- 'gu' - \"Gujarati\"\n- 'pa' - \"Gurmukhi\"\n- 'kn' - \"Kannada\"\n- 'ml' - \"Malayalam\"\n- 'or' - \"Odia\"\n- 'ro' - \"Roman\"\n- 'ta' - \"Tamil\"\n- 'te' - \"Telugu\""
] |
[
"TAGS\n#size_categories-n<1K #region-us \n",
"# Dataset Card for \"gita_supersite_dump\"\n\nExtracted from: URL\n\nTo recreate checkout this notebook\n\nTranslation column names:\n- 'htrskd' - Hindi Translation By Swami Ramsukhdas\n- 'httyn' - Hindi Translation By Swami Tejomayananda\n- 'htshg' - Hindi Translation Of Sri Shankaracharya's Sanskrit Commentary By Sri Harikrishnadas Goenka\n- 'scsh' - Sanskrit Commentary By Sri Shankaracharya\n- 'hcchi' - Hindi Commentary By Swami Chinmayananda\n- 'hcrskd' - Hindi Commentary By Swami Ramsukhdas\n- 'scang' - Sanskrit Commentary By Sri Abhinavgupta\n- 'scram' - Sanskrit Commentary By Sri Ramanujacharya\n- 'scanand' - Sanskrit Commentary By Sri Anandgiri\n- 'scjaya' - Sanskrit Commentary By Sri Jayatirtha\n- 'scmad' - Sanskrit Commentary By Sri Madhvacharya\n- 'scval' - Sanskrit Commentary By Sri Vallabhacharya\n- 'scms' - Sanskrit Commentary By Sri Madhusudan Saraswati\n- 'scsri' - Sanskrit Commentary By Sri Sridhara Swami\n- 'scvv' - Sanskrit Commentary By Sri Vedantadeshikacharya Venkatanatha\n- 'scpur' - Sanskrit Commentary By Sri Purushottamji\n- 'scneel' - Sanskrit Commentary By Sri Neelkanth\n- 'scdhan' - Sanskrit Commentary By Sri Dhanpati\n- 'ecsiva' - English Commentary By Swami Sivananda\n- 'etsiva' - English Translation By Swami Sivananda\n- 'etpurohit' - English Translation By Purohit Swami\n- 'etgb' - English Translation By Swami Gambirananda\n- 'setgb' - English Translation Of Sri Shankaracharya By Swami Gambirananda\n- 'etssa' - English Translation By Dr. S. Sankaranarayan\n- 'etassa' - English Translation of Abhinavgupta's Sanskrit Commentary By Dr. S. Sankaranarayan\n- 'etradi' - English Translation of Ramanujacharya's Sanskrit Commentary By Swami Adidevananda\n- 'etadi' - English Translation By Swami Adidevananda\n\nScript column names:\n- 'dv' - \"Devanagari\"\n- 'as' - \"Assamese\"\n- 'bn' - \"Bengali\"\n- 'gu' - \"Gujarati\"\n- 'pa' - \"Gurmukhi\"\n- 'kn' - \"Kannada\"\n- 'ml' - \"Malayalam\"\n- 'or' - \"Odia\"\n- 'ro' - \"Roman\"\n- 'ta' - \"Tamil\"\n- 'te' - \"Telugu\""
] |
[
16,
610
] |
[
"passage: TAGS\n#size_categories-n<1K #region-us \n"
] |
d3a5a7a16f1197c6c18081f6f5160bf68e15015f
|
# Dataset Card for "irish-traditional-tunes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Dataset Card for "irish-tunes-spectrograms"
## 1. Dataset Description
The dataset is used for the following project:
- **Homepage:** [Trad-fusion](https://github.com/hdparmar/Tradi-fusion)
### 1.1 Dataset Summary
This dataset contains 9604 Mel spectrograms that represent Traditional Irish Music.
This dataset is smaller than [hdparmar/irish-tunes-spectrogram](https://huggingface.co/datasets/hdparmar/irish-tunes-spectrograms), to reduce training time and make it possible to train for more steps per batch.
Each spectrogram image is a 5-second split of audio rendered at 512x512 with 3 channels (mimicking RGB), because most text-to-image models are trained on 3-channel inputs.
I can also find publications which say that having 3 channels for a mel spectrogram can improve generalisation, even though the other 2 channels are just copies of the first.
The simple trick I used is to use cv2 to convert the grayscale spectrogram into RGB, since most of the models are trained on 3 channels.
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
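As a rough illustration of that grayscale-to-RGB step, a minimal sketch (the file names here are hypothetical, not part of the dataset) could be:

```python
import cv2

# Read a single-channel mel spectrogram and replicate it across 3 channels (pseudo-RGB)
gray = cv2.imread("spectrogram.png", cv2.IMREAD_GRAYSCALE)  # shape: (512, 512)
rgb = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)                 # shape: (512, 512, 3)
cv2.imwrite("spectrogram_rgb.png", rgb)
```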
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The metadata.csv file of the dataset is in this format:
```
{"file_name": "path/to/the/image.png",
"text": "An Irish Traditional Tune"}
```
### 2.2 Data Fields
- **file_name**: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- **text**: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "An Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
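A minimal loading sketch (using this repo id and the train split described above) could look like:

```python
from datasets import load_dataset

# Load the single train split; each example has an `image` (the mel spectrogram)
# and a `text` caption ("An Irish Traditional Tune").
ds = load_dataset("hdparmar/irish-traditional-tunes", split="train")
example = ds[0]
print(example["text"], example["image"].size)
```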
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "An Irish Traditional Tune."
This consistency can perhaps be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information to follow; the same caption is used for all the mel-spectrograms for ease of producing the dataset.
|
hdparmar/irish-traditional-tunes
|
[
"task_categories:text-to-image",
"task_categories:text-to-audio",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"music",
"region:us"
] |
2023-10-22T11:19:54+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image", "text-to-audio"], "pretty_name": "Mel-Spectrograms for Irish Traditional Music", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3322131399.86, "num_examples": 9604}], "download_size": 3282715107, "dataset_size": 3322131399.86}, "tags": ["music"]}
|
2023-10-22T12:45:19+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-to-image #task_categories-text-to-audio #size_categories-1K<n<10K #language-English #license-mit #music #region-us
|
# Dataset Card for "irish-traditional-tunes"
More Information needed
# Dataset Card for "irish-tunes-spectrograms"
## 1. Dataset Description
The dataset is used for the following project:
- Homepage: Trad-fusion
### 1.1 Dataset Summary
This dataset contains 9604 Mel spectrograms that represent Traditional Irish Music.
This dataset is smaller than hdparmar/irish-tunes-spectrogram, to reduce training time and make it possible to train for more steps per batch.
Each spectrogram image is a 5-second split of audio rendered at 512x512 with 3 channels (mimicking RGB), because most text-to-image models are trained on 3-channel inputs.
I can also find publications which say that having 3 channels for a mel spectrogram can improve generalisation, even though the other 2 channels are just copies of the first.
The simple trick I used is to use cv2 to convert the grayscale spectrogram into RGB, since most of the models are trained on 3 channels.
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The URL file of the dataset is in this format
### 2.2 Data Fields
- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "An Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "An Irish Traditional Tune."
This consistency can perhaps be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information to follow; the same caption is used for all the mel-spectrograms for ease of producing the dataset.
|
[
"# Dataset Card for \"irish-traditional-tunes\"\n\nMore Information needed",
"# Dataset Card for \"irish-tunes-spectrograms\"",
"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion",
"### 1.1 Dataset Summary\nThis dataset contains 9604 Mel spectrograms that represent Traditional Irish Music. \nThis dataset is smaller compared to hdparmar/irish-tunes-spectrogram, to reduce the training time and increase the possibilty to train for longer steps/batch.\nEach spectrogram image is a 5 second split of audio resulting in dimensions 512x512 and includes 3 channels (mimicking, RGB) because most of the text-to-image models are trained on 3 channels. \n\nAlthough, I can find publications which says that having 3 channels for Mel Spectrogram can improve generalisation, since the other 2 channel are just the copy of first.\nThe simple trick I used is to use cv2 to convert a grayscale into RGB, since most of the models are trained on 3 channels.\n\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.",
"### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.",
"## 2. Dataset Structure",
"### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.",
"#### Example:\nThe URL file the dataset is in this format",
"### 2.2 Data Fields\n- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.\n- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is \"An Irish Traditional Tune.\"",
"### 2.3 Data Splits\nAs of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.",
"### 2.4 Uniform Captions: A Special Note\nAll the spectrograms in this dataset come labeled with a uniform caption: \"An Irish Traditional Tune.\" \n\nThis consistency can be perhaps advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.",
"## NOTE\nFurthur imformation to follow and same caption for all the mel-spectrograms are for ease of work put into producing the dataset"
] |
[
"TAGS\n#task_categories-text-to-image #task_categories-text-to-audio #size_categories-1K<n<10K #language-English #license-mit #music #region-us \n",
"# Dataset Card for \"irish-traditional-tunes\"\n\nMore Information needed",
"# Dataset Card for \"irish-tunes-spectrograms\"",
"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion",
"### 1.1 Dataset Summary\nThis dataset contains 9604 Mel spectrograms that represent Traditional Irish Music. \nThis dataset is smaller compared to hdparmar/irish-tunes-spectrogram, to reduce the training time and increase the possibilty to train for longer steps/batch.\nEach spectrogram image is a 5 second split of audio resulting in dimensions 512x512 and includes 3 channels (mimicking, RGB) because most of the text-to-image models are trained on 3 channels. \n\nAlthough, I can find publications which says that having 3 channels for Mel Spectrogram can improve generalisation, since the other 2 channel are just the copy of first.\nThe simple trick I used is to use cv2 to convert a grayscale into RGB, since most of the models are trained on 3 channels.\n\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.",
"### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.",
"## 2. Dataset Structure",
"### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.",
"#### Example:\nThe URL file the dataset is in this format",
"### 2.2 Data Fields\n- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.\n- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is \"An Irish Traditional Tune.\"",
"### 2.3 Data Splits\nAs of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.",
"### 2.4 Uniform Captions: A Special Note\nAll the spectrograms in this dataset come labeled with a uniform caption: \"An Irish Traditional Tune.\" \n\nThis consistency can be perhaps advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.",
"## NOTE\nFurthur imformation to follow and same caption for all the mel-spectrograms are for ease of work put into producing the dataset"
] |
[
54,
18,
17,
20,
220,
30,
7,
65,
15,
80,
49,
76,
34
] |
[
"passage: TAGS\n#task_categories-text-to-image #task_categories-text-to-audio #size_categories-1K<n<10K #language-English #license-mit #music #region-us \n# Dataset Card for \"irish-traditional-tunes\"\n\nMore Information needed# Dataset Card for \"irish-tunes-spectrograms\"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion### 1.1 Dataset Summary\nThis dataset contains 9604 Mel spectrograms that represent Traditional Irish Music. \nThis dataset is smaller compared to hdparmar/irish-tunes-spectrogram, to reduce the training time and increase the possibilty to train for longer steps/batch.\nEach spectrogram image is a 5 second split of audio resulting in dimensions 512x512 and includes 3 channels (mimicking, RGB) because most of the text-to-image models are trained on 3 channels. \n\nAlthough, I can find publications which says that having 3 channels for Mel Spectrogram can improve generalisation, since the other 2 channel are just the copy of first.\nThe simple trick I used is to use cv2 to convert a grayscale into RGB, since most of the models are trained on 3 channels.\n\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.## 2. Dataset Structure### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.#### Example:\nThe URL file the dataset is in this format"
] |
2c4266d3c315ed6a68b3c02b12af0fe8383facdc
|
# Dataset Card for Evaluation run of yeontaek/llama-2-13b-QLoRA
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yeontaek/llama-2-13b-QLoRA
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [yeontaek/llama-2-13b-QLoRA](https://huggingface.co/yeontaek/llama-2-13b-QLoRA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yeontaek__llama-2-13b-QLoRA",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T12:23:14.210663](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-13b-QLoRA/blob/main/results_2023-10-22T12-23-14.210663.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.24979026845637584,
"em_stderr": 0.004433214605677736,
"f1": 0.2997955117449669,
"f1_stderr": 0.0043747471349110095,
"acc": 0.40422445791070105,
"acc_stderr": 0.008306034881356837
},
"harness|drop|3": {
"em": 0.24979026845637584,
"em_stderr": 0.004433214605677736,
"f1": 0.2997955117449669,
"f1_stderr": 0.0043747471349110095
},
"harness|gsm8k|5": {
"acc": 0.032600454890068235,
"acc_stderr": 0.0048916690219395756
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774099
}
}
```
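If you only need the aggregated numbers shown above, a minimal sketch (the "results" config and "latest" split are listed in this repo's configs) is:

```python
from datasets import load_dataset

# Load only the aggregated results of the most recent run
results = load_dataset("open-llm-leaderboard/details_yeontaek__llama-2-13b-QLoRA",
                       "results",
                       split="latest")
print(results[0])
```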
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_yeontaek__llama-2-13b-QLoRA
|
[
"region:us"
] |
2023-10-22T11:23:18+00:00
|
{"pretty_name": "Evaluation run of yeontaek/llama-2-13b-QLoRA", "dataset_summary": "Dataset automatically created during the evaluation run of model [yeontaek/llama-2-13b-QLoRA](https://huggingface.co/yeontaek/llama-2-13b-QLoRA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yeontaek__llama-2-13b-QLoRA\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T12:23:14.210663](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-13b-QLoRA/blob/main/results_2023-10-22T12-23-14.210663.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.24979026845637584,\n \"em_stderr\": 0.004433214605677736,\n \"f1\": 0.2997955117449669,\n \"f1_stderr\": 0.0043747471349110095,\n \"acc\": 0.40422445791070105,\n \"acc_stderr\": 0.008306034881356837\n },\n \"harness|drop|3\": {\n \"em\": 0.24979026845637584,\n \"em_stderr\": 0.004433214605677736,\n \"f1\": 0.2997955117449669,\n \"f1_stderr\": 0.0043747471349110095\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.032600454890068235,\n \"acc_stderr\": 0.0048916690219395756\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774099\n }\n}\n```", "repo_url": "https://huggingface.co/yeontaek/llama-2-13b-QLoRA", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T12_23_14.210663", "path": ["**/details_harness|drop|3_2023-10-22T12-23-14.210663.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T12-23-14.210663.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T12_23_14.210663", "path": ["**/details_harness|gsm8k|5_2023-10-22T12-23-14.210663.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T12-23-14.210663.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T12_23_14.210663", "path": ["**/details_harness|winogrande|5_2023-10-22T12-23-14.210663.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T12-23-14.210663.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T12_23_14.210663", "path": ["results_2023-10-22T12-23-14.210663.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T12-23-14.210663.parquet"]}]}]}
|
2023-10-22T11:23:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of yeontaek/llama-2-13b-QLoRA
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model yeontaek/llama-2-13b-QLoRA on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-22T12:23:14.210663 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of yeontaek/llama-2-13b-QLoRA",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-13b-QLoRA on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T12:23:14.210663(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of yeontaek/llama-2-13b-QLoRA",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-13b-QLoRA on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T12:23:14.210663(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of yeontaek/llama-2-13b-QLoRA## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-13b-QLoRA on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T12:23:14.210663(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5970ff5edf986f793fe461ee8961cd376bcb7bb1
|
# Dataset Card for "gita_supersite_sanskrit_tts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yashnbx/gita_supersite_sanskrit_tts
|
[
"region:us"
] |
2023-10-22T11:36:23+00:00
|
{"dataset_info": {"features": [{"name": "shloka_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 25244323.0, "num_examples": 701}], "download_size": 24905370, "dataset_size": 25244323.0}}
|
2023-10-22T11:36:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gita_supersite_sanskrit_tts"
More Information needed
|
[
"# Dataset Card for \"gita_supersite_sanskrit_tts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gita_supersite_sanskrit_tts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gita_supersite_sanskrit_tts\"\n\nMore Information needed"
] |
2f5832b288617147ccd5b1bb840aeb7ac1d786c1
|
# Dataset Card for "kogo-bonjin-translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
p1atdev/kogo-bonjin-translation
|
[
"region:us"
] |
2023-10-22T12:35:45+00:00
|
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "source", "dtype": "string"}, {"name": "original", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "section", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 579002, "num_examples": 197}], "download_size": 343306, "dataset_size": 579002}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T12:35:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kogo-bonjin-translation"
More Information needed
|
[
"# Dataset Card for \"kogo-bonjin-translation\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kogo-bonjin-translation\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kogo-bonjin-translation\"\n\nMore Information needed"
] |
39c768a57d5a69fc543636a44012043ce7d69172
|
# Knowledge Graph Triplet format
Generated using ChatGPT 3.5 on:
1. Astroawani news, https://github.com/mesolitica/malaysian-dataset/tree/master/knowledge-graph/chatgpt-astroawani, [kg-astroawani.translated.jsonl](kg-astroawani.translated.jsonl), 9162 rows, 125 MB
2. MS Wikipedia, https://github.com/mesolitica/malaysian-dataset/tree/master/knowledge-graph/chatgpt-wikipedia, [kg-paragraph-wikipedia.translated.jsonl](kg-paragraph-wikipedia.translated.jsonl), 25032 rows, 166 MB
## Example data
```json
{'id': 221733,
'title': "Padah jalin hubungan sulit dengan pekerja sendiri, CEO McDonald's dipecat serta merta",
'description': 'CEO tidak boleh menjalin hubungan dengan mana-mana kakitangan.',
'body': ["SYARIKAT rantaian makanan segera terkemuka dunia, McDonald's Corp mengesahkan telah memecat Ketua Pegawai Eksekutif (CEO), Steve Easterbrook selepas menjalinkan hubungan sulit dengan salah seorang kakitangannya.",
"Menurut McDonald's dalam satu kenyataan, tindakan tersebut diambil berikutan Easterbrook, 52, didakwa melanggar polisi syarikat, yang tidak membenarkan CEO mempunyai hubungan dengan mana-mana kakitangan syarikat.",
"Susulan pemecatan tersebut, restoran terbesar dunia itu melantik bekas presiden McDonald's Amerika Syarikat (AS), Chris Kempczinski, sebagai CEO baharu berkuat kuasa serta-merta.",
'Sementara itu, Easterbrook menerusi emel kepada kakitangannya mengakui hubungan tersebut merupakan "satu kesilapan" yang bertentangan dengan dasar syarikat.',
'"Mengambil nilai syarikat ini, saya bersetuju untuk mengundurkan diri," demikian katanya.',
"Easterbrook pernah bercerai dan memulakan kerjaya dengan McDonald's pada tahun 1993 sebagai pengurus di London sebelum dinaikkan pangkat.",
"Beliau dilantik sebagai CEO McDonald's Corporation pada tahun 2015. -"],
'title_kg': {'triplets': [{'subject': 'Padah',
'predicate': 'memiliki',
'object': 'hubungan sulit'},
{'subject': 'hubungan sulit',
'predicate': 'dengan',
'object': 'pekerja sendiri'},
{'subject': 'Padah', 'predicate': 'dipecat', 'object': "CEO McDonald's"}]},
'description_kg': {'triplets': [{'subject': 'CEO',
'predicate': 'tidak boleh menjalin hubungan dengan',
'object': 'kakitangan'}]},
'body_kg': [["SYARIKAT rantaian makanan segera terkemuka dunia, McDonald's Corp mengesahkan telah memecat Ketua Pegawai Eksekutif (CEO), Steve Easterbrook selepas menjalinkan hubungan sulit dengan salah seorang kakitangannya.",
{'triplets': [{'subject': "McDonald's Corp",
'predicate': 'is a',
'object': "world's leading fast food chain company"},
{'subject': "McDonald's Corp",
'predicate': 'confirmed',
'object': 'firing CEO Steve Easterbrook'},
{'subject': 'Steve Easterbrook',
'predicate': 'had',
'object': 'an inappropriate relationship with an employee'}]}],
["Menurut McDonald's dalam satu kenyataan, tindakan tersebut diambil berikutan Easterbrook, 52, didakwa melanggar polisi syarikat, yang tidak membenarkan CEO mempunyai hubungan dengan mana-mana kakitangan syarikat.",
{'triplets': [{'subject': "McDonald's",
'predicate': 'statement',
'object': 'Tindakan diambil berikutan Easterbrook didakwa melanggar polisi syarikat yang tidak membenarkan CEO mempunyai hubungan dengan mana-mana kakitangan syarikat.'}]}],
["Susulan pemecatan tersebut, restoran terbesar dunia itu melantik bekas presiden McDonald's Amerika Syarikat (AS), Chris Kempczinski, sebagai CEO baharu berkuat kuasa serta-merta.",
{'triplets': [{'subject': 'restoran terbesar dunia',
'predicate': 'melantik',
'object': 'Chris Kempczinski'},
{'subject': 'restoran terbesar dunia',
'predicate': 'sebagai',
'object': 'CEO'},
{'subject': 'restoran terbesar dunia',
'predicate': 'berkuat kuasa',
'object': 'serta-merta'}]}],
['Sementara itu, Easterbrook menerusi emel kepada kakitangannya mengakui hubungan tersebut merupakan "satu kesilapan" yang bertentangan dengan dasar syarikat.',
{'triplets': [{'subject': 'Easterbrook',
'predicate': 'admits',
'object': 'relationship'},
{'subject': 'relationship', 'predicate': 'is', 'object': 'mistake'},
{'subject': 'relationship',
'predicate': 'contradicts',
'object': 'company policy'}]}],
['"Mengambil nilai syarikat ini, saya bersetuju untuk mengundurkan diri," demikian katanya.',
{'triplets': [{'subject': 'saya',
'predicate': 'mengambil',
'object': 'nilai syarikat ini'},
{'subject': 'saya',
'predicate': 'bersetuju',
'object': 'mengundurkan diri'}]}],
["Easterbrook pernah bercerai dan memulakan kerjaya dengan McDonald's pada tahun 1993 sebagai pengurus di London sebelum dinaikkan pangkat.",
{'triplets': [{'subject': 'Easterbrook',
'predicate': 'bercerai',
'object': 'true'},
{'subject': 'Easterbrook',
'predicate': 'memulakan kerjaya',
'object': "McDonald's"},
{'subject': 'Easterbrook', 'predicate': 'tahun', 'object': '1993'},
{'subject': 'Easterbrook', 'predicate': 'pengurus', 'object': 'London'},
{'subject': 'Easterbrook',
'predicate': 'dinaikkan pangkat',
'object': 'true'}]}],
["Beliau dilantik sebagai CEO McDonald's Corporation pada tahun 2015. -",
{'triplets': [{'subject': 'Beliau',
'predicate': 'dilantik sebagai',
'object': "CEO McDonald's Corporation"},
{'subject': 'Beliau', 'predicate': 'pada tahun', 'object': '2015'}]}]],
'title_kg_ms': [{'head': 'Padah',
'type': 'mempunyai',
'tail': 'hubungan sulit'},
{'head': 'hubungan sulit', 'type': 'dengan', 'tail': 'pekerja sendiri'},
{'head': 'Padah', 'type': 'dipecat', 'tail': "CEO McDonald's"}],
'description_kg_ms': [{'head': 'CEO',
'type': 'tidak boleh menjalin hubungan dengan',
'tail': 'kakitangan'}],
'body_kg_ms': [["SYARIKAT rantaian makanan segera terkemuka dunia, McDonald's Corp mengesahkan telah memecat Ketua Pegawai Eksekutif (CEO), Steve Easterbrook selepas menjalinkan hubungan sulit dengan salah seorang kakitangannya.",
[{'head': '',
'type': 'mengesahkan',
'tail': 'yang telah memecat Steve Easterbrook'},
{'head': 'Steve Easterbrook',
'type': 'telah',
'tail': 'hubungan yang tidak sesuai dengan pekerja'}]],
["Menurut McDonald's dalam satu kenyataan, tindakan tersebut diambil berikutan Easterbrook, 52, didakwa melanggar polisi syarikat, yang tidak membenarkan CEO mempunyai hubungan dengan mana-mana kakitangan syarikat.",
[]],
["Susulan pemecatan tersebut, restoran terbesar dunia itu melantik bekas presiden McDonald's Amerika Syarikat (AS), Chris Kempczinski, sebagai CEO baharu berkuat kuasa serta-merta.",
[{'head': '', 'type': 'melantik', 'tail': 'Chris Kempczinski'},
{'head': '', 'type': 'sebagai', 'tail': 'CEO'}]],
['Sementara itu, Easterbrook menerusi emel kepada kakitangannya mengakui hubungan tersebut merupakan "satu kesilapan" yang bertentangan dengan dasar syarikat.',
[{'head': 'Easterbrook', 'type': 'mengakui', 'tail': 'hubungan'},
{'head': 'hubungan', 'type': 'ialah', 'tail': 'kesilapan'},
{'head': 'hubungan', 'type': 'bercanggah', 'tail': 'dasar syarikat'}]],
['"Mengambil nilai syarikat ini, saya bersetuju untuk mengundurkan diri," demikian katanya.',
[{'head': 'Saya', 'type': 'mengambil', 'tail': 'nilai syarikat ini'},
{'head': 'Saya', 'type': 'bersetuju', 'tail': 'meletak jawatan'}]],
["Easterbrook pernah bercerai dan memulakan kerjaya dengan McDonald's pada tahun 1993 sebagai pengurus di London sebelum dinaikkan pangkat.",
[{'head': 'Easterbrook', 'type': 'bercerai', 'tail': 'benar'},
{'head': 'Easterbrook', 'type': 'memulakan kerjaya', 'tail': "McDonald's"},
{'head': 'Easterbrook', 'type': 'tahun', 'tail': '1993'},
{'head': 'Easterbrook', 'type': 'pengurus', 'tail': 'London'},
{'head': 'Easterbrook', 'type': 'dinaikkan pangkat', 'tail': 'benar'}]],
["Beliau dilantik sebagai CEO McDonald's Corporation pada tahun 2015. -",
[{'head': "Beliau adalah CEO McDonald's Corporation",
'type': 'pada tahun',
'tail': '2015'}]]]}
```
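As a rough sketch of how one of these .jsonl files might be consumed (the field names follow the example above; the path refers to the Astroawani file listed at the top), you could do something like:

```python
import json

triplets = []
with open("kg-astroawani.translated.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        # body_kg_ms pairs each body sentence with its Malay triplets (head/type/tail)
        for sentence, sentence_triplets in row.get("body_kg_ms", []):
            for t in sentence_triplets:
                triplets.append((t["head"], t["type"], t["tail"]))

print(len(triplets), triplets[:3])
```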
|
mesolitica/chatgpt-kg-triplets
|
[
"language:ms",
"region:us"
] |
2023-10-22T12:39:59+00:00
|
{"language": ["ms"], "pretty_name": "malay-kg-triplets"}
|
2024-02-02T08:21:56+00:00
|
[] |
[
"ms"
] |
TAGS
#language-Malay (macrolanguage) #region-us
|
# Knowledge Graph Triplet format
Generated using ChatGPT 3.5 on:
1. Astroawani news, URL URL, 9162 rows, 125 MB
2. MS Wikipedia, URL URL, 25032 rows, 166 MB
## Example data
|
[
"# Knowledge Graph Triplet format\n\nGenerated using ChatGPT3.5 on,\n1. Astroawani news, URL URL, 9162 rows, 125 MB\n2. MS Wikipedia, URL URL, 25032 rows, 166 MB",
"## Example data"
] |
[
"TAGS\n#language-Malay (macrolanguage) #region-us \n",
"# Knowledge Graph Triplet format\n\nGenerated using ChatGPT3.5 on,\n1. Astroawani news, URL URL, 9162 rows, 125 MB\n2. MS Wikipedia, URL URL, 25032 rows, 166 MB",
"## Example data"
] |
[
16,
48,
4
] |
[
"passage: TAGS\n#language-Malay (macrolanguage) #region-us \n# Knowledge Graph Triplet format\n\nGenerated using ChatGPT3.5 on,\n1. Astroawani news, URL URL, 9162 rows, 125 MB\n2. MS Wikipedia, URL URL, 25032 rows, 166 MB## Example data"
] |
3c3d44d5becbb03ea91936d492dfc781077fa40e
|
# Dataset Card for Evaluation run of TheBloke/tulu-30B-fp16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/tulu-30B-fp16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/tulu-30B-fp16](https://huggingface.co/TheBloke/tulu-30B-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__tulu-30B-fp16",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T14:05:44.356727](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__tulu-30B-fp16/blob/main/results_2023-10-22T14-05-44.356727.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4158976510067114,
"em_stderr": 0.005047512015363023,
"f1": 0.4501331795302018,
"f1_stderr": 0.004938014903871411,
"acc": 0.5026636978936352,
"acc_stderr": 0.011011615647480079
},
"harness|drop|3": {
"em": 0.4158976510067114,
"em_stderr": 0.005047512015363023,
"f1": 0.4501331795302018,
"f1_stderr": 0.004938014903871411
},
"harness|gsm8k|5": {
"acc": 0.19711902956785443,
"acc_stderr": 0.01095802163030063
},
"harness|winogrande|5": {
"acc": 0.8082083662194159,
"acc_stderr": 0.011065209664659527
}
}
```
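If only the aggregated numbers are needed, the same loading pattern can point at the "results" configuration; a minimal sketch, with the config and split names taken from this card's file listing:

```python
from datasets import load_dataset

# Minimal sketch: load only the aggregated results of this evaluation run.
# The "results" config and "latest" split names come from this card's file listing.
results = load_dataset(
    "open-llm-leaderboard/details_TheBloke__tulu-30B-fp16",
    "results",
    split="latest",
)
print(results[0])  # aggregated metrics of the most recent run
```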
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TheBloke__tulu-30B-fp16
|
[
"region:us"
] |
2023-10-22T13:05:48+00:00
|
{"pretty_name": "Evaluation run of TheBloke/tulu-30B-fp16", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/tulu-30B-fp16](https://huggingface.co/TheBloke/tulu-30B-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__tulu-30B-fp16\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T14:05:44.356727](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__tulu-30B-fp16/blob/main/results_2023-10-22T14-05-44.356727.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4158976510067114,\n \"em_stderr\": 0.005047512015363023,\n \"f1\": 0.4501331795302018,\n \"f1_stderr\": 0.004938014903871411,\n \"acc\": 0.5026636978936352,\n \"acc_stderr\": 0.011011615647480079\n },\n \"harness|drop|3\": {\n \"em\": 0.4158976510067114,\n \"em_stderr\": 0.005047512015363023,\n \"f1\": 0.4501331795302018,\n \"f1_stderr\": 0.004938014903871411\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.19711902956785443,\n \"acc_stderr\": 0.01095802163030063\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8082083662194159,\n \"acc_stderr\": 0.011065209664659527\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/tulu-30B-fp16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T14_05_44.356727", "path": ["**/details_harness|drop|3_2023-10-22T14-05-44.356727.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T14-05-44.356727.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T14_05_44.356727", "path": ["**/details_harness|gsm8k|5_2023-10-22T14-05-44.356727.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T14-05-44.356727.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T14_05_44.356727", "path": ["**/details_harness|winogrande|5_2023-10-22T14-05-44.356727.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T14-05-44.356727.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T14_05_44.356727", "path": ["results_2023-10-22T14-05-44.356727.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T14-05-44.356727.parquet"]}]}]}
|
2023-10-22T13:05:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TheBloke/tulu-30B-fp16
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheBloke/tulu-30B-fp16 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-22T14:05:44.356727 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TheBloke/tulu-30B-fp16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/tulu-30B-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T14:05:44.356727(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheBloke/tulu-30B-fp16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/tulu-30B-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T14:05:44.356727(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/tulu-30B-fp16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/tulu-30B-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T14:05:44.356727(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
fcae2ebbae8d726921fbb222f74597cc01b5e1e1
|
# Dataset Card for "tam-bert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Harsha9044/tam-bert
|
[
"region:us"
] |
2023-10-22T13:06:48+00:00
|
{"dataset_info": {"features": [{"name": "Transcript", "dtype": "string"}, {"name": "Labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 282399, "num_examples": 64}], "download_size": 0, "dataset_size": 282399}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T13:11:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tam-bert"
More Information needed
|
[
"# Dataset Card for \"tam-bert\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tam-bert\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tam-bert\"\n\nMore Information needed"
] |
1dc4ae3aaadfd1c144cf4b1e4d4a0bd75e0b2052
|
# Stockmark Business Questions
|
stockmark/business-questions
|
[
"language:ja",
"license:mit",
"region:us"
] |
2023-10-22T13:47:54+00:00
|
{"language": ["ja"], "license": "mit"}
|
2023-10-25T04:01:33+00:00
|
[] |
[
"ja"
] |
TAGS
#language-Japanese #license-mit #region-us
|
# Stockmark Business Questions
|
[
"# Stockmark Business Questions"
] |
[
"TAGS\n#language-Japanese #license-mit #region-us \n",
"# Stockmark Business Questions"
] |
[
17,
6
] |
[
"passage: TAGS\n#language-Japanese #license-mit #region-us \n# Stockmark Business Questions"
] |
3470181f5ed4691210b0ff1eba03b261f1259a69
|
# Dataset Card for "Soldering-Data-pix2pix-1022"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ouvic215/Soldering-Data-pix2pix-1022
|
[
"region:us"
] |
2023-10-22T13:49:48+00:00
|
{"dataset_info": {"features": [{"name": "mask_image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1579612223.25, "num_examples": 19151}], "download_size": 1218052724, "dataset_size": 1579612223.25}}
|
2023-10-22T13:52:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Soldering-Data-pix2pix-1022"
More Information needed
|
[
"# Dataset Card for \"Soldering-Data-pix2pix-1022\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Soldering-Data-pix2pix-1022\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Soldering-Data-pix2pix-1022\"\n\nMore Information needed"
] |
eaa046e98b4731a92e192e5185eff5c98362164f
|
# Dataset Card for "xsum_10_percents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thdangtr/xsum_10_percents
|
[
"region:us"
] |
2023-10-22T14:06:45+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 47919462.033629835, "num_examples": 20404}, {"name": "validation", "num_bytes": 2628823.6534592304, "num_examples": 1133}, {"name": "test", "num_bytes": 2674669.821157579, "num_examples": 1133}], "download_size": 33669166, "dataset_size": 53222955.508246645}}
|
2023-10-22T14:07:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xsum_10_percents"
More Information needed
|
[
"# Dataset Card for \"xsum_10_percents\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xsum_10_percents\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xsum_10_percents\"\n\nMore Information needed"
] |
8f2e1124e6eae1df7b49f08cde1d9654ca8b72dc
|
# Dataset Card for RAG-Instruct-Financial-Test-Dataset
### Dataset Summary
This is a test dataset for "retrieval augmented generation" (RAG) use cases, especially for financial data extraction and analysis, including a series of questions relating to tabular financial data and common-sense math operations (small increments, decrements, sorting and ordering as well as recognizing when information is not included in a particular source). This test dataset includes 100 samples with context passages pulled from common 'retrieval scenarios' in financial markets, including financial earnings releases, stock market updates, financial tables and financial news. The primary use case is to evaluate the effectiveness of an
instruct-fine-tuned LLM used in conjunction with closed-context, fact-based question-answering, key-value extraction, and summarization with bullet points. The context passages are relatively short in this test set, ranging from ~100 tokens to ~500 tokens, and were designed for use with the
BLING series of models, but the set is suitable for comparison evaluations of any LLM for basic RAG scenarios.
This is part of a series of RAG-Instruct test datasets from llmware.
### Languages
English
## Dataset Structure
100 JSONL samples with 4 keys - "query" | "context" | "answer" | "sample_number"
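As a rough illustration of that layout (the local filename `rag_test_samples.jsonl` is an assumption, not something the dataset defines), each line can be parsed as one sample:

```python
import json

# Minimal sketch: read the JSONL test set one sample per line.
# "rag_test_samples.jsonl" is a hypothetical local filename.
with open("rag_test_samples.jsonl", "r", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f if line.strip()]

first = samples[0]
print(first["sample_number"], first["query"])  # question posed to the model
print(first["context"][:200])                  # retrieved financial passage
print(first["answer"])                         # reference answer for scoring
```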
### Personal and Sensitive Information
The dataset samples were written bespoke for this objective, but do rely upon some public information, including major public figures and widely reported events.
Any other names were created/masked and any overlap with real companies or people is coincidental.
## Dataset Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in this project and would like to participate and work with us!
|
llmware/rag_instruct_test_dataset2_financial_0.1
|
[
"license:apache-2.0",
"finance",
"retrieval augmented generation",
"RAG",
"region:us"
] |
2023-10-22T14:19:52+00:00
|
{"license": "apache-2.0", "pretty_name": "RAG Instruct Test Dataset 2 - Financial - v0.1", "tags": ["finance", "retrieval augmented generation", "RAG"]}
|
2023-10-23T14:01:44+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #finance #retrieval augmented generation #RAG #region-us
|
# Dataset Card for RAG-Instruct-Financial-Test-Dataset
### Dataset Summary
This is a test dataset for "retrieval augmented generation" (RAG) use cases, especially for financial data extraction and analysis, including a series of questions relating to tabular financial data and common-sense math operations (small increments, decrements, sorting and ordering as well as recognizing when information is not included in a particular source). This test dataset includes 100 samples with context passages pulled from common 'retrieval scenarios' in financial markets, including financial earnings releases, stock market updates, financial tables and financial news. The primary use case is to evaluate the effectiveness of an
instruct-fine-tuned LLM used in conjunction with closed-context, fact-based question-answering, key-value extraction, and summarization with bullet points. The context passages are relatively short in this test set, ranging from ~100 tokens to ~500 tokens, and were designed for use with the
BLING series of models, but the set is suitable for comparison evaluations of any LLM for basic RAG scenarios.
This is part of a series of RAG-Instruct test datasets from llmware.
### Languages
English
## Dataset Structure
100 JSONL samples with 4 keys - "query" | "context" | "answer" | "sample_number"
### Personal and Sensitive Information
The dataset samples were written bespoke for this objective, but do rely upon some public information, including major public figures and widely reported events.
Any other names were created/masked and any overlap with real companies or people is coincidental.
## Dataset Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in this project and would like to participate and work with us!
|
[
"# Dataset Card for RAG-Instruct-Financial-Test-Dataset",
"### Dataset Summary\n\nThis is a test dataset for \"retrieval augmented generation\" (RAG) use cases, especially for financial data extraction and analysis, including a series of questions relating to tabular financial data and common-sense math operations (small increments, decrements, sorting and ordering as well as recognizing when information is not included in a particular source). This test dataset includes 100 samples with context passages pulled from common 'retrieval scenarios' in financial markets, including financial earnings releases, stock market updates, financial tables and financial news. The primary use case is to evaluate the effectiveness of an\ninstruct-fine-tuned LLM used in conjunction with closed-context, fact-based question-answering, key-value extraction, and summarization with bulletpoints. The context passages are relatively short in this test-set ranging from ~100 tokens to ~500 tokens, and was designed for use with the\nBLING series of models but is suitable for comparison evaluations of any LLM for basic RAG scenarios. \n\nThis is part of a series of RAG-Instruct test datasets from llmware.",
"### Languages\n\nEnglish",
"## Dataset Structure\n\n100 JSONL samples with 4 keys - \"query\" | \"context\" | \"answer\" | \"sample_number\"",
"### Personal and Sensitive Information\n\nThe dataset samples were written bespoke for this objective, but do rely upon some public information, including major public figures and widely reported events. \nAny other names were created/masked and any overlap with real companies or people is coincidental.",
"## Dataset Card Contact\n\nDarren Oberst & llmware team\n\nPlease reach out anytime if you are interested in this project and would like to participate and work with us!"
] |
[
"TAGS\n#license-apache-2.0 #finance #retrieval augmented generation #RAG #region-us \n",
"# Dataset Card for RAG-Instruct-Financial-Test-Dataset",
"### Dataset Summary\n\nThis is a test dataset for \"retrieval augmented generation\" (RAG) use cases, especially for financial data extraction and analysis, including a series of questions relating to tabular financial data and common-sense math operations (small increments, decrements, sorting and ordering as well as recognizing when information is not included in a particular source). This test dataset includes 100 samples with context passages pulled from common 'retrieval scenarios' in financial markets, including financial earnings releases, stock market updates, financial tables and financial news. The primary use case is to evaluate the effectiveness of an\ninstruct-fine-tuned LLM used in conjunction with closed-context, fact-based question-answering, key-value extraction, and summarization with bulletpoints. The context passages are relatively short in this test-set ranging from ~100 tokens to ~500 tokens, and was designed for use with the\nBLING series of models but is suitable for comparison evaluations of any LLM for basic RAG scenarios. \n\nThis is part of a series of RAG-Instruct test datasets from llmware.",
"### Languages\n\nEnglish",
"## Dataset Structure\n\n100 JSONL samples with 4 keys - \"query\" | \"context\" | \"answer\" | \"sample_number\"",
"### Personal and Sensitive Information\n\nThe dataset samples were written bespoke for this objective, but do rely upon some public information, including major public figures and widely reported events. \nAny other names were created/masked and any overlap with real companies or people is coincidental.",
"## Dataset Card Contact\n\nDarren Oberst & llmware team\n\nPlease reach out anytime if you are interested in this project and would like to participate and work with us!"
] |
[
28,
18,
269,
5,
42,
63,
37
] |
[
"passage: TAGS\n#license-apache-2.0 #finance #retrieval augmented generation #RAG #region-us \n# Dataset Card for RAG-Instruct-Financial-Test-Dataset### Dataset Summary\n\nThis is a test dataset for \"retrieval augmented generation\" (RAG) use cases, especially for financial data extraction and analysis, including a series of questions relating to tabular financial data and common-sense math operations (small increments, decrements, sorting and ordering as well as recognizing when information is not included in a particular source). This test dataset includes 100 samples with context passages pulled from common 'retrieval scenarios' in financial markets, including financial earnings releases, stock market updates, financial tables and financial news. The primary use case is to evaluate the effectiveness of an\ninstruct-fine-tuned LLM used in conjunction with closed-context, fact-based question-answering, key-value extraction, and summarization with bulletpoints. The context passages are relatively short in this test-set ranging from ~100 tokens to ~500 tokens, and was designed for use with the\nBLING series of models but is suitable for comparison evaluations of any LLM for basic RAG scenarios. \n\nThis is part of a series of RAG-Instruct test datasets from llmware.### Languages\n\nEnglish## Dataset Structure\n\n100 JSONL samples with 4 keys - \"query\" | \"context\" | \"answer\" | \"sample_number\"### Personal and Sensitive Information\n\nThe dataset samples were written bespoke for this objective, but do rely upon some public information, including major public figures and widely reported events. \nAny other names were created/masked and any overlap with real companies or people is coincidental.## Dataset Card Contact\n\nDarren Oberst & llmware team\n\nPlease reach out anytime if you are interested in this project and would like to participate and work with us!"
] |
f7ef8250b7059b79f48b120af74e84afe95483bd
|
# Dataset Card for "4q"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xcz9811/4q
|
[
"region:us"
] |
2023-10-22T14:30:21+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "quadrant", "dtype": {"class_label": {"names": {"0": "Q1", "1": "Q2", "2": "Q3", "3": "Q4"}}}}], "splits": [{"name": "train", "num_bytes": 291173680.0, "num_examples": 900}], "download_size": 291039981, "dataset_size": 291173680.0}}
|
2023-10-22T14:33:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "4q"
More Information needed
|
[
"# Dataset Card for \"4q\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"4q\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"4q\"\n\nMore Information needed"
] |
fea33539c9438e97474674232d25770457a930c4
|
# Dataset Card for "claude2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
umd-zhou-lab/claude2_alpaca
|
[
"region:us"
] |
2023-10-22T14:33:30+00:00
|
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 43416526, "num_examples": 52002}], "download_size": 26338365, "dataset_size": 43416526}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T14:42:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "claude2_alpaca"
More Information needed
|
[
"# Dataset Card for \"claude2_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"claude2_alpaca\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"claude2_alpaca\"\n\nMore Information needed"
] |
67ebf5340d40588812222e83291cdc8dddc2ad6f
|
# Dataset Card for "snips_llm_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
benayas/snips_llm_v1
|
[
"region:us"
] |
2023-10-22T14:47:07+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5359378, "num_examples": 13084}, {"name": "test", "num_bytes": 574870, "num_examples": 1400}], "download_size": 761618, "dataset_size": 5934248}}
|
2023-11-26T22:29:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "snips_llm_v1"
More Information needed
|
[
"# Dataset Card for \"snips_llm_v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"snips_llm_v1\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"snips_llm_v1\"\n\nMore Information needed"
] |
9639b3552594a612ceafa76238a34f0f6eeaa207
|
# 🕰️ Conversation Chronicles
We introduce Conversation Chronicles, a new high-quality 1M multi-session dataset that includes a wider variety of time intervals and fine-grained speaker relationships!
## Load with Datasets
To load our dataset with Hugging Face Datasets, please use the following code:
```python
from datasets import load_dataset
cc = load_dataset("jihyoung/ConversationChronicles")
```
## Languages
The language of Conversation Chronicles is ***English***.
## Dataset Size
| Feature | Count |
| ---------------------- | ----- |
| # of Sessions | 1M |
| # of Episodes | 200K |
| # of Turns | 11.7M |
| Avg. Turns per session | 11.7 |
| Avg. Words per Turn | 18.03 |
### Dataset Splits
| Split | Number of Sessions | Number of Episodes |
| ------------- | ------------------ | ------------------ |
| Train | 800,000 | 160,000 |
| Validation | 100,000 | 20,000 |
| Test | 100,000 | 20,000 |
## Dataset Structure
| Fields | Type | Description |
| ------------------------- | --------------- | ---------------------------------------------------- |
| `dataID` | string | unique ID of an episode |
| `relationship` | string | relationships between the speakers in the episode |
| `time_interval` | sequence (list) | time intervals between sessions (total of 5) |
| `summary` | sequence (list) | chronological summaries of each session (total of 5) |
| `first_session_dialogue` | sequence (list) | utterance in the first session |
| `first_session_speakers` | sequence (list) | speaker matching for the first session utterance |
| `second_session_dialogue` | sequence (list) | utterance in the second session |
| `second_session_speakers` | sequence (list) | speaker matching for the second session utterance |
| `third_session_dialogue` | sequence (list) | utterance in the third session |
| `third_session_speakers` | sequence (list) | speaker matching for the third session utterance |
| `fourth_session_dialogue` | sequence (list) | utterance in the fourth session |
| `fourth_session_speakers` | sequence (list) | speaker matching for the fourth session utterance |
| `fifth_session_dialogue` | sequence (list) | utterance in the fifth session |
| `fifth_session_speakers` | sequence (list) | speaker matching for the fifth session utterance |
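Putting the load snippet and the field table together, one episode can be walked session by session; a minimal sketch (field names follow the table above, and the "train" split name is assumed from the splits listed earlier):

```python
from datasets import load_dataset

# Minimal sketch: print the first session of one episode, pairing each
# utterance with its speaker. Field names follow the table above; the
# "train" split name is assumed from the split table earlier in this card.
cc = load_dataset("jihyoung/ConversationChronicles", split="train")
episode = cc[0]

print(episode["relationship"], "| first interval:", episode["time_interval"][0])
for speaker, utterance in zip(episode["first_session_speakers"],
                              episode["first_session_dialogue"]):
    print(f"{speaker}: {utterance}")
```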
## Chronological Dynamics
Our Conversation Chronicles implements chronological dynamics by integrating time intervals and speaker relationships.
| Time Interval | Count |
| ------------------- | ------------------ |
| `A few hours` | 159,975 |
| `A few days` | 159,928 |
| `A few weeks` | 160,670 |
| `A few months` | 160,050 |
| `A couple of years` | 159,377 |
| Relationship | Count | Ratio |
| ------------------- | ------- | ----- |
| Classmates | 66,090 | 33.05% |
| Neighbors | 49,521 | 24.76% |
| Co-workers | 28,856 | 14.43% |
| Mentee and Mentor | 16,035 | 8.02% |
| Husband and Wife | 13,486 | 6.74% |
| Patient and Doctor | 6,980 | 3.49% |
| Parent and Child | 6,514 | 3.26% |
| Student and Teacher | 5,018 | 2.51% |
| Employee and Boss | 4,811 | 2.41% |
| Athlete and Coach | 2,689 | 1.34% |
| Total | 200,000 | |
## Citation Information
```
@inproceedings{jang-etal-2023-conversation,
title = "Conversation Chronicles: Towards Diverse Temporal and Relational Dynamics in Multi-Session Conversations",
author = "Jang, Jihyoung and
Boo, Minseong and
Kim, Hyounghun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.838",
doi = "10.18653/v1/2023.emnlp-main.838",
pages = "13584--13606",
}
```
|
jihyoung/ConversationChronicles
|
[
"task_categories:conversational",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-10-22T14:59:38+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["conversational"], "pretty_name": "CC"}
|
2023-12-21T05:20:04+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-conversational #language-English #license-cc-by-4.0 #region-us
|
️ Conversation Chronicles
=========================
We introduce Conversation Chronicles, a new high-quality 1M multi-session dataset that includes a wider variety of time intervals and fine-grained speaker relationships!
Load with Datasets
------------------
To load our dataset with Hugging Face Datasets, please use the following code:
Languages
---------
The language of Conversation Chronicles is *English*.
Dataset Size
------------
### Dataset Splits
Split: Train, Number of Sessions: 800,000, Number of Episodes: 160,000
Split: Validation, Number of Sessions: 100,000, Number of Episodes: 20,000
Split: Test, Number of Sessions: 100,000, Number of Episodes: 20,000
Dataset Structure
-----------------
Fields: 'dataID', Type: string, Description: unique ID of an episode
Fields: 'relationship', Type: string, Description: relationships between the speakers in the episode
Fields: 'time\_interval', Type: sequence (list), Description: time intervals between sessions (total of 5)
Fields: 'summary', Type: sequence (list), Description: chronological summaries of each session (total of 5)
Fields: 'first\_session\_dialogue', Type: sequence (list), Description: utterance in the first session
Fields: 'first\_session\_speakers', Type: sequence (list), Description: speaker matching for the first session utterance
Fields: 'second\_session\_dialogue', Type: sequence (list), Description: utterance in the second session
Fields: 'second\_session\_speakers', Type: sequence (list), Description: speaker matching for the second session utterance
Fields: 'third\_session\_dialogue', Type: sequence (list), Description: utterance in the third session
Fields: 'third\_session\_speakers', Type: sequence (list), Description: speaker matching for the third session utterance
Fields: 'fourth\_session\_dialogue', Type: sequence (list), Description: utterance in the fourth session
Fields: 'fourth\_session\_speakers', Type: sequence (list), Description: speaker matching for the fourth session utterance
Fields: 'fifth\_session\_dialogue', Type: sequence (list), Description: utterance in the fifth session
Fields: 'fifth\_session\_speakers', Type: sequence (list), Description: speaker matching for the fifth session utterance
Chronological Dynamics
----------------------
Our Conversation Chronicles implements chronological dynamics by integrating time intervals and speaker relationships.
Relationship: Classmates, Count: 66,090, Ratio: 33.05%
Relationship: Neighbors, Count: 49,521, Ratio: 24.76%
Relationship: Co-workers, Count: 28,856, Ratio: 14.43%
Relationship: Mentee and Mentor, Count: 16,035, Ratio: 8.02%
Relationship: Husband and Wife, Count: 13,486, Ratio: 6.74%
Relationship: Patient and Doctor, Count: 6,980, Ratio: 3.49%
Relationship: Parent and Child, Count: 6,514, Ratio: 3.26%
Relationship: Student and Teacher, Count: 5,018, Ratio: 2.51%
Relationship: Employee and Boss, Count: 4,811, Ratio: 2.41%
Relationship: Athlete and Coach, Count: 2,689, Ratio: 1.34%
Relationship: Total, Count: 200,000, Ratio:
|
[
"### Dataset Splits\n\n\nSplit: Train, Number of Sessions: 800,000, Number of Episodes: 160,000\nSplit: Validation, Number of Sessions: 100,000, Number of Episodes: 20,000\nSplit: Test, Number of Sessions: 100,000, Number of Episodes: 20,000\n\n\nDataset Structure\n-----------------\n\n\nFields: 'dataID', Type: string, Description: unique ID of an episode\nFields: 'relationship', Type: string, Description: relationships between the speakers in the episode\nFields: 'time\\_interval', Type: sequence (list), Description: time intervals between sessions (total of 5)\nFields: 'summary', Type: sequence (list), Description: chronological summaries of each session (total of 5)\nFields: 'first\\_session\\_dialogue', Type: sequence (list), Description: utterance in the first session\nFields: 'first\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the first session utterance\nFields: 'second\\_session\\_dialogue', Type: sequence (list), Description: utterance in the second session\nFields: 'second\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the second session utterance\nFields: 'third\\_session\\_dialogue', Type: sequence (list), Description: utterance in the third session\nFields: 'third\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the third session utterance\nFields: 'fourth\\_session\\_dialogue', Type: sequence (list), Description: utterance in the fourth session\nFields: 'fourth\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the fourth session utterance\nFields: 'fifth\\_session\\_dialogue', Type: sequence (list), Description: utterance in the fifth session\nFields: 'fifth\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the fifth session utterance\n\n\nChronological Dynamics\n----------------------\n\n\nour Conversation Chronicles implements chronological dynamics by integrating time interval and speaker relationship.\n\n\n\nRelationship: Classmates, Count: 66,090, Ratio: 33.05%\nRelationship: Neighbors, Count: 49,521, Ratio: 24.76%\nRelationship: Co-workers, Count: 28,856, Ratio: 14.43%\nRelationship: Mentee and Mentor, Count: 16,035, Ratio: 8.02%\nRelationship: Husband and Wife, Count: 13,486, Ratio: 6.74%\nRelationship: Patient and Doctor, Count: 6,980, Ratio: 3.49%\nRelationship: Parent and Child, Count: 6,514, Ratio: 3.26%\nRelationship: Student and Teacher, Count: 5,018, Ratio: 2.51%\nRelationship: Employee and Boss, Count: 4,811, Ratio: 2.41%\nRelationship: Athlete and Coach, Count: 2,689, Ratio: 1.34%\nRelationship: Total, Count: 200,000, Ratio:"
] |
[
"TAGS\n#task_categories-conversational #language-English #license-cc-by-4.0 #region-us \n",
"### Dataset Splits\n\n\nSplit: Train, Number of Sessions: 800,000, Number of Episodes: 160,000\nSplit: Validation, Number of Sessions: 100,000, Number of Episodes: 20,000\nSplit: Test, Number of Sessions: 100,000, Number of Episodes: 20,000\n\n\nDataset Structure\n-----------------\n\n\nFields: 'dataID', Type: string, Description: unique ID of an episode\nFields: 'relationship', Type: string, Description: relationships between the speakers in the episode\nFields: 'time\\_interval', Type: sequence (list), Description: time intervals between sessions (total of 5)\nFields: 'summary', Type: sequence (list), Description: chronological summaries of each session (total of 5)\nFields: 'first\\_session\\_dialogue', Type: sequence (list), Description: utterance in the first session\nFields: 'first\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the first session utterance\nFields: 'second\\_session\\_dialogue', Type: sequence (list), Description: utterance in the second session\nFields: 'second\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the second session utterance\nFields: 'third\\_session\\_dialogue', Type: sequence (list), Description: utterance in the third session\nFields: 'third\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the third session utterance\nFields: 'fourth\\_session\\_dialogue', Type: sequence (list), Description: utterance in the fourth session\nFields: 'fourth\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the fourth session utterance\nFields: 'fifth\\_session\\_dialogue', Type: sequence (list), Description: utterance in the fifth session\nFields: 'fifth\\_session\\_speakers', Type: sequence (list), Description: speaker matching for the fifth session utterance\n\n\nChronological Dynamics\n----------------------\n\n\nour Conversation Chronicles implements chronological dynamics by integrating time interval and speaker relationship.\n\n\n\nRelationship: Classmates, Count: 66,090, Ratio: 33.05%\nRelationship: Neighbors, Count: 49,521, Ratio: 24.76%\nRelationship: Co-workers, Count: 28,856, Ratio: 14.43%\nRelationship: Mentee and Mentor, Count: 16,035, Ratio: 8.02%\nRelationship: Husband and Wife, Count: 13,486, Ratio: 6.74%\nRelationship: Patient and Doctor, Count: 6,980, Ratio: 3.49%\nRelationship: Parent and Child, Count: 6,514, Ratio: 3.26%\nRelationship: Student and Teacher, Count: 5,018, Ratio: 2.51%\nRelationship: Employee and Boss, Count: 4,811, Ratio: 2.41%\nRelationship: Athlete and Coach, Count: 2,689, Ratio: 1.34%\nRelationship: Total, Count: 200,000, Ratio:"
] |
[
29,
778
] |
[
"passage: TAGS\n#task_categories-conversational #language-English #license-cc-by-4.0 #region-us \n"
] |
787ccfeea290550b41e582be03033ba9c5caa018
|
# Dataset Card for "sotaysv-qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Back-up/sotaysv-qa
|
[
"region:us"
] |
2023-10-22T15:02:20+00:00
|
{"dataset_info": {"features": [{"name": "Questions", "dtype": "string"}, {"name": "Answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 129518, "num_examples": 176}], "download_size": 56231, "dataset_size": 129518}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T15:02:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sotaysv-qa"
More Information needed
|
[
"# Dataset Card for \"sotaysv-qa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sotaysv-qa\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sotaysv-qa\"\n\nMore Information needed"
] |
9631ece751629d3280e3e5750ff6f4931ce8a76a
|
# Dataset Card for "test_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Back-up/test_qa
|
[
"region:us"
] |
2023-10-22T15:06:16+00:00
|
{"dataset_info": {"features": [{"name": "Questions", "dtype": "string"}, {"name": "Answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13640, "num_examples": 18}], "download_size": 14990, "dataset_size": 13640}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T15:06:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test_qa"
More Information needed
|
[
"# Dataset Card for \"test_qa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test_qa\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test_qa\"\n\nMore Information needed"
] |
135024302a003311dca47c912f35b99dbacc7e72
|
# Dataset Card for "CNAMCD_Cropped_256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ericyu/CNAMCD_Cropped_256
|
[
"region:us"
] |
2023-10-22T15:08:07+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "full", "path": "data/full-*"}]}], "dataset_info": {"features": [{"name": "imageA", "dtype": "image"}, {"name": "imageB", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "full", "num_bytes": 238076781.888, "num_examples": 10032}], "download_size": 240024878, "dataset_size": 238076781.888}}
|
2023-10-22T15:08:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CNAMCD_Cropped_256"
More Information needed
|
[
"# Dataset Card for \"CNAMCD_Cropped_256\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CNAMCD_Cropped_256\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CNAMCD_Cropped_256\"\n\nMore Information needed"
] |
be19071d5be3919f995dd903c7ff90ec6872c9d8
|
# Dataset Card for "CLCD_Cropped_256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ericyu/CLCD_Cropped_256
|
[
"region:us"
] |
2023-10-22T15:21:12+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "imageA", "dtype": "image"}, {"name": "imageB", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 29228609.52, "num_examples": 1440}, {"name": "test", "num_bytes": 9716986.0, "num_examples": 480}, {"name": "val", "num_bytes": 9686310.0, "num_examples": 480}], "download_size": 48264072, "dataset_size": 48631905.519999996}}
|
2023-10-22T15:21:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CLCD_Cropped_256"
More Information needed
|
[
"# Dataset Card for \"CLCD_Cropped_256\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CLCD_Cropped_256\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CLCD_Cropped_256\"\n\nMore Information needed"
] |
85804f1147a97d8ca78860de94da923a493650d3
|
# Dataset Card for "shingazidja-lexicon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nairaxo/shingazidja-lexicon
|
[
"region:us"
] |
2023-10-22T15:32:33+00:00
|
{"dataset_info": {"features": [{"name": "ID", "dtype": "int64"}, {"name": "Word", "dtype": "string"}, {"name": "Origin", "dtype": "string"}, {"name": "Nominal Class", "dtype": "string"}, {"name": "Plural", "dtype": "string"}, {"name": "Word (Simplified)", "dtype": "string"}, {"name": "Plural (Simplified)", "dtype": "string"}, {"name": "Translation (en)", "dtype": "string"}, {"name": "Translation (fr) (Google)", "dtype": "string"}, {"name": "POS", "dtype": "string"}, {"name": "Polarity", "dtype": "float64"}, {"name": "Sentiment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 672158, "num_examples": 5714}], "download_size": 335616, "dataset_size": 672158}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T15:32:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "shingazidja-lexicon"
More Information needed
|
[
"# Dataset Card for \"shingazidja-lexicon\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"shingazidja-lexicon\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"shingazidja-lexicon\"\n\nMore Information needed"
] |
ea6e37a4654cb30615043655c22d1641965da264
|
# Dataset Card for "shimaore-lexicon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nairaxo/shimaore-lexicon
|
[
"region:us"
] |
2023-10-22T15:32:35+00:00
|
{"dataset_info": {"features": [{"name": "ID", "dtype": "int64"}, {"name": "Word", "dtype": "string"}, {"name": "Word Form", "dtype": "string"}, {"name": "Translation (fr)", "dtype": "string"}, {"name": "Translation (en) (Google)", "dtype": "string"}, {"name": "POS", "dtype": "string"}, {"name": "Polarity", "dtype": "float64"}, {"name": "Sentiment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192121, "num_examples": 2161}], "download_size": 73500, "dataset_size": 192121}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T15:32:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "shimaore-lexicon"
More Information needed
|
[
"# Dataset Card for \"shimaore-lexicon\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"shimaore-lexicon\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"shimaore-lexicon\"\n\nMore Information needed"
] |
ef6d0511f04d95247e999df2b499c09a3bf8a72d
|
# Dataset Card for "dmae-ve-da2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Augusto777/dmae-ve-da2
|
[
"region:us"
] |
2023-10-22T15:36:39+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "avanzada", "1": "leve", "2": "moderada", "3": "no dmae"}}}}], "splits": [{"name": "train", "num_bytes": 63782511.0, "num_examples": 578}, {"name": "test", "num_bytes": 17213294.0, "num_examples": 53}, {"name": "validation", "num_bytes": 14157715.0, "num_examples": 53}], "download_size": 94981677, "dataset_size": 95153520.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
|
2023-10-22T15:41:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dmae-ve-da2"
More Information needed
|
[
"# Dataset Card for \"dmae-ve-da2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dmae-ve-da2\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dmae-ve-da2\"\n\nMore Information needed"
] |
3b7d071bd038fca58dae7cdbcd981280135bc73d
|
# Dataset Card for "cnn_dailymail_100_finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jamestalentium/cnn_dailymail_100_finetune
|
[
"region:us"
] |
2023-10-22T15:39:49+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 439445.02164652944, "num_examples": 100}], "download_size": 128996, "dataset_size": 439445.02164652944}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T15:39:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cnn_dailymail_100_finetune"
More Information needed
|
[
"# Dataset Card for \"cnn_dailymail_100_finetune\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cnn_dailymail_100_finetune\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cnn_dailymail_100_finetune\"\n\nMore Information needed"
] |
449b0444b0ab8a04d69040528b5e59a772c0c272
|
# Dataset Card for "cnn_dailymail_100_rm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jamestalentium/cnn_dailymail_100_rm
|
[
"region:us"
] |
2023-10-22T15:39:50+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 439445.02164652944, "num_examples": 100}], "download_size": 134076, "dataset_size": 439445.02164652944}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T15:39:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cnn_dailymail_100_rm"
More Information needed
|
[
"# Dataset Card for \"cnn_dailymail_100_rm\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cnn_dailymail_100_rm\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cnn_dailymail_100_rm\"\n\nMore Information needed"
] |
2dd751918d969bb75f4f4f7fe4196543344e9756
|
# Dataset Card for "cnn_dailymail_100_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jamestalentium/cnn_dailymail_100_test
|
[
"region:us"
] |
2023-10-22T15:39:51+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 8255778.137510879, "num_examples": 1900}], "download_size": 2210923, "dataset_size": 8255778.137510879}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-10-22T15:39:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cnn_dailymail_100_test"
More Information needed
|
[
"# Dataset Card for \"cnn_dailymail_100_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cnn_dailymail_100_test\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cnn_dailymail_100_test\"\n\nMore Information needed"
] |
99cfd9e7bf3aea67712953a89c7a84eec3bfd480
|
# Dataset Card for "cnn_dailymail_250_finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jamestalentium/cnn_dailymail_250_finetune
|
[
"region:us"
] |
2023-10-22T15:39:58+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1098612.5541163236, "num_examples": 250}], "download_size": 307394, "dataset_size": 1098612.5541163236}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T15:40:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cnn_dailymail_250_finetune"
More Information needed
|
[
"# Dataset Card for \"cnn_dailymail_250_finetune\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cnn_dailymail_250_finetune\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cnn_dailymail_250_finetune\"\n\nMore Information needed"
] |
b9d914ab07b702e60be7462532de1b299bda2884
|
# Dataset Card for "cnn_dailymail_250_rm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jamestalentium/cnn_dailymail_250_rm
|
[
"region:us"
] |
2023-10-22T15:40:00+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1098612.5541163236, "num_examples": 250}], "download_size": 303396, "dataset_size": 1098612.5541163236}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T15:40:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cnn_dailymail_250_rm"
More Information needed
|
[
"# Dataset Card for \"cnn_dailymail_250_rm\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cnn_dailymail_250_rm\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cnn_dailymail_250_rm\"\n\nMore Information needed"
] |
dff6785df8a33231e1633e995ba4743529027709
|
# Dataset Card for "cnn_dailymail_250_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jamestalentium/cnn_dailymail_250_test
|
[
"region:us"
] |
2023-10-22T15:40:01+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 8255778.137510879, "num_examples": 1900}], "download_size": 2210923, "dataset_size": 8255778.137510879}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-10-22T15:40:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cnn_dailymail_250_test"
More Information needed
|
[
"# Dataset Card for \"cnn_dailymail_250_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cnn_dailymail_250_test\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cnn_dailymail_250_test\"\n\nMore Information needed"
] |
1e6fcf1afb2aabfe10da15e7c08cf0412153de3b
|
# Dataset Card for "general10k_for-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
haseong8012/general10k_for-test
|
[
"region:us"
] |
2023-10-22T15:40:28+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "audio", "sequence": "float32"}], "splits": [{"name": "test", "num_bytes": 1824475284.3333333, "num_examples": 10000}], "download_size": 0, "dataset_size": 1824475284.3333333}}
|
2023-10-22T16:31:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "general10k_for-test"
More Information needed
|
[
"# Dataset Card for \"general10k_for-test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"general10k_for-test\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"general10k_for-test\"\n\nMore Information needed"
] |
fa1aa2f7a050b015a03417b2ac45c34506c031b8
|
# Dataset Card for "SYSU_CD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ericyu/SYSU_CD
|
[
"region:us"
] |
2023-10-22T15:44:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "imageA", "dtype": "image"}, {"name": "imageB", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3393267984.0, "num_examples": 12000}, {"name": "test", "num_bytes": 1196988392.0, "num_examples": 4000}, {"name": "val", "num_bytes": 1164865940.0, "num_examples": 4000}], "download_size": 5814133284, "dataset_size": 5755122316.0}}
|
2023-10-22T15:50:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SYSU_CD"
More Information needed
|
[
"# Dataset Card for \"SYSU_CD\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SYSU_CD\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SYSU_CD\"\n\nMore Information needed"
] |
15bf339320d74faa9caf93ba78db38a89fcaac98
|
# [Roy: Rapid Prototyping of Agents with Hotswappable Components](https://github.com/JosefAlbers/Roy)
[<img src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/JosefAlbers/Roy/blob/main/quickstart.ipynb)
[](https://zenodo.org/badge/latestdoi/699801819)
Roy is a lightweight alternative to `autogen` for developing advanced multi-agent systems using language models. It aims to simplify and democratize the development of emergent collective intelligence.
## Features
- **Model Agnostic**: Use any LLM, no external APIs required. Defaults to a 4-bit quantized wizard-coder-python model for efficiency.
- **Modular and Composable**: Roy decomposes agent interactions into reusable building blocks: templating, retrieving, generating, executing.
- **Transparent and Customizable**: Every method has a clear purpose. Easily swap out components or add new capabilities.
## Quickstart
```sh
git clone https://github.com/JosefAlbers/Roy
cd Roy
pip install -r requirements.txt
pip install -U transformers optimum accelerate auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
```python
from roy import Roy, Roys
roy = Roy()
s = '"What date is today? Which big tech stock has the largest year-to-date gain this year? How much is the gain?'
roy.generate(roy.format(s))
```
### **Rapid Benchmarking**
Roy provides a simple way to evaluate and iterate on your model architecture. This allows you to:
- Easily swap out components, such as language models, prompt formats, agent architectures, etc.
- Benchmark on different tasks like arithmetic, Python coding, etc. (default is OpenAI's HumanEval)
- Identify an agent's areas of strength and weakness
```python
from Roy.util import piecewise_human_eval
# Comparing different language models
piecewise_human_eval(0, lm_id='TheBloke/WizardCoder-Python-7B-V1.0-GPTQ')
# -> {'pass@1': 0.6341463414634146}
piecewise_human_eval(0, lm_id='TheBloke/tora-code-7B-v1.0-GPTQ')
# -> {'pass@1': 0.5609756097560976}
piecewise_human_eval(0, lm_id='TheBloke/Arithmo-Mistral-7B-GPTQ')
# -> {'pass@1': 0.5121951219512195}
# Testing a custom agent architecture
piecewise_human_eval(0, fx=<your_custom_Roy_agent>)
```
*Takes around 30 minutes each on a free Google Colab runtime.*
### **Constrained Beam Search**
Use templates to structure conversations (control output length, format, etc.)
```python
roy.generate(s, ('\n```python', '\n```')) # Generate a python code block
roy.generate(s, (('\n```python', '\n```javascript'), '\n```')) # Generate python or javascript codes
roy.generate(s, ('\n```python', 100, '\n```')) # Generate a code block of size less than 100 tokens
```
### **Retrieval Augmented Generation**
Enhance generation with relevant knowledge.
```python
s = 'Create a text to image generator.'
r = roy.retrieve(s, n_topk=3, src='huggingface')
[roy.generate(s) for s in r]
```
### **Auto-Feedback**
Agents recursively improve by critiquing each other.
```python
s = "Create a secure and unique secret code word with a Python script that involves multiple steps to ensure the highest level of confidentiality and protection.\n"
for i in range(2):
c = roy.generate(s, prohibitions=['input'])
s += roy.execute(c)
```
### **Auto-Grinding**
Agents collaborate in tight loops to iteratively refine outputs to specification.
```python
user_request = "Compare the year-to-date gain for META and TESLA."
ai_response = roy.generate(user_request, ('\n```python', ' yfinance', '\n```'))
for i in range(2):
shell_execution = roy.execute(ai_response)
if 'ModuleNotFoundError' in shell_execution:
roy.execute(roy.generate(roy.format(f'Write a shell command to address the error encountered while running this Python code:\n\n{shell_execution}')))
elif 'Error' in shell_execution:
ai_response = roy.generate(roy.format(f'Modify the code to address the error encountered:\n\n{shell_execution}'))
else:
break
```
### **Multi-Agent**
Flexible primitives to build ecosystems of agents.
```python
roys = Roys()
# AutoFeedback
roys.create(agents = {'Coder': 'i = execute(generate(i))'})
roys.start(requests = {'i': 'Create a mobile application that can track the health of elderly people living alone in rural areas.'})
# Retrieval Augmented Generation
roys.create(
agents = {
'Retriever': 'r = retrieve(i)',
'Generator': 'o = generate(r)',
})
roys.start(requests = {'i': 'Create a Deutsch to English translator.'})
# Providing a custom tool to one of the agents using lambda
roys.create(
agents = {
'Coder': 'c = generate(i)',
'Proxy': 'c = custom(execute(c))',
},
tools = {'custom': lambda x:f'Modify the code to address the error encountered:\n\n{x}' if 'Error' in x else None})
roys.start(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})
# Another way to create a custom tool for agents
def custom_switch(self, c):
py_str = 'Modify the code to address the error encountered:\n\n'
sh_str = 'Write a shell command to address the error encountered while running this Python code:\n\n'
x = self.execute(c)
if 'ModuleNotFoundError' in x:
self.execute(self.generate(sh_str+x))
elif 'Error' in x:
self.dict_cache['i'] = [py_str+x]
else:
return '<<<Success>>>:\n\n'+x
roys.create(
agents = {
'Coder': 'c = generate(i)',
'Proxy': '_ = protocol(c)',
},
tools = {'protocol': custom_switch})
roys.start(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})
```
## Emergent Multi-Agent Dynamics
Roy aims to facilitate the emergence of complex, adaptive multi-agent systems. It draws inspiration from biological and AI concepts to enable decentralized coordination and continual learning.
- **Survival of the Fittest** - Periodically evaluate and selectively retain high-performing agents based on accuracy, speed etc. Agents adapt through peer interactions.
- **Mixture of Experts** - Designate agent expertise, dynamically assemble specialist teams, and route tasks to optimal experts. Continuously refine and augment experts.
These mechanisms facilitate the emergence of capable, adaptive, and efficient agent collectives.
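Below is a minimal, self-contained sketch of how these two dynamics could be wired together. It deliberately uses plain Python callables as stand-ins for agents: the `survival_of_the_fittest`, `mixture_of_experts`, `score`, and `classify` helpers are hypothetical illustrations of where evaluation, selection, and routing would plug in, not part of Roy's API.
```python
import random
from typing import Callable, Dict, List

# Hypothetical stand-ins for agents: any callable mapping a request string to an answer.
Agent = Callable[[str], str]

def survival_of_the_fittest(agents: Dict[str, Agent],
                            eval_set: List[str],
                            score: Callable[[Agent, str], float],
                            keep: int = 1) -> Dict[str, Agent]:
    """Periodically evaluate all agents and retain only the top `keep` performers."""
    ranked = sorted(agents.items(),
                    key=lambda kv: sum(score(kv[1], q) for q in eval_set),
                    reverse=True)
    return dict(ranked[:keep])

def mixture_of_experts(experts: Dict[str, Agent],
                       classify: Callable[[str], str],
                       request: str) -> str:
    """Route a request to the expert whose declared specialty matches it."""
    specialty = classify(request)
    expert = experts.get(specialty) or random.choice(list(experts.values()))
    return expert(request)

# Toy usage with placeholder agents and a keyword router (all assumptions, not Roy's API).
agents = {
    'math_agent': lambda q: '42',
    'code_agent': lambda q: "print('hello')",
}
survivors = survival_of_the_fittest(
    agents, eval_set=['What is 2+2?'], score=lambda a, q: float(a(q).isdigit()))
answer = mixture_of_experts(
    agents,
    classify=lambda q: 'code_agent' if 'python' in q.lower() else 'math_agent',
    request='Write python code that prints hello')
print(list(survivors), answer)
```
In a real setup the placeholder lambdas would be replaced by `roy.generate`-backed agents, and the scorer by a benchmark such as the HumanEval harness shown above.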
## Get Involved
Roy is under active development. We welcome contributions - feel free to open issues and PRs!
## Support the Project
If you found this project helpful or interesting and want to support more of these experiments, feel free to buy me a coffee!
<a href="https://www.buymeacoffee.com/albersj66a" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="25" width="100"></a>
|
JosefAlbers/Roy
|
[
"task_categories:question-answering",
"task_categories:translation",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"agent",
"multi-agent",
"autogpt",
"autogen",
"agentgpt",
"gptq",
"wizard",
"code-generation",
"retrieval-augmented-generation",
"humaneval",
"region:us"
] |
2023-10-22T15:49:15+00:00
|
{"task_categories": ["question-answering", "translation", "summarization", "text-generation", "text2text-generation", "conversational"], "tags": ["agent", "multi-agent", "autogpt", "autogen", "agentgpt", "gptq", "wizard", "code-generation", "retrieval-augmented-generation", "humaneval"]}
|
2023-10-22T15:52:35+00:00
|
[] |
[] |
TAGS
#task_categories-question-answering #task_categories-translation #task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #agent #multi-agent #autogpt #autogen #agentgpt #gptq #wizard #code-generation #retrieval-augmented-generation #humaneval #region-us
|
# Roy: Rapid Prototyping of Agents with Hotswappable Components
<img src="URL />

- Identify agent's areas of strengths and weaknesses
*Takes around 30 minutes each on a free Google Colab runtime.*
### Constrained Beam Search
Use templates to structure conversations (control output length, format, etc)
python', '\npython', '\n')) # Generate python or javascript codes
roy.generate(s, ('\n')) # Generate a code block of size less than 100 tokens
python
s = 'Create a text to image generator.'
r = roy.retrieve(s, n_topk=3, src='huggingface')
[roy.generate(s) for s in r]
python
s = "Create a secure and unique secret code word with a Python script that involves multiple steps to ensure the highest level of confidentiality and protection.\n"
for i in range(2):
c = roy.generate(s, prohibitions=['input'])
s += roy.execute(c)
python
user_request = "Compare the year-to-date gain for META and TESLA."
ai_response = roy.generate(user_request, ('\n'))
for i in range(2):
shell_execution = roy.execute(ai_response)
if 'ModuleNotFoundError' in shell_execution:
roy.execute(roy.generate(URL(f'Write a shell command to address the error encountered while running this Python code:\n\n{shell_execution}')))
elif 'Error' in shell_execution:
ai_response = roy.generate(URL(f'Modify the code to address the error encountered:\n\n{shell_execution}'))
else:
break
python
roys = Roys()
# AutoFeedback
URL(agents = {'Coder': 'i = execute(generate(i))'})
URL(requests = {'i': 'Create a mobile application that can track the health of elderly people living alone in rural areas.'})
# Retrieval Augmented Generation
URL(
agents = {
'Retriever': 'r = retrieve(i)',
'Generator': 'o = generate(r)',
})
URL(requests = {'i': 'Create a Deutsch to English translator.'})
# Providing a custom tool to one of the agents using lambda
URL(
agents = {
'Coder': 'c = generate(i)',
'Proxy': 'c = custom(execute(c))',
},
tools = {'custom': lambda x:f'Modify the code to address the error encountered:\n\n{x}' if 'Error' in x else None})
URL(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})
# Another way to create a custom tool for agents
def custom_switch(self, c):
py_str = 'Modify the code to address the error encountered:\n\n'
sh_str = 'Write a shell command to address the error encountered while running this Python code:\n\n'
x = self.execute(c)
if 'ModuleNotFoundError' in x:
self.execute(self.generate(sh_str+x))
elif 'Error' in x:
self.dict_cache['i'] = [py_str+x]
else:
return '<<<Success>>>:\n\n'+x
URL(
agents = {
'Coder': 'c = generate(i)',
'Proxy': '_ = protocol(c)',
},
tools = {'protocol': custom_switch})
URL(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})
'''
## Emergent Multi-Agent Dynamics
Roy aims to facilitate the emergence of complex, adaptive multi-agent systems. It draws inspiration from biological and AI concepts to enable decentralized coordination and continual learning.
- Survival of the Fittest - Periodically evaluate and selectively retain high-performing agents based on accuracy, speed etc. Agents adapt through peer interactions.
- Mixture of Experts - Designate agent expertise, dynamically assemble specialist teams, and route tasks to optimal experts. Continuously refine and augment experts.
These mechanisms facilitate the emergence of capable, adaptive, and efficient agent collectives.
## Get Involved
Roy is under active development. We welcome contributions - feel free to open issues and PRs!
## Support the Project
If you found this project helpful or interesting and want to support more of these experiments, feel free to buy me a coffee!
<a href="URL target="_blank"><img src="URL alt="Buy Me A Coffee" height="25" width="100"></a>
|
[
"# Roy: Rapid Prototyping of Agents with Hotswappable Components\n\n<img src=\"URL />\n\n\n- Identify agent's areas of strengths and weaknesses\n\n\n\n*Takes around 30 minutes each on a free Google Colab runtime.*",
"### Constrained Beam Search\n\nUse templates to structure conversations (control output length, format, etc)\n\npython', '\\npython', '\\n')) # Generate python or javascript codes\nroy.generate(s, ('\\n')) # Generate a code block of size less than 100 tokens\npython\ns = 'Create a text to image generator.'\nr = roy.retrieve(s, n_topk=3, src='huggingface')\n[roy.generate(s) for s in r]\npython\ns = \"Create a secure and unique secret code word with a Python script that involves multiple steps to ensure the highest level of confidentiality and protection.\\n\"\nfor i in range(2):\n c = roy.generate(s, prohibitions=['input'])\n s += roy.execute(c)\npython\nuser_request = \"Compare the year-to-date gain for META and TESLA.\"\nai_response = roy.generate(user_request, ('\\n'))\nfor i in range(2):\n shell_execution = roy.execute(ai_response)\n if 'ModuleNotFoundError' in shell_execution:\n roy.execute(roy.generate(URL(f'Write a shell command to address the error encountered while running this Python code:\\n\\n{shell_execution}')))\n elif 'Error' in shell_execution:\n ai_response = roy.generate(URL(f'Modify the code to address the error encountered:\\n\\n{shell_execution}'))\n else:\n break\npython\nroys = Roys()",
"# AutoFeedback\nURL(agents = {'Coder': 'i = execute(generate(i))'})\nURL(requests = {'i': 'Create a mobile application that can track the health of elderly people living alone in rural areas.'})",
"# Retrieval Augmented Generation\nURL(\n agents = {\n 'Retriever': 'r = retrieve(i)',\n 'Generator': 'o = generate(r)',\n })\nURL(requests = {'i': 'Create a Deutsch to English translator.'})",
"# Providing a custom tool to one of the agents using lambda\nURL(\n agents = {\n 'Coder': 'c = generate(i)',\n 'Proxy': 'c = custom(execute(c))',\n },\n tools = {'custom': lambda x:f'Modify the code to address the error encountered:\\n\\n{x}' if 'Error' in x else None})\nURL(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})",
"# Another way to create a custom tool for agents\ndef custom_switch(self, c):\n py_str = 'Modify the code to address the error encountered:\\n\\n'\n sh_str = 'Write a shell command to address the error encountered while running this Python code:\\n\\n'\n x = self.execute(c)\n if 'ModuleNotFoundError' in x:\n self.execute(self.generate(sh_str+x))\n elif 'Error' in x:\n self.dict_cache['i'] = [py_str+x]\n else:\n return '<<<Success>>>:\\n\\n'+x\n \nURL(\n agents = {\n 'Coder': 'c = generate(i)',\n 'Proxy': '_ = protocol(c)',\n },\n tools = {'protocol': custom_switch})\nURL(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})\n'''",
"## Emergent Multi-Agent Dynamics\n\nRoy aims to facilitate the emergence of complex, adaptive multi-agent systems. It draws inspiration from biological and AI concepts to enable decentralized coordination and continual learning.\n\n- Survival of the Fittest - Periodically evaluate and selectively retain high-performing agents based on accuracy, speed etc. Agents adapt through peer interactions.\n\n- Mixture of Experts - Designate agent expertise, dynamically assemble specialist teams, and route tasks to optimal experts. Continuously refine and augment experts. \n\nThese mechanisms facilitate the emergence of capable, adaptive, and efficient agent collectives.",
"## Get Involved\n\nRoy is under active development. We welcome contributions - feel free to open issues and PRs!",
"## Support the Project\n\nIf you found this project helpful or interesting and want to support more of these experiments, feel free to buy me a coffee!\n\n<a href=\"URL target=\"_blank\"><img src=\"URL alt=\"Buy Me A Coffee\" height=\"25\" width=\"100\"></a>"
] |
[
"TAGS\n#task_categories-question-answering #task_categories-translation #task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #agent #multi-agent #autogpt #autogen #agentgpt #gptq #wizard #code-generation #retrieval-augmented-generation #humaneval #region-us \n",
"# Roy: Rapid Prototyping of Agents with Hotswappable Components\n\n<img src=\"URL />\n\n\n- Identify agent's areas of strengths and weaknesses\n\n\n\n*Takes around 30 minutes each on a free Google Colab runtime.*",
"### Constrained Beam Search\n\nUse templates to structure conversations (control output length, format, etc)\n\npython', '\\npython', '\\n')) # Generate python or javascript codes\nroy.generate(s, ('\\n')) # Generate a code block of size less than 100 tokens\npython\ns = 'Create a text to image generator.'\nr = roy.retrieve(s, n_topk=3, src='huggingface')\n[roy.generate(s) for s in r]\npython\ns = \"Create a secure and unique secret code word with a Python script that involves multiple steps to ensure the highest level of confidentiality and protection.\\n\"\nfor i in range(2):\n c = roy.generate(s, prohibitions=['input'])\n s += roy.execute(c)\npython\nuser_request = \"Compare the year-to-date gain for META and TESLA.\"\nai_response = roy.generate(user_request, ('\\n'))\nfor i in range(2):\n shell_execution = roy.execute(ai_response)\n if 'ModuleNotFoundError' in shell_execution:\n roy.execute(roy.generate(URL(f'Write a shell command to address the error encountered while running this Python code:\\n\\n{shell_execution}')))\n elif 'Error' in shell_execution:\n ai_response = roy.generate(URL(f'Modify the code to address the error encountered:\\n\\n{shell_execution}'))\n else:\n break\npython\nroys = Roys()",
"# AutoFeedback\nURL(agents = {'Coder': 'i = execute(generate(i))'})\nURL(requests = {'i': 'Create a mobile application that can track the health of elderly people living alone in rural areas.'})",
"# Retrieval Augmented Generation\nURL(\n agents = {\n 'Retriever': 'r = retrieve(i)',\n 'Generator': 'o = generate(r)',\n })\nURL(requests = {'i': 'Create a Deutsch to English translator.'})",
"# Providing a custom tool to one of the agents using lambda\nURL(\n agents = {\n 'Coder': 'c = generate(i)',\n 'Proxy': 'c = custom(execute(c))',\n },\n tools = {'custom': lambda x:f'Modify the code to address the error encountered:\\n\\n{x}' if 'Error' in x else None})\nURL(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})",
"# Another way to create a custom tool for agents\ndef custom_switch(self, c):\n py_str = 'Modify the code to address the error encountered:\\n\\n'\n sh_str = 'Write a shell command to address the error encountered while running this Python code:\\n\\n'\n x = self.execute(c)\n if 'ModuleNotFoundError' in x:\n self.execute(self.generate(sh_str+x))\n elif 'Error' in x:\n self.dict_cache['i'] = [py_str+x]\n else:\n return '<<<Success>>>:\\n\\n'+x\n \nURL(\n agents = {\n 'Coder': 'c = generate(i)',\n 'Proxy': '_ = protocol(c)',\n },\n tools = {'protocol': custom_switch})\nURL(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})\n'''",
"## Emergent Multi-Agent Dynamics\n\nRoy aims to facilitate the emergence of complex, adaptive multi-agent systems. It draws inspiration from biological and AI concepts to enable decentralized coordination and continual learning.\n\n- Survival of the Fittest - Periodically evaluate and selectively retain high-performing agents based on accuracy, speed etc. Agents adapt through peer interactions.\n\n- Mixture of Experts - Designate agent expertise, dynamically assemble specialist teams, and route tasks to optimal experts. Continuously refine and augment experts. \n\nThese mechanisms facilitate the emergence of capable, adaptive, and efficient agent collectives.",
"## Get Involved\n\nRoy is under active development. We welcome contributions - feel free to open issues and PRs!",
"## Support the Project\n\nIf you found this project helpful or interesting and want to support more of these experiments, feel free to buy me a coffee!\n\n<a href=\"URL target=\"_blank\"><img src=\"URL alt=\"Buy Me A Coffee\" height=\"25\" width=\"100\"></a>"
] |
[
115,
73,
108,
3,
118,
407,
67,
72,
132,
238,
149,
25,
68
] |
[
"passage: TAGS\n#task_categories-question-answering #task_categories-translation #task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #agent #multi-agent #autogpt #autogen #agentgpt #gptq #wizard #code-generation #retrieval-augmented-generation #humaneval #region-us \n# Roy: Rapid Prototyping of Agents with Hotswappable Components\n\n<img src=\"URL />\n\n\n- Identify agent's areas of strengths and weaknesses\n\n\n\n*Takes around 30 minutes each on a free Google Colab runtime.*",
"passage: ### Constrained Beam Search\n\nUse templates to structure conversations (control output length, format, etc)\n\npython', '\\npython', '\\n')) # Generate python or javascript codes\nroy.generate(s, ('\\n')) # Generate a code block of size less than 100 tokens\npython\ns = 'Create a text to image generator.'\nr = roy.retrieve(s, n_topk=3, src='huggingface')\n[roy.generate(s) for s in r]\npython\ns = \"Create a secure and unique secret code word with a Python script that involves multiple steps to ensure the highest level of confidentiality and protection.\\n\"\nfor i in range(2):\n c = roy.generate(s, prohibitions=['input'])\n s += roy.execute(c)\npython\nuser_request = \"Compare the year-to-date gain for META and TESLA.\"\nai_response = roy.generate(user_request, ('\\n'))\nfor i in range(2):\n shell_execution = roy.execute(ai_response)\n if 'ModuleNotFoundError' in shell_execution:\n roy.execute(roy.generate(URL(f'Write a shell command to address the error encountered while running this Python code:\\n\\n{shell_execution}')))\n elif 'Error' in shell_execution:\n ai_response = roy.generate(URL(f'Modify the code to address the error encountered:\\n\\n{shell_execution}'))\n else:\n break\npython\nroys = Roys()# AutoFeedback\nURL(agents = {'Coder': 'i = execute(generate(i))'})\nURL(requests = {'i': 'Create a mobile application that can track the health of elderly people living alone in rural areas.'})# Retrieval Augmented Generation\nURL(\n agents = {\n 'Retriever': 'r = retrieve(i)',\n 'Generator': 'o = generate(r)',\n })\nURL(requests = {'i': 'Create a Deutsch to English translator.'})# Providing a custom tool to one of the agents using lambda\nURL(\n agents = {\n 'Coder': 'c = generate(i)',\n 'Proxy': 'c = custom(execute(c))',\n },\n tools = {'custom': lambda x:f'Modify the code to address the error encountered:\\n\\n{x}' if 'Error' in x else None})\nURL(requests = {'i': 'Compare the year-to-date gain for META and TESLA.'})"
] |
24242af820037d1021376b9d4378c370cab1d380
|
# Dataset Card for "text-data-various-domain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yongchanskii/text-data-various-domain
|
[
"region:us"
] |
2023-10-22T16:31:18+00:00
|
{"dataset_info": [{"config_name": "default", "features": [{"name": "docId", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "domainTag", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3898906.4, "num_examples": 12000}, {"name": "test", "num_bytes": 974726.6, "num_examples": 3000}], "download_size": 2812933, "dataset_size": 4873633.0}, {"config_name": "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", "features": [{"name": "docId", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "domainTag", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 830349677, "num_examples": 2608866}, {"name": "test", "num_bytes": 207814022, "num_examples": 652217}], "download_size": 624238878, "dataset_size": 1038163699}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "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", "data_files": [{"split": "train", "path": "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/train-*"}, {"split": "test", "path": "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/test-*"}]}]}
|
2023-12-15T00:54:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "text-data-various-domain"
More Information needed
|
[
"# Dataset Card for \"text-data-various-domain\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"text-data-various-domain\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"text-data-various-domain\"\n\nMore Information needed"
] |
9ea52dd1d89ba918c22f0b00d1679ce020c229a9
|
This dataset contains question/answer pairs from a French legal protection insurance contract (https://www.service-public.fr/particuliers/vosdroits/F3049?lang=en).
The objective of this dataset is to contribute to open source research projects aiming to, for instance:
* fine-tune LLMs on high-quality datasets, specializing them in the insurance domain
* develop new question/answer applications using Retrieval Augmented Generation (RAG) for insurance contracts
* assess the knowledge of language models in the insurance field
* more generally, apply LLMs to the insurance domain for better understanding and increased transparency of this industry.
Other datasets of the same kind are also available - or will be available soon - and are part of this research effort. See here: https://huggingface.co/collections/zelros/legal-protection-insurance-6536e8f389dd48faca78447e
Here is an example of usage of this dataset: https://huggingface.co/spaces/zelros/The-legal-protection-insurance-comparator
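A minimal loading sketch (assuming the standard Hugging Face `datasets` library and that the repository exposes loadable data files; split and column names are not documented here, so they are inspected rather than assumed):
```python
from datasets import load_dataset

# Split names and columns are discovered at runtime rather than assumed.
ds = load_dataset("zelros/pj-da")
print(ds)  # shows the available splits and their features

first_split = next(iter(ds))
print(ds[first_split][0])  # one question/answer record
```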
|
zelros/pj-da
|
[
"insurance",
"region:us"
] |
2023-10-22T16:50:52+00:00
|
{"tags": ["insurance"]}
|
2023-11-05T23:25:21+00:00
|
[] |
[] |
TAGS
#insurance #region-us
|
This dataset contains question/answer pairs from a French legal protection insurance (URL
The objective of this dataset is to contribute to open source research projects aiming to, for instance:
* fine-tune LLMs on high-quality datasets, specializing them in the insurance domain
* develop new question/answer applications using Retrieval Augmented Generation (RAG) for insurance contracts
* assess the knowledge of language models in the insurance field
* more generally, apply LLMs to the insurance domain for better understanding and increased transparency of this industry.
Other datasets of the same kind are also available - or will be available soon - and are part of this research effort. See here: URL
Here is an example of usages of this dataset: URL
|
[] |
[
"TAGS\n#insurance #region-us \n"
] |
[
9
] |
[
"passage: TAGS\n#insurance #region-us \n"
] |
437baa37596cd064b1cc1845bc70e9c1b039edcf
|
# Dataset Card for "k8s-kubectl-35k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ComponentSoft/k8s-kubectl-35k
|
[
"region:us"
] |
2023-10-22T16:54:35+00:00
|
{"dataset_info": {"features": [{"name": "objective", "dtype": "string"}, {"name": "command_name", "dtype": "string"}, {"name": "command", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "syntax", "dtype": "string"}, {"name": "flags", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "chain_of_thought", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 42766088, "num_examples": 34884}], "download_size": 3522531, "dataset_size": 42766088}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T16:54:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "k8s-kubectl-35k"
More Information needed
|
[
"# Dataset Card for \"k8s-kubectl-35k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"k8s-kubectl-35k\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"k8s-kubectl-35k\"\n\nMore Information needed"
] |
e126ea080bdf22d5e2250ca311133a8d4358643e
|
# Dataset Card for "control-chatbot-to-subdomain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Back-up/control-chatbot-to-subdomain
|
[
"region:us"
] |
2023-10-22T16:55:35+00:00
|
{"dataset_info": {"features": [{"name": "answers", "dtype": "string"}, {"name": "questions", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2337677, "num_examples": 5495}], "download_size": 241329, "dataset_size": 2337677}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-10-22T17:00:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "control-chatbot-to-subdomain"
More Information needed
|
[
"# Dataset Card for \"control-chatbot-to-subdomain\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"control-chatbot-to-subdomain\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"control-chatbot-to-subdomain\"\n\nMore Information needed"
] |
98758b25ce705b8aa7f1283e7da6e19e60871870
|
# Dataset Card for "chemnlp-mp-cifs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kjappelbaum/chemnlp-mp-cifs
|
[
"region:us"
] |
2023-10-22T17:23:04+00:00
|
{"dataset_info": {"features": [{"name": "formula", "dtype": "string"}, {"name": "density", "dtype": "float64"}, {"name": "spacegroup", "dtype": "string"}, {"name": "spacegroup_number", "dtype": "int64"}, {"name": "cif", "dtype": "string"}, {"name": "is_longer_than_allowed", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 284514500, "num_examples": 154387}], "download_size": 95647734, "dataset_size": 284514500}}
|
2023-10-30T08:56:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemnlp-mp-cifs"
More Information needed
|
[
"# Dataset Card for \"chemnlp-mp-cifs\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemnlp-mp-cifs\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemnlp-mp-cifs\"\n\nMore Information needed"
] |
4100a5c7852c137d0c43eeaa732a7cd0180ec75a
|
# Dataset Card for "physics_dataset_standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_unified
|
[
"region:us"
] |
2023-10-22T17:28:31+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 49677244, "num_examples": 19999}], "download_size": 22747201, "dataset_size": 49677244}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-12-03T16:18:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_unified"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_unified\"\n\nMore Information needed"
] |
96e7fc956115eff1541c1b7da45c0a3e97627734
|
# Dataset Card for "physics_dataset_standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_embedded
|
[
"region:us"
] |
2023-10-22T17:29:17+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 131673144, "num_examples": 19999}], "download_size": 62942340, "dataset_size": 131673144}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-12-03T16:19:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_embedded"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_embedded\"\n\nMore Information needed"
] |
29dd70292615bef8ff2a46a33262e6977ccd699d
|
# Dataset Card for "physics_dataset_standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_0_std
|
[
"region:us"
] |
2023-10-22T17:30:29+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 51217167, "num_examples": 39998}], "download_size": 23141121, "dataset_size": 51217167}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-12-01T18:57:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_0_std"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
ba51babc4f4d45f26f687a55d552c14da07a1183
|
# Dataset Card for "physics_dataset_standardized_cluster_0_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_0_alpaca
|
[
"region:us"
] |
2023-10-22T17:30:30+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 50213758, "num_examples": 19998}], "download_size": 23638589, "dataset_size": 50213758}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-12-01T18:57:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_0_alpaca"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
e0f20288b5c4876f31a6ffcfa73288579c60fe17
|
# Dataset Card for "physics_dataset_standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_0
|
[
"region:us"
] |
2023-10-22T17:30:31+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 213749040, "num_examples": 19999}], "download_size": 63087699, "dataset_size": 213749040}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-12-01T18:57:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_0"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
c40a560c99508a93a6ced020df6584466c7e9618
|
# Dataset Card for "physics_dataset_standardized_cluster_1_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_1_std
|
[
"region:us"
] |
2023-10-22T17:30:44+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13270296, "num_examples": 8714}], "download_size": 0, "dataset_size": 13270296}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:51:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_1_std"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
6143150811fed3695521a4c6bbc26da61658f6fa
|
# Dataset Card for "physics_dataset_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_1_alpaca
|
[
"region:us"
] |
2023-10-22T17:30:46+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13048987, "num_examples": 4356}], "download_size": 0, "dataset_size": 13048987}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:52:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_1_alpaca"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
6ba3438758b5a89b3090daa16b6b00c335ff04a3
|
# Dataset Card for "physics_dataset_standardized_cluster_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_1
|
[
"region:us"
] |
2023-10-22T17:30:48+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 48679635, "num_examples": 4357}], "download_size": 0, "dataset_size": 48679635}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:52:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_1"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
19124f32bb44f2275a17b046d4af2ce20e43530c
|
# Dataset Card for "physics_dataset_standardized_cluster_2_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_2_std
|
[
"region:us"
] |
2023-10-22T17:31:02+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 17146201, "num_examples": 11144}], "download_size": 0, "dataset_size": 17146201}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:52:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_2_std"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
e3b3544353b3abeb48f7bcfe0132ed2fb1beb2cf
|
# Dataset Card for "physics_dataset_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_2_alpaca
|
[
"region:us"
] |
2023-10-22T17:31:04+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16864397, "num_examples": 5571}], "download_size": 0, "dataset_size": 16864397}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:52:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_2_alpaca"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
a6715e4381d2f5dfd7432fa0edf2f5c8e9a4353c
|
# Dataset Card for "physics_dataset_standardized_cluster_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_2
|
[
"region:us"
] |
2023-10-22T17:31:05+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 62429845, "num_examples": 5572}], "download_size": 0, "dataset_size": 62429845}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:52:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_2"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
7ddabcc318e89aa0f97d8a753a69ba0c72849efa
|
# Dataset Card for "physics_dataset_standardized_cluster_3_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_3_std
|
[
"region:us"
] |
2023-10-22T17:31:21+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 8742291, "num_examples": 10242}], "download_size": 0, "dataset_size": 8742291}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:52:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_3_std"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
abf5175d3046097ae8f80fdc8f54ef720a8441f1
|
# Dataset Card for "physics_dataset_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_3_alpaca
|
[
"region:us"
] |
2023-10-22T17:31:23+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8485286, "num_examples": 5120}], "download_size": 0, "dataset_size": 8485286}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:52:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_3_alpaca"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
c3bd2eae7e31f59be721d8ccfbde5bb95307b41c
|
# Dataset Card for "physics_dataset_standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_3
|
[
"region:us"
] |
2023-10-22T17:31:24+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 50360658, "num_examples": 5121}], "download_size": 0, "dataset_size": 50360658}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:52:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_3"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
eb8e2aab3f18fccd3382820614c5563c56331c8f
|
# Dataset Card for "physics_dataset_standardized_cluster_4_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_4_std
|
[
"region:us"
] |
2023-10-22T17:31:39+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7508945, "num_examples": 6876}], "download_size": 0, "dataset_size": 7508945}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:53:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_4_std"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
cce140474175b08b743060686176d76f97435df3
|
# Dataset Card for "physics_dataset_standardized_cluster_4_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_4_alpaca
|
[
"region:us"
] |
2023-10-22T17:31:40+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7334043, "num_examples": 3437}], "download_size": 0, "dataset_size": 7334043}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:53:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_4_alpaca"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
be2b58f43e95406943a4c83240b73f02510693b3
|
# Dataset Card for "physics_dataset_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/physics_dataset_standardized_cluster_4
|
[
"region:us"
] |
2023-10-22T17:31:41+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 35449571, "num_examples": 3438}], "download_size": 0, "dataset_size": 35449571}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T00:53:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "physics_dataset_standardized_cluster_4"
More Information needed
|
[
"# Dataset Card for \"physics_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"physics_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"physics_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
a68c0134b878b1e760224ca1819a473430e91fd3
|
# Dataset Card for "biology_dataset_standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_unified
|
[
"region:us"
] |
2023-10-22T17:39:24+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 59401701, "num_examples": 19999}], "download_size": 0, "dataset_size": 59401701}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:42:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_unified"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_unified\"\n\nMore Information needed"
] |
e05c64cc51be53ba81f1c470ca75dea94372d02f
|
# Dataset Card for "biology_dataset_standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_embedded
|
[
"region:us"
] |
2023-10-22T17:40:18+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 141397601, "num_examples": 19999}], "download_size": 0, "dataset_size": 141397601}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:43:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_embedded"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_embedded\"\n\nMore Information needed"
] |
47d7b6232c48ccf31af48ce206aa12f359324ecd
|
# Dataset Card for "biology_dataset_standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_0_std
|
[
"region:us"
] |
2023-10-22T17:46:35+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13287061, "num_examples": 8108}], "download_size": 0, "dataset_size": 13287061}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:44:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_0_std"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
0f379113bdf0e9fa51666d2dde77ee16a10eac3f
|
# Dataset Card for "biology_dataset_standardized_cluster_0_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_0_alpaca
|
[
"region:us"
] |
2023-10-22T17:46:37+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13081092, "num_examples": 4053}], "download_size": 0, "dataset_size": 13081092}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:44:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_0_alpaca"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
314205c3ed245a5f38ace37bf7fec11945a6e6cf
|
# Dataset Card for "biology_dataset_standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_0
|
[
"region:us"
] |
2023-10-22T17:46:38+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 46233919, "num_examples": 4054}], "download_size": 0, "dataset_size": 46233919}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:44:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_0"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
60a174a02822dbf35bb9ce04e3338eac271cbbcf
|
# Dataset Card for "biology_dataset_standardized_cluster_1_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_1_std
|
[
"region:us"
] |
2023-10-22T17:46:52+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2614933, "num_examples": 1914}], "download_size": 0, "dataset_size": 2614933}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:44:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_1_std"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
152bdbfef2c8e50ac689dca4077987b6d2347bad
|
# Dataset Card for "biology_dataset_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_1_alpaca
|
[
"region:us"
] |
2023-10-22T17:46:53+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2564875, "num_examples": 956}], "download_size": 0, "dataset_size": 2564875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:45:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_1_alpaca"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
d400f7b67afe251c7931f6326af9bfcfd8b856c2
|
# Dataset Card for "biology_dataset_standardized_cluster_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_1
|
[
"region:us"
] |
2023-10-22T17:46:54+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10392472, "num_examples": 957}], "download_size": 0, "dataset_size": 10392472}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:45:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_1"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
30e3a795393bf51cf32ec4b109ca9b08112e70a6
|
# Dataset Card for "biology_dataset_standardized_cluster_2_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_2_std
|
[
"region:us"
] |
2023-10-22T17:47:07+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9617078, "num_examples": 6614}], "download_size": 0, "dataset_size": 9617078}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:45:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_2_std"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
e02ac5f9985e713ec35d928da5e17f5ed00e4ad3
|
# Dataset Card for "biology_dataset_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_2_alpaca
|
[
"region:us"
] |
2023-10-22T17:47:08+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9448881, "num_examples": 3306}], "download_size": 0, "dataset_size": 9448881}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:45:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_2_alpaca"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
c9922ec5a59e7a4cc5fe5baead29b6d74ae81f62
|
# Dataset Card for "biology_dataset_standardized_cluster_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_2
|
[
"region:us"
] |
2023-10-22T17:47:09+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 36493067, "num_examples": 3307}], "download_size": 0, "dataset_size": 36493067}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:45:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_2"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
bef81b9f3df2d70e0e6e46ad82ab076f985f9985
|
# Dataset Card for "biology_dataset_standardized_cluster_3_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_3_std
|
[
"region:us"
] |
2023-10-22T17:47:23+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 23229882, "num_examples": 14928}], "download_size": 0, "dataset_size": 23229882}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:45:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_3_std"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
05a0440b640b58cf65375c1a2dfe7e07d3800765
|
# Dataset Card for "biology_dataset_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_3_alpaca
|
[
"region:us"
] |
2023-10-22T17:47:26+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22853614, "num_examples": 7463}], "download_size": 0, "dataset_size": 22853614}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:45:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_3_alpaca"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
17138787c6d7bfbaeb31a8281aa0d9d08411ad62
|
# Dataset Card for "biology_dataset_standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_3
|
[
"region:us"
] |
2023-10-22T17:47:27+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 83889810, "num_examples": 7464}], "download_size": 0, "dataset_size": 83889810}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:45:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_3"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
8afcb948c71319c227251d5a15292157f7ed892e
|
# Dataset Card for "biology_dataset_standardized_cluster_4_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_4_std
|
[
"region:us"
] |
2023-10-22T17:47:43+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 12192670, "num_examples": 8434}], "download_size": 0, "dataset_size": 12192670}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:46:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_4_std"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
ec9e1d44420735f220d310bf282a9149fa750ec7
|
# Dataset Card for "biology_dataset_standardized_cluster_4_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_4_alpaca
|
[
"region:us"
] |
2023-10-22T17:47:44+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11979113, "num_examples": 4216}], "download_size": 0, "dataset_size": 11979113}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:46:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_4_alpaca"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
96084075def78262028590a6343d423773999302
|
# Dataset Card for "biology_dataset_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/biology_dataset_standardized_cluster_4
|
[
"region:us"
] |
2023-10-22T17:47:45+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 46464229, "num_examples": 4217}], "download_size": 0, "dataset_size": 46464229}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T13:46:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_4"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |