sha: stringlengths 40–40 | text: stringlengths 1–13.4M | id: stringlengths 2–117 | tags: listlengths 1–7.91k | created_at: stringlengths 25–25 | metadata: stringlengths 2–875k | last_modified: stringlengths 25–25 | arxiv: listlengths 0–25 | languages: listlengths 0–7.91k | tags_str: stringlengths 17–159k | text_str: stringlengths 1–447k | text_lists: listlengths 0–352 | processed_texts: listlengths 1–353 | tokens_length: listlengths 1–353 | input_texts: listlengths 1–40
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fb8ea937e8ace15512a56c8b15fcb45e76550b59 | # Dataset Card for "tokenized_T5_base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | witchling22/tokenized_T5_base | [
"region:us"
] | 2023-12-01T20:42:42+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "source_labels", "dtype": "string"}, {"name": "rouge_scores", "dtype": "string"}, {"name": "paper_id", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "full_source_text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 17340567, "num_examples": 1992}, {"name": "test", "num_bytes": 5620222, "num_examples": 618}, {"name": "validation", "num_bytes": 5534448, "num_examples": 619}], "download_size": 6371599, "dataset_size": 28495237}} | 2023-12-03T04:39:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tokenized_T5_base"
More Information needed | [
"# Dataset Card for \"tokenized_T5_base\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized_T5_base\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"tokenized_T5_base\"\n\nMore Information needed"
] |
df4bf8a86b32f84ca38d49a835b2f41a4113a83f | # Dataset Card for "tokenized_dataset_bart_fblarge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | witchling22/tokenized_dataset_bart_fblarge | [
"region:us"
] | 2023-12-01T20:42:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "source_labels", "dtype": "string"}, {"name": "rouge_scores", "dtype": "string"}, {"name": "paper_id", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "full_source_text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 17340567, "num_examples": 1992}, {"name": "test", "num_bytes": 5620222, "num_examples": 618}, {"name": "validation", "num_bytes": 5534448, "num_examples": 619}], "download_size": 6328102, "dataset_size": 28495237}} | 2023-12-01T20:42:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tokenized_dataset_bart_fblarge"
More Information needed | [
"# Dataset Card for \"tokenized_dataset_bart_fblarge\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized_dataset_bart_fblarge\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"tokenized_dataset_bart_fblarge\"\n\nMore Information needed"
] |
1c212d15105835749ea9942c2d1f16e13b3b3a81 |
<div align="center">
# SkyScenes: A Synthetic Dataset for Aerial Scene Understanding
[Sahil Khose](https://sahilkhose.github.io/)\*, [Anisha Pal](https://anipal.github.io/)\*, [Aayushi Agarwal](https://www.linkedin.com/in/aayushiag/)\*, [Deepanshi](https://www.linkedin.com/in/deepanshi-d/)\*, [Judy Hoffman](https://faculty.cc.gatech.edu/~judy/), [Prithvijit Chattopadhyay](https://prithv1.xyz/)
</div>
<!-- This repository is the official Pytorch implementation for [SkyScenes](). -->
[Dataset](https://huggingface.co/datasets/hoffman-lab/SkyScenes) [Project Page](https://hoffman-group.github.io/SkyScenes/) [Paper (arXiv)](https://arxiv.org/abs/2312.06719)
<!-- [](./assets/robust_aerial_videos.mp4) -->
<img src="./assets/skyscene_intro_teaser.png" width="100%"/>
## Release Updates
Support for downloading the contents of [Images](https://huggingface.co/datasets/hoffman-lab/SkyScenes/tree/main/Images) using [datasets](https://huggingface.co/docs/datasets/index) is under development and will be made available soon. In the meantime, we recommend using wget.
## Dataset Summary
Real-world aerial scene understanding is limited by a lack of datasets that contain densely annotated images curated under a diverse set of conditions.
Due to inherent challenges in obtaining such images in controlled real-world settings,
we present SkyScenes, a synthetic dataset of densely annotated aerial images captured from Unmanned Aerial Vehicle (UAV) perspectives.
**SkyScenes** images are carefully curated from **CARLA** to comprehensively capture diversity across layout (urban and rural maps), weather conditions, times of day, pitch angles and altitudes with corresponding semantic, instance and depth annotations.
**SkyScenes** features **33,600** images in total, which are spread across 8 towns, 5 weather and daytime conditions and 12 height and pitch variations.
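Assuming an even split across configurations, that corresponds to 8 towns × 5 weather and daytime conditions × 12 height and pitch settings = 480 capture setups, i.e. 70 frames per setup (480 × 70 = 33,600).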
<details>
<summary>Click to view the detailed list of all variations</summary>
- **Layout Variations (Total 8):**
- Town01
- Town02
- Town03
- Town04
- Town05
- Town06
- Town07
- Town10HD
_Town07 features Rural Scenes, whereas the rest of the towns feature Urban scenes_
- **Weather & Daytime Variations (Total 5):**
- ClearNoon
- ClearSunset
- ClearNight
- CloudyNoon
- MidRainyNoon
- **Height and Pitch Variations of UAV Flight (Total 12):**
- Height = 15m, Pitch = 0°
- Height = 15m, Pitch = 45°
- Height = 15m, Pitch = 60°
- Height = 15m, Pitch = 90°
- Height = 35m, Pitch = 0°
- Height = 35m, Pitch = 45°
- Height = 35m, Pitch = 60°
- Height = 35m, Pitch = 90°
- Height = 60m, Pitch = 0°
- Height = 60m, Pitch = 45°
- Height = 60m, Pitch = 60°
- Height = 60m, Pitch = 90°
</details>
<details>
<summary>Click to view class definitions, color palette and class IDs for Semantic Segmentation</summary>
**SkyScenes** semantic segmentation labels span 28 classes, which can be further collapsed to 20 classes; a lookup sketch derived from this table is given just below it.
| Class ID | Class ID (collapsed) | RGB Color Palette | Class Name | Definition |
|----------|--------------------|-------------------|------------------|----------------------------------------------------------------------------------------------------|
| 0 | -1 | <span style="color:rgb(0, 0, 0)"> (0, 0, 0) </span> | unlabeled | Elements/objects in the scene that have not been categorized |
| 1 | 2 | <span style="color:rgb(70, 70, 70)"> (70, 70, 70) </span> | building | Includes houses, skyscrapers, and the elements attached to them |
| 2 | 4 | <span style="color:rgb(190, 153, 153)"> (190, 153, 153) </span> | fence | Wood or wire assemblies that enclose an area of ground |
| 3 | -1 | <span style="color:rgb(55, 90, 80)"> (55, 90, 80) </span> | other | Uncategorized elements |
| 4 | 11 | <span style="color:rgb(220, 20, 60)"> (220, 20, 60) </span> | pedestrian | Humans that walk |
| 5 | 5 | <span style="color:rgb(153, 153, 153)"> (153, 153, 153) </span> | pole | Vertically oriented pole and its horizontal components if any |
| 6 | 16 | <span style="color:rgb(157, 234, 50)"> (157, 234, 50) </span> | roadline | Markings on road |
| 7 | 0 | <span style="color:rgb(128, 64, 128)"> (128, 64, 128) </span> | road | Lanes, streets, paved areas on which cars drive |
| 8 | 1 | <span style="color:rgb(244, 35, 232)"> (244, 35, 232) </span> | sidewalk | Parts of ground designated for pedestrians or cyclists |
| 9 | 8 | <span style="color:rgb(107, 142, 35)"> (107, 142, 35) </span> | vegetation | Trees, hedges, all kinds of vertical vegetation (ground-level vegetation is not included here) |
| 10 | 13 | <span style="color:rgb(0, 0, 142)"> (0, 0, 142) </span> | cars | Cars in scene |
| 11 | 3 | <span style="color:rgb(102, 102, 156)"> (102, 102, 156) </span> | wall | Individual standing walls, not part of buildings |
| 12 | 7 | <span style="color:rgb(220, 220, 0)"> (220, 220, 0) </span> | traffic sign | Signs installed by the state/city authority, usually for traffic regulation |
| 13 | 10 | <span style="color:rgb(70, 130, 180)"> (70, 130, 180) </span> | sky | Open sky, including clouds and sun |
| 14 | -1 | <span style="color:rgb(81, 0, 81)"> (81, 0, 81) </span> | ground | Any horizontal ground-level structures that do not match any other category |
| 15 | -1 | <span style="color:rgb(150, 100, 100)"> (150, 100, 100) </span> | bridge | The structure of the bridge |
| 16 | -1 | <span style="color:rgb(230, 150, 140)"> (230, 150, 140) </span> | railtrack | Rail tracks that are non-drivable by cars |
| 17 | -1 | <span style="color:rgb(180, 165, 180)"> (180, 165, 180) </span> | guardrail | Guard rails / crash barriers |
| 18 | 6 | <span style="color:rgb(250, 170, 30)"> (250, 170, 30) </span> | traffic light | Traffic light boxes without their poles |
| 19 | -1 | <span style="color:rgb(110, 190, 160)"> (110, 190, 160) </span> | static | Elements in the scene and props that are immovable |
| 20 | -1 | <span style="color:rgb(170, 120, 50)"> (170, 120, 50) </span> | dynamic | Elements whose position is susceptible to change over time |
| 21 | 19 | <span style="color:rgb(45, 60, 150)"> (45, 60, 150) </span> | water | Horizontal water surfaces |
| 22 | 9 | <span style="color:rgb(152, 251, 152)"> (152, 251, 152) </span> | terrain | Grass, ground-level vegetation, soil, or sand |
| 23 | 12 | <span style="color:rgb(255, 0, 0)"> (255, 0, 0) </span> | rider | Humans that ride/drive any kind of vehicle or mobility system |
| 24 | 18 | <span style="color:rgb(119, 11, 32)"> (119, 11, 32) </span> | bicycle | Bicycles in scenes |
| 25 | 17 | <span style="color:rgb(0, 0, 230)"> (0, 0, 230) </span> | motorcycle | Motorcycles in scene |
| 26 | 15 | <span style="color:rgb(0, 60, 100)"> (0, 60, 100) </span> | bus | Buses in scenes |
| 27 | 14 | <span style="color:rgb(0, 0, 70)"> (0, 0, 70) </span> | truck | Trucks in scenes |
</details>
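The table above can be turned into a lookup for collapsing the 28 raw class IDs into the 20-class set. The snippet below is only an illustration transcribed from this card, assuming the annotation masks store raw class IDs (not RGB colors); it is not an official utility of the dataset, and `ignore_index` is an arbitrary choice.

```python
import numpy as np

# Raw 28-class ID -> collapsed 20-class ID, transcribed from the table above.
# -1 marks classes with no collapsed counterpart (typically treated as "ignore").
COLLAPSE_MAP = {
    0: -1, 1: 2, 2: 4, 3: -1, 4: 11, 5: 5, 6: 16, 7: 0, 8: 1, 9: 8,
    10: 13, 11: 3, 12: 7, 13: 10, 14: -1, 15: -1, 16: -1, 17: -1, 18: 6,
    19: -1, 20: -1, 21: 19, 22: 9, 23: 12, 24: 18, 25: 17, 26: 15, 27: 14,
}

def collapse_labels(mask, ignore_index=255):
    """Remap an HxW array of raw class IDs to the collapsed 20-class ID set."""
    lut = np.full(256, ignore_index, dtype=np.int64)   # lookup table indexed by raw ID
    for raw_id, collapsed_id in COLLAPSE_MAP.items():
        lut[raw_id] = ignore_index if collapsed_id == -1 else collapsed_id
    return lut[np.asarray(mask, dtype=np.int64)]
```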
## Dataset Structure
The dataset is organized in the following structure:
<!--<details>
<summary><strong>Images (RGB Images)</strong></summary>
- ***H_15_P_0***
- *ClearNoon*
- Town01.tar.gz
- Town02.tar.gz
- ...
- Town10HD.tar.gz
- *ClearSunset*
- Town01.tar.gz
- Town02.tar.gz
- ...
- Town10HD.tar.gz
- *ClearNight*
- Town01.tar.gz
- Town02.tar.gz
- ...
- Town10HD.tar.gz
- *CloudyNoon*
- Town01.tar.gz
- Town02.tar.gz
- ...
- Town10HD.tar.gz
- *MidRainyNoon*
- Town01.tar.gz
- Town02.tar.gz
- ...
- Town10HD.tar.gz
- ***H_15_P_45***
- ...
- ...
- ***H_60_P_90***
- ...
</details>
<details>
<summary><strong>Instance (Instance Segmentation Annotations)</strong></summary>
- ***H_35_P_45***
- *ClearNoon*
- Town01.tar.gz
- Town02.tar.gz
- ...
- Town10HD.tar.gz
</details>
<details>
<summary><strong>Segment (Semantic Segmentation Annotations)</strong></summary>
- ***H_15_P_0***
- *ClearNoon*
- Town01.tar.gz
- Town02.tar.gz
- ...
- Town10HD.tar.gz
- ***H_15_P_45***
- ...
- ...
- ***H_60_P_90***
</details>
<details>
<summary><strong>Depth (Depth Annotations)</strong></summary>
- ***H_35_P_45***
- *ClearNoon*
- Town01.tar.gz
- Town02.tar.gz
- ...
- Town10HD.tar.gz
</details>
-->
```
├── Images (RGB Images)
│   ├── H_15_P_0
│   │   ├── ClearNoon
│   │   │   ├── Town01
│   │   │   │   └── Town01.tar.gz
│   │   │   ├── Town02
│   │   │   │   └── Town02.tar.gz
│   │   │   ├── ...
│   │   │   └── Town10HD
│   │   │       └── Town10HD.tar.gz
│   │   ├── ClearSunset
│   │   │   ├── Town01
│   │   │   │   └── Town01.tar.gz
│   │   │   ├── Town02
│   │   │   │   └── Town02.tar.gz
│   │   │   ├── ...
│   │   │   └── Town10HD
│   │   │       └── Town10HD.tar.gz
│   │   ├── ClearNight
│   │   │   ├── Town01
│   │   │   │   └── Town01.tar.gz
│   │   │   ├── Town02
│   │   │   │   └── Town02.tar.gz
│   │   │   ├── ...
│   │   │   └── Town10HD
│   │   │       └── Town10HD.tar.gz
│   │   ├── CloudyNoon
│   │   │   ├── Town01
│   │   │   │   └── Town01.tar.gz
│   │   │   ├── Town02
│   │   │   │   └── Town02.tar.gz
│   │   │   ├── ...
│   │   │   └── Town10HD
│   │   │       └── Town10HD.tar.gz
│   │   └── MidRainyNoon
│   │       ├── Town01
│   │       │   └── Town01.tar.gz
│   │       ├── Town02
│   │       │   └── Town02.tar.gz
│   │       ├── ...
│   │       └── Town10HD
│   │           └── Town10HD.tar.gz
│   ├── H_15_P_45
│   │   └── ...
│   ├── ...
│   └── H_60_P_90
│       └── ...
├── Instance (Instance Segmentation Annotations)
│   ├── H_35_P_45
│   │   └── ClearNoon
│   │       ├── Town01
│   │       │   └── Town01.tar.gz
│   │       ├── Town02
│   │       │   └── Town02.tar.gz
│   │       ├── ...
│   │       └── Town10HD
│   │           └── Town10HD.tar.gz
│   └── ...
├── Segment (Semantic Segmentation Annotations)
│   ├── H_15_P_0
│   │   ├── ClearNoon
│   │   │   ├── Town01
│   │   │   │   └── Town01.tar.gz
│   │   │   ├── Town02
│   │   │   │   └── Town02.tar.gz
│   │   │   ├── ...
│   │   │   └── Town10HD
│   │   │       └── Town10HD.tar.gz
│   │   ├── H_15_P_45
│   │   │   └── ...
│   │   ├── ...
│   │   └── H_60_P_90
│   │       └── ...
│   └── ...
└── Depth (Depth Annotations)
    ├── H_35_P_45
    │   └── ClearNoon
    │       ├── Town01
    │       │   └── Town01.tar.gz
    │       ├── Town02
    │       │   └── Town02.tar.gz
    │       ├── ...
    │       └── Town10HD
    │           └── Town10HD.tar.gz
    └── ...
```
**Note**: Since the same viewpoint is reproduced across each weather variation, the ClearNoon annotations can be used for all images from the different weather variations.
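For example, under the directory layout above, an extracted image path from any weather variation can be paired with its ClearNoon annotation by swapping the modality folder and the weather component. The helper below is only a sketch of that convention; the actual frame file names depend on the extracted archives.

```python
from pathlib import Path

def annotation_path_for(image_path, annotation_root="Segment"):
    """Map an extracted RGB frame path, e.g.
    SkyScenes/Images/H_15_P_0/MidRainyNoon/Town01/<frame>.png,
    to the matching ClearNoon annotation under `annotation_root`."""
    parts = list(Path(image_path).parts)
    idx = parts.index("Images")      # locate the modality folder
    parts[idx] = annotation_root     # Images -> Segment (or Instance / Depth)
    parts[idx + 2] = "ClearNoon"     # weather folder -> ClearNoon annotations
    return Path(*parts)

# Illustrative only (the frame name "0001.png" is hypothetical):
# annotation_path_for("SkyScenes/Images/H_15_P_0/MidRainyNoon/Town01/0001.png")
# -> SkyScenes/Segment/H_15_P_0/ClearNoon/Town01/0001.png
```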
## Dataset Download
The dataset can be downloaded using both the [datasets](https://huggingface.co/docs/datasets/index) library by Hugging Face and wget.
Since SkyScenes offers variations across different axes, we provide different subsets for download that can aid in model sensitivity analysis across these axes.
### Download instructions: wget
**Example script for downloading different subsets of data using wget**
```bash
#!/bin/bash
#Change here to download a specific Height and Pitch Variation, for example - H_15_P_0
#Note- Currently supporting only H_35_P_45 for Instance and Depth Maps
# HP=('H_15_P_45' 'H_15_P_60' 'H_15_P_90')
HP=('H_15_P_0' 'H_15_P_45' 'H_15_P_60' 'H_15_P_90' 'H_35_P_0' 'H_35_P_45' 'H_35_P_60' 'H_35_P_90' 'H_60_P_0' 'H_60_P_45' 'H_60_P_60' 'H_60_P_90')
#Change here to download a specific weather subset, for example - ClearNoon
#Note - For Segment, Instance and Depth annotations this field should only have ClearNoon variation
# weather=('ClearNoon' 'ClearNight')
weather=('ClearNoon' 'ClearNight' 'ClearSunset' 'CloudyNoon' 'MidRainyNoon')
#Change here to download a specific Town subset, for example - Town07
layout=('Town01' 'Town02' 'Town03' 'Town04' 'Town05' 'Town06' 'Town07' 'Town10HD')
#Change here for any specific annotation, for example - https://huggingface.co/datasets/hoffman-lab/SkyScenes/resolve/main/Segment
base_url=('https://huggingface.co/datasets/hoffman-lab/SkyScenes/resolve/main/Images')
#Change here for base download folder
base_download_folder='SkyScenes'
for hp in "${HP[@]}"; do
for w in "${weather[@]}"; do
for t in "${layout[@]}"; do
folder=$(echo "$base_url" | awk -F '/' '{print $(NF)}')
download_url="${base_url}/${hp}/${w}/${t}/${t}.tar.gz"
download_folder="${base_download_folder}/${folder}/${hp}/${w}/${t}"
mkdir -p "$download_folder"
echo "Downloading: $download_url"
wget -P "$download_folder" "$download_url"
done
done
done
```
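Once the download finishes, the fetched `.tar.gz` archives can be unpacked in place. A minimal sketch, assuming the folder layout produced by the script above:

```python
import tarfile
from pathlib import Path

base_download_folder = Path("SkyScenes")  # same root used in the wget script above

for archive in sorted(base_download_folder.rglob("*.tar.gz")):
    with tarfile.open(archive) as tar:
        tar.extractall(path=archive.parent)  # unpack next to the archive
    print(f"Extracted {archive}")
```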
### Download instructions: [datasets](https://huggingface.co/docs/datasets/index)
<details>
<summary>Click to view all the available keys for downloading subsets of the data</summary>
* **Layout Variations**
- Rural
- Urban
* **Weather Variations**
- ClearNoon
- ClearNight (only images)
- ClearSunset (only images)
- CloudyNoon (only images)
- MidRainyNoon (only images)
* **Height Variations**
- H_15
- H_35
- H_60
* **Pitch Variations**
- P_0
- P_45
- P_60
- P_90
* **Height and Pitch Variations**
- H_15_P_0
- H_15_P_45
- H_15_P_60
- H_15_P_90
- H_35_P_0
- H_35_P_45
- H_35_P_60
- H_35_P_90
- H_60_P_0
- H_60_P_45
- H_60_P_60
- H_60_P_90
Full dataset key: full
**Notes**:
- To download **images** append subset key with **images**, example - ```H_35_P_45 images```
- To download **semantic segmentation** maps append subset key with **semseg**, example - ```H_35_P_45 semseg```
- To download **instance segmentation** maps append subset key with **instance**, example - ```H_35_P_45 instance```
- To download **depth** maps append subset key with **depth**, example - ```H_35_P_45 depth```
</details>
**Example script for loading H_35_P_45 images**
```python
from datasets import load_dataset
dataset = load_dataset('hoffman-lab/SkyScenes',name="H_35_P_45 images")
```
**Example script for loading H_35_P_45 semantic segmentation maps**
```python
from datasets import load_dataset
dataset = load_dataset('hoffman-lab/SkyScenes',name="H_35_P_45 semseg")
```
**Example script for loading H_35_P_45 instance segmentation maps**
```python
from datasets import load_dataset
dataset = load_dataset('hoffman-lab/SkyScenes',name="H_35_P_45 instance")
```
**Example script for loading H_35_P_45 depth maps**
```python
from datasets import load_dataset
dataset = load_dataset('hoffman-lab/SkyScenes',name="H_35_P_45 depth")
```
### Notes
- Depth and Instance segmentation maps are currently available only for H_35_P_45; other variations will be made available soon.
- To prevent issues when loading data with the [datasets](https://huggingface.co/docs/datasets/index) library, it is recommended to avoid downloading subsets that contain overlapping directories. If there are any overlapping directories between existing downloads and new ones, clear the .cache directory of any such overlaps before proceeding with the new downloads. This ensures a clean and conflict-free environment for handling the datasets.
## BibTex
If you find this work useful, please like ❤️ our dataset repo and cite our paper. Thanks for your support!
```
@misc{khose2023skyscenes,
title={SkyScenes: A Synthetic Dataset for Aerial Scene Understanding},
author={Sahil Khose and Anisha Pal and Aayushi Agarwal and Deepanshi and Judy Hoffman and Prithvijit Chattopadhyay},
year={2023},
eprint={2312.06719},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | hoffman-lab/SkyScenes | [
"task_categories:object-detection",
"task_categories:depth-estimation",
"task_categories:image-segmentation",
"language:en",
"license:mit",
"dataset",
"aerial",
"synthetic",
"domain adaptation",
"sim2real",
"arxiv:2312.06719",
"region:us"
] | 2023-12-01T21:01:18+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["object-detection", "depth-estimation", "image-segmentation"], "tags": ["dataset", "aerial", "synthetic", "domain adaptation", "sim2real"]} | 2024-01-01T06:35:38+00:00 | [
"2312.06719"
] | [
"en"
] | TAGS
#task_categories-object-detection #task_categories-depth-estimation #task_categories-image-segmentation #language-English #license-mit #dataset #aerial #synthetic #domain adaptation #sim2real #arxiv-2312.06719 #region-us
|
SkyScenes: A Synthetic Dataset for Aerial Scene Understanding
=============================================================
Sahil Khose\*, Anisha Pal\*, Aayushi Agarwal\*, Deepanshi\*, Judy Hoffman, Prithvijit Chattopadhyay

Release Updates
---------------
Support for downloading the contents of Images using datasets is under developement and will be made available soon. In the meanwhile it is advised to use wget.
Dataset Summary
---------------
Real-world aerial scene understanding is limited by a lack of datasets that contain densely annotated images curated under a diverse set of conditions.
Due to inherent challenges in obtaining such images in controlled real-world settings,
we present SkyScenes, a synthetic dataset of densely annotated aerial images captured from Unmanned Aerial Vehicle (UAV) perspectives.
SkyScenes images are carefully curated from CARLA to comprehensively capture diversity across layout (urban and rural maps), weather conditions, times of day, pitch angles and altitudes with corresponding semantic, instance and depth annotations.
SkyScenes features 33,600 images in total, which are spread across 8 towns, 5 weather and daytime conditions and 12 height and pitch variations.
Click to view the detailed list of all variations
* Layout Variations(Total 8)::
+ Town01
+ Town02
+ Town03
+ Town04
+ Town05
+ Town06
+ Town07
+ Town10HD
*Town07 features Rural Scenes, whereas the rest of the towns feature Urban scenes*
* Weather & Daytime Variations(Total 5):
+ ClearNoon
+ ClearSunset
+ ClearNight
+ CloudyNoon
+ MidRainyNoon
* Height and Pitch Variations of UAV Flight(Total 12):
+ Height = 15m, Pitch = 0Β°
+ Height = 15m, Pitch = 45Β°
+ Height = 15m, Pitch = 60Β°
+ Height = 15m, Pitch = 90Β°
+ Height = 35m, Pitch = 0Β°
+ Height = 35m, Pitch = 45Β°
+ Height = 35m, Pitch = 60Β°
+ Height = 35m, Pitch = 90Β°
+ Height = 60m, Pitch = 0Β°
+ Height = 60m, Pitch = 45Β°
+ Height = 60m, Pitch = 60Β°
+ Height = 60m, Pitch = 90Β°
Click to view class definitions, color palette and class IDs for Semantic Segmentation
SkyScenes semantic segmentation labels span 28 classes which can be further collapsed to 20 classes.
```
|
```
Dataset Structure
-----------------
The dataset is organized in the following structure:
Note: Since the same viewpoint is reproduced across each weather variation, hence ClearNoon annotations can be used for all images pertaining to the different weather variations.
Dataset Download
----------------
The dataset can be downloaded using both datasets library by Hugging Face and wget.
Since SkyScenes offers variations across different axes we enable different subsets for download that can aid in model sensitivity analysis across these axes.
### Download instructions: wget
Example script for downloading different subsets of data using wget
### Download instructions: datasets
Click to view all the available keys for downloading subsets of the data
* Layout Variations
+ Rural
+ Urban
* Weather Variations
+ ClearNoon
+ ClearNight (only images)
+ ClearSunset (only images)
+ CloudyNoon (only images)
+ MidRainyNoon (only images)
* Height Variations
+ H\_15
+ H\_35
+ H\_60
* Pitch Variations
+ P\_0
+ P\_45
+ P\_60
+ P\_90
* Height and Pitch Variations
+ H\_15\_P\_0
+ H\_15\_P\_45
+ H\_15\_P\_60
+ H\_15\_P\_90
+ H\_35\_P\_0
+ H\_35\_P\_45
+ H\_35\_P\_60
+ H\_35\_P\_90
+ H\_60\_P\_0
+ H\_60\_P\_45
+ H\_60\_P\_60
+ H\_60\_P\_90Full dataset key: full
Notes:
+ To download images append subset key with images, example -
+ To download semantic segmentation maps append subset key with semseg, example -
+ To download instance segmentation maps append subset key with instance, example -
+ To download depth maps append subset key with depth, example -
Example script for loading H\_35\_P\_45 images
Example script for loading H\_35\_P\_45 semantic segmentation maps
Example script for loading H\_35\_P\_45 instance segmentation maps
Example script for loading H\_35\_P\_45 depth maps
### Notes
* Depth and Instance segmentation maps are available for only H\_35\_P\_45, other variations will be made available soon.
* To prevent issues when loading datasets using datasets library, it is recommended to avoid downloading subsets that contain overlapping directories. If there are any overlapping directories between the existing downloads and new ones, it's essential to clear the .cache directory of any such overlaps before proceeding with the new downloads. This step will ensure a clean and conflict-free environment for handling datasets.
BibTex
------
If you find this work useful please like οΈ our dataset repo and cite our paper. Thanks for your support!
| [
"### Download instructions: wget\n\n\nExample script for downloading different subsets of data using wget",
"### Download instructions: datasets\n\n\n\nClick to view all the available keys for downloading subsets of the data\n* Layout Variations\n\n\n\t+ Rural\n\t+ Urban\n* Weather Variations\n\n\n\t+ ClearNoon\n\t+ ClearNight (only images)\n\t+ ClearSunset (only images)\n\t+ CloudyNoon (only images)\n\t+ MidRainyNoon (only images)\n* Height Variations\n\n\n\t+ H\\_15\n\t+ H\\_35\n\t+ H\\_60\n* Pitch Variations\n\n\n\t+ P\\_0\n\t+ P\\_45\n\t+ P\\_60\n\t+ P\\_90\n* Height and Pitch Variations\n\n\n\t+ H\\_15\\_P\\_0\n\t+ H\\_15\\_P\\_45\n\t+ H\\_15\\_P\\_60\n\t+ H\\_15\\_P\\_90\n\t+ H\\_35\\_P\\_0\n\t+ H\\_35\\_P\\_45\n\t+ H\\_35\\_P\\_60\n\t+ H\\_35\\_P\\_90\n\t+ H\\_60\\_P\\_0\n\t+ H\\_60\\_P\\_45\n\t+ H\\_60\\_P\\_60\n\t+ H\\_60\\_P\\_90Full dataset key: full\n\n\nNotes:\n\n\n\t+ To download images append subset key with images, example -\n\t+ To download semantic segmentation maps append subset key with semseg, example -\n\t+ To download instance segmentation maps append subset key with instance, example -\n\t+ To download depth maps append subset key with depth, example -\n\n\n\nExample script for loading H\\_35\\_P\\_45 images\n\n\nExample script for loading H\\_35\\_P\\_45 semantic segmentation maps\n\n\nExample script for loading H\\_35\\_P\\_45 instance segmentation maps\n\n\nExample script for loading H\\_35\\_P\\_45 depth maps",
"### Notes\n\n\n* Depth and Instance segmentation maps are available for only H\\_35\\_P\\_45, other variations will be made available soon.\n* To prevent issues when loading datasets using datasets library, it is recommended to avoid downloading subsets that contain overlapping directories. If there are any overlapping directories between the existing downloads and new ones, it's essential to clear the .cache directory of any such overlaps before proceeding with the new downloads. This step will ensure a clean and conflict-free environment for handling datasets.\n\n\nBibTex\n------\n\n\nIf you find this work useful please like οΈ our dataset repo and cite our paper. Thanks for your support!"
] | [
"TAGS\n#task_categories-object-detection #task_categories-depth-estimation #task_categories-image-segmentation #language-English #license-mit #dataset #aerial #synthetic #domain adaptation #sim2real #arxiv-2312.06719 #region-us \n",
"### Download instructions: wget\n\n\nExample script for downloading different subsets of data using wget",
"### Download instructions: datasets\n\n\n\nClick to view all the available keys for downloading subsets of the data\n* Layout Variations\n\n\n\t+ Rural\n\t+ Urban\n* Weather Variations\n\n\n\t+ ClearNoon\n\t+ ClearNight (only images)\n\t+ ClearSunset (only images)\n\t+ CloudyNoon (only images)\n\t+ MidRainyNoon (only images)\n* Height Variations\n\n\n\t+ H\\_15\n\t+ H\\_35\n\t+ H\\_60\n* Pitch Variations\n\n\n\t+ P\\_0\n\t+ P\\_45\n\t+ P\\_60\n\t+ P\\_90\n* Height and Pitch Variations\n\n\n\t+ H\\_15\\_P\\_0\n\t+ H\\_15\\_P\\_45\n\t+ H\\_15\\_P\\_60\n\t+ H\\_15\\_P\\_90\n\t+ H\\_35\\_P\\_0\n\t+ H\\_35\\_P\\_45\n\t+ H\\_35\\_P\\_60\n\t+ H\\_35\\_P\\_90\n\t+ H\\_60\\_P\\_0\n\t+ H\\_60\\_P\\_45\n\t+ H\\_60\\_P\\_60\n\t+ H\\_60\\_P\\_90Full dataset key: full\n\n\nNotes:\n\n\n\t+ To download images append subset key with images, example -\n\t+ To download semantic segmentation maps append subset key with semseg, example -\n\t+ To download instance segmentation maps append subset key with instance, example -\n\t+ To download depth maps append subset key with depth, example -\n\n\n\nExample script for loading H\\_35\\_P\\_45 images\n\n\nExample script for loading H\\_35\\_P\\_45 semantic segmentation maps\n\n\nExample script for loading H\\_35\\_P\\_45 instance segmentation maps\n\n\nExample script for loading H\\_35\\_P\\_45 depth maps",
"### Notes\n\n\n* Depth and Instance segmentation maps are available for only H\\_35\\_P\\_45, other variations will be made available soon.\n* To prevent issues when loading datasets using datasets library, it is recommended to avoid downloading subsets that contain overlapping directories. If there are any overlapping directories between the existing downloads and new ones, it's essential to clear the .cache directory of any such overlaps before proceeding with the new downloads. This step will ensure a clean and conflict-free environment for handling datasets.\n\n\nBibTex\n------\n\n\nIf you find this work useful please like οΈ our dataset repo and cite our paper. Thanks for your support!"
] | [
78,
22,
424,
166
] | [
"passage: TAGS\n#task_categories-object-detection #task_categories-depth-estimation #task_categories-image-segmentation #language-English #license-mit #dataset #aerial #synthetic #domain adaptation #sim2real #arxiv-2312.06719 #region-us \n### Download instructions: wget\n\n\nExample script for downloading different subsets of data using wget"
] |
bc9c9735002548946bc9a13a39714853d22e9bc7 | # Dataset Card for "humaneval_x_with_stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JeremiahZ/humaneval_x_with_stats | [
"region:us"
] | 2023-12-01T21:18:24+00:00 | {"dataset_info": {"features": [{"name": "task_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "declaration", "dtype": "string"}, {"name": "canonical_solution", "dtype": "string"}, {"name": "test", "dtype": "string"}, {"name": "example_test", "dtype": "string"}, {"name": "llvm_ir", "dtype": "string"}, {"name": "wat", "dtype": "string"}, {"name": "number_of_lines", "dtype": "int64"}, {"name": "number_of_chars", "dtype": "int64"}, {"name": "wat_number_of_chars", "dtype": "int64"}, {"name": "wat_number_of_lines", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4393622, "num_examples": 161}], "download_size": 1046507, "dataset_size": 4393622}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-01T21:18:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "humaneval_x_with_stats"
More Information needed | [
"# Dataset Card for \"humaneval_x_with_stats\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"humaneval_x_with_stats\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"humaneval_x_with_stats\"\n\nMore Information needed"
] |
e89ea1289e219ce9caae6654dbf254f8ab3b192c | # Dataset Card for "mbxp_with_stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JeremiahZ/mbxp_with_stats | [
"region:us"
] | 2023-12-01T21:19:26+00:00 | {"dataset_info": {"features": [{"name": "task_id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "test", "dtype": "string"}, {"name": "entry_point", "dtype": "string"}, {"name": "canonical_solution", "dtype": "string"}, {"name": "llvm_ir", "dtype": "string"}, {"name": "wat", "dtype": "string"}, {"name": "number_of_lines", "dtype": "int64"}, {"name": "number_of_chars", "dtype": "int64"}, {"name": "wat_number_of_chars", "dtype": "int64"}, {"name": "wat_number_of_lines", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 13572947, "num_examples": 773}], "download_size": 2868972, "dataset_size": 13572947}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-12-01T21:19:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "mbxp_with_stats"
More Information needed | [
"# Dataset Card for \"mbxp_with_stats\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"mbxp_with_stats\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"mbxp_with_stats\"\n\nMore Information needed"
] |
2067dfdce5abc7efdbb0927f4721ae282f0b6c51 | # HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination & Visual Illusion in Large Vision-Language Models
You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
[Tianrui Guan*](https://tianruiguan.phd), [Fuxiao Liu*](https://fuxiaoliu.github.io/), Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, Tianyi Zhou
🔥🔥🔥
## We welcome everyone to contribute the failure cases of Large Multimodal Models (GPT-4V) to our community!
🔥🔥🔥
Large language models (LLMs), after being aligned with vision models and integrated into vision-language models (VLMs), can bring impressive improvements in image reasoning tasks. This was shown by the recently released GPT-4V(ision), LLaVA-1.5, etc. However, the strong language prior in these SOTA LVLMs can be a double-edged sword: they may ignore the image context and rely solely on the (even contradictory) language prior for reasoning. In contrast, the vision modules in VLMs are weaker than LLMs and may result in misleading visual representations, which are then translated into confident mistakes by LLMs. To study these two types of VLM mistakes, i.e., language hallucination and visual illusion, we curated HallusionBench, an image-context reasoning benchmark that is still challenging even for GPT-4V and LLaVA-1.5. We provide a detailed analysis of examples in HallusionBench, which sheds novel insights on the illusion or hallucination of VLMs and how to improve them in the future.
If you find our paper useful, please cite our paper:
```bibtex
@misc{guan2023hallusionbench,
title={HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination & Visual Illusion in Large Vision-Language Models},
author={Tianrui Guan and Fuxiao Liu and Xiyang Wu and Ruiqi Xian and Zongxia Li and Xiaoyu Liu and Xijun Wang and Lichang Chen and Furong Huang and Yaser Yacoob and Dinesh Manocha and Tianyi Zhou},
year={2023},
eprint={2310.14566},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{liu2023mitigating,
title={Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning},
author={Fuxiao Liu and Kevin Lin and Linjie Li and Jianfeng Wang and Yaser Yacoob and Lijuan Wang},
year={2023},
eprint={2306.14565},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Updates
- [11/28] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2310.14566). The dataset is expanded and the leaderboard is updated.
- [11/13] 🔥 Evaluation results on LLaVA-1.5 are updated. More model results to come!
- [10/27] 🔥 The [leaderboard](https://paperswithcode.com/sota/visual-question-answering-vqa-on-3) and evaluation code are released! **Welcome to update your model on our leaderboard!**
- [10/24] 🔥 The early report with case analysis and insights is available [here](https://arxiv.org/abs/2310.14566).
- [10/23] 🔥 Please check our previous work on mitigating hallucinations of LMMs: ["Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning"](https://github.com/FuxiaoLiu/LRV-Instruction).
## Dataset Download
To keep evaluation simple, we only provide the questions in the form of yes/no questions.
| Updated on | Questions and Annotations | Figures | Question Count | Figure Count |
| ----------- | :----: | :----: | :----: | :----: |
| Oct 27, 2023 | [HallusionBench.json](./HallusionBench.json) | [hallusion_bench.zip](https://drive.google.com/file/d/1eeO1i0G9BSZTE1yd5XeFwmrbe1hwyf_0/view?usp=sharing) | 254 | 69 |
### Evaluation
1. Clone the repo.
```
git clone https://github.com/tianyi-lab/HallusionBench.git
cd ./HallusionBench
```
2. Download the images [hallusion_bench.zip](https://drive.google.com/file/d/1eeO1i0G9BSZTE1yd5XeFwmrbe1hwyf_0/view?usp=sharing) and unzip the folder in the same directory.
3. The questions and image locations are saved in `./HallusionBench.json`. A data sample is as follows:
```
{'category': 'VD', 'subcategory': 'illusion', 'visual_input': '1', 'set_id': '0', 'figure_id': '0', 'sample_note': 'circle', 'question_id': '0', 'question': 'Is the right orange circle the same size as the left orange circle?', 'gt_answer_details': 'The right orange circle is the same size as the left orange circle.', 'gt_answer': '1', 'filename': './hallusion_bench/VD/illusion/0_0.png'}
```
The key `visual_input` indicates whether the question needs visual input such as images. If `visual_input=1`, the question needs visual input. If `visual_input=0`, the question doesn't need visual input; it is a text-only question.
4. Run your model on `./HallusionBench.json` and save the output file as `./HallusionBench_result.json`. You need to add the output of your model under the key `'model_prediction'`. We provide a sample result [here](./HallusionBench_result_sample.json).
5. Finally, run the following code for evaluation:
```
python evaluation.py
```
You can use your own API key for GPT4 evaluation by editing the code [here](./utils.py#L10).
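For reference, a minimal sketch of step 4 is shown below, assuming `./HallusionBench.json` is a list of records like the sample above; `run_model` is a hypothetical stand-in for your own VLM inference call and is not part of this repository.

```python
import json

def run_model(question, image_path=None):
    """Hypothetical placeholder: call your own VLM here and return its answer text."""
    raise NotImplementedError

with open("HallusionBench.json") as f:
    samples = json.load(f)  # assumed to be a list of question records

for sample in samples:
    # visual_input == '0' marks text-only questions; otherwise pass the image file.
    image = sample["filename"] if sample["visual_input"] != "0" else None
    sample["model_prediction"] = run_model(sample["question"], image)

with open("HallusionBench_result.json", "w") as f:
    json.dump(samples, f, indent=2)
```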
## Leaderboard
### Definition
* **Visual Dependent (VD) Questions**: questions that do not have an affirmative answer without the visual context.
* **Easy**: Original images that are obtained from the Internet.
* **Hard**: Edited images from the original images.
* **Visual Supplement (VS) Questions**: questions that can be answered without the visual input; the visual component merely provides supplemental information.
* **Easy**: No visual input. An uncertain answer without hallucination is also considered a correct response.
* **Hard**: With visual input. The answer must follow the provided figure and visual context.
### Metric
* **Accuracy per Figure (Consistency Test)**: Accuracy based on each figure. To make sure the model truly understands the image, we ask variants of questions based on the same knowledge on the same figure, and consider it correct only if the model answers all of them correctly. For example, the model should not give inconsistent responses to the questions "Is A bigger than B?" and "Is B smaller than A?".
* **Accuracy per Question**: Accuracy of all questions, including easy and hard questions.
* **Accuracy per Question Pair**: We ask the same questions on similar images (or, with and without images). We consider the same question text on different visual contexts a **question pair** (usually an *easy* question and a corresponding *hard* question). This metric calculates the accuracy over all question pairs.
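A rough sketch of how these three numbers can be computed from per-question results is given below; it assumes each record carries the ID fields shown in the data sample earlier plus a boolean `correct` flag, and that figures and question pairs are grouped by those IDs. The official scoring lives in `evaluation.py`.

```python
from collections import defaultdict

def _accuracy(flags):
    flags = list(flags)
    return sum(flags) / len(flags) if flags else 0.0

def hallusionbench_metrics(results):
    """results: list of dicts with category, subcategory, set_id, figure_id,
    question_id and a boolean `correct` flag for each answered question."""
    results = list(results)
    per_figure = defaultdict(list)   # consistency test: every question on a figure
    per_pair = defaultdict(list)     # same question text across visual contexts
    for r in results:
        base = (r["category"], r["subcategory"], r["set_id"])
        per_figure[base + (r["figure_id"],)].append(r["correct"])
        per_pair[base + (r["question_id"],)].append(r["correct"])
    return {
        "question_acc": _accuracy(r["correct"] for r in results),
        "figure_acc": _accuracy(all(v) for v in per_figure.values()),
        "question_pair_acc": _accuracy(all(v) for v in per_pair.values()),
    }
```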
| Model | Question Pair Acc | Figure Acc | Easy Question Acc | Hard Question Acc | Question Acc | Json |
| ----- | :----: | :----: | :----: | :----: | :----: | :----: |
| **GPT4V** <br />Sep 25, 2023 Version <br />(Human Eval) | 31.42 | 44.22 | 79.56 | 38.37 | 67.58 | [VD](), [VS]() |
| **GPT4V** <br />Sep 25, 2023 Version <br />(GPT Eval) | 28.79 | 39.88 | 75.60 | 37.67 | 65.28 | [VD](), [VS]() |
| **LLaVA-1.5** <br />(Human Eval) | 9.45 | 25.43 | 50.77 | 29.07 | 47.12 | [VD](), [VS]() |
| **LLaVA-1.5** <br />(GPT Eval) | 10.55 | 24.86 | 49.67 | 29.77 | 46.94 | [VD](), [VS]() |
| **BLIP2-T5** <br />(GPT Eval) | 15.16 | 20.52 | 45.49 | 43.49 | 48.09 | [VD](), [VS]() |
| **InstructBLIP** <br />(GPT Eval) | 9.45 | 10.11 | 35.60 | 45.12 | 45.26 | [VD](), [VS]() |
| **Qwen-VL** <br />(GPT Eval) | 5.93 | 6.65 | 31.43 | 24.88 | 39.15 | [VD](), [VS]() |
| **Open-Flamingo** <br />(GPT Eval) | 6.37 | 11.27 | 39.56 | 27.21 | 38.44 | [VD](), [VS]() |
| **MiniGPT5** <br />(GPT Eval) |10.55 | 9.83 | 36.04| 28.37 | 40.30 | [VD](), [VS]() |
| **MiniGPT4** <br />(GPT Eval) |8.79 | 10.12 | 31.87| 27.67 | 35.78 | [VD](), [VS]() |
| **mPLUG_Owl-v2** <br />(GPT Eval) |13.85 | 19.94 | 44.84| 39.07 | 47.30 | [VD](), [VS]() |
| **mPLUG_Owl-v1** <br />(GPT Eval) |9.45 | 10.40 | 39.34| 29.77 | 43.93 | [VD](), [VS]() |
| **GiT** <br />(GPT Eval) |5.27 | 6.36 | 26.81| 31.86 | 34.37 | [VD](), [VS]() |
### Reproduce GPT4V results on leaderboard
1. We saved the output of GPT4V with our annotation. Put `HallusionBench.tsv` in the root directory of this repo, or set `input_file_name` in [gpt4v_benchmark.py](./gpt4v_benchmark.py) to the location of the [HallusionBench.tsv](https://drive.google.com/file/d/1q8db7-7IlA4WLZ_5Jt-TpLDyAWg8Ybx4/view?usp=sharing) file.
2. (Optional) If you don't have access to the GPT API, you don't need to run it since we have saved evaluation results. They can be downloaded for [Visual Dependent]() and [Visual Supplement](). Put the json files in the root directory of this repo, or set `save_json_path_vd` and `save_json_path_vs` in [gpt4v_benchmark.py](./gpt4v_benchmark.py) to their respective locations.
3. Run `python gpt4v_benchmark.py`.
## Examples and Analysis
<p align="center" >
<img src="./examples/f-01.png" alt="Example 1" class="center" width="800"/>
<img src="./examples/f-02.png" alt="Example 2" class="center" width="800"/>
<img src="./examples/f-04.png" alt="Example 3" class="center" width="800"/>
<img src="./examples/f-05.png" alt="Example 4" class="center" width="800"/>
<img src="./examples/f-08.png" alt="Example 5" class="center" width="800"/>
<img src="./examples/f-15.png" alt="Example 6" class="center" width="800"/>
<img src="./examples/f-10.png" alt="Example 7" class="center" width="800"/>
<img src="./examples/f-12.png" alt="Example 8" class="center" width="800"/>
<img src="./examples/f-17.png" alt="Example 9" class="center" width="800"/>
</p>
---
license: bsd-3-clause
---
| rayguan/HallusionBench | [
"arxiv:2310.14566",
"arxiv:2306.14565",
"region:us"
] | 2023-12-01T21:34:25+00:00 | {} | 2023-12-10T18:14:47+00:00 | [
"2310.14566",
"2306.14565"
] | [] | TAGS
#arxiv-2310.14566 #arxiv-2306.14565 #region-us
| HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination & Visual Illusion in Large Vision-Language Models
===================================================================================================================================
You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
Tianrui Guan\*, Fuxiao Liu\*, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, Tianyi Zhou
We welcome everyone to contribute the failure cases of Large Multimodal Models (GPT-4V) to our community!
---------------------------------------------------------------------------------------------------------
Large language models (LLMs), after being aligned with vision models and integrated into vision-language models (VLMs), can bring impressive improvement in image reasoning tasks. This was shown by the recently released GPT-4V(ison), LLaVA-1.5, etc. However, the strong language prior in these SOTA LVLMs can be a double-edged sword: they may ignore the image context and solely rely on the (even contradictory) language prior for reasoning. In contrast, the vision modules in VLMs are weaker than LLMs and may result in misleading visual representations, which are then translated to confident mistakes by LLMs. To study these two types of VLM mistakes, i.e., language hallucination and visual illusion, we curated HallusionBench, an image-context reasoning benchmark that is still challenging to even GPT-4V and LLaVA-1.5. We provide a detailed analysis of examples in HallusionBench, which sheds novel insights on the illusion or hallucination of VLMs and how to improve them in the future.
If you find our paper useful, please cite our paper:
Updates
-------
* [11/28] The full paper is uploaded and can be accessed here. The dataset is expanded and leaderboard is updated.
* [11/13] Evaluation result on LLaVA-1.5 is updated. More model results to come!
* [10/27] The leaderboard and evaluation code is released! Welcome to update your model on our leaderboard!
* [10/24] The early report with case analysis and insights is available here.
* [10/23] Please check our previous work on mitigating hallucinations of LMMs "Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning".
Dataset Download
----------------
To keep evaluation simple, we only provide the question in form of yes/no questions.
### Evaluation
1. Clone the repo.
2. Download the images hallusion\_bench.zip and unzip the folder in the same directory.
3. The questions and image locations are saved in './URL'. The data sample are as follows:
The key 'visual\_input'means whether the question needs visual input like images. If 'visual\_input=1', it means the question need visual input. If 'visual\_input=0', it means the question doesn't need visual input. It's the text-only question.
4. Run your model on './URL' and save the ouput file as './HallusionBench\_result.json'. You need to add the output of your model in the key ''model\_prediction''. We provide an sample result here.
5. Finally, run the following code for evaluation:
You can use your own API key for GPT4 evaluation by editing the code here.
Leaderboard
-----------
### Definition
* Visual Dependent (VD) Questions: questions that do not have an affirmative answer without the visual context.
+ Easy: Original images that are obtained from Internet.
+ Hard: Edited images from the original images.
* Visual Supplement (VS) Questions: questions that can be answered without the visual input; the visual component merely provides supplemental information.
+ Easy: No visual input. Uncertain answer without hallucination is also considered correct response.
+ Hard: With visual input. The answer must follow the provided figure and visual context.
### Metric
* Accuracy per Figure (Consistency Test): Accuracy based on each figure. To make sure the mode truly understand image, we ask variant of questions based on the same knowledge on the same figure, and consider it correct if the model can answer all questions correctly. For example, the model should not give inconsistent responses on the questions "Is A bigger than B?" and "Is B smaller A?".
* Accuracy per Question: Accuracy of all questions, including easy and hard questions.
* Accuracy per Question Pair: We ask the same questions on similar images (or, with and without images). We consider the same question text on different visual contexts a question pair (usually they come in with an *easy* question and a corresponding *hard* question). This metric calculate accuracy of all question pairs.
### Reproduce GPT4V results on leaderboard
1. We saved the ouput of GPT4V with our annotation. Put 'URL' in the root directory of this repo, or set 'input\_file\_name' in gpt4v\_benchmark.py to the location of the URL file.
2. (Optional) If you don't have access to GPT API, you don't need to run it since we have saved evaluation results. They can be downloaded for Visual Dependent and Visual Supplement. Put the json files in the root directory of this repo, or set 'save\_json\_path\_vd' and 'save\_json\_path\_vd' in gpt4v\_benchmark.py to their respective locations.
3. Run 'python gpt4v\_benchmark.py'.
Examples and Analysis
---------------------









---
license: bsd-3-clause
---------------------
| [
"### Evaluation\n\n\n1. Clone the repo.\n2. Download the images hallusion\\_bench.zip and unzip the folder in the same directory.\n3. The questions and image locations are saved in './URL'. The data sample are as follows:\n\n\nThe key 'visual\\_input'means whether the question needs visual input like images. If 'visual\\_input=1', it means the question need visual input. If 'visual\\_input=0', it means the question doesn't need visual input. It's the text-only question.\n\n\n4. Run your model on './URL' and save the ouput file as './HallusionBench\\_result.json'. You need to add the output of your model in the key ''model\\_prediction''. We provide an sample result here.\n5. Finally, run the following code for evaluation:\n\n\nYou can use your own API key for GPT4 evaluation by editing the code here.\n\n\nLeaderboard\n-----------",
"### Definition\n\n\n* Visual Dependent (VD) Questions: questions that do not have an affirmative answer without the visual context.\n\t+ Easy: Original images that are obtained from Internet.\n\t+ Hard: Edited images from the original images.\n* Visual Supplement (VS) Questions: questions that can be answered without the visual input; the visual component merely provides supplemental information.\n\t+ Easy: No visual input. Uncertain answer without hallucination is also considered correct response.\n\t+ Hard: With visual input. The answer must follow the provided figure and visual context.",
"### Metric\n\n\n* Accuracy per Figure (Consistency Test): Accuracy based on each figure. To make sure the mode truly understand image, we ask variant of questions based on the same knowledge on the same figure, and consider it correct if the model can answer all questions correctly. For example, the model should not give inconsistent responses on the questions \"Is A bigger than B?\" and \"Is B smaller A?\".\n* Accuracy per Question: Accuracy of all questions, including easy and hard questions.\n* Accuracy per Question Pair: We ask the same questions on similar images (or, with and without images). We consider the same question text on different visual contexts a question pair (usually they come in with an *easy* question and a corresponding *hard* question). This metric calculate accuracy of all question pairs.",
"### Reproduce GPT4V results on leaderboard\n\n\n1. We saved the ouput of GPT4V with our annotation. Put 'URL' in the root directory of this repo, or set 'input\\_file\\_name' in gpt4v\\_benchmark.py to the location of the URL file.\n2. (Optional) If you don't have access to GPT API, you don't need to run it since we have saved evaluation results. They can be downloaded for Visual Dependent and Visual Supplement. Put the json files in the root directory of this repo, or set 'save\\_json\\_path\\_vd' and 'save\\_json\\_path\\_vd' in gpt4v\\_benchmark.py to their respective locations.\n3. Run 'python gpt4v\\_benchmark.py'.\n\n\nExamples and Analysis\n---------------------\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n---\n\n\nlicense: bsd-3-clause\n---------------------"
] | [
"TAGS\n#arxiv-2310.14566 #arxiv-2306.14565 #region-us \n",
"### Evaluation\n\n\n1. Clone the repo.\n2. Download the images hallusion\\_bench.zip and unzip the folder in the same directory.\n3. The questions and image locations are saved in './URL'. The data sample are as follows:\n\n\nThe key 'visual\\_input'means whether the question needs visual input like images. If 'visual\\_input=1', it means the question need visual input. If 'visual\\_input=0', it means the question doesn't need visual input. It's the text-only question.\n\n\n4. Run your model on './URL' and save the ouput file as './HallusionBench\\_result.json'. You need to add the output of your model in the key ''model\\_prediction''. We provide an sample result here.\n5. Finally, run the following code for evaluation:\n\n\nYou can use your own API key for GPT4 evaluation by editing the code here.\n\n\nLeaderboard\n-----------",
"### Definition\n\n\n* Visual Dependent (VD) Questions: questions that do not have an affirmative answer without the visual context.\n\t+ Easy: Original images that are obtained from Internet.\n\t+ Hard: Edited images from the original images.\n* Visual Supplement (VS) Questions: questions that can be answered without the visual input; the visual component merely provides supplemental information.\n\t+ Easy: No visual input. Uncertain answer without hallucination is also considered correct response.\n\t+ Hard: With visual input. The answer must follow the provided figure and visual context.",
"### Metric\n\n\n* Accuracy per Figure (Consistency Test): Accuracy based on each figure. To make sure the mode truly understand image, we ask variant of questions based on the same knowledge on the same figure, and consider it correct if the model can answer all questions correctly. For example, the model should not give inconsistent responses on the questions \"Is A bigger than B?\" and \"Is B smaller A?\".\n* Accuracy per Question: Accuracy of all questions, including easy and hard questions.\n* Accuracy per Question Pair: We ask the same questions on similar images (or, with and without images). We consider the same question text on different visual contexts a question pair (usually they come in with an *easy* question and a corresponding *hard* question). This metric calculate accuracy of all question pairs.",
"### Reproduce GPT4V results on leaderboard\n\n\n1. We saved the ouput of GPT4V with our annotation. Put 'URL' in the root directory of this repo, or set 'input\\_file\\_name' in gpt4v\\_benchmark.py to the location of the URL file.\n2. (Optional) If you don't have access to GPT API, you don't need to run it since we have saved evaluation results. They can be downloaded for Visual Dependent and Visual Supplement. Put the json files in the root directory of this repo, or set 'save\\_json\\_path\\_vd' and 'save\\_json\\_path\\_vd' in gpt4v\\_benchmark.py to their respective locations.\n3. Run 'python gpt4v\\_benchmark.py'.\n\n\nExamples and Analysis\n---------------------\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n---\n\n\nlicense: bsd-3-clause\n---------------------"
] | [
23,
226,
121,
192,
365
] | [
"passage: TAGS\n#arxiv-2310.14566 #arxiv-2306.14565 #region-us \n### Evaluation\n\n\n1. Clone the repo.\n2. Download the images hallusion\\_bench.zip and unzip the folder in the same directory.\n3. The questions and image locations are saved in './URL'. The data sample are as follows:\n\n\nThe key 'visual\\_input'means whether the question needs visual input like images. If 'visual\\_input=1', it means the question need visual input. If 'visual\\_input=0', it means the question doesn't need visual input. It's the text-only question.\n\n\n4. Run your model on './URL' and save the ouput file as './HallusionBench\\_result.json'. You need to add the output of your model in the key ''model\\_prediction''. We provide an sample result here.\n5. Finally, run the following code for evaluation:\n\n\nYou can use your own API key for GPT4 evaluation by editing the code here.\n\n\nLeaderboard\n-----------### Definition\n\n\n* Visual Dependent (VD) Questions: questions that do not have an affirmative answer without the visual context.\n\t+ Easy: Original images that are obtained from Internet.\n\t+ Hard: Edited images from the original images.\n* Visual Supplement (VS) Questions: questions that can be answered without the visual input; the visual component merely provides supplemental information.\n\t+ Easy: No visual input. Uncertain answer without hallucination is also considered correct response.\n\t+ Hard: With visual input. The answer must follow the provided figure and visual context."
] |
278c44469e6fb709e6c2e1305edc82ff98fdc308 | Dataset with Niji outputs for one of the LoRAs made on Civitai by Earth & Dusk | Capsekai/Holiday-Niji | [
"license:creativeml-openrail-m",
"region:us"
] | 2023-12-01T22:33:33+00:00 | {"license": "creativeml-openrail-m"} | 2024-01-02T03:47:55+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| Dataset with Niji outputs for one of the loras that was made on Civit AI by Earth & Dusk | [] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n"
] | [
18
] | [
"passage: TAGS\n#license-creativeml-openrail-m #region-us \n"
] |
14729fbf35870fe21a8ae76d6d37e36dff0c08f8 | # Before Anything and Everything
_At the time of writing this Dataset Card, ~~**17,490**~~ **18,412** civilians have been killed in Palestine (~~**7,870**~~ **8,000** are children and ~~**6,121**~~ **6,200** are women)._
**Seek any non-profit organization to help them with what you can** (For myself, [I use Mersal](https://www.every.org/mersal/f/support-humanitarian)) 🇵🇸
## Dataset Description
The Multilingual TTS dataset is an exceptional compilation of text-to-speech (TTS) samples, meticulously crafted to showcase the richness and diversity of human languages. This dataset encompasses a variety of real-world sentences in fifteen prominent languages, carefully chosen to reflect global linguistic diversity. Each sample is accompanied by its corresponding high-quality audio output.
<style>
.image-container {
display: flex;
justify-content: center;
align-items: center;
height: 65vh;
margin: 0;
}
.image-container img {
max-width: 48%; /* Adjust the width as needed */
height: auto;
}
</style>
<div class="image-container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6116d0584ef9fdfbf45dc4d9/UX0s8S2yWSJ3NbbvmOJOi.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6116d0584ef9fdfbf45dc4d9/zIyPCWH7Y58gLVCeIfq4n.png">
</div>
## Key Features:
1. **Language Diversity**: The dataset covers a spectrum of languages, including **Bengali**, **Mandarin Chinese**, **Turkish**, **Hindi**, **French**, **Vietnamese**, **Portuguese**, **Spanish**, **Japanese**, **German**, **Russian**, **Indonesian**, **Standard Arabic**, **English**, and **Urdu**. This wide linguistic representation ensures inclusivity and applicability to a global audience.
2. **Real-World Sentences**: Comprising 25,000 samples, the dataset mirrors authentic communication scenarios. Sentences span diverse topics, ranging from everyday conversations to informative texts and news snippets, providing a comprehensive linguistic landscape.
3. **Multilingual Sentences**: A distinctive feature of this dataset is its inclusion of sentences that seamlessly integrate multiple languages. Each sample combines at least two languages, capturing the intricate dynamics of multilingual communication and rendering the dataset particularly valuable for training and evaluating multilingual TTS systems.
4. **Audio Quality**: Special attention has been given to the audio quality of each sample. The audio outputs are meticulously designed to be clear, natural-sounding, and faithful representations of the corresponding text, ensuring a rich auditory experience.
5. **Generated by GPT-4 and Elevenlabs**: The dataset is the result of a collaboration between GPT-4 and Elevenlabs, combining cutting-edge language generation capabilities with domain expertise. This collaboration guarantees a high level of accuracy, coherence, and linguistic nuance in both the text and audio components.
## Potential Use Cases:
1. **Multilingual TTS Model Training**: Researchers and developers can leverage this dataset to train and refine multilingual TTS models, enhancing their proficiency across a diverse array of languages.
2. **Cross-Language Evaluation**: The dataset serves as a valuable resource for evaluating TTS systems in handling multilingual scenarios, offering a benchmark for assessing model capabilities across different languages.
3. **Language Integration Testing**: Developers working on applications requiring multilingual TTS functionality can utilize this dataset to test and optimize language integration, ensuring a seamless user experience across various linguistic contexts.
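To make the first use case concrete, here is a minimal loading sketch. It assumes the Hugging Face `datasets` library and the column names listed in this card's metadata (`text`, `speaker`, `languages`, `audio`); adjust the field names if the schema differs.

```python
from datasets import load_dataset

# Load the corpus (a single "train" split according to the metadata).
ds = load_dataset("MohamedRashad/multilingual-tts", split="train")

# Inspect one sample: transcript, speaker, language mix, and decoded audio.
sample = ds[0]
print(sample["text"])
print(sample["speaker"], sample["languages"])
audio = sample["audio"]  # dict with "array" and "sampling_rate"
print(audio["sampling_rate"], len(audio["array"]))
```

From here, the (text, audio) pairs can be fed into any TTS training or evaluation pipeline.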
## Acknowledgments:
The creation of the Multilingual TTS dataset was made possible through the collaborative efforts of **OpenAI's GPT-4** and the expertise of **Elevenlabs Multilingual V2**. We extend our gratitude to the AI and language processing communities for their continuous support in advancing the field of multilingual TTS. This dataset stands as a significant contribution, fostering innovation and progress in language technologies.
| MohamedRashad/multilingual-tts | [
"task_categories:text-to-speech",
"size_categories:10K<n<100K",
"language:ar",
"language:en",
"language:zh",
"language:es",
"language:fr",
"language:hi",
"language:ru",
"language:pt",
"language:ja",
"language:de",
"language:tr",
"language:bn",
"language:id",
"language:ur",
"language:vi",
"license:gpl-3.0",
"region:us"
] | 2023-12-01T23:46:06+00:00 | {"language": ["ar", "en", "zh", "es", "fr", "hi", "ru", "pt", "ja", "de", "tr", "bn", "id", "ur", "vi"], "license": "gpl-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-speech"], "pretty_name": "Multilingual TTS", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "languages", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 1561588634.72, "num_examples": 25540}], "download_size": 1548036818, "dataset_size": 1561588634.72}} | 2023-12-12T21:04:06+00:00 | [] | [
"ar",
"en",
"zh",
"es",
"fr",
"hi",
"ru",
"pt",
"ja",
"de",
"tr",
"bn",
"id",
"ur",
"vi"
] | TAGS
#task_categories-text-to-speech #size_categories-10K<n<100K #language-Arabic #language-English #language-Chinese #language-Spanish #language-French #language-Hindi #language-Russian #language-Portuguese #language-Japanese #language-German #language-Turkish #language-Bengali #language-Indonesian #language-Urdu #language-Vietnamese #license-gpl-3.0 #region-us
| # Before Anything and Everything
_At the time of writing this Dataset Card, ~17,490~ 18,412 civilians have been killed in Palestine (~7,870~ 8,000 are children and ~6,121~ 6,200 are women)._
Seek any non-profit organization to help them with what you can (For myself, I use Mersal) 🇵🇸
## Dataset Description
The Multilingual TTS dataset is an exceptional compilation of text-to-speech (TTS) samples, meticulously crafted to showcase the richness and diversity of human languages. This dataset encompasses a variety of real-world sentences in fifteen prominent languages, carefully chosen to reflect global linguistic diversity. Each sample is accompanied by its corresponding high-quality audio output.
<style>
.image-container {
display: flex;
justify-content: center;
align-items: center;
height: 65vh;
margin: 0;
}
.image-container img {
max-width: 48%; /* Adjust the width as needed */
height: auto;
}
</style>
<div class="image-container">
<img src="URL
<img src="URL
</div>
## Key Features:
1. Language Diversity: The dataset covers a spectrum of languages, including Bengali, Mandarin Chinese, Turkish, Hindi, French, Vietnamese, Portuguese, Spanish, Japanese, German, Russian, Indonesian, Standard Arabic, English, and Urdu. This wide linguistic representation ensures inclusivity and applicability to a global audience.
3. Real-World Sentences: Comprising 25,000 samples, the dataset mirrors authentic communication scenarios. Sentences span diverse topics, ranging from everyday conversations to informative texts and news snippets, providing a comprehensive linguistic landscape.
4. Multilingual Sentences: A distinctive feature of this dataset is its inclusion of sentences that seamlessly integrate multiple languages. Each sample combines at least two languages, capturing the intricate dynamics of multilingual communication and rendering the dataset particularly valuable for training and evaluating multilingual TTS systems.
5. Audio Quality: Special attention has been given to the audio quality of each sample. The audio outputs are meticulously designed to be clear, natural-sounding, and faithful representations of the corresponding text, ensuring a rich auditory experience.
6. Generated by GPT-4 and elevenlabs: The dataset is the result of a collaboration between GPT-4 and elevenlabs, combining cutting-edge language generation capabilities with domain expertise. This collaboration guarantees a high level of accuracy, coherence, and linguistic nuance in both the text and audio components.
## Potential Use Cases:
1. Multilingual TTS Model Training: Researchers and developers can leverage this dataset to train and refine multilingual TTS models, enhancing their proficiency across a diverse array of languages.
2. Cross-Language Evaluation: The dataset serves as a valuable resource for evaluating TTS systems in handling multilingual scenarios, offering a benchmark for assessing model capabilities across different languages.
3. Language Integration Testing: Developers working on applications requiring multilingual TTS functionality can utilize this dataset to test and optimize language integration, ensuring a seamless user experience across various linguistic contexts.
## Acknowledgments:
The creation of the Multilingual TTS dataset was made possible through the collaborative efforts of OpenAI's GPT-4 and the expertise of Elevenlabs Multilingual V2. We extend our gratitude to the AI and language processing communities for their continuous support in advancing the field of multilingual TTS. This dataset stands as a significant contribution, fostering innovation and progress in language technologies.
| [
"# Before Anything and Everything \n\n_In the time of writing this Dataset Card, ~17,490~ 18,412 civilian has been killed in Palestine (~7,870~ 8,000 are children and ~6,121~ 6,200 are women)._\n\nSeek any non-profit organization to help them with what you can (For myself, I use Mersal) π΅πΈ",
"## Dataset Description\nThe Multilingual TTS dataset is an exceptional compilation of text-to-speech (TTS) samples, meticulously crafted to showcase the richness and diversity of human languages. This dataset encompasses a variety of real-world sentences in fifteen prominent languages, carefully chosen to reflect global linguistic diversity. Each sample is accompanied by its corresponding high-quality audio output.\n\n<style>\n .image-container {\n display: flex;\n justify-content: center;\n align-items: center;\n height: 65vh;\n margin: 0;\n }\n .image-container img {\n max-width: 48%; /* Adjust the width as needed */\n height: auto;\n}\n</style>\n\n<div class=\"image-container\">\n <img src=\"URL\n <img src=\"URL\n</div>",
"## Key Features:\n\n1. Language Diversity: The dataset covers a spectrum of languages, including Bengali, Mandarin Chinese, Turkish, Hindi, French, Vietnamese, Portuguese, Spanish, Japanese, German, Russian, Indonesian, Standard Arabic, English, and Urdu. This wide linguistic representation ensures inclusivity and applicability to a global audience.\n3. Real-World Sentences: Comprising 25,000 samples, the dataset mirrors authentic communication scenarios. Sentences span diverse topics, ranging from everyday conversations to informative texts and news snippets, providing a comprehensive linguistic landscape.\n4. Multilingual Sentences: A distinctive feature of this dataset is its inclusion of sentences that seamlessly integrate multiple languages. Each sample combines at least two languages, capturing the intricate dynamics of multilingual communication and rendering the dataset particularly valuable for training and evaluating multilingual TTS systems.\n5. Audio Quality: Special attention has been given to the audio quality of each sample. The audio outputs are meticulously designed to be clear, natural-sounding, and faithful representations of the corresponding text, ensuring a rich auditory experience.\n6. Generated by GPT-4 and elevenlabs: The dataset is the result of a collaboration between GPT-4 and elevenlabs, combining cutting-edge language generation capabilities with domain expertise. This collaboration guarantees a high level of accuracy, coherence, and linguistic nuance in both the text and audio components.",
"## Potential Use Cases:\n\n1. Multilingual TTS Model Training: Researchers and developers can leverage this dataset to train and refine multilingual TTS models, enhancing their proficiency across a diverse array of languages.\n2. Cross-Language Evaluation: The dataset serves as a valuable resource for evaluating TTS systems in handling multilingual scenarios, offering a benchmark for assessing model capabilities across different languages.\n3. Language Integration Testing: Developers working on applications requiring multilingual TTS functionality can utilize this dataset to test and optimize language integration, ensuring a seamless user experience across various linguistic contexts.",
"## Acknowledgments:\n\nThe creation of the Multilingual TTS dataset was made possible through the collaborative efforts of OpenAI's GPT-4 and the expertise of Elevenlabs Multilingual V2. We extend our gratitude to the AI and language processing communities for their continuous support in advancing the field of multilingual TTS. This dataset stands as a significant contribution, fostering innovation and progress in language technologies."
] | [
"TAGS\n#task_categories-text-to-speech #size_categories-10K<n<100K #language-Arabic #language-English #language-Chinese #language-Spanish #language-French #language-Hindi #language-Russian #language-Portuguese #language-Japanese #language-German #language-Turkish #language-Bengali #language-Indonesian #language-Urdu #language-Vietnamese #license-gpl-3.0 #region-us \n",
"# Before Anything and Everything \n\n_In the time of writing this Dataset Card, ~17,490~ 18,412 civilian has been killed in Palestine (~7,870~ 8,000 are children and ~6,121~ 6,200 are women)._\n\nSeek any non-profit organization to help them with what you can (For myself, I use Mersal) π΅πΈ",
"## Dataset Description\nThe Multilingual TTS dataset is an exceptional compilation of text-to-speech (TTS) samples, meticulously crafted to showcase the richness and diversity of human languages. This dataset encompasses a variety of real-world sentences in fifteen prominent languages, carefully chosen to reflect global linguistic diversity. Each sample is accompanied by its corresponding high-quality audio output.\n\n<style>\n .image-container {\n display: flex;\n justify-content: center;\n align-items: center;\n height: 65vh;\n margin: 0;\n }\n .image-container img {\n max-width: 48%; /* Adjust the width as needed */\n height: auto;\n}\n</style>\n\n<div class=\"image-container\">\n <img src=\"URL\n <img src=\"URL\n</div>",
"## Key Features:\n\n1. Language Diversity: The dataset covers a spectrum of languages, including Bengali, Mandarin Chinese, Turkish, Hindi, French, Vietnamese, Portuguese, Spanish, Japanese, German, Russian, Indonesian, Standard Arabic, English, and Urdu. This wide linguistic representation ensures inclusivity and applicability to a global audience.\n3. Real-World Sentences: Comprising 25,000 samples, the dataset mirrors authentic communication scenarios. Sentences span diverse topics, ranging from everyday conversations to informative texts and news snippets, providing a comprehensive linguistic landscape.\n4. Multilingual Sentences: A distinctive feature of this dataset is its inclusion of sentences that seamlessly integrate multiple languages. Each sample combines at least two languages, capturing the intricate dynamics of multilingual communication and rendering the dataset particularly valuable for training and evaluating multilingual TTS systems.\n5. Audio Quality: Special attention has been given to the audio quality of each sample. The audio outputs are meticulously designed to be clear, natural-sounding, and faithful representations of the corresponding text, ensuring a rich auditory experience.\n6. Generated by GPT-4 and elevenlabs: The dataset is the result of a collaboration between GPT-4 and elevenlabs, combining cutting-edge language generation capabilities with domain expertise. This collaboration guarantees a high level of accuracy, coherence, and linguistic nuance in both the text and audio components.",
"## Potential Use Cases:\n\n1. Multilingual TTS Model Training: Researchers and developers can leverage this dataset to train and refine multilingual TTS models, enhancing their proficiency across a diverse array of languages.\n2. Cross-Language Evaluation: The dataset serves as a valuable resource for evaluating TTS systems in handling multilingual scenarios, offering a benchmark for assessing model capabilities across different languages.\n3. Language Integration Testing: Developers working on applications requiring multilingual TTS functionality can utilize this dataset to test and optimize language integration, ensuring a seamless user experience across various linguistic contexts.",
"## Acknowledgments:\n\nThe creation of the Multilingual TTS dataset was made possible through the collaborative efforts of OpenAI's GPT-4 and the expertise of Elevenlabs Multilingual V2. We extend our gratitude to the AI and language processing communities for their continuous support in advancing the field of multilingual TTS. This dataset stands as a significant contribution, fostering innovation and progress in language technologies."
] | [
117,
82,
205,
342,
149,
97
] | [
"passage: TAGS\n#task_categories-text-to-speech #size_categories-10K<n<100K #language-Arabic #language-English #language-Chinese #language-Spanish #language-French #language-Hindi #language-Russian #language-Portuguese #language-Japanese #language-German #language-Turkish #language-Bengali #language-Indonesian #language-Urdu #language-Vietnamese #license-gpl-3.0 #region-us \n# Before Anything and Everything \n\n_In the time of writing this Dataset Card, ~17,490~ 18,412 civilian has been killed in Palestine (~7,870~ 8,000 are children and ~6,121~ 6,200 are women)._\n\nSeek any non-profit organization to help them with what you can (For myself, I use Mersal) π΅πΈ## Dataset Description\nThe Multilingual TTS dataset is an exceptional compilation of text-to-speech (TTS) samples, meticulously crafted to showcase the richness and diversity of human languages. This dataset encompasses a variety of real-world sentences in fifteen prominent languages, carefully chosen to reflect global linguistic diversity. Each sample is accompanied by its corresponding high-quality audio output.\n\n<style>\n .image-container {\n display: flex;\n justify-content: center;\n align-items: center;\n height: 65vh;\n margin: 0;\n }\n .image-container img {\n max-width: 48%; /* Adjust the width as needed */\n height: auto;\n}\n</style>\n\n<div class=\"image-container\">\n <img src=\"URL\n <img src=\"URL\n</div>"
] |
7d725aec1ab43b8471c69ecbefbf8133019a03f8 | # Dataset Card for "inuitparrrallel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aatherton2024/inuitparrrallel | [
"region:us"
] | 2023-12-02T00:04:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "en", "path": "data/en-*"}, {"split": "fr", "path": "data/fr-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "en", "num_bytes": 439558, "num_examples": 10192}, {"name": "fr", "num_bytes": 734362, "num_examples": 10192}], "download_size": 490449, "dataset_size": 1173920}} | 2023-12-02T00:32:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "inuitparrrallel"
More Information needed | [
"# Dataset Card for \"inuitparrrallel\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"inuitparrrallel\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"inuitparrrallel\"\n\nMore Information needed"
] |
d1a5293dc711da2feb15239245cd30063c5125a4 | **Dataset Summary:**
A new and improved version of InstructMix that has nearly twice as many examples.
**Dataset Contents:**
The dataset contains a collection of instructional data with corresponding inputs and outputs. Each entry has an "Input" field that contains the instructional content, and an "Output" field that represents the corresponding response or completion. Here is a list of the datasets used:
- Locutusque/ColumnedChatCombined
- TokenBender/code_instructions_120k_alpaca_style
- Open-Orca/OpenOrca
- vicgalle/alpaca-gpt4
- ChristophSchuhmann/essays-with-instructions
- checkai/instruction-poems
- pubmed_qa
- BI55/MedText
- nampdn-ai/tiny-codes
- TIGER-Lab/MathInstruct
- garage-bAInd/Open-Platypus
- KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format
- teknium/openhermes
- ssbuild/ultrachat
It contains two of the following columns:
- Input (string)
- Output (string)
These should hopefully be self-explanatory
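As a hedged illustration (assuming the Hugging Face id `Locutusque/InstructMix-V2` under which this card is published and a default `train` split), the two columns can be inspected as follows; streaming avoids downloading all ~13.6 million rows at once:

```python
from datasets import load_dataset

# Stream the dataset so the ~13.6M rows are not materialized locally.
# The split name "train" is an assumption; check the repo if it differs.
ds = load_dataset("Locutusque/InstructMix-V2", split="train", streaming=True)

for example in ds.take(3):
    print("Input: ", example["Input"][:100])
    print("Output:", example["Output"][:100])
```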
**Dataset Composition:**
- Number of samples: 13,639,348
- Languages: English
**Use Cases:**
The InstructiveMix dataset is suitable for various NLP tasks, including text generation, text completion, translation, summarization, and more. It can be used to train and evaluate language models, code generation models, and other NLP-based applications.
**Dataset Creation:**
The InstructiveMix dataset was created by combining multiple existing datasets with instructional content and adding metadata to facilitate seamless integration. The content spans a diverse set of domains and was sourced from reputable datasets and public sources.
**License:**
Please ensure that you read and adhere to the licensing agreements of the datasets included in this compilation, as some may contain specific rules that must be followed. | Locutusque/InstructMix-V2 | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:10M<n<100M",
"language:en",
"language:code",
"license:other",
"region:us"
] | 2023-12-02T00:32:00+00:00 | {"language": ["en", "code"], "license": "other", "size_categories": ["10M<n<100M"], "task_categories": ["text-generation", "question-answering", "conversational"], "pretty_name": "InstructMix-V2"} | 2023-12-02T00:57:02+00:00 | [] | [
"en",
"code"
] | TAGS
#task_categories-text-generation #task_categories-question-answering #task_categories-conversational #size_categories-10M<n<100M #language-English #language-code #license-other #region-us
| Dataset Summary:
A new and improved version of InstructMix that has nearly twice as many examples.
Dataset Contents:
The dataset contains a collection of instructional data with corresponding inputs and outputs. Each entry has an "Input" field that contains the instructional content, and an "Output" field that represents the corresponding response or completion. Here is a list of the datasets used:
- Locutusque/ColumnedChatCombined
- TokenBender/code_instructions_120k_alpaca_style
- Open-Orca/OpenOrca
- vicgalle/alpaca-gpt4
- ChristophSchuhmann/essays-with-instructions
- checkai/instruction-poems
- pubmed_qa
- BI55/MedText
- nampdn-ai/tiny-codes
- TIGER-Lab/MathInstruct
- garage-bAInd/Open-Platypus
- KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format
- teknium/openhermes
- ssbuild/ultrachat
It contains two of the following columns:
- Input (string)
- Output (string)
These should hopefully be self-explanatory
Dataset Composition:
- Number of samples: 13,639,348
- Languages: English
Use Cases:
The InstructiveMix dataset is suitable for various NLP tasks, including text generation, text completion, translation, summarization, and more. It can be used to train and evaluate language models, code generation models, and other NLP-based applications.
Dataset Creation:
The InstructiveMix dataset was created by combining multiple existing datasets with instructional content and adding metadata to facilitate seamless integration. The content spans a diverse set of domains and was sourced from reputable datasets and public sources.
License:
Please ensure that you read and adhere to the licensing agreements of the datasets included in this compilation, as some may contain specific rules that must be followed. | [] | [
"TAGS\n#task_categories-text-generation #task_categories-question-answering #task_categories-conversational #size_categories-10M<n<100M #language-English #language-code #license-other #region-us \n"
] | [
64
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-question-answering #task_categories-conversational #size_categories-10M<n<100M #language-English #language-code #license-other #region-us \n"
] |
0e63485f2f9dccd8daf5198a7e3407c8913d45c8 |
<div align="center">
<img width="640" alt="aghent/copiapoa-cactis" src="https://huggingface.co/datasets/aghent/copiapoa-cactis/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['copiapoa']
```
### Number of Images
```json
{'valid': 1071, 'test': 1004, 'train': 7925}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("aghent/copiapoa-cactis", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/uai-63qde/copiapoa-instance-segmentation/dataset/2](https://universe.roboflow.com/uai-63qde/copiapoa-instance-segmentation/dataset/2?ref=roboflow2huggingface)
### Citation
```
@misc{ copiapoa-instance-segmentation_dataset,
title = { Copiapoa Instance Segmentation Dataset },
type = { Open Source Dataset },
author = { UAI },
howpublished = { \\url{ https://universe.roboflow.com/uai-63qde/copiapoa-instance-segmentation } },
url = { https://universe.roboflow.com/uai-63qde/copiapoa-instance-segmentation },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { dec },
note = { visited on 2023-12-02 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on December 2, 2023 at 1:33 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 10000 images.
Copiapoa-copiapoav2 are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| aghent/copiapoa-cactis | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"region:us"
] | 2023-12-02T01:10:53+00:00 | {"task_categories": ["image-segmentation"], "tags": ["roboflow", "roboflow2huggingface"]} | 2023-12-02T01:36:47+00:00 | [] | [] | TAGS
#task_categories-image-segmentation #roboflow #roboflow2huggingface #region-us
|
<div align="center">
<img width="640" alt="aghent/copiapoa-cactis" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on December 2, 2023 at 1:33 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 10000 images.
Copiapoa-copiapoav2 are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on December 2, 2023 at 1:33 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 10000 images.\nCopiapoa-copiapoav2 are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
] | [
"TAGS\n#task_categories-image-segmentation #roboflow #roboflow2huggingface #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on December 2, 2023 at 1:33 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 10000 images.\nCopiapoa-copiapoav2 are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
] | [
28,
5,
5,
18,
8,
6,
179
] | [
"passage: TAGS\n#task_categories-image-segmentation #roboflow #roboflow2huggingface #region-us \n### Dataset Labels### Number of Images### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:### Roboflow Dataset Page\nURL### License\nCC BY 4.0### Dataset Summary\nThis dataset was exported via URL on December 2, 2023 at 1:33 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 10000 images.\nCopiapoa-copiapoav2 are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
] |
28899ab7a6a3c14ed1e71b21aa1d2ba18367fe85 | # Dataset Card for "atoul-12-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Chunt0/atoul-12-1 | [
"region:us"
] | 2023-12-02T01:27:58+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4729106.0, "num_examples": 10}], "download_size": 4716614, "dataset_size": 4729106.0}} | 2023-12-02T01:28:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "atoul-12-1"
More Information needed | [
"# Dataset Card for \"atoul-12-1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"atoul-12-1\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"atoul-12-1\"\n\nMore Information needed"
] |
0a17c2165eb2d97d3fe106c9af841de7050b210f | **Download the Dataset**:
```python
from datasets import load_dataset
dataset = load_dataset("LeoTungAnh/electricity_hourly")
```
**Dataset Card for Electricity Consumption**
This dataset encompasses hourly electricity consumption in kilowatts (kW) across a span of three years (2012-2014), involving 370 individual clients in Portugal.
**Preprocessing information**:
- Grouped by hour (frequency: "1H").
- Applied Standardization as preprocessing technique ("Std").
**Dataset information**:
- Number of time series: 370
- Number of training samples: 26208
- Number of validation samples: 26256 (number_of_training_samples + 48)
- Number of testing samples: 26304 (number_of_validation_samples + 48)
**Dataset format**:
```python
Dataset({
features: ['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'],
num_rows: 370
})
```
**Data format for a sample**:
- 'start': datetime.datetime
- 'target': list of a time series data
- 'feat_static_cat': time series index
- 'feat_dynamic_real': None
- 'item_id': name of time series
**Data example**:
```python
{'start': datetime.datetime(2012, 1, 1, 1, 0),
'target': [-0.19363673541224083, -0.08851588245610625, -0.19363673541224083, ... -0.5615597207587115,...],
'feat_static_cat': [0],
'feat_dynamic_real': None,
'item_id': 'MT_001'
}
```
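For algorithms outside the Hugging Face forecasting models (see Usage below), a minimal sketch of pulling one standardized series out as a NumPy array, assuming the dataset was loaded with the snippet at the top of this card:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("LeoTungAnh/electricity_hourly")

# Each row is one client; 'target' holds the standardized hourly series.
row = dataset["train"][0]
series = np.asarray(row["target"])  # shape: (26208,)
print(row["item_id"], series.shape, series.mean(), series.std())
```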
**Usage**:
- The dataset can be used with the Transformer, Autoformer, and Informer models available in Hugging Face.
- Other algorithms can extract the data directly by making use of the 'target' feature (see the sketch above). | LeoTungAnh/electricity_hourly | [
"task_categories:time-series-forecasting",
"size_categories:1K<n<10K",
"license:openrail",
"region:us"
] | 2023-12-02T02:00:33+00:00 | {"license": "openrail", "size_categories": ["1K<n<10K"], "task_categories": ["time-series-forecasting"], "dataset_info": {"features": [{"name": "start", "dtype": "timestamp[us]"}, {"name": "target", "sequence": "float64"}, {"name": "feat_static_cat", "sequence": "int64"}, {"name": "feat_dynamic_real", "dtype": "null"}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77588260, "num_examples": 370}, {"name": "validation", "num_bytes": 77730340, "num_examples": 370}, {"name": "test", "num_bytes": 77872420, "num_examples": 370}], "download_size": 123583004, "dataset_size": 233191020}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-02T02:03:54+00:00 | [] | [] | TAGS
#task_categories-time-series-forecasting #size_categories-1K<n<10K #license-openrail #region-us
| Download the Dataset:
Dataset Card for Electricity Consumption
This dataset encompasses hourly electricity consumption in kilowatts (kW) across a span of three years (2012-2014), involving 370 individual clients in Portugal.
Preprocessing information:
- Grouped by hour (frequency: "1H").
- Applied Standardization as preprocessing technique ("Std").
Dataset information:
- Number of time series: 370
- Number of training samples: 26208
- Number of validation samples: 26256 (number_of_training_samples + 48)
- Number of testing samples: 26304 (number_of_validation_samples + 48)
Dataset format:
Data format for a sample:
- 'start': datetime.datetime
- 'target': list of a time series data
- 'feat_static_cat': time series index
- 'feat_dynamic_real': None
- 'item_id': name of time series
Data example:
Usage:
- The dataset can be used with the Transformer, Autoformer, and Informer models available in Hugging Face.
- Other algorithms can extract the data directly by making use of the 'target' feature. | [] | [
"TAGS\n#task_categories-time-series-forecasting #size_categories-1K<n<10K #license-openrail #region-us \n"
] | [
39
] | [
"passage: TAGS\n#task_categories-time-series-forecasting #size_categories-1K<n<10K #license-openrail #region-us \n"
] |
facd2907a4fc012ac2aa6d3ecbbecb6e2828f159 | # Dataset Card for "traffic_hourly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
**Download the Dataset**:
```python
from datasets import load_dataset
dataset = load_dataset("LeoTungAnh/traffic_hourly")
```
**Dataset Card for Hourly Traffic**

This dataset encompasses 862 hourly time series revealing the road occupancy rates across freeways in the San Francisco Bay area from 2015 to 2016.
**Preprocessing information**:
- Grouped by hour (frequency: "1H").
- Applied Standardization as preprocessing technique ("Std").
**Dataset information**:
- Number of time series: 862
- Number of training samples: 17448
- Number of validation samples: 17496 (number_of_training_samples + 48)
- Number of testing samples: 17544 (number_of_validation_samples + 48)
**Dataset format**:
```python
Dataset({
features: ['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'],
num_rows: 862
})
```
**Data format for a sample**:
- 'start': datetime.datetime
- 'target': list of a time series data
- 'feat_static_cat': time series index
- 'feat_dynamic_real': None
- 'item_id': name of time series
**Data example**:
```python
{'start': datetime.datetime(2015, 1, 1, 0, 0, 1),
'feat_static_cat': [0],
'feat_dynamic_real': None,
'item_id': 'T1',
'target': [-0.7127609544951682, -0.6743409178438863, -0.3749847989359815, ... 0.12447567753068307,...]
}
```
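Because each split extends the previous one by 48 hourly points, the final 48 steps of the test split can serve as the forecast horizon. A hedged sketch (assuming the download snippet above):

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("LeoTungAnh/traffic_hourly")

horizon = 48  # test split = validation split + 48 points per series
context, targets = [], []
for row in dataset["test"]:
    series = np.asarray(row["target"])
    context.append(series[:-horizon])  # history available to the model
    targets.append(series[-horizon:])  # ground truth for the last 48 hours

context, targets = np.stack(context), np.stack(targets)
print(context.shape, targets.shape)  # (862, 17496) and (862, 48)
```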
**Usage**:
- The dataset can be used with the Transformer, Autoformer, and Informer models available in Hugging Face.
- Other algorithms can extract the data directly by making use of the 'target' feature (see the sketch above). | LeoTungAnh/traffic_hourly | [
"region:us"
] | 2023-12-02T02:10:38+00:00 | {"dataset_info": {"features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}, {"name": "target", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 120352440, "num_examples": 862}, {"name": "validation", "num_bytes": 120683448, "num_examples": 862}, {"name": "test", "num_bytes": 121014456, "num_examples": 862}], "download_size": 124542918, "dataset_size": 362050344}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-02T02:17:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "traffic_hourly"
More Information needed
Download the Dataset:
Dataset Card for Hourly Traffic

This dataset encompasses 862 hourly time series revealing the road occupancy rates across freeways in the San Francisco Bay area from 2015 to 2016.
Preprocessing information:
- Grouped by hour (frequency: "1H").
- Applied Standardization as preprocessing technique ("Std").
Dataset information:
- Number of time series: 862
- Number of training samples: 17448
- Number of validation samples: 17496 (number_of_training_samples + 48)
- Number of testing samples: 17544 (number_of_validation_samples + 48)
Dataset format:
Data format for a sample:
- 'start': datetime.datetime
- 'target': list of a time series data
- 'feat_static_cat': time series index
- 'feat_dynamic_real': None
- 'item_id': name of time series
Data example:
Usage:
- The dataset can be used with the Transformer, Autoformer, and Informer models available in Hugging Face.
- Other algorithms can extract the data directly by making use of the 'target' feature. | [
"# Dataset Card for \"traffic_hourly\"\n\nMore Information needed\n\nDownload the Dataset:\n\n\nDataset Card for Electricity Consumption\n\nthis dataset encompasses 862 hourly time series data points revealing the road occupancy rates across freeways in the San Francisco Bay area from 2015 to 2016.\n\nPreprocessing information:\n- Grouped by hour (frequency: \"1H\").\n- Applied Standardization as preprocessing technique (\"Std\").\n\nDataset information:\n- Number of time series: 862\n- Number of training samples: 17448\n- Number of validation samples: 17496 (number_of_training_samples + 48)\n- Number of testing samples: 17544 (number_of_validation_samples + 48)\n\nDataset format:\n\nData format for a sample:\n\n- 'start': datetime.datetime\n\n- 'target': list of a time series data\n\n- 'feat_static_cat': time series index\n\n- 'feat_dynamic_real': None\n\n- 'item_id': name of time series\n\n\nData example:\n\n\nUsage:\n- The dataset can be used by available Transformer, Autoformer, Informer of Huggingface.\n- Other algorithms can extract data directly by making use of 'target' feature."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"traffic_hourly\"\n\nMore Information needed\n\nDownload the Dataset:\n\n\nDataset Card for Electricity Consumption\n\nthis dataset encompasses 862 hourly time series data points revealing the road occupancy rates across freeways in the San Francisco Bay area from 2015 to 2016.\n\nPreprocessing information:\n- Grouped by hour (frequency: \"1H\").\n- Applied Standardization as preprocessing technique (\"Std\").\n\nDataset information:\n- Number of time series: 862\n- Number of training samples: 17448\n- Number of validation samples: 17496 (number_of_training_samples + 48)\n- Number of testing samples: 17544 (number_of_validation_samples + 48)\n\nDataset format:\n\nData format for a sample:\n\n- 'start': datetime.datetime\n\n- 'target': list of a time series data\n\n- 'feat_static_cat': time series index\n\n- 'feat_dynamic_real': None\n\n- 'item_id': name of time series\n\n\nData example:\n\n\nUsage:\n- The dataset can be used by available Transformer, Autoformer, Informer of Huggingface.\n- Other algorithms can extract data directly by making use of 'target' feature."
] | [
6,
279
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"traffic_hourly\"\n\nMore Information needed\n\nDownload the Dataset:\n\n\nDataset Card for Electricity Consumption\n\nthis dataset encompasses 862 hourly time series data points revealing the road occupancy rates across freeways in the San Francisco Bay area from 2015 to 2016.\n\nPreprocessing information:\n- Grouped by hour (frequency: \"1H\").\n- Applied Standardization as preprocessing technique (\"Std\").\n\nDataset information:\n- Number of time series: 862\n- Number of training samples: 17448\n- Number of validation samples: 17496 (number_of_training_samples + 48)\n- Number of testing samples: 17544 (number_of_validation_samples + 48)\n\nDataset format:\n\nData format for a sample:\n\n- 'start': datetime.datetime\n\n- 'target': list of a time series data\n\n- 'feat_static_cat': time series index\n\n- 'feat_dynamic_real': None\n\n- 'item_id': name of time series\n\n\nData example:\n\n\nUsage:\n- The dataset can be used by available Transformer, Autoformer, Informer of Huggingface.\n- Other algorithms can extract data directly by making use of 'target' feature."
] |
2192e0261f619605e18e2d81c5b95e69857d1b60 | # Dataset Card for "prod"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rishabh063/prod | [
"region:us"
] | 2023-12-02T02:36:02+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "string"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 264377, "num_examples": 619}], "download_size": 119430, "dataset_size": 264377}} | 2023-12-02T02:36:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "prod"
More Information needed | [
"# Dataset Card for \"prod\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"prod\"\n\nMore Information needed"
] | [
6,
11
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"prod\"\n\nMore Information needed"
] |
90e3c42f04f28b2833701a8f7e9d4ed6e9514815 | # Dataset Card for "clustered_causal_pairs_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gh1407/clustered_causal_pairs_3 | [
"region:us"
] | 2023-12-02T02:54:36+00:00 | {"dataset_info": {"features": [{"name": "political_leaning", "dtype": "string"}, {"name": "cause_split", "dtype": "string"}, {"name": "effect_split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 653278, "num_examples": 3646}], "download_size": 143269, "dataset_size": 653278}} | 2023-12-02T03:05:52+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "clustered_causal_pairs_3"
More Information needed | [
"# Dataset Card for \"clustered_causal_pairs_3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"clustered_causal_pairs_3\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"clustered_causal_pairs_3\"\n\nMore Information needed"
] |
75f8433319a7c2f7e266ed1f104c9ddb3f3785a3 | # Dataset Card for "capstone_fromgpt_without_gold_v13_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Deojoandco/capstone_fromgpt_without_gold_v13_all | [
"region:us"
] | 2023-12-02T04:04:57+00:00 | {"dataset_info": {"features": [{"name": "dialog_id", "dtype": "int64"}, {"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gold_tags", "dtype": "string"}, {"name": "gpt_success", "dtype": "bool"}, {"name": "gpt_response", "dtype": "string"}, {"name": "gold_tags_tokens_count", "dtype": "int64"}, {"name": "GPT_TAGS_FOUND", "dtype": "bool"}, {"name": "gpt_output_tags", "dtype": "string"}, {"name": "gpt_output_tag_tokens_count", "dtype": "int64"}, {"name": "GPT_MI_FOUND", "dtype": "bool"}, {"name": "gpt_tags_token_count", "dtype": "int64"}, {"name": "gpt_tags", "dtype": "string"}, {"name": "tag_token_count_match", "dtype": "bool"}, {"name": "precision", "dtype": "float64"}, {"name": "recall", "dtype": "float64"}, {"name": "f1", "dtype": "float64"}, {"name": "accuracy", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 13736, "num_examples": 12}, {"name": "train", "num_bytes": 126588, "num_examples": 76}, {"name": "validation", "num_bytes": 23470, "num_examples": 12}], "download_size": 88672, "dataset_size": 163794}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-12-02T06:44:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "capstone_fromgpt_without_gold_v13_all"
More Information needed | [
"# Dataset Card for \"capstone_fromgpt_without_gold_v13_all\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"capstone_fromgpt_without_gold_v13_all\"\n\nMore Information needed"
] | [
6,
26
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"capstone_fromgpt_without_gold_v13_all\"\n\nMore Information needed"
] |
a3bca158f1d4fac1a0e51a1ea0e7ee33bed67618 |
<p align="center">
<img src="https://raw.githubusercontent.com/mu-cai/ViP-LLaVA/main/images/vip-llava_arch.png" width="600"> <br>
</p>
# [ViP-Bench: Making Large Multimodal Models Understand Arbitrary Visual Prompts](https://vip-llava.github.io/)
ViP-Bench is a region-level multimodal model evaluation benchmark curated by the University of Wisconsin-Madison. We provide two kinds of visual prompts: (1) bounding boxes, and (2) diverse human-drawn visual prompts.
**Evaluation Code** See [https://github.com/mu-cai/ViP-LLaVA/blob/main/docs/Evaluation.md](https://github.com/mu-cai/ViP-LLaVA/blob/main/docs/Evaluation.md)
**LeaderBoard** See [https://paperswithcode.com/sota/visual-question-answering-on-vip-bench](https://paperswithcode.com/sota/visual-question-answering-on-vip-bench)
**Evaluation Server** Please refer to [https://huggingface.co/spaces/mucai/ViP-Bench_Evaluator](https://huggingface.co/spaces/mucai/ViP-Bench_Evaluator) to use our evaluation server.
## Source annotation
In `source_image`, we provide the source plain images along with the bounding box/mask annotations. Researchers can use such grounding information to match the special tokens such as `<obj>` in `"question"` entry of `vip-bench-meta-data.json`. For example, `<obj>` can be replaced by textual coordinates to evaluate the region-level multimodal models. | mucai/ViP-Bench | [
"license:apache-2.0",
"region:us"
] | 2023-12-02T05:54:04+00:00 | {"license": "apache-2.0"} | 2023-12-15T01:08:04+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
<p align="center">
<img src="URL width="600"> <br>
</p>
# ViP-Bench: Making Large Multimodal Models Understand Arbitrary Visual Prompts
ViP-Bench is a region-level multimodal model evaluation benchmark curated by the University of Wisconsin-Madison. We provide two kinds of visual prompts: (1) bounding boxes, and (2) diverse human-drawn visual prompts.
Evaluation Code See URL
LeaderBoard See URL
Evaluation Server Please refer to URL to use our evaluation server.
## Source annotation
In 'source_image', we provide the source plain images along with the bounding box/mask annotations. Researchers can use such grounding information to match the special tokens such as '<obj>' in '"question"' entry of 'URL'. For example, '<obj>' can be replaced by textual coordinates to evaluate the region-level multimodal models. | [
"# ViP-Bench: Making Large Multimodal Models Understand Arbitrary Visual Prompts\n\nViP-Bench a region level multimodal model evaulation benchmark curated by University of Wisconsin-Madison. We provides two kinds of visual prompts: (1) bounding boxes, and (2) human drawn diverse visual prompts. \n\nEvaluation Code See URL\n\nLeaderBoard See URL\n\n\nEvaluation Server Please refer to URL to use our evaluation server.",
"## Source annotation\n\nIn 'source_image', we provide the source plain images along with the bounding box/mask annotations. Researchers can use such grounding information to match the special tokens such as '<obj>' in '\"question\"' entry of 'URL'. For example, '<obj>' can be replaced by textual coordinates to evaluate the region-level multimodal models."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# ViP-Bench: Making Large Multimodal Models Understand Arbitrary Visual Prompts\n\nViP-Bench a region level multimodal model evaulation benchmark curated by University of Wisconsin-Madison. We provides two kinds of visual prompts: (1) bounding boxes, and (2) human drawn diverse visual prompts. \n\nEvaluation Code See URL\n\nLeaderBoard See URL\n\n\nEvaluation Server Please refer to URL to use our evaluation server.",
"## Source annotation\n\nIn 'source_image', we provide the source plain images along with the bounding box/mask annotations. Researchers can use such grounding information to match the special tokens such as '<obj>' in '\"question\"' entry of 'URL'. For example, '<obj>' can be replaced by textual coordinates to evaluate the region-level multimodal models."
] | [
14,
97,
95
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n# ViP-Bench: Making Large Multimodal Models Understand Arbitrary Visual Prompts\n\nViP-Bench a region level multimodal model evaulation benchmark curated by University of Wisconsin-Madison. We provides two kinds of visual prompts: (1) bounding boxes, and (2) human drawn diverse visual prompts. \n\nEvaluation Code See URL\n\nLeaderBoard See URL\n\n\nEvaluation Server Please refer to URL to use our evaluation server.## Source annotation\n\nIn 'source_image', we provide the source plain images along with the bounding box/mask annotations. Researchers can use such grounding information to match the special tokens such as '<obj>' in '\"question\"' entry of 'URL'. For example, '<obj>' can be replaced by textual coordinates to evaluate the region-level multimodal models."
] |
bb8f0adfd7d8968d37e2be7a7bfa22ec7382f61b |
## Word Frequency Profile of Malayalam
The repo contains Malayalam words and their frequencies as obtained from AI4Bharat [Indic NLP corpus](https://github.com/AI4Bharat/indicnlp_corpus).
There is an associated Python script to plot the word frequency profile.
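The plotting script itself is not reproduced here; the snippet below is only a rough sketch of a rank-frequency (Zipf-style) plot over the word list, assuming the frequencies are stored as a two-column CSV of word and count (the actual file name and format in the repo may differ):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical file name and layout: one row per word with its corpus count.
df = pd.read_csv("ml_word_freq.csv", names=["word", "count"])

freq = df["count"].sort_values(ascending=False).to_numpy()
plt.loglog(range(1, len(freq) + 1), freq)
plt.xlabel("Rank")
plt.ylabel("Frequency")
plt.title("Malayalam word frequency profile")
plt.show()
```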
| kavyamanohar/Malayalam-word-freq | [
"license:cc-by-4.0",
"region:us"
] | 2023-12-02T06:39:28+00:00 | {"license": "cc-by-4.0"} | 2023-12-02T08:34:20+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
## Word Frequency Profile of Malayalam
The repo contains Malayalam words and their frequencies as obtained from AI4Bharat Indic NLP corpus.
There is an associated Python script to plot the word frequency profile.
| [
"## Word Frequency Profile of Malayalam\n\nThe repo contains Malayalam words and their frequencies as obtained from AI4Bharat Indic NLP corpus.\n\nThere is an associated python script to plot the word frequnecy profile."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"## Word Frequency Profile of Malayalam\n\nThe repo contains Malayalam words and their frequencies as obtained from AI4Bharat Indic NLP corpus.\n\nThere is an associated python script to plot the word frequnecy profile."
] | [
15,
51
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n## Word Frequency Profile of Malayalam\n\nThe repo contains Malayalam words and their frequencies as obtained from AI4Bharat Indic NLP corpus.\n\nThere is an associated python script to plot the word frequnecy profile."
] |
eda2ec2b838fc06fc2df1605a29a3a80833c10c8 |
[Couplet dataset]
- The original dataset comes from GitHub: https://github.com/wb14123/couplet-dataset/
- Traditional/Simplified Chinese conversion was done with OpenCC: https://github.com/yichen0831/opencc-python
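As a rough illustration of the Traditional/Simplified conversion step mentioned above, using the opencc-python package linked in this card (the exact conversion configuration used for this dataset is an assumption):

```python
from opencc import OpenCC

# 't2s' converts Traditional Chinese to Simplified; 's2t' does the reverse.
cc = OpenCC('t2s')
print(cc.convert('海內存知己'))  # prints the Simplified form of the input
```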
---
license: apache-2.0
---
| stuser2023/couplet-json | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"license:apache-2.0",
"region:us"
] | 2023-12-02T07:37:40+00:00 | {"language": ["zh"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"]} | 2023-12-02T14:44:43+00:00 | [] | [
"zh"
] | TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #region-us
|
[Couplet dataset]
- The original dataset comes from GitHub: URL
- Traditional/Simplified Chinese conversion was done with OpenCC: URL
---
license: apache-2.0
---
| [] | [
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #region-us \n"
] | [
42
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #region-us \n"
] |
4ce422513ee2347539dbdd1d1a5bd9d002cf297d | # Dataset Card for "llama2_7b_chat-boolq-results_jacksee"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | automated-research-group/llama2_7b_chat-boolq-results_jacksee | [
"region:us"
] | 2023-12-02T08:01:45+00:00 | {"dataset_info": {"config_name": "{'do_sample'=False, 'beams'=1}", "features": [{"name": "id", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "bool_accuracy", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 503592, "num_examples": 3270}], "download_size": 265378, "dataset_size": 503592}, "configs": [{"config_name": "{'do_sample'=False, 'beams'=1}", "data_files": [{"split": "train", "path": "{'do_sample'=False, 'beams'=1}/train-*"}]}]} | 2023-12-02T08:01:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "llama2_7b_chat-boolq-results_jacksee"
More Information needed | [
"# Dataset Card for \"llama2_7b_chat-boolq-results_jacksee\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"llama2_7b_chat-boolq-results_jacksee\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"llama2_7b_chat-boolq-results_jacksee\"\n\nMore Information needed"
] |
f9dab2c08d03b53dcad170568e4d146bfdeefd36 | # Dataset Card for "service_rates_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | paul-w-qs/service_rates_v1 | [
"region:us"
] | 2023-12-02T09:22:11+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "JSON_LABEL", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6139301590.43, "num_examples": 16245}], "download_size": 5645548181, "dataset_size": 6139301590.43}} | 2023-12-03T09:21:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "service_rates_v1"
More Information needed | [
"# Dataset Card for \"service_rates_v1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"service_rates_v1\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"service_rates_v1\"\n\nMore Information needed"
] |
d7b295d008206af9cb3bdddf57bccea3d99485a2 |
# ChatHaruhi Expanded Dataset 118K
62663 instances from the original ChatHaruhi-54K
42255 English Data from RoleLLM
13166 Chinese Data from
**github repo:**
https://github.com/LC1332/Chat-Haruhi-Suzumiya
Please star our GitHub repo if you find the dataset useful
## Regenerate Data
If you want to regenerate the data with a different context length, a different embedding model, or your own character,

we have now refactored the final data-generating pipeline.
RoleLLM Data was generated by
https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E4%BB%8EHF%E6%95%B0%E6%8D%AE%E6%A0%BC%E5%BC%8F%E5%BC%80%E5%A7%8B%E7%BB%84%E7%BB%87%E6%95%B0%E6%8D%AE(RoleLLM).ipynb
Chinese Data was generated by
https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E9%A2%9D%E5%A4%96%E4%B8%AD%E6%96%87%E6%95%B0%E6%8D%AE.ipynb
63K Haruhi Data was generated by
https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E4%BD%BF%E7%94%A8ChatHaruhi54K%E7%94%9F%E6%88%90%E6%95%B0%E6%8D%AE.ipynb
The data was generated at a speed of around 8 instances per second on a T4 GPU.
The code was partly implemented by [θ±θ§](https://github.com/goodnessSZW)
and finally verified by [LC1332](https://github.com/LC1332)
The English data was filtered by
https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/Prepare_English_Training_set.ipynb
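To experiment with the released data directly (rather than regenerating it), a minimal loading sketch; the Hugging Face id is the one this card is published under, and the split and column names are assumptions — print the dataset object to see what the repo actually provides:

```python
from datasets import load_dataset

# Split name assumed to be "train"; adjust after inspecting the repo.
ds = load_dataset("silk-road/ChatHaruhi-Expand-118K", split="train")
print(ds)     # number of rows and the actual column names
print(ds[0])  # one role-play training instance
```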
## Citation
Cite our paper if you use this dataset in your publication
```bibtex
@misc{li2023chatharuhi,
title={ChatHaruhi: Reviving Anime Character in Reality via Large Language Model},
author={Cheng Li and Ziang Leng and Chenxi Yan and Junyi Shen and Hao Wang and Weishi MI and Yaying Fei and Xiaoyang Feng and Song Yan and HaoSheng Wang and Linkang Zhan and Yaokai Jia and Pingyu Wu and Haozhen Sun},
year={2023},
eprint={2308.09597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | silk-road/ChatHaruhi-Expand-118K | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"language:en",
"license:cc-by-4.0",
"arxiv:2308.09597",
"region:us"
] | 2023-12-02T09:45:05+00:00 | {"language": ["zh", "en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "text2text-generation"]} | 2023-12-03T02:36:01+00:00 | [
"2308.09597"
] | [
"zh",
"en"
] | TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-cc-by-4.0 #arxiv-2308.09597 #region-us
|
# ChatHaruhi Expanded Dataset 118K
62663 instance from original ChatHaruhi-54K
42255 English Data from RoleLLM
13166 Chinese Data from
github repo:
URL
Please star our github repo if you found the dataset is useful
## Regenerate Data
If you want to regenerate data with different context length, different embedding model or using your own chracter
now we refactored the final data generating pipeline
RoleLLM Data was generated by
URL
Chinese Data was generated by
URL
63K Haruhi Data was generated by
URL
The data generated in speed around 8 instances per second under T4 GPU
The code was partly implemented by θ±θ§
and finally verified by LC1332
The English data was filtered by
URL
Cite our paper if you use this dataset in your publication
| [
"# ChatHaruhi Expanded Dataset 118K\n\n62663 instance from original ChatHaruhi-54K\n\n42255 English Data from RoleLLM\n\n13166 Chinese Data from \n\ngithub repo:\n\nURL\n\nPlease star our github repo if you found the dataset is useful",
"## Regenerate Data\n\nIf you want to regenerate data with different context length, different embedding model or using your own chracter\n\nnow we refactored the final data generating pipeline\n\nRoleLLM Data was generated by\n\nURL\n\nChinese Data was generated by\n\nURL\n\n63K Haruhi Data was generated by \n\nURL\n\nThe data generated in speed around 8 instances per second under T4 GPU\n\nThe code was partly implemented by θ±θ§\n\nand finally verified by LC1332\n\nThe English data was filtered by\n\nURL\n\n\nCite our paper if you use this dataset in your publication"
] | [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-cc-by-4.0 #arxiv-2308.09597 #region-us \n",
"# ChatHaruhi Expanded Dataset 118K\n\n62663 instance from original ChatHaruhi-54K\n\n42255 English Data from RoleLLM\n\n13166 Chinese Data from \n\ngithub repo:\n\nURL\n\nPlease star our github repo if you found the dataset is useful",
"## Regenerate Data\n\nIf you want to regenerate data with different context length, different embedding model or using your own chracter\n\nnow we refactored the final data generating pipeline\n\nRoleLLM Data was generated by\n\nURL\n\nChinese Data was generated by\n\nURL\n\n63K Haruhi Data was generated by \n\nURL\n\nThe data generated in speed around 8 instances per second under T4 GPU\n\nThe code was partly implemented by θ±θ§\n\nand finally verified by LC1332\n\nThe English data was filtered by\n\nURL\n\n\nCite our paper if you use this dataset in your publication"
] | [
68,
58,
126
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-cc-by-4.0 #arxiv-2308.09597 #region-us \n# ChatHaruhi Expanded Dataset 118K\n\n62663 instance from original ChatHaruhi-54K\n\n42255 English Data from RoleLLM\n\n13166 Chinese Data from \n\ngithub repo:\n\nURL\n\nPlease star our github repo if you found the dataset is useful## Regenerate Data\n\nIf you want to regenerate data with different context length, different embedding model or using your own chracter\n\nnow we refactored the final data generating pipeline\n\nRoleLLM Data was generated by\n\nURL\n\nChinese Data was generated by\n\nURL\n\n63K Haruhi Data was generated by \n\nURL\n\nThe data generated in speed around 8 instances per second under T4 GPU\n\nThe code was partly implemented by θ±θ§\n\nand finally verified by LC1332\n\nThe English data was filtered by\n\nURL\n\n\nCite our paper if you use this dataset in your publication"
] |
da66cd69a563bcedd8e8019bf6ee6af005872820 |
## Malayalam Phonetic Lexicon
This dataset contains words in Malayalam script and their pronunciations in the International Phonetic Alphabet (IPA)
The words in the lexicon are sourced from
1) The most frequent 100 thousand words from the [Indic NLP](https://github.com/AI4Bharat/indicnlp_corpus) corpus
2) Curated collection of word categories from [Mlmorph](https://pypi.org/project/mlmorph/) project
These pronunciations are created using the [Mlphon](https://pypi.org/project/mlphon/) Python library.
### Applications
- Ready-to-use pronunciation lexicons for ASR and TTS
- To train data-driven grapheme-to-phoneme models
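The lexicon is published as per-category, tab-separated configurations (for example `commonword`, `noun`, `verb`, `placename`), so a single category can be loaded on its own. A minimal sketch with the `datasets` library; the `train` split name is an assumption:

```python
from datasets import load_dataset

# Load only the "commonword" grapheme-phoneme pairs; other configurations such
# as "noun", "verb" or "placename" can be loaded the same way.
lexicon = load_dataset("smcproject/ml-phonetic-lexicon", "commonword", split="train")

# Each row pairs a Malayalam word with its IPA pronunciation.
print(lexicon[0])
```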
If you are using this dataset for any research, please cite:

```bibtex
@ARTICLE{kavya2022mlphon,
author={Manohar, Kavya and Jayan, A. R. and Rajan, Rajeev},
journal={IEEE Access},
title={Mlphon: A Multifunctional Grapheme-Phoneme Conversion Tool Using Finite State Transducers},
year={2022},
volume={10},
number={},
pages={97555-97575},
doi={10.1109/ACCESS.2022.3204403}}
``` | smcproject/ml-phonetic-lexicon | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:ml",
"license:cc-by-4.0",
"region:us"
] | 2023-12-02T10:36:41+00:00 | {"language": ["ml"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text2text-generation"], "pretty_name": "Malayalam Grapheme Phoneme Dataset", "configs": [{"config_name": "commonword", "data_files": "commonword.tsv", "sep": "\t"}, {"config_name": "english_loan", "data_files": "english_loan.tsv", "sep": "\t"}, {"config_name": "noun", "data_files": "noun.tsv", "sep": "\t"}, {"config_name": "personname", "data_files": "personname.tsv", "sep": "\t"}, {"config_name": "placename", "data_files": "placename.tsv", "sep": "\t"}, {"config_name": "pronoun", "data_files": "pronoun.tsv", "sep": "\t"}, {"config_name": "propernoun", "data_files": "propernoun.tsv", "sep": "\t"}, {"config_name": "sanskrit_loan", "data_files": "sanskrit_loan.tsv", "sep": "\t"}, {"config_name": "verb", "data_files": "verb.tsv", "sep": "\t"}]} | 2023-12-02T16:06:40+00:00 | [] | [
"ml"
] | TAGS
#task_categories-text2text-generation #size_categories-100K<n<1M #language-Malayalam #license-cc-by-4.0 #region-us
|
## Malayalam Phonetic Lexicon
This dataset contains words in Malayalam script and their pronunciation in International Phonetic Alphabet (IPA)
The words in the lexicon are sourced from
1) The most frequest 100 thousand words from Indic NLP corpus
2) Curated collection of word categories from Mlmorph project
This pronunciations are created using Mlphon python Library.
### Applications
- Ready to use pronunciation lexicons for ASR and TTS
- To train datadriven grapheme to phoneme models
If you are using this dataset for any research, cite
| [
"## Malayalam Phonetic Lexicon\n\nThis dataset contains words in Malayalam script and their pronunciation in International Phonetic Alphabet (IPA)\n\nThe words in the lexicon are sourced from \n1) The most frequest 100 thousand words from Indic NLP corpus\n2) Curated collection of word categories from Mlmorph project\n\nThis pronunciations are created using Mlphon python Library.",
"### Applications\n\n- Ready to use pronunciation lexicons for ASR and TTS\n- To train datadriven grapheme to phoneme models\n\n\n\nIf you are using this dataset for any research, cite"
] | [
"TAGS\n#task_categories-text2text-generation #size_categories-100K<n<1M #language-Malayalam #license-cc-by-4.0 #region-us \n",
"## Malayalam Phonetic Lexicon\n\nThis dataset contains words in Malayalam script and their pronunciation in International Phonetic Alphabet (IPA)\n\nThe words in the lexicon are sourced from \n1) The most frequest 100 thousand words from Indic NLP corpus\n2) Curated collection of word categories from Mlmorph project\n\nThis pronunciations are created using Mlphon python Library.",
"### Applications\n\n- Ready to use pronunciation lexicons for ASR and TTS\n- To train datadriven grapheme to phoneme models\n\n\n\nIf you are using this dataset for any research, cite"
] | [
46,
79,
45
] | [
"passage: TAGS\n#task_categories-text2text-generation #size_categories-100K<n<1M #language-Malayalam #license-cc-by-4.0 #region-us \n## Malayalam Phonetic Lexicon\n\nThis dataset contains words in Malayalam script and their pronunciation in International Phonetic Alphabet (IPA)\n\nThe words in the lexicon are sourced from \n1) The most frequest 100 thousand words from Indic NLP corpus\n2) Curated collection of word categories from Mlmorph project\n\nThis pronunciations are created using Mlphon python Library.### Applications\n\n- Ready to use pronunciation lexicons for ASR and TTS\n- To train datadriven grapheme to phoneme models\n\n\n\nIf you are using this dataset for any research, cite"
] |
6a95836fad5253d0b62dbdb473625942d0a642f9 | # Dataset Card for "midascontrolourbalancedgood"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dputilov/midascontrolourbalancedgood | [
"region:us"
] | 2023-12-02T11:21:02+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1210555308.839116, "num_examples": 2188}], "download_size": 1205399220, "dataset_size": 1210555308.839116}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-02T11:22:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "midascontrolourbalancedgood"
More Information needed | [
"# Dataset Card for \"midascontrolourbalancedgood\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"midascontrolourbalancedgood\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"midascontrolourbalancedgood\"\n\nMore Information needed"
] |
781d81041417dd061e518742afba245375043b08 |
# Dataset Card for Evaluation run of Azure99/blossom-v3_1-mistral-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Azure99/blossom-v3_1-mistral-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Azure99/blossom-v3_1-mistral-7b](https://huggingface.co/Azure99/blossom-v3_1-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v3_1-mistral-7b",
"harness_gsm8k_5",
split="train")
```
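The aggregated "results" configuration can be loaded the same way; the `latest` split name below follows the configuration declared in this repository:

```python
from datasets import load_dataset

# Aggregated metrics of the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_Azure99__blossom-v3_1-mistral-7b",
    "results",
    split="latest",
)
```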
## Latest results
These are the [latest results from run 2023-12-04T10:15:27.415703](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v3_1-mistral-7b/blob/main/results_2023-12-04T10-15-27.415703.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.46929492039423804,
"acc_stderr": 0.013746490739560038
},
"harness|gsm8k|5": {
"acc": 0.46929492039423804,
"acc_stderr": 0.013746490739560038
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Azure99__blossom-v3_1-mistral-7b | [
"region:us"
] | 2023-12-02T12:03:34+00:00 | {"pretty_name": "Evaluation run of Azure99/blossom-v3_1-mistral-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Azure99/blossom-v3_1-mistral-7b](https://huggingface.co/Azure99/blossom-v3_1-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Azure99__blossom-v3_1-mistral-7b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-04T10:15:27.415703](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v3_1-mistral-7b/blob/main/results_2023-12-04T10-15-27.415703.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.46929492039423804,\n \"acc_stderr\": 0.013746490739560038\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.46929492039423804,\n \"acc_stderr\": 0.013746490739560038\n }\n}\n```", "repo_url": "https://huggingface.co/Azure99/blossom-v3_1-mistral-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T12_03_31.211153", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-03-31.211153.parquet"]}, {"split": "2023_12_02T12_59_48.772371", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-59-48.772371.parquet"]}, {"split": "2023_12_02T22_05_15.192157", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-05-15.192157.parquet"]}, {"split": "2023_12_03T15_22_48.296021", "path": ["**/details_harness|gsm8k|5_2023-12-03T15-22-48.296021.parquet"]}, {"split": "2023_12_03T16_24_04.733758", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-24-04.733758.parquet"]}, {"split": "2023_12_04T10_15_27.415703", "path": ["**/details_harness|gsm8k|5_2023-12-04T10-15-27.415703.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-04T10-15-27.415703.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T12_03_31.211153", "path": ["results_2023-12-02T12-03-31.211153.parquet"]}, {"split": "2023_12_02T12_59_48.772371", "path": ["results_2023-12-02T12-59-48.772371.parquet"]}, {"split": "2023_12_02T22_05_15.192157", "path": ["results_2023-12-02T22-05-15.192157.parquet"]}, {"split": "2023_12_03T15_22_48.296021", "path": ["results_2023-12-03T15-22-48.296021.parquet"]}, {"split": "2023_12_03T16_24_04.733758", "path": ["results_2023-12-03T16-24-04.733758.parquet"]}, {"split": "2023_12_04T10_15_27.415703", "path": ["results_2023-12-04T10-15-27.415703.parquet"]}, {"split": "latest", "path": 
["results_2023-12-04T10-15-27.415703.parquet"]}]}]} | 2023-12-04T10:15:34+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Azure99/blossom-v3_1-mistral-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Azure99/blossom-v3_1-mistral-7b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-04T10:15:27.415703(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Azure99/blossom-v3_1-mistral-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v3_1-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-04T10:15:27.415703(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Azure99/blossom-v3_1-mistral-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v3_1-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-04T10:15:27.415703(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Azure99/blossom-v3_1-mistral-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v3_1-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-04T10:15:27.415703(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3f20cb309505b2a0da4f73df1b18c99521955758 |
# Dataset Card for BenchLMM
BenchLMM is a benchmarking dataset focusing on the cross-style visual capability of large multimodal models. It evaluates these models' performance in various visual contexts.
## Dataset Details
### Dataset Description
- **Curated by:** Rizhao Cai, Zirui Song, Dayan Guan, Zhenhao Chen, Xing Luo, Chenyu Yi, and Alex Kot.
- **Funded by:** Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.
- **Shared by:** AIFEG.
- **Language(s) (NLP):** English.
- **License:** Apache-2.0.
### Dataset Sources
- **Repository:** [GitHub - AIFEG/BenchLMM](https://github.com/AIFEG/BenchLMM)
- **Paper:** Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.
## Uses
### Direct Use
The dataset can be used to benchmark large multimodal models, especially focusing on their capability to interpret and respond to different visual styles.
## Dataset Structure
- **Directory Structure:**
- `baseline/`: Baseline code for LLaVA and InstructBLIP.
- `evaluate/`: Python code for model evaluation.
- `evaluate_results/`: Evaluation results of baseline models.
- `jsonl/`: JSONL files with questions, image locations, and answers.
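As a sketch of how the evaluation items in `jsonl/` might be consumed, the snippet below reads one JSONL file line by line; the file name is a hypothetical placeholder, since the card does not list the exact file names:

```python
import json
from pathlib import Path

# Hypothetical file name; the jsonl/ directory holds the question files.
jsonl_path = Path("jsonl") / "questions.jsonl"

with jsonl_path.open(encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Each record carries a question, the image location, and the reference answer.
        print(record)
        break  # show only the first record
```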
## Dataset Creation
### Curation Rationale
Developed to assess large multimodal models' performance in diverse visual contexts, helping to understand their capabilities and limitations.
### Source Data
#### Data Collection and Processing
The dataset consists of various visual questions and corresponding answers, structured to evaluate multimodal model performance.
## Bias, Risks, and Limitations
Users should consider the specific visual contexts and question types included in the dataset when interpreting model performance.
## Citation
**BibTeX:**
```bibtex
@misc{cai2023benchlmm,
  title={BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models},
  author={Rizhao Cai and Zirui Song and Dayan Guan and Zhenhao Chen and Xing Luo and Chenyu Yi and Alex Kot},
  year={2023},
  eprint={2312.02896},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
**APA:**
Cai, R., Song, Z., Guan, D., Chen, Z., Luo, X., Yi, C., & Kot, A. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv preprint arXiv:2312.02896.
## Acknowledgements
This research is supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.
| AIFEG/BenchLMM | [
"task_categories:visual-question-answering",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"arxiv:2312.02896",
"region:us"
] | 2023-12-02T12:15:36+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["visual-question-answering"], "pretty_name": "BenchLMM"} | 2023-12-06T18:02:22+00:00 | [
"2312.02896"
] | [
"en"
] | TAGS
#task_categories-visual-question-answering #size_categories-n<1K #language-English #license-apache-2.0 #arxiv-2312.02896 #region-us
|
# Dataset Card for BenchLMM
BenchLMM is a benchmarking dataset focusing on the cross-style visual capability of large multimodal models. It evaluates these models' performance in various visual contexts.
## Dataset Details
### Dataset Description
- Curated by: Rizhao Cai, Zirui Song, Dayan Guan, Zhenhao Chen, Xing Luo, Chenyu Yi, and Alex Kot.
- Funded by : Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.
- Shared by : AIFEG.
- Language(s) (NLP): English.
- License: Apache-2.0.
### Dataset Sources
- Repository: GitHub - AIFEG/BenchLMM
- Paper : Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.
## Uses
### Direct Use
The dataset can be used to benchmark large multimodal models, especially focusing on their capability to interpret and respond to different visual styles.
## Dataset Structure
- Directory Structure:
- 'baseline/': Baseline code for LLaVA and InstructBLIP.
- 'evaluate/': Python code for model evaluation.
- 'evaluate_results/': Evaluation results of baseline models.
- 'jsonl/': JSONL files with questions, image locations, and answers.
## Dataset Creation
### Curation Rationale
Developed to assess large multimodal models' performance in diverse visual contexts, helping to understand their capabilities and limitations.
### Source Data
#### Data Collection and Processing
The dataset consists of various visual questions and corresponding answers, structured to evaluate multimodal model performance.
## Bias, Risks, and Limitations
Users should consider the specific visual contexts and question types included in the dataset when interpreting model performance.
BibTeX:
@misc{cai2023benchlmm,
title={BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models},
author={Rizhao Cai and Zirui Song and Dayan Guan and Zhenhao Chen and Xing Luo and Chenyu Yi and Alex Kot},
year={2023},
eprint={2312.02896},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
APA:
Cai, R., Song, Z., Guan, D., Chen, Z., Luo, X., Yi, C., & Kot, A. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv preprint arXiv:2312.02896.
## Acknowledgements
This research is supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.
| [
"# Dataset Card for BenchLMM\n\nBenchLMM is a benchmarking dataset focusing on the cross-style visual capability of large multimodal models. It evaluates these models' performance in various visual contexts.",
"## Dataset Details",
"### Dataset Description\n\n- Curated by: Rizhao Cai, Zirui Song, Dayan Guan, Zhenhao Chen, Xing Luo, Chenyu Yi, and Alex Kot.\n- Funded by : Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.\n- Shared by : AIFEG.\n- Language(s) (NLP): English.\n- License: Apache-2.0.",
"### Dataset Sources \n\n- Repository: GitHub - AIFEG/BenchLMM\n- Paper : Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.",
"## Uses",
"### Direct Use\n\nThe dataset can be used to benchmark large multimodal models, especially focusing on their capability to interpret and respond to different visual styles.",
"## Dataset Structure\n\n- Directory Structure:\n - 'baseline/': Baseline code for LLaVA and InstructBLIP.\n - 'evaluate/': Python code for model evaluation.\n - 'evaluate_results/': Evaluation results of baseline models.\n - 'jsonl/': JSONL files with questions, image locations, and answers.",
"## Dataset Creation",
"### Curation Rationale\n\nDeveloped to assess large multimodal models' performance in diverse visual contexts, helping to understand their capabilities and limitations.",
"### Source Data",
"#### Data Collection and Processing\n\nThe dataset consists of various visual questions and corresponding answers, structured to evaluate multimodal model performance.",
"## Bias, Risks, and Limitations\n\nUsers should consider the specific visual contexts and question types included in the dataset when interpreting model performance.\n\nBibTeX:\n@misc{cai2023benchlmm,\ntitle={BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models},\nauthor={Rizhao Cai and Zirui Song and Dayan Guan and Zhenhao Chen and Xing Luo and Chenyu Yi and Alex Kot},\nyear={2023},\neprint={2312.02896},\narchivePrefix={arXiv},\nprimaryClass={cs.CV}\n}\n\nAPA:\nCai, R., Song, Z., Guan, D., Chen, Z., Luo, X., Yi, C., & Kot, A. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv preprint arXiv:2312.02896.",
"## Acknowledgements\n\nThis research is supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute."
] | [
"TAGS\n#task_categories-visual-question-answering #size_categories-n<1K #language-English #license-apache-2.0 #arxiv-2312.02896 #region-us \n",
"# Dataset Card for BenchLMM\n\nBenchLMM is a benchmarking dataset focusing on the cross-style visual capability of large multimodal models. It evaluates these models' performance in various visual contexts.",
"## Dataset Details",
"### Dataset Description\n\n- Curated by: Rizhao Cai, Zirui Song, Dayan Guan, Zhenhao Chen, Xing Luo, Chenyu Yi, and Alex Kot.\n- Funded by : Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.\n- Shared by : AIFEG.\n- Language(s) (NLP): English.\n- License: Apache-2.0.",
"### Dataset Sources \n\n- Repository: GitHub - AIFEG/BenchLMM\n- Paper : Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.",
"## Uses",
"### Direct Use\n\nThe dataset can be used to benchmark large multimodal models, especially focusing on their capability to interpret and respond to different visual styles.",
"## Dataset Structure\n\n- Directory Structure:\n - 'baseline/': Baseline code for LLaVA and InstructBLIP.\n - 'evaluate/': Python code for model evaluation.\n - 'evaluate_results/': Evaluation results of baseline models.\n - 'jsonl/': JSONL files with questions, image locations, and answers.",
"## Dataset Creation",
"### Curation Rationale\n\nDeveloped to assess large multimodal models' performance in diverse visual contexts, helping to understand their capabilities and limitations.",
"### Source Data",
"#### Data Collection and Processing\n\nThe dataset consists of various visual questions and corresponding answers, structured to evaluate multimodal model performance.",
"## Bias, Risks, and Limitations\n\nUsers should consider the specific visual contexts and question types included in the dataset when interpreting model performance.\n\nBibTeX:\n@misc{cai2023benchlmm,\ntitle={BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models},\nauthor={Rizhao Cai and Zirui Song and Dayan Guan and Zhenhao Chen and Xing Luo and Chenyu Yi and Alex Kot},\nyear={2023},\neprint={2312.02896},\narchivePrefix={arXiv},\nprimaryClass={cs.CV}\n}\n\nAPA:\nCai, R., Song, Z., Guan, D., Chen, Z., Luo, X., Yi, C., & Kot, A. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv preprint arXiv:2312.02896.",
"## Acknowledgements\n\nThis research is supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute."
] | [
52,
49,
4,
109,
79,
3,
35,
88,
5,
34,
4,
33,
228,
43
] | [
"passage: TAGS\n#task_categories-visual-question-answering #size_categories-n<1K #language-English #license-apache-2.0 #arxiv-2312.02896 #region-us \n# Dataset Card for BenchLMM\n\nBenchLMM is a benchmarking dataset focusing on the cross-style visual capability of large multimodal models. It evaluates these models' performance in various visual contexts.## Dataset Details### Dataset Description\n\n- Curated by: Rizhao Cai, Zirui Song, Dayan Guan, Zhenhao Chen, Xing Luo, Chenyu Yi, and Alex Kot.\n- Funded by : Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.\n- Shared by : AIFEG.\n- Language(s) (NLP): English.\n- License: Apache-2.0.### Dataset Sources \n\n- Repository: GitHub - AIFEG/BenchLMM\n- Paper : Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.## Uses### Direct Use\n\nThe dataset can be used to benchmark large multimodal models, especially focusing on their capability to interpret and respond to different visual styles.## Dataset Structure\n\n- Directory Structure:\n - 'baseline/': Baseline code for LLaVA and InstructBLIP.\n - 'evaluate/': Python code for model evaluation.\n - 'evaluate_results/': Evaluation results of baseline models.\n - 'jsonl/': JSONL files with questions, image locations, and answers.## Dataset Creation### Curation Rationale\n\nDeveloped to assess large multimodal models' performance in diverse visual contexts, helping to understand their capabilities and limitations.### Source Data#### Data Collection and Processing\n\nThe dataset consists of various visual questions and corresponding answers, structured to evaluate multimodal model performance."
] |
3b78e5ddd2360c684c1f947cc09a3b4c52e1c0ef | # MedInstruct
<hr>
This is the repo for *MedInstruct*, which is a dataset of synthetically generated medical instructions. The repo contains:
- The 52K medical instruction-response dataset [*MedInstruct-52k*](https://github.com/XZhang97666/AlpaCare/blob/master/data/MedInstruct-52k.json) used for fine-tuning *AlpaCare*, and the corresponding [clinician-crafted seed tasks](https://github.com/XZhang97666/AlpaCare/blob/master/data/med_seed.json) used to generate the instructions.
- A 217-task clinician-crafted free-form instruction evaluation test set, [*MedInstruct-test*](https://github.com/XZhang97666/AlpaCare/blob/master/data/MedInstruct-test.jsonl).
- The code for:
1. [medical task generation](https://github.com/XZhang97666/AlpaCare/tree/master/test_generation);
2. [fine-tuning LLaMA series models](https://github.com/XZhang97666/AlpaCare/tree/master/training);
3. [instruction-tuned model response generation](https://github.com/XZhang97666/AlpaCare/tree/master/test_generation);
4. [response evaluation via LLMs](https://github.com/XZhang97666/AlpaCare/tree/master/evaluation).
## Overview
*AlpaCare* comprises 4 models (7B/13B - LLaMA[1]/LLaMA-2[2]) tuned on a 52k medical instruction-following dataset, *MedInstruct-52k*, following Alpaca[3] and Self-Instruct[4]. You can find our model weights at:
| Version | Link |
| --- | --- |
| *AlpaCare* -LLaMA_7B |[https://huggingface.co/xz97/AlpaCare-llama1-7b](https://huggingface.co/xz97/AlpaCare-llama1-7b)|
| *AlpaCare* -LLaMA2_7B |[https://huggingface.co/xz97/AlpaCare-llama2-7b](https://huggingface.co/xz97/AlpaCare-llama2-7b)|
| *AlpaCare* -LLaMA_13B |[https://huggingface.co/xz97/AlpaCare-llama-13b](https://huggingface.co/xz97/AlpaCare-llama-13b)|
| *AlpaCare* -LLaMA2_13B |[https://huggingface.co/xz97/AlpaCare-llama2-13b](https://huggingface.co/xz97/AlpaCare-llama2-13b)|
[1]: LLaMA: Open and Efficient Foundation Language Models. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. https://arxiv.org/abs/2302.13971v1
[2] Llama 2: Open foundation and fine-tuned chat models. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. https://arxiv.org/abs/2307.09288
[3]: Stanford Alpaca: An Instruction-following LLaMA Model.Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto. https://crfm.stanford.edu/2023/03/13/alpaca.html
[4]: Self-Instruct: Aligning Language Model with Self Generated Instructions. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi. https://arxiv.org/abs/2212.10560
## Data Release
[*MedInstruct*](https://huggingface.co/datasets/xz97/MedInstruct) contains:
- MedInstruct datasets:
1. *MedInstruct-52K*: 52K medical instruction-following examples we used for fine-tuning the *AlpaCare* models
2. *MedInstruct-test*: 217 clinician-crafted free-form instruction evaluation tasks with reference responses generated by `gpt-4`, `gpt-3.5-turbo`, `text-davinci-003` and `claude-2`.
Each file is a list of dictionaries in JSON/JSONL format; each dictionary contains the following fields:
- `instruction`: `str`, the medical task the model should perform. Each instruction in *MedInstruct-52K* and *MedInstruct-test* is unique. The instructions in *MedInstruct-52K* are generated by OpenAI `gpt-4`, while those in *MedInstruct-test* are clinician-crafted.
- `input`: `str`, optional medical context or example for the task to further explain the instruction and increase the task diversity.
- `output`: `str`, the answer to the instruction as generated by `text-davinci-003`.
- `difficulity`: `str`, medical knowledge difficulty level on a scale from 1-5, evaluated by clinicians following the [scoring system](plots/scoresystem.pdf). This field only appears in *MedInstruct-test*.
- Instructions:
1. Clinician-crafted instruction seed set, containing 167 instances.
2. All medical instructions generated by OpenAI `gpt-4` with/without filtering.
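Both splits can be loaded straight from the Hub; the split names below follow the configuration declared in this repository, and the field names follow the description above:

```python
from datasets import load_dataset

# "train" maps to MedInstruct-52k and "test" to MedInstruct-test,
# as declared in this repository's configuration.
medinstruct = load_dataset("casey-martin/MedInstruct")

example = medinstruct["train"][0]
# Fields described above: instruction, input, output
# (the test split additionally carries the clinician-rated difficulity score).
print(example["instruction"])
print(example["output"])
```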
## Data Generation Process
We follow the data generation pipeline from [self-instruct](https://github.com/yizhongw/self-instruct) and [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), with several modifications:
*Instructions*:
- Beginning with a [medical task seed set](https://github.com/XZhang97666/AlpaCare/blob/master/data/med_seed.json) that contains 167 clinician-crafted clinical tasks.
- Utilizing the GPT-4 engine for instruction data generation.
- Crafting a [prompt](https://github.com/XZhang97666/AlpaCare/blob/master/prompt/instruction_prompt.txt) to serve as the medical instruction generation guidelines for GPT-4.
- To reduce costs in task generation and introduce greater randomness, 12 instructions are generated at once; for each generation round, 3 random seed instructions are selected from the seed set.
*Responses*:
- The GPT-3.5-turbo engine is used to synthesize responses, processing each instruction individually.
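A minimal sketch of this pipeline is shown below. The seed-set and prompt paths come from the links above, and the 3-seed/12-instruction batching follows the description; the helper structure, the seed-task field name, and the exact OpenAI client calls are assumptions rather than the authors' actual code:

```python
import json
import random
from openai import OpenAI

client = OpenAI()

# 167 clinician-crafted seed tasks and the instruction-generation guideline prompt.
with open("data/med_seed.json", encoding="utf-8") as f:
    seed_tasks = json.load(f)
with open("prompt/instruction_prompt.txt", encoding="utf-8") as f:
    guideline = f.read()


def generate_instruction_batch():
    # Sample 3 random seed instructions and ask GPT-4 for 12 new tasks at once.
    examples = random.sample(seed_tasks, 3)
    prompt = guideline + "\n\n" + "\n".join(t["instruction"] for t in examples)
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


def generate_response(instruction, context=""):
    # Responses are synthesized one instruction at a time with gpt-3.5-turbo.
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": (instruction + "\n" + context).strip()}],
    )
    return reply.choices[0].message.content
```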
Here's an illustration of an instruction-response pair:
```
{
instruction: "Rewrite this instruction on how to use an inhaler to be more patient-friendly and less technical."
input: "Exhale deeply, engage mouthpiece of inhaler, depress canister while simultaneously inhaling long and deep, hold the breath for ten seconds, then gently exhale"
output: "Here's a more patient-friendly version of the instruction on how to use an inhaler:
1. Take a deep breath and completely breathe out.
2. Put the mouthpiece of the inhaler into your mouth and seal your lips around it properly.
3. Press down the canister while breathing in slowly and deeply at the same time.
4. Hold your breath for ten seconds. 5. Finally, breathe out gently."
}
``` | casey-martin/MedInstruct | [
"task_categories:text-generation",
"language:en",
"medical",
"arxiv:2302.13971",
"arxiv:2307.09288",
"arxiv:2212.10560",
"region:us"
] | 2023-12-02T12:21:30+00:00 | {"language": ["en"], "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/MedInstruct-52k.json"}, {"split": "test", "path": "data/MedInstruct-test.jsonl"}]}], "tags": ["medical"]} | 2023-12-02T12:32:38+00:00 | [
"2302.13971",
"2307.09288",
"2212.10560"
] | [
"en"
] | TAGS
#task_categories-text-generation #language-English #medical #arxiv-2302.13971 #arxiv-2307.09288 #arxiv-2212.10560 #region-us
| MedInstruct
===========
---
This is the repo for *MedInstruct*, which is a dataset of synthetically generated medical instructions. The repo contains:
* The 52K medical instruction-response dataset *MedInstruct-52k* used for fine-tuning *AlpaCare*, and corresponding clinican-crafted seed task to generate instruction.
* A 217 clinical craft free-form instruction evaluation test set,*MedInstruct-test*.
* The code for:
1. medical task generation;
2. fine-tuning LLaMA series models;
3. instrcution-tuned model response generation;
4. response evaluation via LLMs.
Overview
--------
*AlpaCare* models contain 4 models (7B/13B - LLaMA[1]/LLaMA-2[2]) tuned on a 52k medical instruction-following dataset *MedInstruct-52k*, following Alpaca[3] and Self-Instruct[4]. You can find our model weights at:
[1]: LLaMA: Open and Efficient Foundation Language Models. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. URL
[2] Llama 2: Open foundation and fine-tuned chat models. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. URL
[3]: Stanford Alpaca: An Instruction-following LLaMA Model.Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto. URL
[4]: Self-Instruct: Aligning Language Model with Self Generated Instructions. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi. URL
Data Release
------------
*MedInstruct* contains:
* MedInstruct datasets:
1. *MedInstruct-52K*: 52 medical instruction-following data we used for fine-tuning *AlpaCare* models
2. *MedInstruct-test*: 217 clinican craft free-form instruction evulation tasks with reference responses generated by 'gpt-4', 'gpt-tubro-3.5', 'text-davinci-003' and 'claude-2'.All files is a list of dictionaries in JSON/JSONL format, each dictionary contains the following fields:
+ 'instruction': 'str', the medical task the model should perform. Each of instrcutions in *MedInstruct-52K* and *MedInstruct-test* is unique. The instrcutions in *MedInstruct-52K* are generated by OpenAI 'gpt-4', while *MedInstruct-test* are clinian-craft.
+ 'input': 'str', optional medical context or example for the task to further explain the instruction and increase the task diversity.
+ 'output': 'str', the answer to the instruction as generated by 'text-davinci-003'.
+ 'difficulity': 'str', medical knolwedge difficulty level scale from 1-5 evulated by clinicans following the scoring system. This only appear in *MedInstruct-test*.
* Instructions:
1. Clinican-crafted instruction seed set, containing 167 instances.
2. All medical instructions generated by OpenAI 'gpt-4' with/without filtering.
Data Generation Process
-----------------------
We following the data generation pipeline from self-instruct and Alpaca, and modified several points:
*Instructions*:
* Beginning with a medical task seed set that contains 167 clinical-craft clinical tasks.
* Utilizing the GPT-4 engine for instruction data generation.
* Crafting a prompt to serve as the medical instruction generation guidelines for GPT-4.
* To reduce costs in task generation and introduce greater randomness, 12 instructions are generated at once. This is followed by the selection of 3 random seed instructions from the seed set.
*Responses*:
* The GPT-3.5-turbo engine is used to synthesize responses, processing each instruction individually.
Here's an illustration of an instruction-response pair:
| [] | [
"TAGS\n#task_categories-text-generation #language-English #medical #arxiv-2302.13971 #arxiv-2307.09288 #arxiv-2212.10560 #region-us \n"
] | [
49
] | [
"passage: TAGS\n#task_categories-text-generation #language-English #medical #arxiv-2302.13971 #arxiv-2307.09288 #arxiv-2212.10560 #region-us \n"
] |
a56d10d578f36eabcaf753f65bfdb4c839c6b3e8 |
# Dataset Card for Evaluation run of ericzzz/falcon-rw-1b-instruct-openorca
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ericzzz/falcon-rw-1b-instruct-openorca
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [ericzzz/falcon-rw-1b-instruct-openorca](https://huggingface.co/ericzzz/falcon-rw-1b-instruct-openorca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T12:35:28.593271](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca/blob/main/results_2023-12-02T12-35-28.593271.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.03411675511751327,
"acc_stderr": 0.005000212600773262
},
"harness|gsm8k|5": {
"acc": 0.03411675511751327,
"acc_stderr": 0.005000212600773262
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca | [
"region:us"
] | 2023-12-02T12:35:32+00:00 | {"pretty_name": "Evaluation run of ericzzz/falcon-rw-1b-instruct-openorca", "dataset_summary": "Dataset automatically created during the evaluation run of model [ericzzz/falcon-rw-1b-instruct-openorca](https://huggingface.co/ericzzz/falcon-rw-1b-instruct-openorca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T12:35:28.593271](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca/blob/main/results_2023-12-02T12-35-28.593271.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.03411675511751327,\n \"acc_stderr\": 0.005000212600773262\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03411675511751327,\n \"acc_stderr\": 0.005000212600773262\n }\n}\n```", "repo_url": "https://huggingface.co/ericzzz/falcon-rw-1b-instruct-openorca", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T12_35_28.593271", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-35-28.593271.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-35-28.593271.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T12_35_28.593271", "path": ["results_2023-12-02T12-35-28.593271.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T12-35-28.593271.parquet"]}]}]} | 2023-12-02T12:35:45+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of ericzzz/falcon-rw-1b-instruct-openorca
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model ericzzz/falcon-rw-1b-instruct-openorca on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T12:35:28.593271(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of ericzzz/falcon-rw-1b-instruct-openorca",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ericzzz/falcon-rw-1b-instruct-openorca on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:35:28.593271(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of ericzzz/falcon-rw-1b-instruct-openorca",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ericzzz/falcon-rw-1b-instruct-openorca on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:35:28.593271(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
27,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ericzzz/falcon-rw-1b-instruct-openorca## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ericzzz/falcon-rw-1b-instruct-openorca on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T12:35:28.593271(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
22e444371f0366842101f6ea756cc5a90aeee9db |
# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0",
"harness_gsm8k_5",
split="train")
```
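The aggregated metrics live in the additional "results" configuration; as a minimal sketch (the exact record fields may vary between runs):
```python
from datasets import load_dataset

# The "latest" split of the "results" configuration always points to the
# most recent evaluation run for this model.
results = load_dataset(
    "open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0",
    "results",
    split="latest",
)
print(results[0])
```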
## Latest results
These are the [latest results from run 2023-12-02T12:37:10.361445](https://huggingface.co/datasets/open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0/blob/main/results_2023-12-02T12-37-10.361445.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0 | [
"region:us"
] | 2023-12-02T12:37:13+00:00 | {"pretty_name": "Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T12:37:10.361445](https://huggingface.co/datasets/open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0/blob/main/results_2023-12-02T12-37-10.361445.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T12_37_10.361445", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-37-10.361445.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-37-10.361445.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T12_37_10.361445", "path": ["results_2023-12-02T12-37-10.361445.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T12-37-10.361445.parquet"]}]}]} | 2023-12-02T12:37:18+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T12:37:10.361445(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:37:10.361445(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:37:10.361445(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
38,
31,
187,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T12:37:10.361445(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b852cae090a09b06d443ccc8dcef36bc02ef74e2 | 118k German Quora questions and answers covering a variety of topics.
Example:
```
Frage: Was geschah kurz vor Einsteins Tod?
Antwort:
Einstein hatte an seinem Sterbebett seinen Stift und seine Papiere auf dem kleinen Beistelltisch liegen lassen, bevor er von uns ging. In den Papieren ging es um seine Leidenschaft, also seine "Einheitliche Feldtheorie". Am Abend vor seinem Tod schrieb und kritzelte Einstein fleißig an seinen Gedanken und Arbeiten zu dieser Theorie. Wenig später sagte er zu seiner Krankenschwester: "...ich glaube, ich werde mich eine Weile ausruhen", und legte seine Sachen auf dem Tisch ab. Es gibt eine schöne Geschichte, die diese Krankenschwester gerne erzählte; es ist eine, die auch ich gerne erzähle. Diese Schwester war in der Tat die letzte Person, die sich mit Einstein unterhalten hat. Sie hatte Einstein zum Fenster des Krankenhauszimmers geschoben, damit er vom Bett die Aussicht auf den kleinen, runden Garten bewundern konnte. "Professor Einstein, glauben Sie, dass Gott diesen Garten gemacht hat?" Er antwortete: "Ja, Gott ist sowohl der Gärtner als auch der Garten", woraufhin die Krankenschwester erwiderte: "Oh, so habe ich das noch gar nicht gesehen", woraufhin Einstein erwiderte: "Ja, und ich habe mein ganzes Leben damit verbracht, einen Blick auf ihn bei seiner Arbeit zu erhaschen". Vielen Dank an Richard Malcolm Smythe für das Verfassen der Originalantwort.
Thema:
Wissenschaft
```
```
Frage: Welches war das längste Fußballspiel im Profi-Fußball?
Antwort:
Die Partie zwischen dem 1. FC Nürnberg und dem Hamburger SV 1922 im Grunewaldstadion in Berlin ging in die Geschichte als das längste Fußballspiel aller Zeiten ein. Keine der beiden Mannschaften konnte sich im Finale um die Deutsche Meisterschaft einen klaren Vorteil erspielen. Nach 190 absolvierten Minuten wurde das Spiel wegen Erschöpfung und Dunkelheit abgebrochen. Da es keinen Sieger gab, sollte der Gewinner in einem Wiederholungsspiel ermittelt werden. Allerdings wurde auch das Wiederholungsspiel beim Stand von 1:1 in der Verlängerung abgebrochen. In den Geschichtsbüchern des DFB findet sich im Jahr 1922 daher kein Deutscher Meister. Formal wurde der Titel aber dem HSV zugesprochen.
Thema:
Europ%C3%A4ischer-Fu%C3%9Fball
```
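A minimal sketch for loading the dataset with the `datasets` library (the split name `train` is an assumption; adjust it to the repository's actual layout):
```python
from datasets import load_dataset

# Load the German Quora Q&A pairs; the "train" split name is an assumption.
ds = load_dataset("SebastianBodza/Quora_Deutsch_ger", split="train")
print(ds[0])
```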
The quality of the answers varies considerably in places -> further filtering is necessary
~~Some answers were not loaded completely -> filter for "(mehr lesen)" in the answers~~ **Filtered** | SebastianBodza/Quora_Deutsch_ger | [
"region:us"
] | 2023-12-02T12:38:56+00:00 | {} | 2023-12-08T23:19:55+00:00 | [] | [] | TAGS
#region-us
| 118k Deutsche Quora Fragen und Antworten von unterschiedlichen Themen.
Beispiel:
Die QualitΓ€t der Antworten varriert teilweise stark -> weitere Filterung notwendig
~~Teilweise wurden die Antworten nicht vollst. geladen -> "(mehr lesen)" in den Antworten filtern~~ Gefiltert | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
e525c596fc193fdc533e378fc27075948864140f |
# Dataset Card for Evaluation run of GeneZC/MiniChat-1.5-3B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/GeneZC/MiniChat-1.5-3B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [GeneZC/MiniChat-1.5-3B](https://huggingface.co/GeneZC/MiniChat-1.5-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B",
"harness_gsm8k_5",
split="train")
```
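As a sketch of how the per-sample details loaded above could be inspected (the column names are run-dependent and not guaranteed):
```python
from datasets import load_dataset

# Re-load the per-sample details and view them as a pandas DataFrame.
details = load_dataset(
    "open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B",
    "harness_gsm8k_5",
    split="train",
)
df = details.to_pandas()
print(df.shape)
print(df.columns.tolist())
```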
## Latest results
These are the [latest results from run 2023-12-02T12:48:18.451769](https://huggingface.co/datasets/open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B/blob/main/results_2023-12-02T12-48-18.451769.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.24184988627748294,
"acc_stderr": 0.011794861371318703
},
"harness|gsm8k|5": {
"acc": 0.24184988627748294,
"acc_stderr": 0.011794861371318703
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B | [
"region:us"
] | 2023-12-02T12:48:21+00:00 | {"pretty_name": "Evaluation run of GeneZC/MiniChat-1.5-3B", "dataset_summary": "Dataset automatically created during the evaluation run of model [GeneZC/MiniChat-1.5-3B](https://huggingface.co/GeneZC/MiniChat-1.5-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T12:48:18.451769](https://huggingface.co/datasets/open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B/blob/main/results_2023-12-02T12-48-18.451769.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24184988627748294,\n \"acc_stderr\": 0.011794861371318703\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.24184988627748294,\n \"acc_stderr\": 0.011794861371318703\n }\n}\n```", "repo_url": "https://huggingface.co/GeneZC/MiniChat-1.5-3B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T12_48_18.451769", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-48-18.451769.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-48-18.451769.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T12_48_18.451769", "path": ["results_2023-12-02T12-48-18.451769.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T12-48-18.451769.parquet"]}]}]} | 2023-12-02T12:48:25+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of GeneZC/MiniChat-1.5-3B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model GeneZC/MiniChat-1.5-3B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T12:48:18.451769(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of GeneZC/MiniChat-1.5-3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model GeneZC/MiniChat-1.5-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:48:18.451769(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of GeneZC/MiniChat-1.5-3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model GeneZC/MiniChat-1.5-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:48:18.451769(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
19,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of GeneZC/MiniChat-1.5-3B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model GeneZC/MiniChat-1.5-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T12:48:18.451769(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d9e2f847fa88692e4440a70e6e43eb5a7464f7ce |
# Dataset Card for Evaluation run of amazon/MistralLite
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/amazon/MistralLite
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [amazon/MistralLite](https://huggingface.co/amazon/MistralLite) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_amazon__MistralLite",
"harness_gsm8k_5",
split="train")
```
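The aggregated metrics mentioned in the summary are stored in the "results" configuration; a minimal sketch (the exact record fields may differ between runs):
```python
from datasets import load_dataset

# The "latest" split of the "results" configuration points to the most
# recent evaluation run for this model.
results = load_dataset(
    "open-llm-leaderboard/details_amazon__MistralLite",
    "results",
    split="latest",
)
print(results[0])
```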
## Latest results
These are the [latest results from run 2023-12-02T12:51:23.360186](https://huggingface.co/datasets/open-llm-leaderboard/details_amazon__MistralLite/blob/main/results_2023-12-02T12-51-23.360186.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.01061410159211524,
"acc_stderr": 0.002822713322387704
},
"harness|gsm8k|5": {
"acc": 0.01061410159211524,
"acc_stderr": 0.002822713322387704
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_amazon__MistralLite | [
"region:us"
] | 2023-12-02T12:51:26+00:00 | {"pretty_name": "Evaluation run of amazon/MistralLite", "dataset_summary": "Dataset automatically created during the evaluation run of model [amazon/MistralLite](https://huggingface.co/amazon/MistralLite) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_amazon__MistralLite\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T12:51:23.360186](https://huggingface.co/datasets/open-llm-leaderboard/details_amazon__MistralLite/blob/main/results_2023-12-02T12-51-23.360186.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.01061410159211524,\n \"acc_stderr\": 0.002822713322387704\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01061410159211524,\n \"acc_stderr\": 0.002822713322387704\n }\n}\n```", "repo_url": "https://huggingface.co/amazon/MistralLite", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T12_51_23.360186", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-51-23.360186.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-51-23.360186.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T12_51_23.360186", "path": ["results_2023-12-02T12-51-23.360186.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T12-51-23.360186.parquet"]}]}]} | 2023-12-02T12:51:30+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of amazon/MistralLite
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model amazon/MistralLite on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T12:51:23.360186(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of amazon/MistralLite",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model amazon/MistralLite on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:51:23.360186(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of amazon/MistralLite",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model amazon/MistralLite on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:51:23.360186(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
16,
31,
165,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of amazon/MistralLite## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model amazon/MistralLite on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T12:51:23.360186(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
07c61badec0196cfd103b8fe2e34366432dc679f |
# Dataset Card for Evaluation run of 01-ai/Yi-6B-200K
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/01-ai/Yi-6B-200K
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [01-ai/Yi-6B-200K](https://huggingface.co/01-ai/Yi-6B-200K) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_01-ai__Yi-6B-200K",
"harness_gsm8k_5",
split="train")
```
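Since this repository accumulates several runs, here is a minimal sketch for listing the timestamped splits and loading a specific one instead of the "latest" alias:
```python
from datasets import get_dataset_split_names, load_dataset

# Each evaluation run is stored as a timestamped split of the configuration.
splits = get_dataset_split_names(
    "open-llm-leaderboard/details_01-ai__Yi-6B-200K", "harness_gsm8k_5"
)
print(splits)
# Load one specific run rather than the "latest" alias.
run = load_dataset(
    "open-llm-leaderboard/details_01-ai__Yi-6B-200K",
    "harness_gsm8k_5",
    split=splits[0],
)
print(run)
```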
## Latest results
These are the [latest results from run 2023-12-02T12:58:15.129366](https://huggingface.co/datasets/open-llm-leaderboard/details_01-ai__Yi-6B-200K/blob/main/results_2023-12-02T12-58-15.129366.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3032600454890068,
"acc_stderr": 0.0126615026634187
},
"harness|gsm8k|5": {
"acc": 0.3032600454890068,
"acc_stderr": 0.0126615026634187
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_01-ai__Yi-6B-200K | [
"region:us"
] | 2023-12-02T12:56:53+00:00 | {"pretty_name": "Evaluation run of 01-ai/Yi-6B-200K", "dataset_summary": "Dataset automatically created during the evaluation run of model [01-ai/Yi-6B-200K](https://huggingface.co/01-ai/Yi-6B-200K) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_01-ai__Yi-6B-200K\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T12:58:15.129366](https://huggingface.co/datasets/open-llm-leaderboard/details_01-ai__Yi-6B-200K/blob/main/results_2023-12-02T12-58-15.129366.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3032600454890068,\n \"acc_stderr\": 0.0126615026634187\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3032600454890068,\n \"acc_stderr\": 0.0126615026634187\n }\n}\n```", "repo_url": "https://huggingface.co/01-ai/Yi-6B-200K", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T12_56_50.280611", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-56-50.280611.parquet"]}, {"split": "2023_12_02T12_57_04.291453", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-57-04.291453.parquet"]}, {"split": "2023_12_02T12_57_29.899454", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-57-29.899454.parquet"]}, {"split": "2023_12_02T12_58_15.129366", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-58-15.129366.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-58-15.129366.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T12_56_50.280611", "path": ["results_2023-12-02T12-56-50.280611.parquet"]}, {"split": "2023_12_02T12_57_04.291453", "path": ["results_2023-12-02T12-57-04.291453.parquet"]}, {"split": "2023_12_02T12_57_29.899454", "path": ["results_2023-12-02T12-57-29.899454.parquet"]}, {"split": "2023_12_02T12_58_15.129366", "path": ["results_2023-12-02T12-58-15.129366.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T12-58-15.129366.parquet"]}]}]} | 2023-12-02T12:58:21+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of 01-ai/Yi-6B-200K
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model 01-ai/Yi-6B-200K on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T12:58:15.129366(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of 01-ai/Yi-6B-200K",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model 01-ai/Yi-6B-200K on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:58:15.129366(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of 01-ai/Yi-6B-200K",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model 01-ai/Yi-6B-200K on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:58:15.129366(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
19,
31,
168,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of 01-ai/Yi-6B-200K## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model 01-ai/Yi-6B-200K on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T12:58:15.129366(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8c953fc3939f37575e3a1c8e6ea2aa3fcba94841 |
# Dataset Card for Evaluation run of Azure99/blossom-v3-mistral-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Azure99/blossom-v3-mistral-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Azure99/blossom-v3-mistral-7b](https://huggingface.co/Azure99/blossom-v3-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v3-mistral-7b",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T12:57:13.954407](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v3-mistral-7b/blob/main/results_2023-12-02T12-57-13.954407.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4670204700530705,
"acc_stderr": 0.013742492794163416
},
"harness|gsm8k|5": {
"acc": 0.4670204700530705,
"acc_stderr": 0.013742492794163416
}
}
```
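
The aggregated numbers above are also exposed through the "results" configuration listed in this card's metadata. As a minimal sketch (assuming the "results" config and its "latest" split as declared above; the exact column layout of the results parquet is not documented here, so the snippet only inspects what it finds), you could look at the most recent aggregated run like this:

```python
from datasets import load_dataset

# Minimal sketch: load the aggregated "results" configuration, "latest" split.
# The parquet's column layout is not documented on this card, so we inspect
# what is present rather than assuming specific field names.
results = load_dataset(
    "open-llm-leaderboard/details_Azure99__blossom-v3-mistral-7b",
    "results",
    split="latest",
)
print(results.features)  # available columns
print(results[0])        # the most recent aggregated run
```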
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Azure99__blossom-v3-mistral-7b | [
"region:us"
] | 2023-12-02T12:57:16+00:00 | {"pretty_name": "Evaluation run of Azure99/blossom-v3-mistral-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Azure99/blossom-v3-mistral-7b](https://huggingface.co/Azure99/blossom-v3-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Azure99__blossom-v3-mistral-7b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T12:57:13.954407](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v3-mistral-7b/blob/main/results_2023-12-02T12-57-13.954407.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4670204700530705,\n \"acc_stderr\": 0.013742492794163416\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4670204700530705,\n \"acc_stderr\": 0.013742492794163416\n }\n}\n```", "repo_url": "https://huggingface.co/Azure99/blossom-v3-mistral-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T12_57_13.954407", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-57-13.954407.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-57-13.954407.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T12_57_13.954407", "path": ["results_2023-12-02T12-57-13.954407.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T12-57-13.954407.parquet"]}]}]} | 2023-12-02T12:57:21+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Azure99/blossom-v3-mistral-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Azure99/blossom-v3-mistral-7b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T12:57:13.954407(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Azure99/blossom-v3-mistral-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v3-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:57:13.954407(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Azure99/blossom-v3-mistral-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v3-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:57:13.954407(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Azure99/blossom-v3-mistral-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v3-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T12:57:13.954407(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
2b79d6f0be0e0ad82f1314a5a919639884aebf7d |
# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-v1.0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-v1.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [llm-jp/llm-jp-13b-instruct-full-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-v1.0",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:00:01.308695](https://huggingface.co/datasets/open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-v1.0/blob/main/results_2023-12-02T13-00-01.308695.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
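
Besides the "latest" alias, each run is also reachable under a split named after its timestamp (here "2023_12_02T13_00_01.308695", per the configs listed in this card's metadata). A small sketch, assuming that split name, for loading the per-example details of this particular run:

```python
from datasets import load_dataset

# Sketch: load the per-example GSM8K details of the single run recorded here,
# addressed by its timestamped split. With only one run, this is the same data
# that the "latest" split points to.
details = load_dataset(
    "open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-v1.0",
    "harness_gsm8k_5",
    split="2023_12_02T13_00_01.308695",
)
print(len(details))  # number of evaluated examples
```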
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-v1.0 | [
"region:us"
] | 2023-12-02T13:00:04+00:00 | {"pretty_name": "Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-v1.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [llm-jp/llm-jp-13b-instruct-full-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-v1.0\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:00:01.308695](https://huggingface.co/datasets/open-llm-leaderboard/details_llm-jp__llm-jp-13b-instruct-full-jaster-v1.0/blob/main/results_2023-12-02T13-00-01.308695.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-v1.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_00_01.308695", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-00-01.308695.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-00-01.308695.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_00_01.308695", "path": ["results_2023-12-02T13-00-01.308695.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-00-01.308695.parquet"]}]}]} | 2023-12-02T13:00:09+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-v1.0
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model llm-jp/llm-jp-13b-instruct-full-jaster-v1.0 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:00:01.308695(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-v1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model llm-jp/llm-jp-13b-instruct-full-jaster-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:00:01.308695(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-v1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model llm-jp/llm-jp-13b-instruct-full-jaster-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:00:01.308695(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
31,
31,
180,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of llm-jp/llm-jp-13b-instruct-full-jaster-v1.0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model llm-jp/llm-jp-13b-instruct-full-jaster-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:00:01.308695(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4567c245d64f1f990f4be706cde9c61eca4f8235 |
# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-7b-v3-1-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:00:25.330681](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-7b-v3-1-7B/blob/main/results_2023-12-02T13-00-25.330681.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.530705079605762,
"acc_stderr": 0.013746490739560035
},
"harness|gsm8k|5": {
"acc": 0.530705079605762,
"acc_stderr": 0.013746490739560035
}
}
```
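
If you would rather discover the available configurations and splits programmatically instead of reading them off this card, a sketch along these lines should work; it relies only on standard `datasets` helper functions, and the names it prints should match the "harness_gsm8k_5" and "results" configs listed above:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

# Sketch: enumerate the configurations and their splits for this details repo.
repo = "open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-7b-v3-1-7B"
for config in get_dataset_config_names(repo):
    print(config, get_dataset_split_names(repo, config))
```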
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-7b-v3-1-7B | [
"region:us"
] | 2023-12-02T13:00:28+00:00 | {"pretty_name": "Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-7b-v3-1-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:00:25.330681](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-7b-v3-1-7B/blob/main/results_2023-12-02T13-00-25.330681.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.530705079605762,\n \"acc_stderr\": 0.013746490739560035\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.530705079605762,\n \"acc_stderr\": 0.013746490739560035\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_00_25.330681", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-00-25.330681.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-00-25.330681.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_00_25.330681", "path": ["results_2023-12-02T13-00-25.330681.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-00-25.330681.parquet"]}]}]} | 2023-12-02T13:00:33+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:00:25.330681(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:00:25.330681(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:00:25.330681(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
30,
31,
179,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:00:25.330681(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
a000406d0d74250d63fe118be7206eb50864f391 |
# Dataset Card for Evaluation run of Weyaxi/OpenOrca-Zephyr-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/OpenOrca-Zephyr-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/OpenOrca-Zephyr-7B](https://huggingface.co/Weyaxi/OpenOrca-Zephyr-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__OpenOrca-Zephyr-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:01:40.868867](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__OpenOrca-Zephyr-7B/blob/main/results_2023-12-02T13-01-40.868867.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4624715693707354,
"acc_stderr": 0.013733636059107756
},
"harness|gsm8k|5": {
"acc": 0.4624715693707354,
"acc_stderr": 0.013733636059107756
}
}
```
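
The accuracy above is an aggregate; the per-example records behind it sit in the "harness_gsm8k_5" configuration. As a sketch (the per-example schema is not documented on this card, so the snippet prints it rather than assuming field names):

```python
from datasets import load_dataset

# Sketch: look at the per-example details that the aggregate accuracy is
# computed from. Field names are intentionally not assumed here.
details = load_dataset(
    "open-llm-leaderboard/details_Weyaxi__OpenOrca-Zephyr-7B",
    "harness_gsm8k_5",
    split="latest",
)
print(details)                  # split size and features
print(list(details[0].keys()))  # field names of one evaluated example
```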
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__OpenOrca-Zephyr-7B | [
"region:us"
] | 2023-12-02T13:01:43+00:00 | {"pretty_name": "Evaluation run of Weyaxi/OpenOrca-Zephyr-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/OpenOrca-Zephyr-7B](https://huggingface.co/Weyaxi/OpenOrca-Zephyr-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__OpenOrca-Zephyr-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:01:40.868867](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__OpenOrca-Zephyr-7B/blob/main/results_2023-12-02T13-01-40.868867.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4624715693707354,\n \"acc_stderr\": 0.013733636059107756\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4624715693707354,\n \"acc_stderr\": 0.013733636059107756\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/OpenOrca-Zephyr-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_01_40.868867", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-01-40.868867.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-01-40.868867.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_01_40.868867", "path": ["results_2023-12-02T13-01-40.868867.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-01-40.868867.parquet"]}]}]} | 2023-12-02T13:01:47+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/OpenOrca-Zephyr-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/OpenOrca-Zephyr-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:01:40.868867(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/OpenOrca-Zephyr-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenOrca-Zephyr-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:01:40.868867(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/OpenOrca-Zephyr-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenOrca-Zephyr-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:01:40.868867(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/OpenOrca-Zephyr-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenOrca-Zephyr-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:01:40.868867(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
45095aebc2136d28a2c1ac948ccf17baf9f4f1eb |
# Dataset Card for Evaluation run of Weyaxi/test-help-steer-filtered-orig
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/test-help-steer-filtered-orig
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/test-help-steer-filtered-orig](https://huggingface.co/Weyaxi/test-help-steer-filtered-orig) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:28:23.925402](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig/blob/main/results_2023-12-02T13-28-23.925402.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.09628506444275967,
"acc_stderr": 0.008125264128215877
},
"harness|gsm8k|5": {
"acc": 0.09628506444275967,
"acc_stderr": 0.008125264128215877
}
}
```
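The aggregated metrics above are also stored in the "results" configuration; below is a minimal sketch of loading them directly (the config name "results" and the "latest" split are taken from this repository's configuration metadata):
```python
from datasets import load_dataset
# The "results" config holds the aggregated metrics; the "latest" split points to the most recent run.
results = load_dataset("open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig",
	"results",
	split="latest")
print(results[0])
```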
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig | [
"region:us"
] | 2023-12-02T13:28:26+00:00 | {"pretty_name": "Evaluation run of Weyaxi/test-help-steer-filtered-orig", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/test-help-steer-filtered-orig](https://huggingface.co/Weyaxi/test-help-steer-filtered-orig) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:28:23.925402](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig/blob/main/results_2023-12-02T13-28-23.925402.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.09628506444275967,\n \"acc_stderr\": 0.008125264128215877\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09628506444275967,\n \"acc_stderr\": 0.008125264128215877\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/test-help-steer-filtered-orig", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_28_23.925402", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-28-23.925402.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-28-23.925402.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_28_23.925402", "path": ["results_2023-12-02T13-28-23.925402.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-28-23.925402.parquet"]}]}]} | 2023-12-02T13:28:33+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/test-help-steer-filtered-orig
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/test-help-steer-filtered-orig on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:28:23.925402 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/test-help-steer-filtered-orig",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/test-help-steer-filtered-orig on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:28:23.925402(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/test-help-steer-filtered-orig",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/test-help-steer-filtered-orig on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:28:23.925402(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
25,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/test-help-steer-filtered-orig## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/test-help-steer-filtered-orig on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:28:23.925402(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6f512e2f3314c1d45188682a01aa653e95165242 |
# Dataset Card for Evaluation run of Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B](https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-Nebula-v2-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:35:39.622054](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-Nebula-v2-7B/blob/main/results_2023-12-02T13-35-39.622054.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.04624715693707354,
"acc_stderr": 0.005784991662691895
},
"harness|gsm8k|5": {
"acc": 0.04624715693707354,
"acc_stderr": 0.005784991662691895
}
}
```
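If you only need these aggregated numbers, they can be read from the "results" configuration; a minimal sketch (the config name and the "latest" split are listed in this repository's metadata):
```python
from datasets import load_dataset
# Aggregated metrics live in the "results" config; "latest" always points to the newest run.
results = load_dataset("open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-Nebula-v2-7B",
	"results",
	split="latest")
print(results[0])
```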
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-Nebula-v2-7B | [
"region:us"
] | 2023-12-02T13:35:43+00:00 | {"pretty_name": "Evaluation run of Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B](https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-Nebula-v2-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:35:39.622054](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__neural-chat-7b-v3-1-Nebula-v2-7B/blob/main/results_2023-12-02T13-35-39.622054.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.04624715693707354,\n \"acc_stderr\": 0.005784991662691895\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04624715693707354,\n \"acc_stderr\": 0.005784991662691895\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_35_39.622054", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-35-39.622054.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-35-39.622054.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_35_39.622054", "path": ["results_2023-12-02T13-35-39.622054.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-35-39.622054.parquet"]}]}]} | 2023-12-02T13:35:50+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:35:39.622054 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:35:39.622054(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:35:39.622054(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
30,
31,
179,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/neural-chat-7b-v3-1-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:35:39.622054(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0d9729beb1a14346188939d0debbbabefbc496c0 |
# Dataset Card for Evaluation run of Weyaxi/zephyr-beta-Nebula-v2-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/zephyr-beta-Nebula-v2-7B](https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:42:20.652326](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B/blob/main/results_2023-12-02T13-42-20.652326.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.17513267626990145,
"acc_stderr": 0.010469307043157914
},
"harness|gsm8k|5": {
"acc": 0.17513267626990145,
"acc_stderr": 0.010469307043157914
}
}
```
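A minimal sketch of pulling the aggregated metrics straight from the "results" configuration (both the config name and the "latest" split appear in this repository's configuration metadata):
```python
from datasets import load_dataset
# Load the aggregated metrics; the "latest" split mirrors the most recent evaluation run.
results = load_dataset("open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B",
	"results",
	split="latest")
print(results[0])
```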
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B | [
"region:us"
] | 2023-12-02T13:42:23+00:00 | {"pretty_name": "Evaluation run of Weyaxi/zephyr-beta-Nebula-v2-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/zephyr-beta-Nebula-v2-7B](https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:42:20.652326](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__zephyr-beta-Nebula-v2-7B/blob/main/results_2023-12-02T13-42-20.652326.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.17513267626990145,\n \"acc_stderr\": 0.010469307043157914\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17513267626990145,\n \"acc_stderr\": 0.010469307043157914\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_42_20.652326", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-42-20.652326.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-42-20.652326.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_42_20.652326", "path": ["results_2023-12-02T13-42-20.652326.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-42-20.652326.parquet"]}]}]} | 2023-12-02T13:42:30+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/zephyr-beta-Nebula-v2-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/zephyr-beta-Nebula-v2-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:42:20.652326 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/zephyr-beta-Nebula-v2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/zephyr-beta-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:42:20.652326(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/zephyr-beta-Nebula-v2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/zephyr-beta-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:42:20.652326(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
25,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/zephyr-beta-Nebula-v2-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/zephyr-beta-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:42:20.652326(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
cd6364ce1a39b3ca4021cc0f89075eb9d9bdc4ed |
# Dataset Card for Evaluation run of yec019/fbopt-350m-8bit
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yec019/fbopt-350m-8bit
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [yec019/fbopt-350m-8bit](https://huggingface.co/yec019/fbopt-350m-8bit) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yec019__fbopt-350m-8bit",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:46:11.364082](https://huggingface.co/datasets/open-llm-leaderboard/details_yec019__fbopt-350m-8bit/blob/main/results_2023-12-02T13-46-11.364082.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.01288855193328279,
"acc_stderr": 0.0031069012664996336
},
"harness|gsm8k|5": {
"acc": 0.01288855193328279,
"acc_stderr": 0.0031069012664996336
}
}
```
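The same aggregated metrics can also be loaded from the "results" configuration; a minimal sketch (config name and "latest" split taken from this repository's metadata):
```python
from datasets import load_dataset
# "results" stores the aggregated metrics; "latest" points to the most recent run.
results = load_dataset("open-llm-leaderboard/details_yec019__fbopt-350m-8bit",
	"results",
	split="latest")
print(results[0])
```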
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_yec019__fbopt-350m-8bit | [
"region:us"
] | 2023-12-02T13:46:14+00:00 | {"pretty_name": "Evaluation run of yec019/fbopt-350m-8bit", "dataset_summary": "Dataset automatically created during the evaluation run of model [yec019/fbopt-350m-8bit](https://huggingface.co/yec019/fbopt-350m-8bit) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yec019__fbopt-350m-8bit\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:46:11.364082](https://huggingface.co/datasets/open-llm-leaderboard/details_yec019__fbopt-350m-8bit/blob/main/results_2023-12-02T13-46-11.364082.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.01288855193328279,\n \"acc_stderr\": 0.0031069012664996336\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01288855193328279,\n \"acc_stderr\": 0.0031069012664996336\n }\n}\n```", "repo_url": "https://huggingface.co/yec019/fbopt-350m-8bit", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_46_11.364082", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-46-11.364082.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-46-11.364082.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_46_11.364082", "path": ["results_2023-12-02T13-46-11.364082.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-46-11.364082.parquet"]}]}]} | 2023-12-02T13:46:22+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of yec019/fbopt-350m-8bit
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model yec019/fbopt-350m-8bit on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:46:11.364082 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of yec019/fbopt-350m-8bit",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yec019/fbopt-350m-8bit on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:46:11.364082(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of yec019/fbopt-350m-8bit",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yec019/fbopt-350m-8bit on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:46:11.364082(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
21,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of yec019/fbopt-350m-8bit## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model yec019/fbopt-350m-8bit on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:46:11.364082(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
2d9719dc73cf0c497865e69c7a50d5aa972fec8a |
# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:49:51.667624](https://huggingface.co/datasets/open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6/blob/main/results_2023-12-02T13-49-51.667624.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.02122820318423048,
"acc_stderr": 0.003970449129848635
},
"harness|gsm8k|5": {
"acc": 0.02122820318423048,
"acc_stderr": 0.003970449129848635
}
}
```
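For the aggregated metrics alone, the "results" configuration can be loaded directly; a minimal sketch (config name and "latest" split as declared in this repository's metadata):
```python
from datasets import load_dataset
# Read the aggregated metrics; the "latest" split tracks the most recent run.
results = load_dataset("open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6",
	"results",
	split="latest")
print(results[0])
```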
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6 | [
"region:us"
] | 2023-12-02T13:49:54+00:00 | {"pretty_name": "Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6", "dataset_summary": "Dataset automatically created during the evaluation run of model [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:49:51.667624](https://huggingface.co/datasets/open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6/blob/main/results_2023-12-02T13-49-51.667624.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.02122820318423048,\n \"acc_stderr\": 0.003970449129848635\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.02122820318423048,\n \"acc_stderr\": 0.003970449129848635\n }\n}\n```", "repo_url": "https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_49_51.667624", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-49-51.667624.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-49-51.667624.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_49_51.667624", "path": ["results_2023-12-02T13-49-51.667624.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-49-51.667624.parquet"]}]}]} | 2023-12-02T13:49:59+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-Chat-v0.6 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:49:51.667624 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-Chat-v0.6 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:49:51.667624(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-Chat-v0.6 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:49:51.667624(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
26,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-Chat-v0.6 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:49:51.667624(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
13e686c1f6c03ef313c535716980feae456b407e |
# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-intermediate-step-955k-token-2T",
"harness_gsm8k_5",
split="train")
```
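
The aggregated metrics mentioned above live in the "results" configuration. The snippet below is a minimal sketch; the config name "results" and the "latest" split are taken from this repository's configuration list, so adjust them if the repository layout changes:

```python
from datasets import load_dataset

# Aggregated metrics for the most recent run: "results" is the aggregated
# configuration and the "latest" split tracks the newest run.
results = load_dataset(
    "open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-intermediate-step-955k-token-2T",
    "results",
    split="latest",
)
print(results[0])  # peek at an aggregated record
```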
## Latest results
These are the [latest results from run 2023-12-02T13:49:56.962139](https://huggingface.co/datasets/open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-intermediate-step-955k-token-2T/blob/main/results_2023-12-02T13-49-56.962139.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.013646702047005308,
"acc_stderr": 0.0031957470754808434
},
"harness|gsm8k|5": {
"acc": 0.013646702047005308,
"acc_stderr": 0.0031957470754808434
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-intermediate-step-955k-token-2T | [
"region:us"
] | 2023-12-02T13:49:59+00:00 | {"pretty_name": "Evaluation run of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T", "dataset_summary": "Dataset automatically created during the evaluation run of model [TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-intermediate-step-955k-token-2T\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:49:56.962139](https://huggingface.co/datasets/open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-intermediate-step-955k-token-2T/blob/main/results_2023-12-02T13-49-56.962139.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.013646702047005308,\n \"acc_stderr\": 0.0031957470754808434\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.013646702047005308,\n \"acc_stderr\": 0.0031957470754808434\n }\n}\n```", "repo_url": "https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_49_56.962139", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-49-56.962139.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-49-56.962139.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_49_56.962139", "path": ["results_2023-12-02T13-49-56.962139.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-49-56.962139.parquet"]}]}]} | 2023-12-02T13:50:04+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:49:56.962139 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:49:56.962139(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:49:56.962139(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
35,
31,
184,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:49:56.962139(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
1a23098f304c720040d0a42b4869ba567fb3c740 |
# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-merged
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-200step-merged
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Korabbit/Llama-2-7b-chat-hf-afr-200step-merged](https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-200step-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-merged",
"harness_gsm8k_5",
split="train")
```
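
If you want the per-sample details of the most recent run rather than a fixed timestamp, the same configuration also exposes a "latest" split (split name taken from this repository's configuration list); a minimal sketch:

```python
from datasets import load_dataset

# "latest" is an alias split that points to the most recent timestamped run.
latest_details = load_dataset(
    "open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-merged",
    "harness_gsm8k_5",
    split="latest",
)
print(len(latest_details))  # number of evaluated samples in the latest run
```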
## Latest results
These are the [latest results from run 2023-12-02T13:52:27.757521](https://huggingface.co/datasets/open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-merged/blob/main/results_2023-12-02T13-52-27.757521.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.18953752843062927,
"acc_stderr": 0.010795837931896377
},
"harness|gsm8k|5": {
"acc": 0.18953752843062927,
"acc_stderr": 0.010795837931896377
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-merged | [
"region:us"
] | 2023-12-02T13:52:30+00:00 | {"pretty_name": "Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-merged", "dataset_summary": "Dataset automatically created during the evaluation run of model [Korabbit/Llama-2-7b-chat-hf-afr-200step-merged](https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-200step-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-merged\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:52:27.757521](https://huggingface.co/datasets/open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-merged/blob/main/results_2023-12-02T13-52-27.757521.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.18953752843062927,\n \"acc_stderr\": 0.010795837931896377\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.18953752843062927,\n \"acc_stderr\": 0.010795837931896377\n }\n}\n```", "repo_url": "https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-200step-merged", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_52_27.757521", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-52-27.757521.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-52-27.757521.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_52_27.757521", "path": ["results_2023-12-02T13-52-27.757521.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-52-27.757521.parquet"]}]}]} | 2023-12-02T13:52:39+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-merged
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-200step-merged on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:52:27.757521 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-merged",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-200step-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:52:27.757521(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-merged",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-200step-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:52:27.757521(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
31,
31,
180,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-merged## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-200step-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:52:27.757521(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0c0168a29ca5cda1e56a6e072041dcf3c99773de |
# Dataset Card for Evaluation run of Weyaxi/HelpSteer-filtered-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/HelpSteer-filtered-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/HelpSteer-filtered-7B](https://huggingface.co/Weyaxi/HelpSteer-filtered-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:56:09.449355](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-7B/blob/main/results_2023-12-02T13-56-09.449355.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.33434420015163,
"acc_stderr": 0.012994634003332771
},
"harness|gsm8k|5": {
"acc": 0.33434420015163,
"acc_stderr": 0.012994634003332771
}
}
```
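
To read these aggregated numbers programmatically instead of copying them from the JSON above, you can load the "results" configuration; this is a minimal sketch, assuming the config and split names listed in this repository's metadata:

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics; the "latest"
# split corresponds to the run summarized above.
aggregated = load_dataset(
    "open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-7B",
    "results",
    split="latest",
)
print(aggregated.column_names)
print(aggregated[0])
```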
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-7B | [
"region:us"
] | 2023-12-02T13:56:12+00:00 | {"pretty_name": "Evaluation run of Weyaxi/HelpSteer-filtered-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/HelpSteer-filtered-7B](https://huggingface.co/Weyaxi/HelpSteer-filtered-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:56:09.449355](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-7B/blob/main/results_2023-12-02T13-56-09.449355.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.33434420015163,\n \"acc_stderr\": 0.012994634003332771\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.33434420015163,\n \"acc_stderr\": 0.012994634003332771\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/HelpSteer-filtered-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_56_09.449355", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-56-09.449355.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-56-09.449355.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_56_09.449355", "path": ["results_2023-12-02T13-56-09.449355.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-56-09.449355.parquet"]}]}]} | 2023-12-02T13:56:16+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/HelpSteer-filtered-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/HelpSteer-filtered-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:56:09.449355 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/HelpSteer-filtered-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/HelpSteer-filtered-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:56:09.449355(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/HelpSteer-filtered-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/HelpSteer-filtered-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:56:09.449355(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/HelpSteer-filtered-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/HelpSteer-filtered-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:56:09.449355(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4007b7be2b8225b6ee35569967b7e43ac8c94f8e |
# Dataset Card for Evaluation run of Kiddyz/testllm-c2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Kiddyz/testllm-c2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Kiddyz/testllm-c2](https://huggingface.co/Kiddyz/testllm-c2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Kiddyz__testllm-c2",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:57:11.515401](https://huggingface.co/datasets/open-llm-leaderboard/details_Kiddyz__testllm-c2/blob/main/results_2023-12-02T13-57-11.515401.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.47384382107657314,
"acc_stderr": 0.013753627037255047
},
"harness|gsm8k|5": {
"acc": 0.47384382107657314,
"acc_stderr": 0.013753627037255047
}
}
```
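
Individual runs can also be loaded by their timestamped split instead of "latest"; for the run summarized above, the split name used below is the one declared in this repository's configuration list:

```python
from datasets import load_dataset

# Per-sample details of one specific run, addressed by its timestamped split.
run_details = load_dataset(
    "open-llm-leaderboard/details_Kiddyz__testllm-c2",
    "harness_gsm8k_5",
    split="2023_12_02T13_57_11.515401",
)
print(run_details)
```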
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Kiddyz__testllm-c2 | [
"region:us"
] | 2023-12-02T13:57:14+00:00 | {"pretty_name": "Evaluation run of Kiddyz/testllm-c2", "dataset_summary": "Dataset automatically created during the evaluation run of model [Kiddyz/testllm-c2](https://huggingface.co/Kiddyz/testllm-c2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Kiddyz__testllm-c2\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:57:11.515401](https://huggingface.co/datasets/open-llm-leaderboard/details_Kiddyz__testllm-c2/blob/main/results_2023-12-02T13-57-11.515401.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.47384382107657314,\n \"acc_stderr\": 0.013753627037255047\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.47384382107657314,\n \"acc_stderr\": 0.013753627037255047\n }\n}\n```", "repo_url": "https://huggingface.co/Kiddyz/testllm-c2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_57_11.515401", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-57-11.515401.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-57-11.515401.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_57_11.515401", "path": ["results_2023-12-02T13-57-11.515401.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-57-11.515401.parquet"]}]}]} | 2023-12-02T13:57:18+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Kiddyz/testllm-c2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Kiddyz/testllm-c2 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T13:57:11.515401 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Kiddyz/testllm-c2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Kiddyz/testllm-c2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:57:11.515401(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Kiddyz/testllm-c2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Kiddyz/testllm-c2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:57:11.515401(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
19,
31,
168,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Kiddyz/testllm-c2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Kiddyz/testllm-c2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:57:11.515401(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
49705f9184fcc9088fc8c91f8b6139f0a6ada0b9 |
# Dataset Card for Evaluation run of Weyaxi/SynthIA-v1.3-Nebula-v2-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/SynthIA-v1.3-Nebula-v2-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/SynthIA-v1.3-Nebula-v2-7B](https://huggingface.co/Weyaxi/SynthIA-v1.3-Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__SynthIA-v1.3-Nebula-v2-7B",
"harness_gsm8k_5",
split="train")
```
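If you only need the aggregated metrics rather than the per-sample details, the "results" configuration mentioned above can be loaded the same way. This is a small sketch based on the configs declared in this card's metadata (the "latest" split always points to the most recent run):

```python
from datasets import load_dataset

# Aggregated metrics only; "latest" tracks the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_Weyaxi__SynthIA-v1.3-Nebula-v2-7B",
    "results",
    split="latest",
)
print(results)
```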
## Latest results
These are the [latest results from run 2023-12-02T13:57:31.778075](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__SynthIA-v1.3-Nebula-v2-7B/blob/main/results_2023-12-02T13-57-31.778075.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2486732373009856,
"acc_stderr": 0.01190614722287996
},
"harness|gsm8k|5": {
"acc": 0.2486732373009856,
"acc_stderr": 0.01190614722287996
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__SynthIA-v1.3-Nebula-v2-7B | [
"region:us"
] | 2023-12-02T13:57:34+00:00 | {"pretty_name": "Evaluation run of Weyaxi/SynthIA-v1.3-Nebula-v2-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/SynthIA-v1.3-Nebula-v2-7B](https://huggingface.co/Weyaxi/SynthIA-v1.3-Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__SynthIA-v1.3-Nebula-v2-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:57:31.778075](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__SynthIA-v1.3-Nebula-v2-7B/blob/main/results_2023-12-02T13-57-31.778075.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2486732373009856,\n \"acc_stderr\": 0.01190614722287996\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2486732373009856,\n \"acc_stderr\": 0.01190614722287996\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/SynthIA-v1.3-Nebula-v2-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_57_31.778075", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-57-31.778075.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-57-31.778075.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_57_31.778075", "path": ["results_2023-12-02T13-57-31.778075.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-57-31.778075.parquet"]}]}]} | 2023-12-02T13:57:39+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/SynthIA-v1.3-Nebula-v2-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/SynthIA-v1.3-Nebula-v2-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
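For example (this mirrors the loading snippet from the full card for this run shown earlier in this record):

```python
from datasets import load_dataset

# Per-sample details for the GSM8K 5-shot harness run.
data = load_dataset(
    "open-llm-leaderboard/details_Weyaxi__SynthIA-v1.3-Nebula-v2-7B",
    "harness_gsm8k_5",
    split="train",
)
```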
## Latest results
These are the latest results from run 2023-12-02T13:57:31.778075 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/SynthIA-v1.3-Nebula-v2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/SynthIA-v1.3-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:57:31.778075(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/SynthIA-v1.3-Nebula-v2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/SynthIA-v1.3-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:57:31.778075(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/SynthIA-v1.3-Nebula-v2-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/SynthIA-v1.3-Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:57:31.778075(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7e6845f6018b8eeebcd6335f282e4dbdfeb90eec |
# Dataset Card for Evaluation run of PulsarAI/Nebula-v2-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PulsarAI/Nebula-v2-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [PulsarAI/Nebula-v2-7B](https://huggingface.co/PulsarAI/Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B",
"harness_gsm8k_5",
split="train")
```
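To read only the aggregated metrics instead of the per-sample details, the "results" configuration described above can be loaded directly. A small sketch, following the configs declared in this card's metadata ("latest" points to the most recent run):

```python
from datasets import load_dataset

# Aggregated metrics only; "latest" tracks the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B",
    "results",
    split="latest",
)
print(results)
```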
## Latest results
These are the [latest results from run 2023-12-02T13:58:09.073163](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B/blob/main/results_2023-12-02T13-58-09.073163.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3169067475360121,
"acc_stderr": 0.012815868296721373
},
"harness|gsm8k|5": {
"acc": 0.3169067475360121,
"acc_stderr": 0.012815868296721373
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B | [
"region:us"
] | 2023-12-02T13:58:11+00:00 | {"pretty_name": "Evaluation run of PulsarAI/Nebula-v2-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [PulsarAI/Nebula-v2-7B](https://huggingface.co/PulsarAI/Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T13:58:09.073163](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B/blob/main/results_2023-12-02T13-58-09.073163.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3169067475360121,\n \"acc_stderr\": 0.012815868296721373\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3169067475360121,\n \"acc_stderr\": 0.012815868296721373\n }\n}\n```", "repo_url": "https://huggingface.co/PulsarAI/Nebula-v2-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T13_58_09.073163", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-58-09.073163.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T13-58-09.073163.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T13_58_09.073163", "path": ["results_2023-12-02T13-58-09.073163.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T13-58-09.073163.parquet"]}]}]} | 2023-12-02T13:58:16+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of PulsarAI/Nebula-v2-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model PulsarAI/Nebula-v2-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
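For example (this mirrors the loading snippet from the full card for this run shown earlier in this record):

```python
from datasets import load_dataset

# Per-sample details for the GSM8K 5-shot harness run.
data = load_dataset(
    "open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B",
    "harness_gsm8k_5",
    split="train",
)
```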
## Latest results
These are the latest results from run 2023-12-02T13:58:09.073163 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of PulsarAI/Nebula-v2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PulsarAI/Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:58:09.073163(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of PulsarAI/Nebula-v2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PulsarAI/Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T13:58:09.073163(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of PulsarAI/Nebula-v2-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model PulsarAI/Nebula-v2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T13:58:09.073163(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
649ba007559ce0ee546fa7db58bf858521c1a437 | # Dataset Card for "copiapoa-semantic-mask"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aghent/copiapoa-semantic-mask | [
"region:us"
] | 2023-12-02T13:58:15+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "image", "1": "mask"}}}}], "splits": [{"name": "train", "num_bytes": 296205716.0, "num_examples": 20000}], "download_size": 133771266, "dataset_size": 296205716.0}} | 2023-12-02T14:01:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "copiapoa-semantic-mask"
More Information needed | [
"# Dataset Card for \"copiapoa-semantic-mask\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"copiapoa-semantic-mask\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"copiapoa-semantic-mask\"\n\nMore Information needed"
] |
8dd9030000bfc34e4de328597ef869fc43199f40 |
# Dataset Card for Evaluation run of BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI](https://huggingface.co/BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BEE-spoke-data__NanoLlama-GQA-L10-A32_KV8-v13-KI",
"harness_gsm8k_5",
split="train")
```
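If only the aggregated metrics are needed, the "results" configuration described above can be loaded in the same way. A small sketch based on the configs declared in this card's metadata ("latest" points to the most recent run):

```python
from datasets import load_dataset

# Aggregated metrics only; "latest" tracks the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_BEE-spoke-data__NanoLlama-GQA-L10-A32_KV8-v13-KI",
    "results",
    split="latest",
)
print(results)
```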
## Latest results
These are the [latest results from run 2023-12-02T14:06:00.673559](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__NanoLlama-GQA-L10-A32_KV8-v13-KI/blob/main/results_2023-12-02T14-06-00.673559.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.009097801364670205,
"acc_stderr": 0.002615326510775673
},
"harness|gsm8k|5": {
"acc": 0.009097801364670205,
"acc_stderr": 0.002615326510775673
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_BEE-spoke-data__NanoLlama-GQA-L10-A32_KV8-v13-KI | [
"region:us"
] | 2023-12-02T14:06:03+00:00 | {"pretty_name": "Evaluation run of BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI", "dataset_summary": "Dataset automatically created during the evaluation run of model [BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI](https://huggingface.co/BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BEE-spoke-data__NanoLlama-GQA-L10-A32_KV8-v13-KI\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:06:00.673559](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__NanoLlama-GQA-L10-A32_KV8-v13-KI/blob/main/results_2023-12-02T14-06-00.673559.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.009097801364670205,\n \"acc_stderr\": 0.002615326510775673\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009097801364670205,\n \"acc_stderr\": 0.002615326510775673\n }\n}\n```", "repo_url": "https://huggingface.co/BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_06_00.673559", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-06-00.673559.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-06-00.673559.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_06_00.673559", "path": ["results_2023-12-02T14-06-00.673559.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-06-00.673559.parquet"]}]}]} | 2023-12-02T14:06:08+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
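For example (this mirrors the loading snippet from the full card for this run shown earlier in this record):

```python
from datasets import load_dataset

# Per-sample details for the GSM8K 5-shot harness run.
data = load_dataset(
    "open-llm-leaderboard/details_BEE-spoke-data__NanoLlama-GQA-L10-A32_KV8-v13-KI",
    "harness_gsm8k_5",
    split="train",
)
```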
## Latest results
These are the latest results from run 2023-12-02T14:06:00.673559 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:06:00.673559(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:06:00.673559(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
38,
31,
187,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:06:00.673559(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
fe7b8602fc5d2e5d49d6ffed5b95708dc52ecab9 |
# Dataset Card for Evaluation run of NurtureAI/Starling-LM-11B-alpha-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NurtureAI/Starling-LM-11B-alpha-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NurtureAI/Starling-LM-11B-alpha-v1](https://huggingface.co/NurtureAI/Starling-LM-11B-alpha-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NurtureAI__Starling-LM-11B-alpha-v1",
"harness_gsm8k_5",
split="train")
```
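For the aggregated metrics alone, the "results" configuration described above can be loaded directly. A small sketch following the configs declared in this card's metadata ("latest" points to the most recent run):

```python
from datasets import load_dataset

# Aggregated metrics only; "latest" tracks the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_NurtureAI__Starling-LM-11B-alpha-v1",
    "results",
    split="latest",
)
print(results)
```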
## Latest results
These are the [latest results from run 2023-12-02T14:12:16.708140](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__Starling-LM-11B-alpha-v1/blob/main/results_2023-12-02T14-12-16.708140.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5094768764215315,
"acc_stderr": 0.01377001065116882
},
"harness|gsm8k|5": {
"acc": 0.5094768764215315,
"acc_stderr": 0.01377001065116882
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_NurtureAI__Starling-LM-11B-alpha-v1 | [
"region:us"
] | 2023-12-02T14:12:19+00:00 | {"pretty_name": "Evaluation run of NurtureAI/Starling-LM-11B-alpha-v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [NurtureAI/Starling-LM-11B-alpha-v1](https://huggingface.co/NurtureAI/Starling-LM-11B-alpha-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NurtureAI__Starling-LM-11B-alpha-v1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:12:16.708140](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__Starling-LM-11B-alpha-v1/blob/main/results_2023-12-02T14-12-16.708140.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5094768764215315,\n \"acc_stderr\": 0.01377001065116882\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5094768764215315,\n \"acc_stderr\": 0.01377001065116882\n }\n}\n```", "repo_url": "https://huggingface.co/NurtureAI/Starling-LM-11B-alpha-v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_12_16.708140", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-12-16.708140.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-12-16.708140.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_12_16.708140", "path": ["results_2023-12-02T14-12-16.708140.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-12-16.708140.parquet"]}]}]} | 2023-12-02T14:12:24+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of NurtureAI/Starling-LM-11B-alpha-v1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model NurtureAI/Starling-LM-11B-alpha-v1 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
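For example (this mirrors the loading snippet from the full card for this run shown earlier in this record):

```python
from datasets import load_dataset

# Per-sample details for the GSM8K 5-shot harness run.
data = load_dataset(
    "open-llm-leaderboard/details_NurtureAI__Starling-LM-11B-alpha-v1",
    "harness_gsm8k_5",
    split="train",
)
```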
## Latest results
These are the latest results from run 2023-12-02T14:12:16.708140 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of NurtureAI/Starling-LM-11B-alpha-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Starling-LM-11B-alpha-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:12:16.708140(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NurtureAI/Starling-LM-11B-alpha-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Starling-LM-11B-alpha-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:12:16.708140(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
25,
31,
174,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NurtureAI/Starling-LM-11B-alpha-v1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Starling-LM-11B-alpha-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:12:16.708140(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
311569a6b79b0ff60c903149b4f045ff25d1cc7f |
# Dataset Card for Evaluation run of feidfoe/Metamath-reproduce-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/feidfoe/Metamath-reproduce-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [feidfoe/Metamath-reproduce-7b](https://huggingface.co/feidfoe/Metamath-reproduce-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_feidfoe__Metamath-reproduce-7b",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:22:23.494556](https://huggingface.co/datasets/open-llm-leaderboard/details_feidfoe__Metamath-reproduce-7b/blob/main/results_2023-12-02T14-22-23.494556.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5815011372251706,
"acc_stderr": 0.013588287284030881
},
"harness|gsm8k|5": {
"acc": 0.5815011372251706,
"acc_stderr": 0.013588287284030881
}
}
```
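To read these aggregated metrics programmatically instead of copying them from the JSON above, a minimal sketch is shown below; it assumes the "results" configuration and its "latest" split load like any other configuration of this dataset.
```python
from datasets import load_dataset
# Load the aggregated metrics; the "latest" split always points to the most recent run.
results = load_dataset("open-llm-leaderboard/details_feidfoe__Metamath-reproduce-7b",
	"results",
	split="latest")
# Each row holds the serialized metrics of one run; inspect the columns before relying on a field.
print(results.column_names)
print(results[0])
```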
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_feidfoe__Metamath-reproduce-7b | [
"region:us"
] | 2023-12-02T14:22:26+00:00 | {"pretty_name": "Evaluation run of feidfoe/Metamath-reproduce-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [feidfoe/Metamath-reproduce-7b](https://huggingface.co/feidfoe/Metamath-reproduce-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_feidfoe__Metamath-reproduce-7b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:22:23.494556](https://huggingface.co/datasets/open-llm-leaderboard/details_feidfoe__Metamath-reproduce-7b/blob/main/results_2023-12-02T14-22-23.494556.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5815011372251706,\n \"acc_stderr\": 0.013588287284030881\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5815011372251706,\n \"acc_stderr\": 0.013588287284030881\n }\n}\n```", "repo_url": "https://huggingface.co/feidfoe/Metamath-reproduce-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_22_23.494556", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-22-23.494556.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-22-23.494556.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_22_23.494556", "path": ["results_2023-12-02T14-22-23.494556.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-22-23.494556.parquet"]}]}]} | 2023-12-02T14:22:31+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of feidfoe/Metamath-reproduce-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model feidfoe/Metamath-reproduce-7b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:22:23.494556 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of feidfoe/Metamath-reproduce-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model feidfoe/Metamath-reproduce-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:22:23.494556(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of feidfoe/Metamath-reproduce-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model feidfoe/Metamath-reproduce-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:22:23.494556(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
23,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of feidfoe/Metamath-reproduce-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model feidfoe/Metamath-reproduce-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:22:23.494556(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4fe42ef058b5cb8797b957250b8e6efef0b236b0 |
# Dataset Card for Evaluation run of Weyaxi/Dolphin-Nebula-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/Dolphin-Nebula-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/Dolphin-Nebula-7B](https://huggingface.co/Weyaxi/Dolphin-Nebula-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__Dolphin-Nebula-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:25:38.586013](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Dolphin-Nebula-7B/blob/main/results_2023-12-02T14-25-38.586013.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3305534495830174,
"acc_stderr": 0.012957496367085026
},
"harness|gsm8k|5": {
"acc": 0.3305534495830174,
"acc_stderr": 0.012957496367085026
}
}
```
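To read these aggregated metrics programmatically instead of copying them from the JSON above, a minimal sketch is shown below; it assumes the "results" configuration and its "latest" split load like any other configuration of this dataset.
```python
from datasets import load_dataset
# Load the aggregated metrics; the "latest" split always points to the most recent run.
results = load_dataset("open-llm-leaderboard/details_Weyaxi__Dolphin-Nebula-7B",
	"results",
	split="latest")
# Each row holds the serialized metrics of one run; inspect the columns before relying on a field.
print(results.column_names)
print(results[0])
```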
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__Dolphin-Nebula-7B | [
"region:us"
] | 2023-12-02T14:25:41+00:00 | {"pretty_name": "Evaluation run of Weyaxi/Dolphin-Nebula-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/Dolphin-Nebula-7B](https://huggingface.co/Weyaxi/Dolphin-Nebula-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__Dolphin-Nebula-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:25:38.586013](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Dolphin-Nebula-7B/blob/main/results_2023-12-02T14-25-38.586013.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3305534495830174,\n \"acc_stderr\": 0.012957496367085026\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3305534495830174,\n \"acc_stderr\": 0.012957496367085026\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/Dolphin-Nebula-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_25_38.586013", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-25-38.586013.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-25-38.586013.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_25_38.586013", "path": ["results_2023-12-02T14-25-38.586013.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-25-38.586013.parquet"]}]}]} | 2023-12-02T14:25:46+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/Dolphin-Nebula-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/Dolphin-Nebula-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:25:38.586013 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/Dolphin-Nebula-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/Dolphin-Nebula-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:25:38.586013(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/Dolphin-Nebula-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/Dolphin-Nebula-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:25:38.586013(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
21,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/Dolphin-Nebula-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/Dolphin-Nebula-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:25:38.586013(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9ae791beb3381ce0ffd506e76a55025a54f824a8 |
# Dataset Card for Evaluation run of CausalLM/14B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CausalLM/14B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CausalLM/14B](https://huggingface.co/CausalLM/14B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CausalLM__14B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:28:09.246570](https://huggingface.co/datasets/open-llm-leaderboard/details_CausalLM__14B/blob/main/results_2023-12-02T14-28-09.246570.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5860500379075056,
"acc_stderr": 0.013566991960151781
},
"harness|gsm8k|5": {
"acc": 0.5860500379075056,
"acc_stderr": 0.013566991960151781
}
}
```
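To read these aggregated metrics programmatically instead of copying them from the JSON above, a minimal sketch is shown below; it assumes the "results" configuration and its "latest" split load like any other configuration of this dataset.
```python
from datasets import load_dataset
# Load the aggregated metrics; the "latest" split always points to the most recent run.
results = load_dataset("open-llm-leaderboard/details_CausalLM__14B",
	"results",
	split="latest")
# Each row holds the serialized metrics of one run; inspect the columns before relying on a field.
print(results.column_names)
print(results[0])
```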
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_CausalLM__14B | [
"region:us"
] | 2023-12-02T14:28:11+00:00 | {"pretty_name": "Evaluation run of CausalLM/14B", "dataset_summary": "Dataset automatically created during the evaluation run of model [CausalLM/14B](https://huggingface.co/CausalLM/14B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CausalLM__14B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:28:09.246570](https://huggingface.co/datasets/open-llm-leaderboard/details_CausalLM__14B/blob/main/results_2023-12-02T14-28-09.246570.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5860500379075056,\n \"acc_stderr\": 0.013566991960151781\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5860500379075056,\n \"acc_stderr\": 0.013566991960151781\n }\n}\n```", "repo_url": "https://huggingface.co/CausalLM/14B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_28_09.246570", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-28-09.246570.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-28-09.246570.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_28_09.246570", "path": ["results_2023-12-02T14-28-09.246570.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-28-09.246570.parquet"]}]}]} | 2023-12-02T14:28:16+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of CausalLM/14B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CausalLM/14B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:28:09.246570 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of CausalLM/14B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CausalLM/14B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:28:09.246570(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CausalLM/14B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CausalLM/14B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:28:09.246570(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
14,
31,
163,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CausalLM/14B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CausalLM/14B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:28:09.246570(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
f6adf355dc485001a799f9916bf91b40379d3461 |
# Dataset Card for Evaluation run of vihangd/dopeyplats-1.1b-2T-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/vihangd/dopeyplats-1.1b-2T-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [vihangd/dopeyplats-1.1b-2T-v1](https://huggingface.co/vihangd/dopeyplats-1.1b-2T-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_vihangd__dopeyplats-1.1b-2T-v1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:28:36.585578](https://huggingface.co/datasets/open-llm-leaderboard/details_vihangd__dopeyplats-1.1b-2T-v1/blob/main/results_2023-12-02T14-28-36.585578.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.016679302501895376,
"acc_stderr": 0.0035275958887224295
},
"harness|gsm8k|5": {
"acc": 0.016679302501895376,
"acc_stderr": 0.0035275958887224295
}
}
```
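To read these aggregated metrics programmatically instead of copying them from the JSON above, a minimal sketch is shown below; it assumes the "results" configuration and its "latest" split load like any other configuration of this dataset.
```python
from datasets import load_dataset
# Load the aggregated metrics; the "latest" split always points to the most recent run.
results = load_dataset("open-llm-leaderboard/details_vihangd__dopeyplats-1.1b-2T-v1",
	"results",
	split="latest")
# Each row holds the serialized metrics of one run; inspect the columns before relying on a field.
print(results.column_names)
print(results[0])
```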
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_vihangd__dopeyplats-1.1b-2T-v1 | [
"region:us"
] | 2023-12-02T14:28:39+00:00 | {"pretty_name": "Evaluation run of vihangd/dopeyplats-1.1b-2T-v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [vihangd/dopeyplats-1.1b-2T-v1](https://huggingface.co/vihangd/dopeyplats-1.1b-2T-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_vihangd__dopeyplats-1.1b-2T-v1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:28:36.585578](https://huggingface.co/datasets/open-llm-leaderboard/details_vihangd__dopeyplats-1.1b-2T-v1/blob/main/results_2023-12-02T14-28-36.585578.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.016679302501895376,\n \"acc_stderr\": 0.0035275958887224295\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.016679302501895376,\n \"acc_stderr\": 0.0035275958887224295\n }\n}\n```", "repo_url": "https://huggingface.co/vihangd/dopeyplats-1.1b-2T-v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_28_36.585578", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-28-36.585578.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-28-36.585578.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_28_36.585578", "path": ["results_2023-12-02T14-28-36.585578.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-28-36.585578.parquet"]}]}]} | 2023-12-02T14:28:44+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of vihangd/dopeyplats-1.1b-2T-v1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model vihangd/dopeyplats-1.1b-2T-v1 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:28:36.585578 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of vihangd/dopeyplats-1.1b-2T-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model vihangd/dopeyplats-1.1b-2T-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:28:36.585578(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of vihangd/dopeyplats-1.1b-2T-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model vihangd/dopeyplats-1.1b-2T-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:28:36.585578(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of vihangd/dopeyplats-1.1b-2T-v1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model vihangd/dopeyplats-1.1b-2T-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:28:36.585578(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
cec327f71c1f1ef28d1fbe5bc4d59355249be7cc |
# Dataset Card for Evaluation run of CausalLM/14B-DPO-alpha
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CausalLM/14B-DPO-alpha
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CausalLM/14B-DPO-alpha](https://huggingface.co/CausalLM/14B-DPO-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CausalLM__14B-DPO-alpha",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:29:47.856386](https://huggingface.co/datasets/open-llm-leaderboard/details_CausalLM__14B-DPO-alpha/blob/main/results_2023-12-02T14-29-47.856386.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6269901440485216,
"acc_stderr": 0.01332087660977722
},
"harness|gsm8k|5": {
"acc": 0.6269901440485216,
"acc_stderr": 0.01332087660977722
}
}
```
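To read these aggregated metrics programmatically instead of copying them from the JSON above, a minimal sketch is shown below; it assumes the "results" configuration and its "latest" split load like any other configuration of this dataset.
```python
from datasets import load_dataset
# Load the aggregated metrics; the "latest" split always points to the most recent run.
results = load_dataset("open-llm-leaderboard/details_CausalLM__14B-DPO-alpha",
	"results",
	split="latest")
# Each row holds the serialized metrics of one run; inspect the columns before relying on a field.
print(results.column_names)
print(results[0])
```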
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_CausalLM__14B-DPO-alpha | [
"region:us"
] | 2023-12-02T14:28:48+00:00 | {"pretty_name": "Evaluation run of CausalLM/14B-DPO-alpha", "dataset_summary": "Dataset automatically created during the evaluation run of model [CausalLM/14B-DPO-alpha](https://huggingface.co/CausalLM/14B-DPO-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CausalLM__14B-DPO-alpha\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:29:47.856386](https://huggingface.co/datasets/open-llm-leaderboard/details_CausalLM__14B-DPO-alpha/blob/main/results_2023-12-02T14-29-47.856386.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6269901440485216,\n \"acc_stderr\": 0.01332087660977722\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6269901440485216,\n \"acc_stderr\": 0.01332087660977722\n }\n}\n```", "repo_url": "https://huggingface.co/CausalLM/14B-DPO-alpha", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_28_45.783629", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-28-45.783629.parquet"]}, {"split": "2023_12_02T14_29_47.856386", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-29-47.856386.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-29-47.856386.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_28_45.783629", "path": ["results_2023-12-02T14-28-45.783629.parquet"]}, {"split": "2023_12_02T14_29_47.856386", "path": ["results_2023-12-02T14-29-47.856386.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-29-47.856386.parquet"]}]}]} | 2023-12-02T14:29:54+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of CausalLM/14B-DPO-alpha
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CausalLM/14B-DPO-alpha on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:29:47.856386 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of CausalLM/14B-DPO-alpha",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CausalLM/14B-DPO-alpha on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:29:47.856386(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CausalLM/14B-DPO-alpha",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CausalLM/14B-DPO-alpha on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:29:47.856386(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CausalLM/14B-DPO-alpha## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CausalLM/14B-DPO-alpha on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:29:47.856386(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
595908e199e870c2a19bb91859e49e22362ec824 |
# Dataset Card for Evaluation run of monology/openinstruct-mistral-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/monology/openinstruct-mistral-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [monology/openinstruct-mistral-7b](https://huggingface.co/monology/openinstruct-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_monology__openinstruct-mistral-7b",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:37:45.308018](https://huggingface.co/datasets/open-llm-leaderboard/details_monology__openinstruct-mistral-7b/blob/main/results_2023-12-02T14-37-45.308018.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5049279757391963,
"acc_stderr": 0.013771815775470575
},
"harness|gsm8k|5": {
"acc": 0.5049279757391963,
"acc_stderr": 0.013771815775470575
}
}
```
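As a complement to the loading snippet above, here is a minimal sketch of pulling the aggregated metrics from the "results" configuration instead of the per-task details. The configuration name "results" and the "latest" split come from this repository's config list; the exact column layout of the aggregated table is not documented on this card, so the sketch only inspects it.

```python
from datasets import load_dataset

# Aggregated run-level metrics live in the "results" configuration;
# the "latest" split points at the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_monology__openinstruct-mistral-7b",
    "results",
    split="latest",
)

# The schema of the aggregated table is not spelled out on this card,
# so we simply print whatever columns and values are present.
print(results.column_names)
print(results[0])
```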
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_monology__openinstruct-mistral-7b | [
"region:us"
] | 2023-12-02T14:37:47+00:00 | {"pretty_name": "Evaluation run of monology/openinstruct-mistral-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [monology/openinstruct-mistral-7b](https://huggingface.co/monology/openinstruct-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_monology__openinstruct-mistral-7b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:37:45.308018](https://huggingface.co/datasets/open-llm-leaderboard/details_monology__openinstruct-mistral-7b/blob/main/results_2023-12-02T14-37-45.308018.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5049279757391963,\n \"acc_stderr\": 0.013771815775470575\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5049279757391963,\n \"acc_stderr\": 0.013771815775470575\n }\n}\n```", "repo_url": "https://huggingface.co/monology/openinstruct-mistral-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_37_45.308018", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-37-45.308018.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-37-45.308018.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_37_45.308018", "path": ["results_2023-12-02T14-37-45.308018.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-37-45.308018.parquet"]}]}]} | 2023-12-02T14:37:52+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of monology/openinstruct-mistral-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model monology/openinstruct-mistral-7b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:37:45.308018 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of monology/openinstruct-mistral-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model monology/openinstruct-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:37:45.308018(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of monology/openinstruct-mistral-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model monology/openinstruct-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:37:45.308018(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of monology/openinstruct-mistral-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model monology/openinstruct-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:37:45.308018(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
bef08ce141c83a7ab9933047d4506426a87f92a4 |
# Dataset Card for Evaluation run of vihangd/smartyplats-7b-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/vihangd/smartyplats-7b-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [vihangd/smartyplats-7b-v2](https://huggingface.co/vihangd/smartyplats-7b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_vihangd__smartyplats-7b-v2",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:37:57.896136](https://huggingface.co/datasets/open-llm-leaderboard/details_vihangd__smartyplats-7b-v2/blob/main/results_2023-12-02T14-37-57.896136.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.38817285822592873,
"acc_stderr": 0.013423607564002743
},
"harness|gsm8k|5": {
"acc": 0.38817285822592873,
"acc_stderr": 0.013423607564002743
}
}
```
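The same details configuration used in the snippet above can also be inspected row by row. This is only a sketch: it assumes nothing beyond the configuration name ("harness_gsm8k_5") and the "latest" split listed for this repository, and the per-example fields it prints are not documented on this card.

```python
from datasets import load_dataset

# Per-example details for the 5-shot GSM8K run; "latest" aliases the most
# recent timestamped split recorded for this repository.
details = load_dataset(
    "open-llm-leaderboard/details_vihangd__smartyplats-7b-v2",
    "harness_gsm8k_5",
    split="latest",
)

print(len(details))          # number of evaluated examples
print(details.column_names)  # per-example fields (not documented on this card)
```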
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_vihangd__smartyplats-7b-v2 | [
"region:us"
] | 2023-12-02T14:38:00+00:00 | {"pretty_name": "Evaluation run of vihangd/smartyplats-7b-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [vihangd/smartyplats-7b-v2](https://huggingface.co/vihangd/smartyplats-7b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_vihangd__smartyplats-7b-v2\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:37:57.896136](https://huggingface.co/datasets/open-llm-leaderboard/details_vihangd__smartyplats-7b-v2/blob/main/results_2023-12-02T14-37-57.896136.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.38817285822592873,\n \"acc_stderr\": 0.013423607564002743\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.38817285822592873,\n \"acc_stderr\": 0.013423607564002743\n }\n}\n```", "repo_url": "https://huggingface.co/vihangd/smartyplats-7b-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_37_57.896136", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-37-57.896136.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-37-57.896136.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_37_57.896136", "path": ["results_2023-12-02T14-37-57.896136.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-37-57.896136.parquet"]}]}]} | 2023-12-02T14:38:05+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of vihangd/smartyplats-7b-v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model vihangd/smartyplats-7b-v2 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:37:57.896136 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of vihangd/smartyplats-7b-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model vihangd/smartyplats-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:37:57.896136(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of vihangd/smartyplats-7b-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model vihangd/smartyplats-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:37:57.896136(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
21,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of vihangd/smartyplats-7b-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model vihangd/smartyplats-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:37:57.896136(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
fffd4490fe2446d2054c5f1b978a93ab30a2e803 |
# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-7b-HerO
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [VAGOsolutions/SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:47:07.323198](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO/blob/main/results_2023-12-02T14-47-07.323198.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4927975739196361,
"acc_stderr": 0.013771055751972868
},
"harness|gsm8k|5": {
"acc": 0.4927975739196361,
"acc_stderr": 0.013771055751972868
}
}
```
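If a rough uncertainty band is useful, the reported accuracy and standard error above can be turned into an approximate confidence interval. The normal approximation below is an assumption of this sketch, not something stated by the evaluation harness.

```python
# Normal-approximation 95% confidence interval from the reported
# harness|gsm8k|5 accuracy and standard error (values copied from above).
acc = 0.4927975739196361
acc_stderr = 0.013771055751972868

low = acc - 1.96 * acc_stderr
high = acc + 1.96 * acc_stderr
print(f"GSM8K 5-shot accuracy: {acc:.3f} (approx. 95% CI [{low:.3f}, {high:.3f}])")
```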
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO | [
"region:us"
] | 2023-12-02T14:47:10+00:00 | {"pretty_name": "Evaluation run of VAGOsolutions/SauerkrautLM-7b-HerO", "dataset_summary": "Dataset automatically created during the evaluation run of model [VAGOsolutions/SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:47:07.323198](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-7b-HerO/blob/main/results_2023-12-02T14-47-07.323198.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4927975739196361,\n \"acc_stderr\": 0.013771055751972868\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4927975739196361,\n \"acc_stderr\": 0.013771055751972868\n }\n}\n```", "repo_url": "https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_47_07.323198", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-47-07.323198.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-47-07.323198.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_47_07.323198", "path": ["results_2023-12-02T14-47-07.323198.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-47-07.323198.parquet"]}]}]} | 2023-12-02T14:47:14+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-7b-HerO
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model VAGOsolutions/SauerkrautLM-7b-HerO on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:47:07.323198 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-7b-HerO",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model VAGOsolutions/SauerkrautLM-7b-HerO on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:47:07.323198(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-7b-HerO",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model VAGOsolutions/SauerkrautLM-7b-HerO on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:47:07.323198(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
23,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-7b-HerO## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model VAGOsolutions/SauerkrautLM-7b-HerO on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:47:07.323198(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d37eda20da80fb0232d5024aa2baa6a476fda439 |
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:49:10.140020](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1/blob/main/results_2023-12-02T14-49-10.140020.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.01592115238817286,
"acc_stderr": 0.0034478192723889976
},
"harness|gsm8k|5": {
"acc": 0.01592115238817286,
"acc_stderr": 0.0034478192723889976
}
}
```
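For a sense of scale, the accuracy above can be converted into an approximate count of solved problems. The figure of 1,319 test questions is an assumption about the standard GSM8K test split and is not stated on this card.

```python
# Back-of-the-envelope check: assuming the harness scored the full
# 1,319-question GSM8K test split (an assumption, not stated on this card),
# the reported accuracy corresponds to roughly this many solved problems.
acc = 0.01592115238817286
n_questions = 1319  # assumed size of the GSM8K test split

print(round(acc * n_questions))  # ~21 correctly solved problems
```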
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1 | [
"region:us"
] | 2023-12-02T14:49:12+00:00 | {"pretty_name": "Evaluation run of habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1", "dataset_summary": "Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:49:10.140020](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1/blob/main/results_2023-12-02T14-49-10.140020.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.01592115238817286,\n \"acc_stderr\": 0.0034478192723889976\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01592115238817286,\n \"acc_stderr\": 0.0034478192723889976\n }\n}\n```", "repo_url": "https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_49_10.140020", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-49-10.140020.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-49-10.140020.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_49_10.140020", "path": ["results_2023-12-02T14-49-10.140020.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-49-10.140020.parquet"]}]}]} | 2023-12-02T14:49:17+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T14:49:10.140020 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:49:10.140020(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:49:10.140020(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
42,
31,
191,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:49:10.140020(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
22948d3828fef549a189dd3a4540b56670b20b96 |
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:49:13.784877](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1/blob/main/results_2023-12-02T14-49-13.784877.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.013646702047005308,
"acc_stderr": 0.0031957470754807814
},
"harness|gsm8k|5": {
"acc": 0.013646702047005308,
"acc_stderr": 0.0031957470754807814
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 | [
"region:us"
] | 2023-12-02T14:49:16+00:00 | {"pretty_name": "Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1", "dataset_summary": "Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:49:13.784877](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1/blob/main/results_2023-12-02T14-49-13.784877.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.013646702047005308,\n \"acc_stderr\": 0.0031957470754807814\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.013646702047005308,\n \"acc_stderr\": 0.0031957470754807814\n }\n}\n```", "repo_url": "https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_49_13.784877", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-49-13.784877.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-49-13.784877.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_49_13.784877", "path": ["results_2023-12-02T14-49-13.784877.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-49-13.784877.parquet"]}]}]} | 2023-12-02T14:49:21+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
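For instance, a minimal sketch using the Hugging Face `datasets` library (the repository and config names below are taken from the full card above):

```python
from datasets import load_dataset

# "train" always points to the latest run; "harness_gsm8k_5" is the 5-shot GSM8K details config.
data = load_dataset(
    "open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1",
    "harness_gsm8k_5",
    split="train",
)
```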
## Latest results
These are the latest results from run 2023-12-02T14:49:13.784877 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:49:13.784877(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:49:13.784877(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
52,
31,
201,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:49:13.784877(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
bae09d3fd9b7fc0f9d25670c2bc84061e7864fb6 |
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1](https://huggingface.co/habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:51:09.635098](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1/blob/main/results_2023-12-02T14-51-09.635098.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1 | [
"region:us"
] | 2023-12-02T14:51:12+00:00 | {"pretty_name": "Evaluation run of habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1](https://huggingface.co/habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:51:09.635098](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1/blob/main/results_2023-12-02T14-51-09.635098.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_51_09.635098", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-51-09.635098.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-51-09.635098.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_51_09.635098", "path": ["results_2023-12-02T14-51-09.635098.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-51-09.635098.parquet"]}]}]} | 2023-12-02T14:51:17+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
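For instance, a minimal sketch using the Hugging Face `datasets` library (the repository and config names below are taken from the full card above; runs are also stored under timestamped splits):

```python
from datasets import load_dataset

# Load the per-sample GSM8K details; the "train" split mirrors the most recent run.
data = load_dataset(
    "open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1",
    "harness_gsm8k_5",
    split="train",
)
```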
## Latest results
These are the latest results from run 2023-12-02T14:51:09.635098 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:51:09.635098(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:51:09.635098(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
41,
31,
190,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:51:09.635098(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7b63923b8ccd9232bbe9e3df5d02e7b4f89c59ac |
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:51:26.830602](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1/blob/main/results_2023-12-02T14-51-26.830602.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.01288855193328279,
"acc_stderr": 0.003106901266499671
},
"harness|gsm8k|5": {
"acc": 0.01288855193328279,
"acc_stderr": 0.003106901266499671
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1 | [
"region:us"
] | 2023-12-02T14:51:29+00:00 | {"pretty_name": "Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1", "dataset_summary": "Dataset automatically created during the evaluation run of model [habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:51:26.830602](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1/blob/main/results_2023-12-02T14-51-26.830602.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.01288855193328279,\n \"acc_stderr\": 0.003106901266499671\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01288855193328279,\n \"acc_stderr\": 0.003106901266499671\n }\n}\n```", "repo_url": "https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T14_51_26.830602", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-51-26.830602.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-51-26.830602.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T14_51_26.830602", "path": ["results_2023-12-02T14-51-26.830602.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-51-26.830602.parquet"]}]}]} | 2023-12-02T14:51:34+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
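For instance, a minimal sketch using the Hugging Face `datasets` library (the repository and config names below are taken from the full card above; the "latest" split is the one referenced in the results note below):

```python
from datasets import load_dataset

# Load the GSM8K details from the most recent run via the "latest" split.
data = load_dataset(
    "open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1",
    "harness_gsm8k_5",
    split="latest",
)
```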
## Latest results
These are the latest results from run 2023-12-02T14:51:26.830602 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:51:26.830602(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:51:26.830602(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
52,
31,
201,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:51:26.830602(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information"
] |
c2a9cff6f28f8791522d3ecdc54354338b903848 |
# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2](https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots-v2",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T15:09:06.633053](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots-v2/blob/main/results_2023-12-02T15-09-06.633053.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.55420773313116,
"acc_stderr": 0.013691305174506691
},
"harness|gsm8k|5": {
"acc": 0.55420773313116,
"acc_stderr": 0.013691305174506691
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots-v2 | [
"region:us"
] | 2023-12-02T15:03:52+00:00 | {"pretty_name": "Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2](https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots-v2\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:09:06.633053](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots-v2/blob/main/results_2023-12-02T15-09-06.633053.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.55420773313116,\n \"acc_stderr\": 0.013691305174506691\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.55420773313116,\n \"acc_stderr\": 0.013691305174506691\n }\n}\n```", "repo_url": "https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_03_49.963342", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-03-49.963342.parquet"]}, {"split": "2023_12_02T15_09_06.633053", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-09-06.633053.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-09-06.633053.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_03_49.963342", "path": ["results_2023-12-02T15-03-49.963342.parquet"]}, {"split": "2023_12_02T15_09_06.633053", "path": ["results_2023-12-02T15-09-06.633053.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-09-06.633053.parquet"]}]}]} | 2023-12-02T15:09:14+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
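For instance, a minimal sketch using the Hugging Face `datasets` library (the repository and config names below are taken from the full card above; the aggregated "results" configuration mentioned earlier can be loaded the same way):

```python
from datasets import load_dataset

# Per-sample GSM8K details for the latest run.
details = load_dataset(
    "open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots-v2",
    "harness_gsm8k_5",
    split="train",
)

# Aggregated metrics for the latest run (the "results" configuration).
results = load_dataset(
    "open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots-v2",
    "results",
    split="latest",
)
```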
## Latest results
These are the latest results from run 2023-12-02T15:09:06.633053 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:09:06.633053(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:09:06.633053(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
33,
31,
182,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/openhermes-2_5-dpo-no-robots-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:09:06.633053(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c5be2337e19253ad8b12a49a8a4ac0f7cce41921 |
# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [openaccess-ai-collective/openhermes-2_5-dpo-no-robots](https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots",
"harness_gsm8k_5",
split="train")
```
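The aggregated scores live in the "results" configuration mentioned above. A minimal sketch of pulling the latest aggregated entry (assuming the same `datasets` API as the snippet above; the "results" config and "latest" split names come from this card's configuration list) could look like this:

```python
from datasets import load_dataset

# Load the aggregated "results" configuration; the "latest" split always
# points to the most recent evaluation run of this model.
results = load_dataset(
    "open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots",
    "results",
    split="latest",
)

# Each row stores the aggregated metrics of one run.
print(results[0])
```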
## Latest results
These are the [latest results from run 2023-12-02T15:08:25.673533](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots/blob/main/results_2023-12-02T15-08-25.673533.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5526914329037149,
"acc_stderr": 0.013695795709089898
},
"harness|gsm8k|5": {
"acc": 0.5526914329037149,
"acc_stderr": 0.013695795709089898
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots | [
"region:us"
] | 2023-12-02T15:08:28+00:00 | {"pretty_name": "Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots", "dataset_summary": "Dataset automatically created during the evaluation run of model [openaccess-ai-collective/openhermes-2_5-dpo-no-robots](https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:08:25.673533](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots/blob/main/results_2023-12-02T15-08-25.673533.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5526914329037149,\n \"acc_stderr\": 0.013695795709089898\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5526914329037149,\n \"acc_stderr\": 0.013695795709089898\n }\n}\n```", "repo_url": "https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_08_25.673533", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-08-25.673533.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-08-25.673533.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_08_25.673533", "path": ["results_2023-12-02T15-08-25.673533.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-08-25.673533.parquet"]}]}]} | 2023-12-02T15:08:33+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model openaccess-ai-collective/openhermes-2_5-dpo-no-robots on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:08:25.673533(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/openhermes-2_5-dpo-no-robots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:08:25.673533(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/openhermes-2_5-dpo-no-robots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:08:25.673533(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
30,
31,
179,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/openhermes-2_5-dpo-no-robots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:08:25.673533(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6b02a49da0d80173659b03b3ceb62cae28b1e2c3 | https://github.com/ruixiangcui/WikiResNLI_NatResNLI
```
@inproceedings{cui-etal-2023-failure,
title = "What does the Failure to Reason with {``}Respectively{''} in Zero/Few-Shot Settings Tell Us about Language Models?",
author = "Cui, Ruixiang and
Lee, Seolhwa and
Hershcovich, Daniel and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.489",
pages = "8786--8800",
abstract = "Humans can effortlessly understand the coordinate structure of sentences such as {``}Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, *respectively*{''}. In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of {``}respectively{''}. We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.",
}
``` | tasksource/resnli | [
"license:cc-by-4.0",
"region:us"
] | 2023-12-02T15:09:45+00:00 | {"license": "cc-by-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "config", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4691316, "num_examples": 25232}, {"name": "validation", "num_bytes": 801878, "num_examples": 4624}, {"name": "test", "num_bytes": 1224540, "num_examples": 7216}], "download_size": 956275, "dataset_size": 6717734}} | 2023-12-05T09:47:40+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| URL
| [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] | [
15
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n"
] |
cbb485353c16e01b94c7ad6bcf65d2a1a4176029 |
# Dataset Card for Evaluation run of rinna/youri-7b-instruction
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/rinna/youri-7b-instruction
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [rinna/youri-7b-instruction](https://huggingface.co/rinna/youri-7b-instruction) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rinna__youri-7b-instruction",
"harness_gsm8k_5",
split="train")
```
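If you are unsure which harness configurations exist for this repository, one way to list them is the standard `datasets` helper shown below (a sketch, not something specific to this card):

```python
from datasets import get_dataset_config_names

# List every configuration stored in this details repository,
# e.g. "harness_gsm8k_5" and the aggregated "results" config.
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_rinna__youri-7b-instruction"
)
print(configs)
```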
## Latest results
These are the [latest results from run 2023-12-02T15:10:01.872073](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__youri-7b-instruction/blob/main/results_2023-12-02T15-10-01.872073.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_rinna__youri-7b-instruction | [
"region:us"
] | 2023-12-02T15:10:04+00:00 | {"pretty_name": "Evaluation run of rinna/youri-7b-instruction", "dataset_summary": "Dataset automatically created during the evaluation run of model [rinna/youri-7b-instruction](https://huggingface.co/rinna/youri-7b-instruction) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rinna__youri-7b-instruction\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:10:01.872073](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__youri-7b-instruction/blob/main/results_2023-12-02T15-10-01.872073.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/rinna/youri-7b-instruction", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_10_01.872073", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-10-01.872073.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-10-01.872073.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_10_01.872073", "path": ["results_2023-12-02T15-10-01.872073.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-10-01.872073.parquet"]}]}]} | 2023-12-02T15:10:10+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of rinna/youri-7b-instruction
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model rinna/youri-7b-instruction on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:10:01.872073(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of rinna/youri-7b-instruction",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b-instruction on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:10:01.872073(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of rinna/youri-7b-instruction",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b-instruction on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:10:01.872073(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
17,
31,
166,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of rinna/youri-7b-instruction## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b-instruction on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:10:01.872073(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
358f651e884a919686e2d455fe11782fc897fe3b |
# Dataset Card for Evaluation run of rinna/youri-7b-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/rinna/youri-7b-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [rinna/youri-7b-chat](https://huggingface.co/rinna/youri-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rinna__youri-7b-chat",
"harness_gsm8k_5",
split="train")
```
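Because this repository holds 4 runs, each run is also reachable through its own timestamped split. A sketch (the split name below follows the timestamp pattern described above and is taken from this card's configuration list; adjust it to the run you want) would be:

```python
from datasets import load_dataset

# Load one specific run by its timestamped split instead of the "train"/"latest" alias.
run = load_dataset(
    "open-llm-leaderboard/details_rinna__youri-7b-chat",
    "harness_gsm8k_5",
    split="2023_12_02T15_12_23.080545",  # example timestamped split for this repo
)
print(run)
```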
## Latest results
These are the [latest results from run 2023-12-02T15:12:23.080545](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__youri-7b-chat/blob/main/results_2023-12-02T15-12-23.080545.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.013646702047005308,
"acc_stderr": 0.0031957470754808235
},
"harness|gsm8k|5": {
"acc": 0.013646702047005308,
"acc_stderr": 0.0031957470754808235
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_rinna__youri-7b-chat | [
"region:us"
] | 2023-12-02T15:11:39+00:00 | {"pretty_name": "Evaluation run of rinna/youri-7b-chat", "dataset_summary": "Dataset automatically created during the evaluation run of model [rinna/youri-7b-chat](https://huggingface.co/rinna/youri-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rinna__youri-7b-chat\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:12:23.080545](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__youri-7b-chat/blob/main/results_2023-12-02T15-12-23.080545.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.013646702047005308,\n \"acc_stderr\": 0.0031957470754808235\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.013646702047005308,\n \"acc_stderr\": 0.0031957470754808235\n }\n}\n```", "repo_url": "https://huggingface.co/rinna/youri-7b-chat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_11_37.192628", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-11-37.192628.parquet"]}, {"split": "2023_12_02T15_11_43.336973", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-11-43.336973.parquet"]}, {"split": "2023_12_02T15_11_58.617219", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-11-58.617219.parquet"]}, {"split": "2023_12_02T15_12_23.080545", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-12-23.080545.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-12-23.080545.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_11_37.192628", "path": ["results_2023-12-02T15-11-37.192628.parquet"]}, {"split": "2023_12_02T15_11_43.336973", "path": ["results_2023-12-02T15-11-43.336973.parquet"]}, {"split": "2023_12_02T15_11_58.617219", "path": ["results_2023-12-02T15-11-58.617219.parquet"]}, {"split": "2023_12_02T15_12_23.080545", "path": ["results_2023-12-02T15-12-23.080545.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-12-23.080545.parquet"]}]}]} | 2023-12-02T15:12:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of rinna/youri-7b-chat
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model rinna/youri-7b-chat on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:12:23.080545(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of rinna/youri-7b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:12:23.080545(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of rinna/youri-7b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:12:23.080545(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
17,
31,
166,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of rinna/youri-7b-chat## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:12:23.080545(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8b1f621f264e4071442355517a6c6d7ddb1be93a |
# Dataset Card for Evaluation run of iGenius-AI-Team/LLAMA-13B-test-finetuning
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/iGenius-AI-Team/LLAMA-13B-test-finetuning
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [iGenius-AI-Team/LLAMA-13B-test-finetuning](https://huggingface.co/iGenius-AI-Team/LLAMA-13B-test-finetuning) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_iGenius-AI-Team__LLAMA-13B-test-finetuning",
"harness_gsm8k_5",
split="train")
```
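For quick inspection of the per-example GSM8K details, it can be convenient to convert the split to a pandas DataFrame (a sketch assuming `pandas` is installed; the column names depend on the harness output):

```python
import pandas as pd
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_iGenius-AI-Team__LLAMA-13B-test-finetuning",
    "harness_gsm8k_5",
    split="train",
)

# Convert to a DataFrame to eyeball individual predictions and metrics.
df = data.to_pandas()
print(df.head())
```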
## Latest results
These are the [latest results from run 2023-12-02T15:13:07.767154](https://huggingface.co/datasets/open-llm-leaderboard/details_iGenius-AI-Team__LLAMA-13B-test-finetuning/blob/main/results_2023-12-02T15-13-07.767154.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.22517058377558757,
"acc_stderr": 0.011505385424294625
},
"harness|gsm8k|5": {
"acc": 0.22517058377558757,
"acc_stderr": 0.011505385424294625
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_iGenius-AI-Team__LLAMA-13B-test-finetuning | [
"region:us"
] | 2023-12-02T15:13:10+00:00 | {"pretty_name": "Evaluation run of iGenius-AI-Team/LLAMA-13B-test-finetuning", "dataset_summary": "Dataset automatically created during the evaluation run of model [iGenius-AI-Team/LLAMA-13B-test-finetuning](https://huggingface.co/iGenius-AI-Team/LLAMA-13B-test-finetuning) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_iGenius-AI-Team__LLAMA-13B-test-finetuning\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:13:07.767154](https://huggingface.co/datasets/open-llm-leaderboard/details_iGenius-AI-Team__LLAMA-13B-test-finetuning/blob/main/results_2023-12-02T15-13-07.767154.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.22517058377558757,\n \"acc_stderr\": 0.011505385424294625\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.22517058377558757,\n \"acc_stderr\": 0.011505385424294625\n }\n}\n```", "repo_url": "https://huggingface.co/iGenius-AI-Team/LLAMA-13B-test-finetuning", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_13_07.767154", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-13-07.767154.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-13-07.767154.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_13_07.767154", "path": ["results_2023-12-02T15-13-07.767154.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-13-07.767154.parquet"]}]}]} | 2023-12-02T15:13:15+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of iGenius-AI-Team/LLAMA-13B-test-finetuning
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model iGenius-AI-Team/LLAMA-13B-test-finetuning on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:13:07.767154(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of iGenius-AI-Team/LLAMA-13B-test-finetuning",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model iGenius-AI-Team/LLAMA-13B-test-finetuning on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:13:07.767154(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of iGenius-AI-Team/LLAMA-13B-test-finetuning",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model iGenius-AI-Team/LLAMA-13B-test-finetuning on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:13:07.767154(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
27,
31,
176,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of iGenius-AI-Team/LLAMA-13B-test-finetuning## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model iGenius-AI-Team/LLAMA-13B-test-finetuning on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:13:07.767154(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
da7af1b358b9d2a48bcceb481297a6d31f555f17 | # Synthetic CommonSense
Generated using ChatGPT4, originally from https://huggingface.co/datasets/commonsense_qa
Notebook at https://github.com/mesolitica/malaysian-dataset/tree/master/question-answer/chatgpt4-commonsense
- [synthetic-commonsense.jsonl](synthetic-commonsense.jsonl), 36332 rows, 7.34 MB.
## Example data
```python
{'question': '1. Seseorang yang bersara mungkin perlu kembali bekerja jika mereka apa?\n A. mempunyai hutang\n B. mencari pendapatan\n C. meninggalkan pekerjaan\n D. memerlukan wang\n E. kehilangan kunci',
'answer': 'D'}
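
# A minimal loading sketch (an assumption, not part of the original card):
# read a local copy of synthetic-commonsense.jsonl (the file listed above)
# line by line with the standard-library json module.
import json

with open("synthetic-commonsense.jsonl") as f:
    rows = [json.loads(line) for line in f]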
``` | mesolitica/chatgpt4-commonsense-qa | [
"task_categories:question-answering",
"language:ms",
"region:us"
] | 2023-12-02T15:21:55+00:00 | {"language": ["ms"], "task_categories": ["question-answering"]} | 2024-02-02T05:59:21+00:00 | [] | [
"ms"
] | TAGS
#task_categories-question-answering #language-Malay (macrolanguage) #region-us
| # Synthetic CommonSense
Generated using ChatGPT4, originally from URL
Notebook at URL
- URL, 36332 rows, 7.34 MB.
## Example data
| [
"# Synthetic CommonSense\n\nGenerated using ChatGPT4, originally from URL\n\nNotebook at URL\n\n- URL, 36332 rows, 7.34 MB.",
"## Example data"
] | [
"TAGS\n#task_categories-question-answering #language-Malay (macrolanguage) #region-us \n",
"# Synthetic CommonSense\n\nGenerated using ChatGPT4, originally from URL\n\nNotebook at URL\n\n- URL, 36332 rows, 7.34 MB.",
"## Example data"
] | [
28,
35,
4
] | [
"passage: TAGS\n#task_categories-question-answering #language-Malay (macrolanguage) #region-us \n# Synthetic CommonSense\n\nGenerated using ChatGPT4, originally from URL\n\nNotebook at URL\n\n- URL, 36332 rows, 7.34 MB.## Example data"
] |
c20ae6a9903c9e2ecc086909dd65dd0625189b9e |
# Dataset Card for Evaluation run of rinna/youri-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/rinna/youri-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [rinna/youri-7b](https://huggingface.co/rinna/youri-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rinna__youri-7b",
"harness_gsm8k_5",
split="train")
```
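
The aggregated metrics mentioned above live in the "results" configuration; a minimal sketch for loading the most recent run (assuming the "latest" split, as listed in this card's configs) could be:

```python
from datasets import load_dataset

# Load the aggregated results for the most recent evaluation run of this model.
results = load_dataset("open-llm-leaderboard/details_rinna__youri-7b",
	"results",
	split="latest")
```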
## Latest results
These are the [latest results from run 2023-12-02T15:23:30.420258](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__youri-7b/blob/main/results_2023-12-02T15-23-30.420258.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.08642911296436695,
"acc_stderr": 0.007740044337103775
},
"harness|gsm8k|5": {
"acc": 0.08642911296436695,
"acc_stderr": 0.007740044337103775
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_rinna__youri-7b | [
"region:us"
] | 2023-12-02T15:23:35+00:00 | {"pretty_name": "Evaluation run of rinna/youri-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [rinna/youri-7b](https://huggingface.co/rinna/youri-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rinna__youri-7b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:23:30.420258](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__youri-7b/blob/main/results_2023-12-02T15-23-30.420258.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.08642911296436695,\n \"acc_stderr\": 0.007740044337103775\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08642911296436695,\n \"acc_stderr\": 0.007740044337103775\n }\n}\n```", "repo_url": "https://huggingface.co/rinna/youri-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_23_30.420258", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-23-30.420258.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-23-30.420258.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_23_30.420258", "path": ["results_2023-12-02T15-23-30.420258.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-23-30.420258.parquet"]}]}]} | 2023-12-02T15:23:49+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of rinna/youri-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model rinna/youri-7b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:23:30.420258 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of rinna/youri-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:23:30.420258(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of rinna/youri-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:23:30.420258(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
15,
31,
164,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of rinna/youri-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/youri-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:23:30.420258(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
78d3b1c6272db98ad0d5634d3a8943542c64f25f |
# Dataset Card for Evaluation run of ajibawa-2023/SlimOrca-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ajibawa-2023/SlimOrca-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [ajibawa-2023/SlimOrca-13B](https://huggingface.co/ajibawa-2023/SlimOrca-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B",
"harness_gsm8k_5",
split="train")
```
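
The aggregated metrics mentioned above live in the "results" configuration; a minimal sketch for loading the most recent run (assuming the "latest" split, as listed in this card's configs) could be:

```python
from datasets import load_dataset

# Load the aggregated results for the most recent evaluation run of this model.
results = load_dataset("open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B",
	"results",
	split="latest")
```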
## Latest results
These are the [latest results from run 2023-12-02T15:27:16.441316](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B/blob/main/results_2023-12-02T15-27-16.441316.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3995451099317665,
"acc_stderr": 0.013491660298815988
},
"harness|gsm8k|5": {
"acc": 0.3995451099317665,
"acc_stderr": 0.013491660298815988
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B | [
"region:us"
] | 2023-12-02T15:27:22+00:00 | {"pretty_name": "Evaluation run of ajibawa-2023/SlimOrca-13B", "dataset_summary": "Dataset automatically created during the evaluation run of model [ajibawa-2023/SlimOrca-13B](https://huggingface.co/ajibawa-2023/SlimOrca-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:27:16.441316](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B/blob/main/results_2023-12-02T15-27-16.441316.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3995451099317665,\n \"acc_stderr\": 0.013491660298815988\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3995451099317665,\n \"acc_stderr\": 0.013491660298815988\n }\n}\n```", "repo_url": "https://huggingface.co/ajibawa-2023/SlimOrca-13B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_27_16.441316", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-27-16.441316.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-27-16.441316.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_27_16.441316", "path": ["results_2023-12-02T15-27-16.441316.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-27-16.441316.parquet"]}]}]} | 2023-12-02T15:27:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of ajibawa-2023/SlimOrca-13B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model ajibawa-2023/SlimOrca-13B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:27:16.441316 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of ajibawa-2023/SlimOrca-13B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/SlimOrca-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:27:16.441316(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of ajibawa-2023/SlimOrca-13B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/SlimOrca-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:27:16.441316(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
21,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ajibawa-2023/SlimOrca-13B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/SlimOrca-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:27:16.441316(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5c3b51916dace941ad3ad60e218a45a572b487b5 | # Dataset Card for "inat_2021_train_mini_plantae"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mikehemberger/inat_2021_train_mini_plantae | [
"region:us"
] | 2023-12-02T15:33:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "05729_Plantae_Bryophyta_Bryopsida_Bryales_Bryaceae_Bryum_argenteum", "1": "05730_Plantae_Bryophyta_Bryopsida_Bryales_Bryaceae_Rhodobryum_ontariense", "2": "05731_Plantae_Bryophyta_Bryopsida_Bryales_Mniaceae_Leucolepis_acanthoneura", "3": "05732_Plantae_Bryophyta_Bryopsida_Bryales_Mniaceae_Plagiomnium_cuspidatum", "4": "05733_Plantae_Bryophyta_Bryopsida_Bryales_Mniaceae_Plagiomnium_insigne", "5": "05734_Plantae_Bryophyta_Bryopsida_Bryales_Mniaceae_Rhizomnium_glabrescens", "6": "05735_Plantae_Bryophyta_Bryopsida_Dicranales_Dicranaceae_Dicranum_scoparium", "7": "05736_Plantae_Bryophyta_Bryopsida_Dicranales_Ditrichaceae_Ceratodon_purpureus", "8": "05737_Plantae_Bryophyta_Bryopsida_Dicranales_Leucobryaceae_Leucobryum_glaucum", "9": "05738_Plantae_Bryophyta_Bryopsida_Funariales_Funariaceae_Funaria_hygrometrica", "10": "05739_Plantae_Bryophyta_Bryopsida_Funariales_Funariaceae_Physcomitrium_pyriforme", "11": "05740_Plantae_Bryophyta_Bryopsida_Grimmiales_Grimmiaceae_Grimmia_pulvinata", "12": "05741_Plantae_Bryophyta_Bryopsida_Grimmiales_Grimmiaceae_Racomitrium_lanuginosum", "13": "05742_Plantae_Bryophyta_Bryopsida_Hedwigiales_Hedwigiaceae_Hedwigia_ciliata", "14": "05743_Plantae_Bryophyta_Bryopsida_Hypnales_Anomodontaceae_Anomodon_attenuatus", "15": "05744_Plantae_Bryophyta_Bryopsida_Hypnales_Brachytheciaceae_Kindbergia_oregana", "16": "05745_Plantae_Bryophyta_Bryopsida_Hypnales_Hylocomiaceae_Hylocomium_splendens", "17": "05746_Plantae_Bryophyta_Bryopsida_Hypnales_Hylocomiaceae_Pleurozium_schreberi", "18": "05747_Plantae_Bryophyta_Bryopsida_Hypnales_Hylocomiaceae_Rhytidiadelphus_loreus", "19": "05748_Plantae_Bryophyta_Bryopsida_Hypnales_Hylocomiaceae_Rhytidiadelphus_squarrosus", "20": "05749_Plantae_Bryophyta_Bryopsida_Hypnales_Hylocomiaceae_Rhytidiadelphus_triquetrus", "21": "05750_Plantae_Bryophyta_Bryopsida_Hypnales_Hypnaceae_Hypnum_cupressiforme", "22": "05751_Plantae_Bryophyta_Bryopsida_Hypnales_Lembophyllaceae_Isothecium_stoloniferum", "23": "05752_Plantae_Bryophyta_Bryopsida_Hypnales_Leucodontaceae_Dendroalsia_abietina", "24": "05753_Plantae_Bryophyta_Bryopsida_Hypnales_Neckeraceae_Neckera_pennata", "25": "05754_Plantae_Bryophyta_Bryopsida_Hypnales_Plagiotheciaceae_Plagiothecium_undulatum", "26": "05755_Plantae_Bryophyta_Bryopsida_Hypnales_Pylaisiaceae_Ptilium_crista-castrensis", "27": "05756_Plantae_Bryophyta_Bryopsida_Hypnales_Thuidiaceae_Thuidium_delicatulum", "28": "05757_Plantae_Bryophyta_Bryopsida_Pottiales_Pottiaceae_Syntrichia_ruralis", "29": "05758_Plantae_Bryophyta_Bryopsida_Pottiales_Pottiaceae_Tortula_muralis", "30": "05759_Plantae_Bryophyta_Polytrichopsida_Polytrichales_Polytrichaceae_Dawsonia_superba", "31": "05760_Plantae_Bryophyta_Polytrichopsida_Polytrichales_Polytrichaceae_Polytrichum_commune", "32": "05761_Plantae_Bryophyta_Polytrichopsida_Polytrichales_Polytrichaceae_Polytrichum_juniperinum", "33": "05762_Plantae_Bryophyta_Polytrichopsida_Polytrichales_Polytrichaceae_Polytrichum_piliferum", "34": "05763_Plantae_Bryophyta_Sphagnopsida_Sphagnales_Sphagnaceae_Sphagnum_squarrosum", "35": "05764_Plantae_Chlorophyta_Ulvophyceae_Bryopsidales_Codiaceae_Codium_fragile", "36": "05765_Plantae_Chlorophyta_Ulvophyceae_Cladophorales_Cladophoraceae_Cladophora_columbiana", "37": 
"05766_Plantae_Chlorophyta_Ulvophyceae_Ulvales_Ulvaceae_Ulva_intestinalis", "38": "05767_Plantae_Chlorophyta_Ulvophyceae_Ulvales_Ulvaceae_Ulva_lactuca", "39": "05768_Plantae_Marchantiophyta_Jungermanniopsida_Jungermanniales_Lepidoziaceae_Bazzania_trilobata", "40": "05769_Plantae_Marchantiophyta_Jungermanniopsida_Porellales_Radulaceae_Radula_complanata", "41": "05770_Plantae_Marchantiophyta_Marchantiopsida_Lunulariales_Lunulariaceae_Lunularia_cruciata", "42": "05771_Plantae_Marchantiophyta_Marchantiopsida_Marchantiales_Conocephalaceae_Conocephalum_conicum", "43": "05772_Plantae_Marchantiophyta_Marchantiopsida_Marchantiales_Conocephalaceae_Conocephalum_salebrosum", "44": "05773_Plantae_Marchantiophyta_Marchantiopsida_Marchantiales_Marchantiaceae_Marchantia_polymorpha", "45": "05774_Plantae_Marchantiophyta_Marchantiopsida_Marchantiales_Rebouliaceae_Asterella_californica", "46": "05775_Plantae_Rhodophyta_Florideophyceae_Ceramiales_Rhodomelaceae_Neorhodomela_larix", "47": "05776_Plantae_Rhodophyta_Florideophyceae_Corallinales_Corallinaceae_Corallina_officinalis", "48": "05777_Plantae_Rhodophyta_Florideophyceae_Gigartinales_Endocladiaceae_Endocladia_muricata", "49": "05778_Plantae_Rhodophyta_Florideophyceae_Gigartinales_Gigartinaceae_Chondracanthus_exasperatus", "50": "05779_Plantae_Rhodophyta_Florideophyceae_Gigartinales_Gigartinaceae_Chondrus_crispus", "51": "05780_Plantae_Rhodophyta_Florideophyceae_Gigartinales_Phyllophoraceae_Mastocarpus_papillatus", "52": "05781_Plantae_Rhodophyta_Florideophyceae_Palmariales_Palmariaceae_Halosaccion_glandiforme", "53": "05782_Plantae_Tracheophyta_Cycadopsida_Cycadales_Zamiaceae_Zamia_integrifolia", "54": "05783_Plantae_Tracheophyta_Gnetopsida_Ephedrales_Ephedraceae_Ephedra_trifurca", "55": "05784_Plantae_Tracheophyta_Gnetopsida_Ephedrales_Ephedraceae_Ephedra_viridis", "56": "05785_Plantae_Tracheophyta_Liliopsida_Acorales_Acoraceae_Acorus_calamus", "57": "05786_Plantae_Tracheophyta_Liliopsida_Alismatales_Alismataceae_Alisma_plantago-aquatica", "58": "05787_Plantae_Tracheophyta_Liliopsida_Alismatales_Alismataceae_Alisma_triviale", "59": "05788_Plantae_Tracheophyta_Liliopsida_Alismatales_Alismataceae_Sagittaria_lancifolia", "60": "05789_Plantae_Tracheophyta_Liliopsida_Alismatales_Alismataceae_Sagittaria_latifolia", "61": "05790_Plantae_Tracheophyta_Liliopsida_Alismatales_Alismataceae_Sagittaria_sagittifolia", "62": "05791_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Alocasia_odora", "63": "05792_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Arisaema_dracontium", "64": "05793_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Arisaema_triphyllum", "65": "05794_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Arum_italicum", "66": "05795_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Arum_maculatum", "67": "05796_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Calla_palustris", "68": "05797_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Colocasia_esculenta", "69": "05798_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Epipremnum_aureum", "70": "05799_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Lemna_minor", "71": "05800_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Lemna_trisulca", "72": "05801_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Lysichiton_americanus", "73": "05802_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Monstera_deliciosa", "74": "05803_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Orontium_aquaticum", "75": 
"05804_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Peltandra_virginica", "76": "05805_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Pistia_stratiotes", "77": "05806_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Spirodela_polyrhiza", "78": "05807_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Symplocarpus_foetidus", "79": "05808_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Syngonium_podophyllum", "80": "05809_Plantae_Tracheophyta_Liliopsida_Alismatales_Araceae_Zantedeschia_aethiopica", "81": "05810_Plantae_Tracheophyta_Liliopsida_Alismatales_Butomaceae_Butomus_umbellatus", "82": "05811_Plantae_Tracheophyta_Liliopsida_Alismatales_Hydrocharitaceae_Elodea_canadensis", "83": "05812_Plantae_Tracheophyta_Liliopsida_Alismatales_Hydrocharitaceae_Hydrocharis_morsus-ranae", "84": "05813_Plantae_Tracheophyta_Liliopsida_Alismatales_Hydrocharitaceae_Stratiotes_aloides", "85": "05814_Plantae_Tracheophyta_Liliopsida_Alismatales_Juncaginaceae_Triglochin_maritima", "86": "05815_Plantae_Tracheophyta_Liliopsida_Alismatales_Potamogetonaceae_Potamogeton_crispus", "87": "05816_Plantae_Tracheophyta_Liliopsida_Alismatales_Potamogetonaceae_Potamogeton_natans", "88": "05817_Plantae_Tracheophyta_Liliopsida_Alismatales_Potamogetonaceae_Potamogeton_nodosus", "89": "05818_Plantae_Tracheophyta_Liliopsida_Alismatales_Potamogetonaceae_Potamogeton_perfoliatus", "90": "05819_Plantae_Tracheophyta_Liliopsida_Alismatales_Scheuchzeriaceae_Scheuchzeria_palustris", "91": "05820_Plantae_Tracheophyta_Liliopsida_Alismatales_Tofieldiaceae_Triantha_glutinosa", "92": "05821_Plantae_Tracheophyta_Liliopsida_Alismatales_Zosteraceae_Phyllospadix_torreyi", "93": "05822_Plantae_Tracheophyta_Liliopsida_Alismatales_Zosteraceae_Zostera_marina", "94": "05823_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Chamaerops_humilis", "95": "05824_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Cocos_nucifera", "96": "05825_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Phoenix_canariensis", "97": "05826_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Phoenix_dactylifera", "98": "05827_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Phoenix_reclinata", "99": "05828_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Rhopalostylis_sapida", "100": "05829_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Sabal_minor", "101": "05830_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Sabal_palmetto", "102": "05831_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Serenoa_repens", "103": "05832_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Washingtonia_filifera", "104": "05833_Plantae_Tracheophyta_Liliopsida_Arecales_Arecaceae_Washingtonia_robusta", "105": "05834_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Agapanthus_praecox", "106": "05835_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_acuminatum", "107": "05836_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_canadense", "108": "05837_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_cernuum", "109": "05838_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_drummondii", "110": "05839_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_neapolitanum", "111": "05840_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_oleraceum", "112": "05841_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_schoenoprasum", "113": "05842_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_stellatum", "114": 
"05843_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_tricoccum", "115": "05844_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_triquetrum", "116": "05845_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_ursinum", "117": "05846_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Allium_vineale", "118": "05847_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Amaryllis_belladonna", "119": "05848_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Boophone_disticha", "120": "05849_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Brunsvigia_orientalis", "121": "05850_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Crinum_americanum", "122": "05851_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Crinum_asiaticum", "123": "05852_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Galanthus_nivalis", "124": "05853_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Habranthus_tubispathus", "125": "05854_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Haemanthus_coccineus", "126": "05855_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Haemanthus_sanguineus", "127": "05856_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Hymenocallis_liriosme", "128": "05857_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Hymenocallis_littoralis", "129": "05858_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Ipheion_uniflorum", "130": "05859_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Leucojum_aestivum", "131": "05860_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Leucojum_vernum", "132": "05861_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Lycoris_radiata", "133": "05862_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Narcissus_poeticus", "134": "05863_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Narcissus_pseudonarcissus", "135": "05864_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Narcissus_tazetta", "136": "05865_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Nothoscordum_bivalve", "137": "05866_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Pancratium_maritimum", "138": "05867_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Scadoxus_puniceus", "139": "05868_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Zephyranthes_atamasco", "140": "05869_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Zephyranthes_chlorosolen", "141": "05870_Plantae_Tracheophyta_Liliopsida_Asparagales_Amaryllidaceae_Zephyranthes_drummondii", "142": "05871_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Agave_americana", "143": "05872_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Agave_deserti", "144": "05873_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Agave_lechuguilla", "145": "05874_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Agave_parryi", "146": "05875_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Agave_salmiana", "147": "05876_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Agave_shawii", "148": "05877_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Agave_utahensis", "149": "05878_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Asparagus_acutifolius", "150": "05879_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Asparagus_aethiopicus", "151": "05880_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Asparagus_asparagoides", "152": 
"05881_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Asparagus_officinalis", "153": "05882_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Asparagus_scandens", "154": "05883_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Bloomeria_crocea", "155": "05884_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Brodiaea_elegans", "156": "05885_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Brodiaea_terrestris", "157": "05886_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Camassia_leichtlinii", "158": "05887_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Camassia_quamash", "159": "05888_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Camassia_scilloides", "160": "05889_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Chlorogalum_pomeridianum", "161": "05890_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Convallaria_majalis", "162": "05891_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Cordyline_australis", "163": "05892_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Cordyline_fruticosa", "164": "05893_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Dasylirion_texanum", "165": "05894_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Dasylirion_wheeleri", "166": "05895_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Dichelostemma_capitatum", "167": "05896_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Dichelostemma_congestum", "168": "05897_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Dichelostemma_volubile", "169": "05898_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Hesperocallis_undulata", "170": "05899_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Hesperoyucca_whipplei", "171": "05900_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Hyacinthoides_hispanica", "172": "05901_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Hyacinthoides_non-scripta", "173": "05902_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Leopoldia_comosa", "174": "05903_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Leucocrinum_montanum", "175": "05904_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Liriope_muscari", "176": "05905_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Maianthemum_bifolium", "177": "05906_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Maianthemum_canadense", "178": "05907_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Maianthemum_dilatatum", "179": "05908_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Maianthemum_racemosum", "180": "05909_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Maianthemum_stellatum", "181": "05910_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Maianthemum_trifolium", "182": "05911_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Manfreda_virginica", "183": "05912_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Milla_biflora", "184": "05913_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Muilla_maritima", "185": "05914_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Muscari_armeniacum", "186": "05915_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Muscari_botryoides", "187": "05916_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Muscari_neglectum", "188": "05917_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Nolina_parryi", "189": "05918_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Nolina_texana", "190": 
"05919_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Ornithogalum_umbellatum", "191": "05920_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Polygonatum_biflorum", "192": "05921_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Polygonatum_multiflorum", "193": "05922_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Polygonatum_odoratum", "194": "05923_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Polygonatum_pubescens", "195": "05924_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Prospero_autumnale", "196": "05925_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Ruscus_aculeatus", "197": "05926_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Scilla_bifolia", "198": "05927_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Scilla_forbesii", "199": "05928_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Scilla_siberica", "200": "05929_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Triteleia_grandiflora", "201": "05930_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Triteleia_hyacinthina", "202": "05931_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Triteleia_ixioides", "203": "05932_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Triteleia_laxa", "204": "05933_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_arkansana", "205": "05934_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_baccata", "206": "05935_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_brevifolia", "207": "05936_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_elata", "208": "05937_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_filamentosa", "209": "05938_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_filifera", "210": "05939_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_glauca", "211": "05940_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_pallida", "212": "05941_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_rupicola", "213": "05942_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_schidigera", "214": "05943_Plantae_Tracheophyta_Liliopsida_Asparagales_Asparagaceae_Yucca_treculeana", "215": "05944_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Aloe_arborescens", "216": "05945_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Aloe_ferox", "217": "05946_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Aloe_vera", "218": "05947_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Asphodelus_fistulosus", "219": "05948_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Asphodelus_ramosus", "220": "05949_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Dianella_nigra", "221": "05950_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Hemerocallis_fulva", "222": "05951_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Kniphofia_uvaria", "223": "05952_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Phormium_cookianum", "224": "05953_Plantae_Tracheophyta_Liliopsida_Asparagales_Asphodelaceae_Phormium_tenax", "225": "05954_Plantae_Tracheophyta_Liliopsida_Asparagales_Asteliaceae_Astelia_fragrans", "226": "05955_Plantae_Tracheophyta_Liliopsida_Asparagales_Asteliaceae_Astelia_hastata", "227": "05956_Plantae_Tracheophyta_Liliopsida_Asparagales_Asteliaceae_Astelia_nervosa", "228": "05957_Plantae_Tracheophyta_Liliopsida_Asparagales_Hypoxidaceae_Hypoxis_hirsuta", "229": 
"05958_Plantae_Tracheophyta_Liliopsida_Asparagales_Hypoxidaceae_Pauridia_capensis", "230": "05959_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Alophia_drummondii", "231": "05960_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Chasmanthe_floribunda", "232": "05961_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Crocus_tommasinianus", "233": "05962_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Crocus_vernus", "234": "05963_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Gladiolus_italicus", "235": "05964_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Herbertia_lahue", "236": "05965_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_cristata", "237": "05966_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_domestica", "238": "05967_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_douglasiana", "239": "05968_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_foetidissima", "240": "05969_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_lacustris", "241": "05970_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_missouriensis", "242": "05971_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_pseudacorus", "243": "05972_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_ruthenica", "244": "05973_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_setosa", "245": "05974_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_sibirica", "246": "05975_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_tenax", "247": "05976_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_verna", "248": "05977_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_versicolor", "249": "05978_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Iris_virginica", "250": "05979_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Moraea_gawleri", "251": "05980_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Moraea_sisyrinchium", "252": "05981_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Nemastylis_geminiflora", "253": "05982_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Olsynium_douglasii", "254": "05983_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Romulea_rosea", "255": "05984_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Sisyrinchium_angustifolium", "256": "05985_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Sisyrinchium_bellum", "257": "05986_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Sisyrinchium_montanum", "258": "05987_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Sisyrinchium_rosulatum", "259": "05988_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Sparaxis_tricolor", "260": "05989_Plantae_Tracheophyta_Liliopsida_Asparagales_Iridaceae_Tigridia_pavonia", "261": "05990_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Acianthus_caudatus", "262": "05991_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Acianthus_sinclairii", "263": "05992_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Anacamptis_coriophora", "264": "05993_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Anacamptis_morio", "265": "05994_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Anacamptis_papilionacea", "266": "05995_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Anacamptis_pyramidalis", "267": "05996_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Aplectrum_hyemale", "268": "05997_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Arethusa_bulbosa", "269": 
"05998_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Arundina_graminifolia", "270": "05999_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Caladenia_carnea", "271": "06000_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Calopogon_tuberosus", "272": "06001_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Calypso_bulbosa", "273": "06002_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cephalanthera_austiniae", "274": "06003_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cephalanthera_damasonium", "275": "06004_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cephalanthera_longifolia", "276": "06005_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cephalanthera_rubra", "277": "06006_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Chiloglottis_cornuta", "278": "06007_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Corallorhiza_maculata", "279": "06008_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Corallorhiza_mertensiana", "280": "06009_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Corallorhiza_striata", "281": "06010_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Corallorhiza_trifida", "282": "06011_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cypripedium_acaule", "283": "06012_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cypripedium_calceolus", "284": "06013_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cypripedium_candidum", "285": "06014_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cypripedium_macranthos", "286": "06015_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cypripedium_montanum", "287": "06016_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cypripedium_parviflorum", "288": "06017_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Cypripedium_reginae", "289": "06018_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dactylorhiza_fuchsii", "290": "06019_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dactylorhiza_incarnata", "291": "06020_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dactylorhiza_maculata", "292": "06021_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dactylorhiza_majalis", "293": "06022_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dactylorhiza_sambucina", "294": "06023_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dactylorhiza_viridis", "295": "06024_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dendrobium_cunninghamii", "296": "06025_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dichromanthus_cinnabarinus", "297": "06026_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Dipodium_roseum", "298": "06027_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Disa_bracteata", "299": "06028_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Diuris_orientis", "300": "06029_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Diuris_pardina", "301": "06030_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Earina_autumnalis", "302": "06031_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Earina_mucronata", "303": "06032_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Epidendrum_radicans", "304": "06033_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Epipactis_atrorubens", "305": "06034_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Epipactis_gigantea", "306": "06035_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Epipactis_helleborine", "307": 
"06036_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Epipactis_palustris", "308": "06037_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Galearis_spectabilis", "309": "06038_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Glossodia_major", "310": "06039_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Goodyera_oblongifolia", "311": "06040_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Goodyera_pubescens", "312": "06041_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Goodyera_repens", "313": "06042_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Goodyera_tesselata", "314": "06043_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Gymnadenia_conopsea", "315": "06044_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Gymnadenia_rhellicani", "316": "06045_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Himantoglossum_hircinum", "317": "06046_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Himantoglossum_robertianum", "318": "06047_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Limodorum_abortivum", "319": "06048_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Liparis_liliifolia", "320": "06049_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Liparis_loeselii", "321": "06050_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Microtis_unifolia", "322": "06051_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Neotinea_tridentata", "323": "06052_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Neotinea_ustulata", "324": "06053_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Neottia_cordata", "325": "06054_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Neottia_nidus-avis", "326": "06055_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Neottia_ovata", "327": "06056_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Oeceoclades_maculata", "328": "06057_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ophrys_apifera", "329": "06058_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ophrys_fuciflora", "330": "06059_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ophrys_fusca", "331": "06060_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ophrys_insectifera", "332": "06061_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ophrys_lutea", "333": "06062_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ophrys_scolopax", "334": "06063_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ophrys_sphegodes", "335": "06064_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ophrys_tenthredinifera", "336": "06065_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Orchis_anthropophora", "337": "06066_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Orchis_italica", "338": "06067_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Orchis_mascula", "339": "06068_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Orchis_militaris", "340": "06069_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Orchis_purpurea", "341": "06070_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_aquilonis", "342": "06071_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_bifolia", "343": "06072_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_blephariglottis", "344": "06073_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_chlorantha", "345": "06074_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_ciliaris", "346": 
"06075_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_clavellata", "347": "06076_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_dilatata", "348": "06077_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_elegans", "349": "06078_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_huronensis", "350": "06079_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_lacera", "351": "06080_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_orbiculata", "352": "06081_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_psycodes", "353": "06082_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_stricta", "354": "06083_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Platanthera_transversa", "355": "06084_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Pogonia_ophioglossoides", "356": "06085_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Ponerorchis_cucullata", "357": "06086_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Prasophyllum_colensoi", "358": "06087_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Pterostylis_banksii", "359": "06088_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Pterostylis_melagramma", "360": "06089_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Pterostylis_nana", "361": "06090_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Pterostylis_nutans", "362": "06091_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Pterostylis_pedunculata", "363": "06092_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Pyrorchis_nigricans", "364": "06093_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Serapias_lingua", "365": "06094_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Spathoglottis_plicata", "366": "06095_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Spiranthes_cernua", "367": "06096_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Spiranthes_incurva", "368": "06097_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Spiranthes_lacera", "369": "06098_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Spiranthes_magnicamporum", "370": "06099_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Spiranthes_romanzoffiana", "371": "06100_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Spiranthes_vernalis", "372": "06101_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Thelymitra_longifolia", "373": "06102_Plantae_Tracheophyta_Liliopsida_Asparagales_Orchidaceae_Tipularia_discolor", "374": "06103_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Commelina_africana", "375": "06104_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Commelina_communis", "376": "06105_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Commelina_diffusa", "377": "06106_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Commelina_erecta", "378": "06107_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Commelina_virginica", "379": "06108_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Tinantia_anomala", "380": "06109_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Tradescantia_fluminensis", "381": "06110_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Tradescantia_occidentalis", "382": "06111_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Tradescantia_ohiensis", "383": "06112_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Tradescantia_pallida", 
"384": "06113_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Tradescantia_spathacea", "385": "06114_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Tradescantia_virginiana", "386": "06115_Plantae_Tracheophyta_Liliopsida_Commelinales_Commelinaceae_Tradescantia_zebrina", "387": "06116_Plantae_Tracheophyta_Liliopsida_Commelinales_Haemodoraceae_Lachnanthes_caroliniana", "388": "06117_Plantae_Tracheophyta_Liliopsida_Commelinales_Haemodoraceae_Wachendorfia_paniculata", "389": "06118_Plantae_Tracheophyta_Liliopsida_Commelinales_Pontederiaceae_Pontederia_cordata", "390": "06119_Plantae_Tracheophyta_Liliopsida_Commelinales_Pontederiaceae_Pontederia_crassipes", "391": "06120_Plantae_Tracheophyta_Liliopsida_Dioscoreales_Dioscoreaceae_Dioscorea_bulbifera", "392": "06121_Plantae_Tracheophyta_Liliopsida_Dioscoreales_Dioscoreaceae_Dioscorea_communis", "393": "06122_Plantae_Tracheophyta_Liliopsida_Dioscoreales_Dioscoreaceae_Dioscorea_villosa", "394": "06123_Plantae_Tracheophyta_Liliopsida_Dioscoreales_Nartheciaceae_Narthecium_ossifragum", "395": "06124_Plantae_Tracheophyta_Liliopsida_Liliales_Colchicaceae_Burchardia_umbellata", "396": "06125_Plantae_Tracheophyta_Liliopsida_Liliales_Colchicaceae_Colchicum_autumnale", "397": "06126_Plantae_Tracheophyta_Liliopsida_Liliales_Colchicaceae_Gloriosa_superba", "398": "06127_Plantae_Tracheophyta_Liliopsida_Liliales_Colchicaceae_Uvularia_grandiflora", "399": "06128_Plantae_Tracheophyta_Liliopsida_Liliales_Colchicaceae_Uvularia_perfoliata", "400": "06129_Plantae_Tracheophyta_Liliopsida_Liliales_Colchicaceae_Uvularia_sessilifolia", "401": "06130_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_albus", "402": "06131_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_amabilis", "403": "06132_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_argillosus", "404": "06133_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_catalinae", "405": "06134_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_clavatus", "406": "06135_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_gunnisonii", "407": "06136_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_invenustus", "408": "06137_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_kennedyi", "409": "06138_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_leichtlinii", "410": "06139_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_luteus", "411": "06140_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_macrocarpus", "412": "06141_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_monophyllus", "413": "06142_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_nuttallii", "414": "06143_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_plummerae", "415": "06144_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_pulchellus", "416": "06145_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_splendens", "417": "06146_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_tolmiei", "418": "06147_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_umbellatus", "419": "06148_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_venustus", "420": "06149_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Calochortus_weedii", "421": "06150_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Clintonia_andrewsiana", "422": "06151_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Clintonia_borealis", 
"423": "06152_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Clintonia_uniflora", "424": "06153_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Erythronium_albidum", "425": "06154_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Erythronium_americanum", "426": "06155_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Erythronium_grandiflorum", "427": "06156_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Erythronium_montanum", "428": "06157_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Erythronium_oregonum", "429": "06158_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Erythronium_sibiricum", "430": "06159_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Erythronium_umbilicatum", "431": "06160_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Fritillaria_affinis", "432": "06161_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Fritillaria_atropurpurea", "433": "06162_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Fritillaria_biflora", "434": "06163_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Fritillaria_camschatcensis", "435": "06164_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Fritillaria_pudica", "436": "06165_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Fritillaria_recurva", "437": "06166_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Gagea_lutea", "438": "06167_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Gagea_minima", "439": "06168_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_bulbiferum", "440": "06169_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_canadense", "441": "06170_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_columbianum", "442": "06171_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_formosanum", "443": "06172_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_humboldtii", "444": "06173_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_lancifolium", "445": "06174_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_martagon", "446": "06175_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_michiganense", "447": "06176_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_pardalinum", "448": "06177_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_parvum", "449": "06178_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_philadelphicum", "450": "06179_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_superbum", "451": "06180_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Lilium_washingtonianum", "452": "06181_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Medeola_virginiana", "453": "06182_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Prosartes_hookeri", "454": "06183_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Prosartes_smithii", "455": "06184_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Prosartes_trachycarpa", "456": "06185_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Scoliopus_bigelovii", "457": "06186_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Streptopus_amplexifolius", "458": "06187_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Streptopus_lanceolatus", "459": "06188_Plantae_Tracheophyta_Liliopsida_Liliales_Liliaceae_Tulipa_sylvestris", "460": "06189_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Anticlea_elegans", "461": "06190_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Paris_quadrifolia", "462": "06191_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Toxicoscordion_fremontii", "463": 
"06192_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Toxicoscordion_venenosum", "464": "06193_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_albidum", "465": "06194_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_catesbaei", "466": "06195_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_cernuum", "467": "06196_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_chloropetalum", "468": "06197_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_cuneatum", "469": "06198_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_erectum", "470": "06199_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_flexipes", "471": "06200_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_grandiflorum", "472": "06201_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_luteum", "473": "06202_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_nivale", "474": "06203_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_ovatum", "475": "06204_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_recurvatum", "476": "06205_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_sessile", "477": "06206_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Trillium_undulatum", "478": "06207_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Veratrum_californicum", "479": "06208_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Veratrum_lobelianum", "480": "06209_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Veratrum_viride", "481": "06210_Plantae_Tracheophyta_Liliopsida_Liliales_Melanthiaceae_Xerophyllum_tenax", "482": "06211_Plantae_Tracheophyta_Liliopsida_Liliales_Ripogonaceae_Ripogonum_scandens", "483": "06212_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_aspera", "484": "06213_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_auriculata", "485": "06214_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_bona-nox", "486": "06215_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_glauca", "487": "06216_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_herbacea", "488": "06217_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_lasioneura", "489": "06218_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_laurifolia", "490": "06219_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_pumila", "491": "06220_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_rotundifolia", "492": "06221_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_smallii", "493": "06222_Plantae_Tracheophyta_Liliopsida_Liliales_Smilacaceae_Smilax_tamnoides", "494": "06223_Plantae_Tracheophyta_Liliopsida_Pandanales_Pandanaceae_Freycinetia_banksii", "495": "06224_Plantae_Tracheophyta_Liliopsida_Pandanales_Pandanaceae_Pandanus_tectorius", "496": "06225_Plantae_Tracheophyta_Liliopsida_Poales_Bromeliaceae_Bromelia_pinguin", "497": "06226_Plantae_Tracheophyta_Liliopsida_Poales_Bromeliaceae_Tillandsia_fasciculata", "498": "06227_Plantae_Tracheophyta_Liliopsida_Poales_Bromeliaceae_Tillandsia_recurvata", "499": "06228_Plantae_Tracheophyta_Liliopsida_Poales_Bromeliaceae_Tillandsia_usneoides", "500": "06229_Plantae_Tracheophyta_Liliopsida_Poales_Bromeliaceae_Tillandsia_utriculata", "501": "06230_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Bolboschoenus_maritimus", "502": "06231_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_albursina", "503": 
"06232_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_aurea", "504": "06233_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_blanda", "505": "06234_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_canescens", "506": "06235_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_cherokeensis", "507": "06236_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_comosa", "508": "06237_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_crinita", "509": "06238_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_digitata", "510": "06239_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_eburnea", "511": "06240_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_echinata", "512": "06241_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_flava", "513": "06242_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_gracillima", "514": "06243_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_granularis", "515": "06244_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_grayi", "516": "06245_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_hirta", "517": "06246_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_hystericina", "518": "06247_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_intumescens", "519": "06248_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_leptalea", "520": "06249_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_lupulina", "521": "06250_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_lurida", "522": "06251_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_macrocephala", "523": "06252_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_magellanica", "524": "06253_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_obnupta", "525": "06254_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_oligosperma", "526": "06255_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_pallescens", "527": "06256_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_pedunculata", "528": "06257_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_pendula", "529": "06258_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_pensylvanica", "530": "06259_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_pilosa", "531": "06260_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_plantaginea", "532": "06261_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_pseudocyperus", "533": "06262_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_pumila", "534": "06263_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_rosea", "535": "06264_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_rostrata", "536": "06265_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_spicata", "537": "06266_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_sprengelii", "538": "06267_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_squarrosa", "539": "06268_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_stipata", "540": "06269_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_sylvatica", "541": "06270_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_utriculata", "542": "06271_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_vesicaria", "543": "06272_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_viridula", "544": "06273_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Carex_vulpinoidea", "545": "06274_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cladium_mariscus", "546": 
"06275_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_brevifolius", "547": "06276_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_echinatus", "548": "06277_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_eragrostis", "549": "06278_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_esculentus", "550": "06279_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_involucratus", "551": "06280_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_odoratus", "552": "06281_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_rotundus", "553": "06282_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_strigosus", "554": "06283_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Cyperus_ustulatus", "555": "06284_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Dulichium_arundinaceum", "556": "06285_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Eleocharis_palustris", "557": "06286_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Eriophorum_angustifolium", "558": "06287_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Eriophorum_vaginatum", "559": "06288_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Eriophorum_virginicum", "560": "06289_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Ficinia_nodosa", "561": "06290_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Ficinia_spiralis", "562": "06291_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Rhynchospora_alba", "563": "06292_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Rhynchospora_colorata", "564": "06293_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Schoenoplectus_acutus", "565": "06294_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Schoenoplectus_californicus", "566": "06295_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Schoenoplectus_pungens", "567": "06296_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Schoenoplectus_tabernaemontani", "568": "06297_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Scirpus_atrovirens", "569": "06298_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Scirpus_cyperinus", "570": "06299_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Scirpus_microcarpus", "571": "06300_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Scirpus_pendulus", "572": "06301_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Scirpus_sylvaticus", "573": "06302_Plantae_Tracheophyta_Liliopsida_Poales_Cyperaceae_Trichophorum_cespitosum", "574": "06303_Plantae_Tracheophyta_Liliopsida_Poales_Eriocaulaceae_Eriocaulon_aquaticum", "575": "06304_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Juncus_acutus", "576": "06305_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Juncus_articulatus", "577": "06306_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Juncus_balticus", "578": "06307_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Juncus_bufonius", "579": "06308_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Juncus_compressus", "580": "06309_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Juncus_effusus", "581": "06310_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Juncus_tenuis", "582": "06311_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Juncus_torreyi", "583": "06312_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Luzula_campestris", "584": "06313_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Luzula_multiflora", "585": "06314_Plantae_Tracheophyta_Liliopsida_Poales_Juncaceae_Luzula_pilosa", "586": "06315_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Achnatherum_hymenoides", "587": 
"06316_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Achnatherum_miliaceum", "588": "06317_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Agropyron_cristatum", "589": "06318_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Alopecurus_aequalis", "590": "06319_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Alopecurus_pratensis", "591": "06320_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Ammophila_arenaria", "592": "06321_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Ammophila_breviligulata", "593": "06322_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Andropogon_gerardi", "594": "06323_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Andropogon_glomeratus", "595": "06324_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Andropogon_ternarius", "596": "06325_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Andropogon_virginicus", "597": "06326_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Anthoxanthum_odoratum", "598": "06327_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Aristida_purpurea", "599": "06328_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Arrhenatherum_elatius", "600": "06329_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Arundinaria_gigantea", "601": "06330_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Arundo_donax", "602": "06331_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Austroderia_richardii", "603": "06332_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Avena_barbata", "604": "06333_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Avena_fatua", "605": "06334_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bothriochloa_ischaemum", "606": "06335_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bothriochloa_laguroides", "607": "06336_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bouteloua_curtipendula", "608": "06337_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bouteloua_dactyloides", "609": "06338_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bouteloua_gracilis", "610": "06339_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bouteloua_hirsuta", "611": "06340_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bouteloua_rigidiseta", "612": "06341_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Briza_maxima", "613": "06342_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Briza_media", "614": "06343_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Briza_minor", "615": "06344_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bromus_catharticus", "616": "06345_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bromus_diandrus", "617": "06346_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bromus_hordeaceus", "618": "06347_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bromus_inermis", "619": "06348_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bromus_rubens", "620": "06349_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Bromus_tectorum", "621": "06350_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Calamagrostis_canadensis", "622": "06351_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Calamagrostis_epigejos", "623": "06352_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Cenchrus_ciliaris", "624": "06353_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Cenchrus_setaceus", "625": "06354_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Chasmanthium_latifolium", "626": "06355_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Chloris_barbata", "627": "06356_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Cortaderia_jubata", "628": "06357_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Cortaderia_selloana", "629": "06358_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Cynodon_dactylon", "630": 
"06359_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Cynosurus_echinatus", "631": "06360_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Dactylis_glomerata", "632": "06361_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Dactyloctenium_aegyptium", "633": "06362_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Danthonia_spicata", "634": "06363_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Dasyochloa_pulchella", "635": "06364_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Deschampsia_cespitosa", "636": "06365_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Dichanthelium_clandestinum", "637": "06366_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Digitaria_sanguinalis", "638": "06367_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Distichlis_spicata", "639": "06368_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Echinochloa_crus-galli", "640": "06369_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Ehrharta_erecta", "641": "06370_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Eleusine_indica", "642": "06371_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Elymus_canadensis", "643": "06372_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Elymus_elymoides", "644": "06373_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Elymus_hystrix", "645": "06374_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Elymus_repens", "646": "06375_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Elymus_virginicus", "647": "06376_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Glyceria_canadensis", "648": "06377_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Glyceria_maxima", "649": "06378_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Glyceria_striata", "650": "06379_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Hilaria_rigida", "651": "06380_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Holcus_lanatus", "652": "06381_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Hordeum_jubatum", "653": "06382_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Hordeum_murinum", "654": "06383_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Hordeum_pusillum", "655": "06384_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Imperata_cylindrica", "656": "06385_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Koeleria_macrantha", "657": "06386_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Lagurus_ovatus", "658": "06387_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Lamarckia_aurea", "659": "06388_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Leersia_oryzoides", "660": "06389_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Leymus_condensatus", "661": "06390_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Leymus_mollis", "662": "06391_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Lolium_arundinaceum", "663": "06392_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Lolium_perenne", "664": "06393_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Lolium_pratense", "665": "06394_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Melica_nutans", "666": "06395_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Melinis_repens", "667": "06396_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Microstegium_vimineum", "668": "06397_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Milium_effusum", "669": "06398_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Miscanthus_sinensis", "670": "06399_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Muhlenbergia_rigens", "671": "06400_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Nassella_leucotricha", "672": "06401_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Nassella_pulchra", "673": 
"06402_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Oplismenus_hirtellus", "674": "06403_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Oryzopsis_asperifolia", "675": "06404_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Panicum_capillare", "676": "06405_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Panicum_maximum", "677": "06406_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Panicum_virgatum", "678": "06407_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Paspalum_dilatatum", "679": "06408_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Paspalum_notatum", "680": "06409_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Paspalum_urvillei", "681": "06410_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Pennisetum_clandestinum", "682": "06411_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Phalaris_aquatica", "683": "06412_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Phalaris_arundinacea", "684": "06413_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Phleum_alpinum", "685": "06414_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Phleum_pratense", "686": "06415_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Phragmites_australis", "687": "06416_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Poa_annua", "688": "06417_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Poa_bulbosa", "689": "06418_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Poa_pratensis", "690": "06419_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Polypogon_monspeliensis", "691": "06420_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Schizachyrium_scoparium", "692": "06421_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Setaria_faberi", "693": "06422_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Setaria_palmifolia", "694": "06423_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Setaria_parviflora", "695": "06424_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Setaria_pumila", "696": "06425_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Setaria_viridis", "697": "06426_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Sorghastrum_nutans", "698": "06427_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Sorghum_halepense", "699": "06428_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Spinifex_sericeus", "700": "06429_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Sporobolus_alterniflorus", "701": "06430_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Sporobolus_michauxianus", "702": "06431_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Stenotaphrum_secundatum", "703": "06432_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Stipa_pennata", "704": "06433_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Themeda_triandra", "705": "06434_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Tridens_albescens", "706": "06435_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Tridens_flavus", "707": "06436_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Tripsacum_dactyloides", "708": "06437_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Triticum_aestivum", "709": "06438_Plantae_Tracheophyta_Liliopsida_Poales_Poaceae_Uniola_paniculata", "710": "06439_Plantae_Tracheophyta_Liliopsida_Poales_Restionaceae_Apodasmia_similis", "711": "06440_Plantae_Tracheophyta_Liliopsida_Poales_Typhaceae_Sparganium_emersum", "712": "06441_Plantae_Tracheophyta_Liliopsida_Poales_Typhaceae_Sparganium_erectum", "713": "06442_Plantae_Tracheophyta_Liliopsida_Poales_Typhaceae_Sparganium_eurycarpum", "714": "06443_Plantae_Tracheophyta_Liliopsida_Poales_Typhaceae_Typha_angustifolia", "715": "06444_Plantae_Tracheophyta_Liliopsida_Poales_Typhaceae_Typha_domingensis", "716": 
"06445_Plantae_Tracheophyta_Liliopsida_Poales_Typhaceae_Typha_latifolia", "717": "06446_Plantae_Tracheophyta_Liliopsida_Poales_Typhaceae_Typha_orientalis", "718": "06447_Plantae_Tracheophyta_Liliopsida_Zingiberales_Cannaceae_Canna_indica", "719": "06448_Plantae_Tracheophyta_Liliopsida_Zingiberales_Costaceae_Hellenia_speciosa", "720": "06449_Plantae_Tracheophyta_Liliopsida_Zingiberales_Heliconiaceae_Heliconia_latispatha", "721": "06450_Plantae_Tracheophyta_Liliopsida_Zingiberales_Heliconiaceae_Heliconia_psittacorum", "722": "06451_Plantae_Tracheophyta_Liliopsida_Zingiberales_Heliconiaceae_Heliconia_rostrata", "723": "06452_Plantae_Tracheophyta_Liliopsida_Zingiberales_Marantaceae_Thalia_geniculata", "724": "06453_Plantae_Tracheophyta_Liliopsida_Zingiberales_Musaceae_Musa_acuminata", "725": "06454_Plantae_Tracheophyta_Liliopsida_Zingiberales_Zingiberaceae_Alpinia_purpurata", "726": "06455_Plantae_Tracheophyta_Liliopsida_Zingiberales_Zingiberaceae_Alpinia_zerumbet", "727": "06456_Plantae_Tracheophyta_Liliopsida_Zingiberales_Zingiberaceae_Etlingera_elatior", "728": "06457_Plantae_Tracheophyta_Liliopsida_Zingiberales_Zingiberaceae_Hedychium_coronarium", "729": "06458_Plantae_Tracheophyta_Liliopsida_Zingiberales_Zingiberaceae_Hedychium_gardnerianum", "730": "06459_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Dendrolycopodium_dendroideum", "731": "06460_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Dendrolycopodium_hickeyi", "732": "06461_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Dendrolycopodium_obscurum", "733": "06462_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Diphasiastrum_complanatum", "734": "06463_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Diphasiastrum_digitatum", "735": "06464_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Huperzia_lucidula", "736": "06465_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Huperzia_selago", "737": "06466_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Lycopodiella_cernua", "738": "06467_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Lycopodiella_inundata", "739": "06468_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Lycopodium_clavatum", "740": "06469_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Lycopodium_deuterodensum", "741": "06470_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Lycopodium_fastigiatum", "742": "06471_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Lycopodium_scariosum", "743": "06472_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Lycopodium_volubile", "744": "06473_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Phlegmariurus_varius", "745": "06474_Plantae_Tracheophyta_Lycopodiopsida_Lycopodiales_Lycopodiaceae_Spinulum_annotinum", "746": "06475_Plantae_Tracheophyta_Lycopodiopsida_Selaginellales_Selaginellaceae_Selaginella_bigelovii", "747": "06476_Plantae_Tracheophyta_Lycopodiopsida_Selaginellales_Selaginellaceae_Selaginella_kraussiana", "748": "06477_Plantae_Tracheophyta_Lycopodiopsida_Selaginellales_Selaginellaceae_Selaginella_lepidophylla", "749": "06478_Plantae_Tracheophyta_Lycopodiopsida_Selaginellales_Selaginellaceae_Selaginella_rupestris", "750": "06479_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Aegopodium_podagraria", "751": "06480_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Angelica_archangelica", "752": 
"06481_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Angelica_atropurpurea", "753": "06482_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Angelica_capitellata", "754": "06483_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Angelica_sylvestris", "755": "06484_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Anthriscus_caucalis", "756": "06485_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Anthriscus_sylvestris", "757": "06486_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Apium_prostratum", "758": "06487_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Astrantia_major", "759": "06488_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Bifora_americana", "760": "06489_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Bowlesia_incana", "761": "06490_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Carum_carvi", "762": "06491_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Centella_asiatica", "763": "06492_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Chaerophyllum_aromaticum", "764": "06493_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Chaerophyllum_tainturieri", "765": "06494_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Cicuta_bulbifera", "766": "06495_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Cicuta_maculata", "767": "06496_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Cicuta_virosa", "768": "06497_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Conium_maculatum", "769": "06498_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Crithmum_maritimum", "770": "06499_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Cryptotaenia_canadensis", "771": "06500_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Cyclospermum_leptophyllum", "772": "06501_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Daucus_carota", "773": "06502_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Daucus_pusillus", "774": "06503_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Erigenia_bulbosa", "775": "06504_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Eryngium_campestre", "776": "06505_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Eryngium_hookeri", "777": "06506_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Eryngium_leavenworthii", "778": "06507_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Eryngium_maritimum", "779": "06508_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Eryngium_planum", "780": "06509_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Eryngium_yuccifolium", "781": "06510_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Falcaria_vulgaris", "782": "06511_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Ferula_communis", "783": "06512_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Foeniculum_vulgare", "784": "06513_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Heracleum_mantegazzianum", "785": "06514_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Heracleum_maximum", "786": "06515_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Heracleum_sosnowskyi", "787": "06516_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Heracleum_sphondylium", "788": "06517_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Ligusticum_scoticum", "789": "06518_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Lomatium_dissectum", "790": "06519_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Lomatium_triternatum", "791": "06520_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Lomatium_utriculatum", "792": 
"06521_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Oenanthe_aquatica", "793": "06522_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Oenanthe_sarmentosa", "794": "06523_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Osmorhiza_berteroi", "795": "06524_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Osmorhiza_claytonii", "796": "06525_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Osmorhiza_longistylis", "797": "06526_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Oxypolis_rigidior", "798": "06527_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Pastinaca_sativa", "799": "06528_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Pimpinella_saxifraga", "800": "06529_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Polytaenia_texana", "801": "06530_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Pseudocymopterus_montanus", "802": "06531_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Sanicula_arctopoides", "803": "06532_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Sanicula_bipinnatifida", "804": "06533_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Sanicula_canadensis", "805": "06534_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Sanicula_crassicaulis", "806": "06535_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Sanicula_europaea", "807": "06536_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Sanicula_odorata", "808": "06537_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Scandix_pecten-veneris", "809": "06538_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Seseli_libanotis", "810": "06539_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Sium_suave", "811": "06540_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Smyrnium_olusatrum", "812": "06541_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Taenidia_integerrima", "813": "06542_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Torilis_arvensis", "814": "06543_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Torilis_japonica", "815": "06544_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Torilis_nodosa", "816": "06545_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Zizia_aptera", "817": "06546_Plantae_Tracheophyta_Magnoliopsida_Apiales_Apiaceae_Zizia_aurea", "818": "06547_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Aralia_californica", "819": "06548_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Aralia_elata", "820": "06549_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Aralia_hispida", "821": "06550_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Aralia_nudicaulis", "822": "06551_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Aralia_racemosa", "823": "06552_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Aralia_spinosa", "824": "06553_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Cussonia_spicata", "825": "06554_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Fatsia_japonica", "826": "06555_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Hedera_helix", "827": "06556_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Hydrocotyle_bonariensis", "828": "06557_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Hydrocotyle_moschata", "829": "06558_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Hydrocotyle_ranunculoides", "830": "06559_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Hydrocotyle_umbellata", "831": "06560_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Oplopanax_horridus", "832": 
"06561_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Panax_quinquefolius", "833": "06562_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Panax_trifolius", "834": "06563_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Pseudopanax_arboreus", "835": "06564_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Pseudopanax_colensoi", "836": "06565_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Pseudopanax_crassifolius", "837": "06566_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Pseudopanax_lessonii", "838": "06567_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Raukaua_simplex", "839": "06568_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Schefflera_actinophylla", "840": "06569_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Schefflera_arboricola", "841": "06570_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Schefflera_digitata", "842": "06571_Plantae_Tracheophyta_Magnoliopsida_Apiales_Araliaceae_Schefflera_heptaphylla", "843": "06572_Plantae_Tracheophyta_Magnoliopsida_Apiales_Griseliniaceae_Griselinia_littoralis", "844": "06573_Plantae_Tracheophyta_Magnoliopsida_Apiales_Pennantiaceae_Pennantia_corymbosa", "845": "06574_Plantae_Tracheophyta_Magnoliopsida_Apiales_Pittosporaceae_Bursaria_spinosa", "846": "06575_Plantae_Tracheophyta_Magnoliopsida_Apiales_Pittosporaceae_Pittosporum_crassifolium", "847": "06576_Plantae_Tracheophyta_Magnoliopsida_Apiales_Pittosporaceae_Pittosporum_eugenioides", "848": "06577_Plantae_Tracheophyta_Magnoliopsida_Apiales_Pittosporaceae_Pittosporum_tenuifolium", "849": "06578_Plantae_Tracheophyta_Magnoliopsida_Apiales_Pittosporaceae_Pittosporum_undulatum", "850": "06579_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_aquifolium", "851": "06580_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_cassine", "852": "06581_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_cornuta", "853": "06582_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_decidua", "854": "06583_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_glabra", "855": "06584_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_mucronata", "856": "06585_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_opaca", "857": "06586_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_verticillata", "858": "06587_Plantae_Tracheophyta_Magnoliopsida_Aquifoliales_Aquifoliaceae_Ilex_vomitoria", "859": "06588_Plantae_Tracheophyta_Magnoliopsida_Asterales_Alseuosmiaceae_Alseuosmia_macrophylla", "860": "06589_Plantae_Tracheophyta_Magnoliopsida_Asterales_Argophyllaceae_Corokia_cotoneaster", "861": "06590_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Achillea_millefolium", "862": "06591_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Achillea_ptarmica", "863": "06592_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Achyrachaena_mollis", "864": "06593_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Acmella_repens", "865": "06594_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Acourtia_microcephala", "866": "06595_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Adenocaulon_bicolor", "867": "06596_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Adenophyllum_porophylloides", "868": "06597_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ageratina_adenophora", "869": "06598_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ageratina_altissima", "870": 
"06599_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ageratina_havanensis", "871": "06600_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ageratum_conyzoides", "872": "06601_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ageratum_houstonianum", "873": "06602_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Agoseris_aurantiaca", "874": "06603_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ambrosia_ambrosioides", "875": "06604_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ambrosia_artemisiifolia", "876": "06605_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ambrosia_chamissonis", "877": "06606_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ambrosia_deltoidea", "878": "06607_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ambrosia_dumosa", "879": "06608_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ambrosia_psilostachya", "880": "06609_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ambrosia_salsola", "881": "06610_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ambrosia_trifida", "882": "06611_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Anaphalioides_bellidioides", "883": "06612_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Anaphalis_margaritacea", "884": "06613_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Anisocarpus_madioides", "885": "06614_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Antennaria_dioica", "886": "06615_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Antennaria_neglecta", "887": "06616_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Antennaria_rosea", "888": "06617_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Anthemis_cotula", "889": "06618_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arctium_lappa", "890": "06619_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arctium_minus", "891": "06620_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arctium_tomentosum", "892": "06621_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arctotheca_calendula", "893": "06622_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arctotheca_prostrata", "894": "06623_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arnica_cordifolia", "895": "06624_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arnica_montana", "896": "06625_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arnoglossum_atriplicifolium", "897": "06626_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Arnoglossum_plantagineum", "898": "06627_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_absinthium", "899": "06628_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_annua", "900": "06629_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_californica", "901": "06630_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_campestris", "902": "06631_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_douglasiana", "903": "06632_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_dracunculus", "904": "06633_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_filifolia", "905": "06634_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_frigida", "906": "06635_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_ludoviciana", "907": "06636_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_pycnocephala", "908": 
"06637_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_stelleriana", "909": "06638_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_tridentata", "910": "06639_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Artemisia_vulgaris", "911": "06640_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Aster_alpinus", "912": "06641_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Atrichoseris_platyphylla", "913": "06642_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Baccharis_halimifolia", "914": "06643_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Baccharis_neglecta", "915": "06644_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Baccharis_pilularis", "916": "06645_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Baccharis_salicifolia", "917": "06646_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Baccharis_sarothroides", "918": "06647_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bahiopsis_laciniata", "919": "06648_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bahiopsis_parishii", "920": "06649_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Baileya_multiradiata", "921": "06650_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Balsamorhiza_sagittata", "922": "06651_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Barkleyanthus_salicifolius", "923": "06652_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bebbia_juncea", "924": "06653_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bellis_perennis", "925": "06654_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Berlandiera_lyrata", "926": "06655_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bidens_aristosa", "927": "06656_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bidens_bipinnata", "928": "06657_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bidens_cernua", "929": "06658_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bidens_frondosa", "930": "06659_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bidens_pilosa", "931": "06660_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Bidens_tripartita", "932": "06661_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Borrichia_frutescens", "933": "06662_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Brachyglottis_repanda", "934": "06663_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Brickellia_californica", "935": "06664_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Brickellia_eupatorioides", "936": "06665_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Calendula_arvensis", "937": "06666_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Calendula_officinalis", "938": "06667_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Calycoseris_wrightii", "939": "06668_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Calyptocarpus_vialis", "940": "06669_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Carduus_acanthoides", "941": "06670_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Carduus_crispus", "942": "06671_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Carduus_nutans", "943": "06672_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Carduus_pycnocephalus", "944": "06673_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Carlina_acaulis", "945": "06674_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Carlina_biebersteinii", "946": 
"06675_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Carlina_vulgaris", "947": "06676_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Celmisia_spectabilis", "948": "06677_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_calcitrapa", "949": "06678_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_cyanus", "950": "06679_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_jacea", "951": "06680_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_melitensis", "952": "06681_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_montana", "953": "06682_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_nigra", "954": "06683_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_phrygia", "955": "06684_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_scabiosa", "956": "06685_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_solstitialis", "957": "06686_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Centaurea_stoebe", "958": "06687_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chaenactis_artemisiifolia", "959": "06688_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chaenactis_carphoclinia", "960": "06689_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chaenactis_douglasii", "961": "06690_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chaenactis_fremontii", "962": "06691_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chaenactis_glabriuscula", "963": "06692_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chaenactis_stevioides", "964": "06693_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chaetopappa_ericoides", "965": "06694_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chondrilla_juncea", "966": "06695_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chromolaena_odorata", "967": "06696_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chrysogonum_virginianum", "968": "06697_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chrysothamnus_viscidiflorus", "969": "06698_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Chuquiraga_jussieui", "970": "06699_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cichorium_intybus", "971": "06700_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_acaule", "972": "06701_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_altissimum", "973": "06702_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_andersonii", "974": "06703_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_arizonicum", "975": "06704_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_arvense", "976": "06705_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_discolor", "977": "06706_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_eriophorum", "978": "06707_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_heterophyllum", "979": "06708_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_horridulum", "980": "06709_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_muticum", "981": "06710_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_neomexicanum", "982": "06711_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_occidentale", "983": "06712_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_oleraceum", "984": 
"06713_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_palustre", "985": "06714_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_pitcheri", "986": "06715_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_scariosum", "987": "06716_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_texanum", "988": "06717_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_undulatum", "989": "06718_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cirsium_vulgare", "990": "06719_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Conoclinium_coelestinum", "991": "06720_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Coreopsis_basalis", "992": "06721_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Coreopsis_lanceolata", "993": "06722_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Coreopsis_major", "994": "06723_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Coreopsis_palmata", "995": "06724_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Coreopsis_tinctoria", "996": "06725_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Coreopsis_tripteris", "997": "06726_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Corethrogyne_filaginifolia", "998": "06727_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cosmos_bipinnatus", "999": "06728_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cosmos_sulphureus", "1000": "06729_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cota_tinctoria", "1001": "06730_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cotula_australis", "1002": "06731_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cotula_coronopifolia", "1003": "06732_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Crassocephalum_crepidioides", "1004": "06733_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Crepis_capillaris", "1005": "06734_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Crepis_tectorum", "1006": "06735_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cyanthillium_cinereum", "1007": "06736_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cyclachaena_xanthiifolia", "1008": "06737_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Cynara_cardunculus", "1009": "06738_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Dahlia_coccinea", "1010": "06739_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Deinandra_fasciculata", "1011": "06740_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Delairea_odorata", "1012": "06741_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Diaperia_prolifera", "1013": "06742_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Diaperia_verna", "1014": "06743_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Dieteria_canescens", "1015": "06744_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Dimorphotheca_fruticosa", "1016": "06745_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Dittrichia_graveolens", "1017": "06746_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Dittrichia_viscosa", "1018": "06747_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Doellingeria_umbellata", "1019": "06748_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Echinacea_angustifolia", "1020": "06749_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Echinacea_pallida", "1021": "06750_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Echinacea_purpurea", "1022": 
"06751_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Echinops_ritro", "1023": "06752_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Echinops_sphaerocephalus", "1024": "06753_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eclipta_prostrata", "1025": "06754_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Elephantopus_carolinianus", "1026": "06755_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Elephantopus_tomentosus", "1027": "06756_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Elytropappus_rhinocerotis", "1028": "06757_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Emilia_fosbergii", "1029": "06758_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Emilia_sonchifolia", "1030": "06759_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Encelia_actoni", "1031": "06760_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Encelia_californica", "1032": "06761_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Encelia_farinosa", "1033": "06762_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Engelmannia_peristenia", "1034": "06763_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erechtites_hieraciifolius", "1035": "06764_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ericameria_cuneata", "1036": "06765_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ericameria_ericoides", "1037": "06766_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ericameria_linearifolia", "1038": "06767_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ericameria_nauseosa", "1039": "06768_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_acris", "1040": "06769_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_annuus", "1041": "06770_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_bonariensis", "1042": "06771_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_canadensis", "1043": "06772_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_compositus", "1044": "06773_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_foliosus", "1045": "06774_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_glaucus", "1046": "06775_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_karvinskianus", "1047": "06776_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_philadelphicus", "1048": "06777_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_pulchellus", "1049": "06778_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_strigosus", "1050": "06779_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Erigeron_sumatrensis", "1051": "06780_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eriocephalus_africanus", "1052": "06781_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eriophyllum_confertiflorum", "1053": "06782_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eriophyllum_lanatum", "1054": "06783_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eriophyllum_staechadifolium", "1055": "06784_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eriophyllum_wallacei", "1056": "06785_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eupatorium_altissimum", "1057": "06786_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eupatorium_cannabinum", "1058": "06787_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eupatorium_capillifolium", "1059": 
"06788_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eupatorium_hyssopifolium", "1060": "06789_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eupatorium_perfoliatum", "1061": "06790_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eupatorium_rotundifolium", "1062": "06791_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eupatorium_serotinum", "1063": "06792_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eurybia_divaricata", "1064": "06793_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eurybia_macrophylla", "1065": "06794_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eurybia_sibirica", "1066": "06795_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Euthamia_graminifolia", "1067": "06796_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eutrochium_fistulosum", "1068": "06797_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eutrochium_maculatum", "1069": "06798_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Eutrochium_purpureum", "1070": "06799_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gaillardia_aestivalis", "1071": "06800_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gaillardia_aristata", "1072": "06801_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gaillardia_pinnatifida", "1073": "06802_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gaillardia_pulchella", "1074": "06803_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Galactites_tomentosus", "1075": "06804_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Galinsoga_parviflora", "1076": "06805_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Galinsoga_quadriradiata", "1077": "06806_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gamochaeta_pensylvanica", "1078": "06807_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gazania_linearis", "1079": "06808_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gazania_rigens", "1080": "06809_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Geraea_canescens", "1081": "06810_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Glebionis_coronaria", "1082": "06811_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gnaphalium_uliginosum", "1083": "06812_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Grindelia_ciliata", "1084": "06813_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Grindelia_squarrosa", "1085": "06814_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Grindelia_stricta", "1086": "06815_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Gutierrezia_sarothrae", "1087": "06816_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hazardia_squarrosa", "1088": "06817_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hedypnois_rhagadioloides", "1089": "06818_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helenium_amarum", "1090": "06819_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helenium_autumnale", "1091": "06820_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helenium_bigelovii", "1092": "06821_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helenium_flexuosum", "1093": "06822_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helenium_puberulum", "1094": "06823_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_angustifolius", "1095": "06824_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_annuus", "1096": "06825_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_debilis", "1097": 
"06826_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_divaricatus", "1098": "06827_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_grosseserratus", "1099": "06828_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_maximiliani", "1100": "06829_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_mollis", "1101": "06830_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_occidentalis", "1102": "06831_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_petiolaris", "1103": "06832_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helianthus_tuberosus", "1104": "06833_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helichrysum_arenarium", "1105": "06834_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Heliopsis_helianthoides", "1106": "06835_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Helminthotheca_echioides", "1107": "06836_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hemizonia_congesta", "1108": "06837_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Heterotheca_grandiflora", "1109": "06838_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Heterotheca_subaxillaris", "1110": "06839_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Heterotheca_villosa", "1111": "06840_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hieracium_albiflorum", "1112": "06841_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hieracium_umbellatum", "1113": "06842_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hieracium_venosum", "1114": "06843_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hymenopappus_artemisiifolius", "1115": "06844_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hymenopappus_scabiosaeus", "1116": "06845_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hypochaeris_glabra", "1117": "06846_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Hypochaeris_radicata", "1118": "06847_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Inula_britannica", "1119": "06848_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Inula_helenium", "1120": "06849_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Inula_salicina", "1121": "06850_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Isocoma_acradenia", "1122": "06851_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Isocoma_menziesii", "1123": "06852_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Iva_annua", "1124": "06853_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Iva_frutescens", "1125": "06854_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ixeris_chinensis", "1126": "06855_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Jacobaea_maritima", "1127": "06856_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Jacobaea_vulgaris", "1128": "06857_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Jaumea_carnosa", "1129": "06858_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Krigia_biflora", "1130": "06859_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Krigia_virginica", "1131": "06860_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lactuca_biennis", "1132": "06861_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lactuca_canadensis", "1133": "06862_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lactuca_floridana", "1134": "06863_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lactuca_serriola", "1135": 
"06864_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lactuca_tatarica", "1136": "06865_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lapsana_communis", "1137": "06866_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lasthenia_californica", "1138": "06867_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lasthenia_gracilis", "1139": "06868_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Layia_glandulosa", "1140": "06869_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Layia_platyglossa", "1141": "06870_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Leontopodium_nivale", "1142": "06871_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lepidospartum_squamatum", "1143": "06872_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Leptosyne_gigantea", "1144": "06873_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Leptosyne_maritima", "1145": "06874_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Leucanthemum_vulgare", "1146": "06875_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Liatris_aspera", "1147": "06876_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Liatris_cylindracea", "1148": "06877_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Liatris_elegans", "1149": "06878_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Liatris_punctata", "1150": "06879_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Liatris_pycnostachya", "1151": "06880_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Liatris_spicata", "1152": "06881_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lindheimera_texana", "1153": "06882_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Lygodesmia_texana", "1154": "06883_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Machaeranthera_tanacetifolia", "1155": "06884_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Madia_elegans", "1156": "06885_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Madia_gracilis", "1157": "06886_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Madia_sativa", "1158": "06887_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Malacothrix_glabrata", "1159": "06888_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Malacothrix_saxatilis", "1160": "06889_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Marshallia_caespitosa", "1161": "06890_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Matricaria_chamomilla", "1162": "06891_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Matricaria_discoidea", "1163": "06892_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Melampodium_leucanthum", "1164": "06893_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Melanthera_nivea", "1165": "06894_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Metalasia_densa", "1166": "06895_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Metalasia_muricata", "1167": "06896_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Micropus_californicus", "1168": "06897_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Mikania_micrantha", "1169": "06898_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Mikania_scandens", "1170": "06899_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Monoptilon_bellioides", "1171": "06900_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Mycelis_muralis", "1172": "06901_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Nabalus_albus", "1173": 
"06902_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Nabalus_altissimus", "1174": "06903_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Nabalus_trifoliolatus", "1175": "06904_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Oclemena_acuminata", "1176": "06905_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Olearia_arborescens", "1177": "06906_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Olearia_avicenniifolia", "1178": "06907_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Olearia_paniculata", "1179": "06908_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Omalotheca_sylvatica", "1180": "06909_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Oncosiphon_pilulifer", "1181": "06910_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Onopordum_acanthium", "1182": "06911_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Osteospermum_moniliferum", "1183": "06912_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ozothamnus_leptophyllus", "1184": "06913_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Packera_anonyma", "1185": "06914_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Packera_aurea", "1186": "06915_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Packera_glabella", "1187": "06916_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Packera_obovata", "1188": "06917_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Packera_paupercula", "1189": "06918_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Packera_tampicana", "1190": "06919_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Palafoxia_arida", "1191": "06920_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Palafoxia_callosa", "1192": "06921_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pallenis_spinosa", "1193": "06922_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Parasenecio_hastatus", "1194": "06923_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Parthenium_hysterophorus", "1195": "06924_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Parthenium_incanum", "1196": "06925_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Parthenium_integrifolium", "1197": "06926_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pectis_papposa", "1198": "06927_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Perityle_emoryi", "1199": "06928_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Petasites_albus", "1200": "06929_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Petasites_frigidus", "1201": "06930_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Petasites_hybridus", "1202": "06931_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Petasites_spurius", "1203": "06932_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Peucephyllum_schottii", "1204": "06933_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Phaenocoma_prolifera", "1205": "06934_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Picris_hieracioides", "1206": "06935_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pilosella_aurantiaca", "1207": "06936_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pilosella_caespitosa", "1208": "06937_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pilosella_officinarum", "1209": "06938_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pinaropappus_roseus", "1210": "06939_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pityopsis_falcata", "1211": 
"06940_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pityopsis_graminifolia", "1212": "06941_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Plectocephalus_americanus", "1213": "06942_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pleurocoronis_pluriseta", "1214": "06943_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pluchea_baccharis", "1215": "06944_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pluchea_camphorata", "1216": "06945_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pluchea_carolinensis", "1217": "06946_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pluchea_odorata", "1218": "06947_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pluchea_sericea", "1219": "06948_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Polymnia_canadensis", "1220": "06949_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Porophyllum_gracile", "1221": "06950_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Prenanthes_purpurea", "1222": "06951_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pseudognaphalium_biolettii", "1223": "06952_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pseudognaphalium_californicum", "1224": "06953_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pseudognaphalium_luteoalbum", "1225": "06954_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pseudognaphalium_obtusifolium", "1226": "06955_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pterocaulon_pycnostachyum", "1227": "06956_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pulicaria_dysenterica", "1228": "06957_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Pyrrhopappus_pauciflorus", "1229": "06958_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Rafinesquia_californica", "1230": "06959_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Rafinesquia_neomexicana", "1231": "06960_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ratibida_columnifera", "1232": "06961_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Ratibida_pinnata", "1233": "06962_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Rudbeckia_amplexicaulis", "1234": "06963_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Rudbeckia_hirta", "1235": "06964_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Rudbeckia_laciniata", "1236": "06965_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Rudbeckia_triloba", "1237": "06966_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Sanvitalia_procumbens", "1238": "06967_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Scolymus_hispanicus", "1239": "06968_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Scorzoneroides_autumnalis", "1240": "06969_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_ampullaceus", "1241": "06970_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_elegans", "1242": "06971_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_flaccidus", "1243": "06972_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_glastifolius", "1244": "06973_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_glomeratus", "1245": "06974_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_inaequidens", "1246": "06975_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_integerrimus", "1247": "06976_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_minimus", "1248": 
"06977_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_ovatus", "1249": "06978_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_pseudoarnica", "1250": "06979_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_quadridentatus", "1251": "06980_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_skirrhodon", "1252": "06981_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_triangularis", "1253": "06982_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_vernalis", "1254": "06983_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_viscosus", "1255": "06984_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Senecio_vulgaris", "1256": "06985_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Seriphium_plumosum", "1257": "06986_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Silphium_albiflorum", "1258": "06987_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Silphium_integrifolium", "1259": "06988_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Silphium_laciniatum", "1260": "06989_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Silphium_perfoliatum", "1261": "06990_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Silphium_terebinthinaceum", "1262": "06991_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Silybum_marianum", "1263": "06992_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Smallanthus_uvedalia", "1264": "06993_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_altissima", "1265": "06994_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_bicolor", "1266": "06995_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_caesia", "1267": "06996_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_canadensis", "1268": "06997_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_flexicaulis", "1269": "06998_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_gigantea", "1270": "06999_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_hispida", "1271": "07000_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_juncea", "1272": "07001_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_multiradiata", "1273": "07002_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_nemoralis", "1274": "07003_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_ptarmicoides", "1275": "07004_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_rigida", "1276": "07005_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_rugosa", "1277": "07006_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_sempervirens", "1278": "07007_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_speciosa", "1279": "07008_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_uliginosa", "1280": "07009_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_velutina", "1281": "07010_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Solidago_virgaurea", "1282": "07011_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Soliva_sessilis", "1283": "07012_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Sonchus_arvensis", "1284": "07013_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Sonchus_asper", "1285": "07014_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Sonchus_oleraceus", "1286": 
"07015_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Sphagneticola_trilobata", "1287": "07016_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Stephanomeria_pauciflora", "1288": "07017_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_chilense", "1289": "07018_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_ciliolatum", "1290": "07019_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_cordifolium", "1291": "07020_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_divaricatum", "1292": "07021_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_drummondii", "1293": "07022_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_ericoides", "1294": "07023_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_laeve", "1295": "07024_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_lanceolatum", "1296": "07025_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_lateriflorum", "1297": "07026_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_novae-angliae", "1298": "07027_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_novi-belgii", "1299": "07028_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_pilosum", "1300": "07029_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_puniceum", "1301": "07030_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_sericeum", "1302": "07031_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Symphyotrichum_subulatum", "1303": "07032_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Syncarpha_vestita", "1304": "07033_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tagetes_lucida", "1305": "07034_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tagetes_lunulata", "1306": "07035_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tanacetum_bipinnatum", "1307": "07036_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tanacetum_parthenium", "1308": "07037_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tanacetum_vulgare", "1309": "07038_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Taraxacum_erythrospermum", "1310": "07039_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Taraxacum_officinale", "1311": "07040_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tarchonanthus_littoralis", "1312": "07041_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tetraneuris_linearifolia", "1313": "07042_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tetraneuris_scaposa", "1314": "07043_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Thelesperma_filifolium", "1315": "07044_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Thelesperma_megapotamicum", "1316": "07045_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Thymophylla_pentachaeta", "1317": "07046_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tithonia_diversifolia", "1318": "07047_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tithonia_tubaeformis", "1319": "07048_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tragopogon_dubius", "1320": "07049_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tragopogon_porrifolius", "1321": "07050_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tragopogon_pratensis", "1322": "07051_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Trichoptilium_incisum", 
"1323": "07052_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tridax_procumbens", "1324": "07053_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tripleurospermum_inodorum", "1325": "07054_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tripolium_pannonicum", "1326": "07055_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Trixis_californica", "1327": "07056_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Tussilago_farfara", "1328": "07057_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Uropappus_lindleyi", "1329": "07058_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Urospermum_dalechampii", "1330": "07059_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Urospermum_picroides", "1331": "07060_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Venegasia_carpesioides", "1332": "07061_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Verbesina_alternifolia", "1333": "07062_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Verbesina_encelioides", "1334": "07063_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Verbesina_virginica", "1335": "07064_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Vernonia_baldwinii", "1336": "07065_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Vernonia_fasciculata", "1337": "07066_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Vernonia_gigantea", "1338": "07067_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Vernonia_missurica", "1339": "07068_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Vernonia_noveboracensis", "1340": "07069_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Viguiera_dentata", "1341": "07070_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Wedelia_hispida", "1342": "07071_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Wyethia_angustifolia", "1343": "07072_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Wyethia_glabra", "1344": "07073_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Wyethia_mollis", "1345": "07074_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Xanthisma_spinulosum", "1346": "07075_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Xanthium_orientale", "1347": "07076_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Xanthium_spinosum", "1348": "07077_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Xanthium_strumarium", "1349": "07078_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Xylorhiza_orcuttii", "1350": "07079_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Xylorhiza_tortifolia", "1351": "07080_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Youngia_japonica", "1352": "07081_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Zinnia_acerosa", "1353": "07082_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Zinnia_elegans", "1354": "07083_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Zinnia_grandiflora", "1355": "07084_Plantae_Tracheophyta_Magnoliopsida_Asterales_Asteraceae_Zinnia_peruviana", "1356": "07085_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_aparinoides", "1357": "07086_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_barbata", "1358": "07087_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_glomerata", "1359": "07088_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_lasiocarpa", "1360": "07089_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_latifolia", "1361": 
"07090_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_patula", "1362": "07091_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_persicifolia", "1363": "07092_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_prenanthoides", "1364": "07093_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_rapunculoides", "1365": "07094_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_rapunculus", "1366": "07095_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_rotundifolia", "1367": "07096_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_sibirica", "1368": "07097_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanula_trachelium", "1369": "07098_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Campanulastrum_americanum", "1370": "07099_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Hippobroma_longiflora", "1371": "07100_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Jasione_montana", "1372": "07101_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_anceps", "1373": "07102_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_angulata", "1374": "07103_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_cardinalis", "1375": "07104_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_dortmanna", "1376": "07105_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_inflata", "1377": "07106_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_kalmii", "1378": "07107_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_laxiflora", "1379": "07108_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_pinifolia", "1380": "07109_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_puberula", "1381": "07110_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_siphilitica", "1382": "07111_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Lobelia_spicata", "1383": "07112_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Phyteuma_orbiculare", "1384": "07113_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Phyteuma_spicatum", "1385": "07114_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Triodanis_biflora", "1386": "07115_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Triodanis_perfoliata", "1387": "07116_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Wahlenbergia_albomarginata", "1388": "07117_Plantae_Tracheophyta_Magnoliopsida_Asterales_Campanulaceae_Wahlenbergia_marginata", "1389": "07118_Plantae_Tracheophyta_Magnoliopsida_Asterales_Goodeniaceae_Goodenia_ovata", "1390": "07119_Plantae_Tracheophyta_Magnoliopsida_Asterales_Goodeniaceae_Scaevola_plumieri", "1391": "07120_Plantae_Tracheophyta_Magnoliopsida_Asterales_Goodeniaceae_Scaevola_taccada", "1392": "07121_Plantae_Tracheophyta_Magnoliopsida_Asterales_Goodeniaceae_Selliera_radicans", "1393": "07122_Plantae_Tracheophyta_Magnoliopsida_Asterales_Menyanthaceae_Menyanthes_trifoliata", "1394": "07123_Plantae_Tracheophyta_Magnoliopsida_Asterales_Menyanthaceae_Nephrophyllidium_crista-galli", "1395": "07124_Plantae_Tracheophyta_Magnoliopsida_Asterales_Menyanthaceae_Nymphoides_indica", "1396": "07125_Plantae_Tracheophyta_Magnoliopsida_Asterales_Rousseaceae_Carpodetus_serratus", "1397": "07126_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Adelinia_grande", "1398": 
"07127_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Aegonychon_purpurocaeruleum", "1399": "07128_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Amsinckia_menziesii", "1400": "07129_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Amsinckia_tessellata", "1401": "07130_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Anchusa_arvensis", "1402": "07131_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Anchusa_officinalis", "1403": "07132_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Andersonglossum_virginianum", "1404": "07133_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Borago_officinalis", "1405": "07134_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Buglossoides_arvensis", "1406": "07135_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Cordia_boissieri", "1407": "07136_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Cordia_sebestena", "1408": "07137_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Cynoglossum_officinale", "1409": "07138_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Echium_candicans", "1410": "07139_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Echium_plantagineum", "1411": "07140_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Echium_vulgare", "1412": "07141_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Ehretia_anacua", "1413": "07142_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Emmenanthe_penduliflora", "1414": "07143_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Eriodictyon_californicum", "1415": "07144_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Eriodictyon_crassifolium", "1416": "07145_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Eriodictyon_trichocalyx", "1417": "07146_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Eucrypta_chrysanthemifolia", "1418": "07147_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Euploca_procumbens", "1419": "07148_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Euploca_tenella", "1420": "07149_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Hackelia_virginiana", "1421": "07150_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Heliotropium_angiospermum", "1422": "07151_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Heliotropium_arboreum", "1423": "07152_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Heliotropium_curassavicum", "1424": "07153_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Heliotropium_europaeum", "1425": "07154_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Heliotropium_indicum", "1426": "07155_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Hydrophyllum_appendiculatum", "1427": "07156_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Hydrophyllum_canadense", "1428": "07157_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Hydrophyllum_capitatum", "1429": "07158_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Hydrophyllum_tenuipes", "1430": "07159_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Hydrophyllum_virginianum", "1431": "07160_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Johnstonella_angustifolia", "1432": "07161_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Lithospermum_canescens", "1433": 
"07162_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Lithospermum_caroliniense", "1434": "07163_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Lithospermum_incisum", "1435": "07164_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Lithospermum_officinale", "1436": "07165_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Lithospermum_ruderale", "1437": "07166_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Mertensia_ciliata", "1438": "07167_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Mertensia_maritima", "1439": "07168_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Mertensia_paniculata", "1440": "07169_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Mertensia_virginica", "1441": "07170_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Myosotis_arvensis", "1442": "07171_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Myosotis_discolor", "1443": "07172_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Myosotis_latifolia", "1444": "07173_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Myosotis_scorpioides", "1445": "07174_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Myosotis_sylvatica", "1446": "07175_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nama_demissa", "1447": "07176_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nama_hispida", "1448": "07177_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nemophila_heterophylla", "1449": "07178_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nemophila_maculata", "1450": "07179_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nemophila_menziesii", "1451": "07180_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nemophila_parviflora", "1452": "07181_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nemophila_phacelioides", "1453": "07182_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nonea_pulla", "1454": "07183_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Nonea_rossica", "1455": "07184_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Pentaglottis_sempervirens", "1456": "07185_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_bipinnatifida", "1457": "07186_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_californica", "1458": "07187_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_campanularia", "1459": "07188_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_cicutaria", "1460": "07189_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_congesta", "1461": "07190_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_crenulata", "1462": "07191_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_distans", "1463": "07192_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_fremontii", "1464": "07193_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_grandiflora", "1465": "07194_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_hastata", "1466": "07195_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_minor", "1467": "07196_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_parryi", "1468": "07197_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_purshii", "1469": 
"07198_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_ramosissima", "1470": "07199_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_sericea", "1471": "07200_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Phacelia_tanacetifolia", "1472": "07201_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Pholistoma_auritum", "1473": "07202_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Pholistoma_membranaceum", "1474": "07203_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Pulmonaria_mollis", "1475": "07204_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Pulmonaria_obscura", "1476": "07205_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Pulmonaria_officinalis", "1477": "07206_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Symphytum_officinale", "1478": "07207_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Tiquilia_palmeri", "1479": "07208_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Tiquilia_plicata", "1480": "07209_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Tournefortia_gnaphalodes", "1481": "07210_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Turricula_parryi", "1482": "07211_Plantae_Tracheophyta_Magnoliopsida_Boraginales_Boraginaceae_Wigandia_urens", "1483": "07212_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Bataceae_Batis_maritima", "1484": "07213_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Alliaria_petiolata", "1485": "07214_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Arabidopsis_lyrata", "1486": "07215_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Arabidopsis_thaliana", "1487": "07216_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Armoracia_rusticana", "1488": "07217_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Barbarea_orthoceras", "1489": "07218_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Barbarea_vulgaris", "1490": "07219_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Berteroa_incana", "1491": "07220_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Borodinia_laevigata", "1492": "07221_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Brassica_nigra", "1493": "07222_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Brassica_rapa", "1494": "07223_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Brassica_tournefortii", "1495": "07224_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Bunias_orientalis", "1496": "07225_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cakile_edentula", "1497": "07226_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cakile_maritima", "1498": "07227_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Capsella_bursa-pastoris", "1499": "07228_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_amara", "1500": "07229_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_bulbifera", "1501": "07230_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_bulbosa", "1502": "07231_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_californica", "1503": "07232_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_concatenata", "1504": "07233_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_diphylla", "1505": "07234_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_douglassii", "1506": 
"07235_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_flexuosa", "1507": "07236_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_hirsuta", "1508": "07237_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_impatiens", "1509": "07238_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_nuttallii", "1510": "07239_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Cardamine_pratensis", "1511": "07240_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Caulanthus_lasiophyllus", "1512": "07241_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Chorispora_tenella", "1513": "07242_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Crambe_maritima", "1514": "07243_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Descurainia_pinnata", "1515": "07244_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Descurainia_sophia", "1516": "07245_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Diplotaxis_tenuifolia", "1517": "07246_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Dithyrea_californica", "1518": "07247_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Draba_cuneifolia", "1519": "07248_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Draba_verna", "1520": "07249_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Eruca_vesicaria", "1521": "07250_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Erysimum_capitatum", "1522": "07251_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Erysimum_cheiranthoides", "1523": "07252_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Erysimum_cheiri", "1524": "07253_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Hesperis_matronalis", "1525": "07254_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Hirschfeldia_incana", "1526": "07255_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lepidium_campestre", "1527": "07256_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lepidium_densiflorum", "1528": "07257_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lepidium_didymum", "1529": "07258_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lepidium_draba", "1530": "07259_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lepidium_lasiocarpum", "1531": "07260_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lepidium_latifolium", "1532": "07261_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lepidium_nitidum", "1533": "07262_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lepidium_virginicum", "1534": "07263_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lobularia_maritima", "1535": "07264_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lunaria_annua", "1536": "07265_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Lunaria_rediviva", "1537": "07266_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Nasturtium_officinale", "1538": "07267_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Physaria_gordonii", "1539": "07268_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Raphanus_raphanistrum", "1540": "07269_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Raphanus_sativus", "1541": "07270_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Rapistrum_rugosum", "1542": "07271_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Rorippa_palustris", "1543": 
"07272_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Sinapis_arvensis", "1544": "07273_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Sisymbrium_altissimum", "1545": "07274_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Sisymbrium_irio", "1546": "07275_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Sisymbrium_loeselii", "1547": "07276_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Sisymbrium_officinale", "1548": "07277_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Stanleya_pinnata", "1549": "07278_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Streptanthus_glandulosus", "1550": "07279_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Streptanthus_tortuosus", "1551": "07280_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Thlaspi_arvense", "1552": "07281_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Thysanocarpus_curvipes", "1553": "07282_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Brassicaceae_Turritis_glabra", "1554": "07283_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Capparaceae_Capparis_spinosa", "1555": "07284_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Capparaceae_Capparis_zoharyi", "1556": "07285_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Caricaceae_Carica_papaya", "1557": "07286_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Cleomaceae_Cleome_rutidosperma", "1558": "07287_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Cleomaceae_Cleome_viscosa", "1559": "07288_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Cleomaceae_Peritoma_arborea", "1560": "07289_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Cleomaceae_Peritoma_serrulata", "1561": "07290_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Cleomaceae_Polanisia_dodecandra", "1562": "07291_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Koeberliniaceae_Koeberlinia_spinosa", "1563": "07292_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Limnanthaceae_Floerkea_proserpinacoides", "1564": "07293_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Limnanthaceae_Limnanthes_douglasii", "1565": "07294_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Resedaceae_Reseda_lutea", "1566": "07295_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Resedaceae_Reseda_luteola", "1567": "07296_Plantae_Tracheophyta_Magnoliopsida_Brassicales_Tropaeolaceae_Tropaeolum_majus", "1568": "07297_Plantae_Tracheophyta_Magnoliopsida_Bruniales_Bruniaceae_Brunia_noduliflora", "1569": "07298_Plantae_Tracheophyta_Magnoliopsida_Buxales_Buxaceae_Pachysandra_terminalis", "1570": "07299_Plantae_Tracheophyta_Magnoliopsida_Canellales_Winteraceae_Pseudowintera_colorata", "1571": "07300_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Carpobrotus_chilensis", "1572": "07301_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Carpobrotus_edulis", "1573": "07302_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Conicosia_pugioniformis", "1574": "07303_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Disphyma_australe", "1575": "07304_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Mesembryanthemum_cordifolium", "1576": "07305_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Mesembryanthemum_crystallinum", "1577": "07306_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Mesembryanthemum_nodiflorum", "1578": "07307_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Sesuvium_portulacastrum", "1579": 
"07308_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Tetragonia_implexicoma", "1580": "07309_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Aizoaceae_Tetragonia_tetragonoides", "1581": "07310_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Achyranthes_aspera", "1582": "07311_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Allenrolfea_occidentalis", "1583": "07312_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Alternanthera_philoxeroides", "1584": "07313_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Amaranthus_palmeri", "1585": "07314_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Amaranthus_retroflexus", "1586": "07315_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Atriplex_canescens", "1587": "07316_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Atriplex_hymenelytra", "1588": "07317_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Atriplex_lentiformis", "1589": "07318_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Atriplex_polycarpa", "1590": "07319_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Atriplex_prostrata", "1591": "07320_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Atriplex_sagittata", "1592": "07321_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Atriplex_semibaccata", "1593": "07322_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Bassia_scoparia", "1594": "07323_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Beta_vulgaris", "1595": "07324_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Celosia_argentea", "1596": "07325_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Chenopodiastrum_murale", "1597": "07326_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Chenopodium_album", "1598": "07327_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Dysphania_ambrosioides", "1599": "07328_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Dysphania_pumilio", "1600": "07329_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Froelichia_floridana", "1601": "07330_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Gomphrena_celosioides", "1602": "07331_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Gomphrena_serrata", "1603": "07332_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Grayia_spinosa", "1604": "07333_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Krascheninnikovia_lanata", "1605": "07334_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Oxybasis_glauca", "1606": "07335_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Salicornia_pacifica", "1607": "07336_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Salicornia_quinqueflora", "1608": "07337_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Salsola_kali", "1609": "07338_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Amaranthaceae_Salsola_tragus", "1610": "07339_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Basellaceae_Anredera_cordifolia", "1611": "07340_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Acanthocereus_tetragonus", "1612": "07341_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Bergerocactus_emoryi", "1613": "07342_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Carnegiea_gigantea", "1614": 
"07343_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Coryphantha_macromeris", "1615": "07344_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Coryphantha_sulcata", "1616": "07345_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_acanthocarpa", "1617": "07346_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_bigelovii", "1618": "07347_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_californica", "1619": "07348_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_echinocarpa", "1620": "07349_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_fulgida", "1621": "07350_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_ganderi", "1622": "07351_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_imbricata", "1623": "07352_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_leptocaulis", "1624": "07353_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_prolifera", "1625": "07354_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_ramosissima", "1626": "07355_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_thurberi", "1627": "07356_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Cylindropuntia_whipplei", "1628": "07357_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocactus_horizonthalonius", "1629": "07358_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocactus_platyacanthus", "1630": "07359_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_bakeri", "1631": "07360_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_chloranthus", "1632": "07361_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_coccineus", "1633": "07362_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_dasyacanthus", "1634": "07363_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_engelmannii", "1635": "07364_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_enneacanthus", "1636": "07365_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_fendleri", "1637": "07366_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_pectinatus", "1638": "07367_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_pentalophus", "1639": "07368_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_reichenbachii", "1640": "07369_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_rigidissimus", "1641": "07370_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_santaritensis", "1642": "07371_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_stramineus", "1643": "07372_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Echinocereus_triglochidiatus", "1644": "07373_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Escobaria_missouriensis", "1645": "07374_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Escobaria_tuberculosa", "1646": "07375_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Escobaria_vivipara", "1647": "07376_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Ferocactus_cylindraceus", "1648": "07377_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Ferocactus_emoryi", 
"1649": "07378_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Ferocactus_histrix", "1650": "07379_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Ferocactus_latispinus", "1651": "07380_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Ferocactus_viridescens", "1652": "07381_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Ferocactus_wislizeni", "1653": "07382_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Hamatocactus_hamatacanthus", "1654": "07383_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Homalocephala_polycephala", "1655": "07384_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Homalocephala_texensis", "1656": "07385_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Lophocereus_schottii", "1657": "07386_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Lophophora_williamsii", "1658": "07387_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Mammillaria_dioica", "1659": "07388_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Mammillaria_grahamii", "1660": "07389_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Mammillaria_heyderi", "1661": "07390_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Mammillaria_magnimamma", "1662": "07391_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Mammillaria_tetrancistra", "1663": "07392_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Myrtillocactus_geometrizans", "1664": "07393_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Neolloydia_conoidea", "1665": "07394_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_basilaris", "1666": "07395_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_cespitosa", "1667": "07396_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_chlorotica", "1668": "07397_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_cochenillifera", "1669": "07398_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_decumbens", "1670": "07399_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_engelmannii", "1671": "07400_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_ficus-indica", "1672": "07401_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_fragilis", "1673": "07402_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_humifusa", "1674": "07403_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_lindheimeri", "1675": "07404_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_littoralis", "1676": "07405_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_macrocentra", "1677": "07406_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_macrorhiza", "1678": "07407_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_microdasys", "1679": "07408_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_phaeacantha", "1680": "07409_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_polyacantha", "1681": "07410_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_robusta", "1682": "07411_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_stricta", "1683": "07412_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Opuntia_tomentosa", "1684": "07413_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Pachycereus_pecten-aboriginum", "1685": 
"07414_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Pachycereus_pringlei", "1686": "07415_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Pediocactus_simpsonii", "1687": "07416_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Peniocereus_greggii", "1688": "07417_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Selenicereus_undatus", "1689": "07418_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Stenocereus_gummosus", "1690": "07419_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Cactaceae_Stenocereus_thurberi", "1691": "07420_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Achyronychia_cooperi", "1692": "07421_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Arenaria_serpyllifolia", "1693": "07422_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Cardionema_ramosissimum", "1694": "07423_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Cerastium_arvense", "1695": "07424_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Cerastium_fontanum", "1696": "07425_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Cerastium_glomeratum", "1697": "07426_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Dianthus_armeria", "1698": "07427_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Dianthus_barbatus", "1699": "07428_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Dianthus_carthusianorum", "1700": "07429_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Dianthus_chinensis", "1701": "07430_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Dianthus_deltoides", "1702": "07431_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Dianthus_superbus", "1703": "07432_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Drymaria_cordata", "1704": "07433_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Herniaria_glabra", "1705": "07434_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Honckenya_peploides", "1706": "07435_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Moehringia_lateriflora", "1707": "07436_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Moehringia_macrophylla", "1708": "07437_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Moehringia_trinervia", "1709": "07438_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Mononeuria_groenlandica", "1710": "07439_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Petrorhagia_dubia", "1711": "07440_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Polycarpon_tetraphyllum", "1712": "07441_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Rabelera_holostea", "1713": "07442_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Sabulina_michauxii", "1714": "07443_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Sagina_procumbens", "1715": "07444_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Saponaria_ocymoides", "1716": "07445_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Saponaria_officinalis", "1717": "07446_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Scleranthus_annuus", "1718": "07447_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_acaulis", "1719": 
"07448_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_antirrhina", "1720": "07449_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_coronaria", "1721": "07450_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_dioica", "1722": "07451_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_flos-cuculi", "1723": "07452_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_gallica", "1724": "07453_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_laciniata", "1725": "07454_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_latifolia", "1726": "07455_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_nutans", "1727": "07456_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_regia", "1728": "07457_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_stellata", "1729": "07458_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_uniflora", "1730": "07459_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_virginica", "1731": "07460_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Silene_vulgaris", "1732": "07461_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Spergula_arvensis", "1733": "07462_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Spergularia_rubra", "1734": "07463_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Stellaria_aquatica", "1735": "07464_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Stellaria_graminea", "1736": "07465_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Stellaria_media", "1737": "07466_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Stellaria_nemorum", "1738": "07467_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Stellaria_pubera", "1739": "07468_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Caryophyllaceae_Viscaria_vulgaris", "1740": "07469_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Didiereaceae_Portulacaria_afra", "1741": "07470_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_aberrans", "1742": "07471_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_anglica", "1743": "07472_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_arcturi", "1744": "07473_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_auriculata", "1745": "07474_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_brevifolia", "1746": "07475_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_capillaris", "1747": "07476_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_cistiflora", "1748": "07477_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_intermedia", "1749": "07478_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_planchonii", "1750": "07479_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_rotundifolia", "1751": "07480_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_spatulata", "1752": "07481_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Droseraceae_Drosera_whittakeri", "1753": "07482_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Frankeniaceae_Frankenia_salina", "1754": "07483_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Molluginaceae_Mollugo_verticillata", 
"1755": "07484_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Calandrinia_menziesii", "1756": "07485_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Calyptridium_monandrum", "1757": "07486_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Calyptridium_monospermum", "1758": "07487_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Calyptridium_umbellatum", "1759": "07488_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Claytonia_caroliniana", "1760": "07489_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Claytonia_lanceolata", "1761": "07490_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Claytonia_parviflora", "1762": "07491_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Claytonia_perfoliata", "1763": "07492_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Claytonia_rubra", "1764": "07493_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Claytonia_sibirica", "1765": "07494_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Claytonia_virginica", "1766": "07495_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Lewisia_rediviva", "1767": "07496_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Montiaceae_Montia_parvifolia", "1768": "07497_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Abronia_latifolia", "1769": "07498_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Abronia_maritima", "1770": "07499_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Abronia_umbellata", "1771": "07500_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Abronia_villosa", "1772": "07501_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Allionia_incarnata", "1773": "07502_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Boerhavia_coccinea", "1774": "07503_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Mirabilis_jalapa", "1775": "07504_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Mirabilis_laevis", "1776": "07505_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Mirabilis_multiflora", "1777": "07506_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Nyctaginaceae_Mirabilis_nyctaginea", "1778": "07507_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Petiveriaceae_Rivina_humilis", "1779": "07508_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Phytolaccaceae_Phytolacca_americana", "1780": "07509_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Phytolaccaceae_Phytolacca_icosandra", "1781": "07510_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Phytolaccaceae_Phytolacca_octandra", "1782": "07511_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Plumbaginaceae_Armeria_maritima", "1783": "07512_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Plumbaginaceae_Limonium_carolinianum", "1784": "07513_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Plumbaginaceae_Limonium_perezii", "1785": "07514_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Plumbaginaceae_Limonium_sinuatum", "1786": "07515_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Plumbaginaceae_Plumbago_auriculata", "1787": "07516_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Plumbaginaceae_Plumbago_zeylanica", "1788": "07517_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Antigonon_leptopus", "1789": "07518_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Bistorta_bistortoides", "1790": 
"07519_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Bistorta_officinalis", "1791": "07520_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Bistorta_vivipara", "1792": "07521_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Chorizanthe_brevicornu", "1793": "07522_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Chorizanthe_fimbriata", "1794": "07523_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Chorizanthe_rigida", "1795": "07524_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Coccoloba_uvifera", "1796": "07525_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_cinereum", "1797": "07526_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_deflexum", "1798": "07527_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_fasciculatum", "1799": "07528_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_inflatum", "1800": "07529_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_latifolium", "1801": "07530_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_longifolium", "1802": "07531_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_nudum", "1803": "07532_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_ovalifolium", "1804": "07533_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_parvifolium", "1805": "07534_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_umbellatum", "1806": "07535_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Eriogonum_wrightii", "1807": "07536_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Fallopia_cilinodis", "1808": "07537_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Fallopia_convolvulus", "1809": "07538_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Fallopia_dumetorum", "1810": "07539_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Fallopia_scandens", "1811": "07540_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Muehlenbeckia_australis", "1812": "07541_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Muehlenbeckia_axillaris", "1813": "07542_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Muehlenbeckia_complexa", "1814": "07543_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Oxyria_digyna", "1815": "07544_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_amphibia", "1816": "07545_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_arifolia", "1817": "07546_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_capitata", "1818": "07547_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_chinensis", "1819": "07548_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_hydropiper", "1820": "07549_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_hydropiperoides", "1821": "07550_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_lapathifolia", "1822": "07551_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_longiseta", "1823": "07552_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_maculosa", "1824": "07553_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_pensylvanica", "1825": 
"07554_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_perfoliata", "1826": "07555_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_punctata", "1827": "07556_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_sagittata", "1828": "07557_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Persicaria_virginiana", "1829": "07558_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Polygonum_aviculare", "1830": "07559_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Polygonum_paronychia", "1831": "07560_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Pterostegia_drymarioides", "1832": "07561_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Reynoutria_japonica", "1833": "07562_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Reynoutria_sachalinensis", "1834": "07563_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Rumex_acetosa", "1835": "07564_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Rumex_acetosella", "1836": "07565_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Rumex_confertus", "1837": "07566_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Rumex_crispus", "1838": "07567_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Rumex_hymenosepalus", "1839": "07568_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Rumex_maritimus", "1840": "07569_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Polygonaceae_Rumex_obtusifolius", "1841": "07570_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Portulacaceae_Portulaca_oleracea", "1842": "07571_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Portulacaceae_Portulaca_pilosa", "1843": "07572_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Sarcobataceae_Sarcobatus_vermiculatus", "1844": "07573_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Simmondsiaceae_Simmondsia_chinensis", "1845": "07574_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Talinaceae_Talinum_paniculatum", "1846": "07575_Plantae_Tracheophyta_Magnoliopsida_Caryophyllales_Tamaricaceae_Tamarix_ramosissima", "1847": "07576_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Canotia_holacantha", "1848": "07577_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Celastrus_orbiculatus", "1849": "07578_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Celastrus_scandens", "1850": "07579_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Euonymus_alatus", "1851": "07580_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Euonymus_americanus", "1852": "07581_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Euonymus_atropurpureus", "1853": "07582_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Euonymus_europaeus", "1854": "07583_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Euonymus_fortunei", "1855": "07584_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Euonymus_japonicus", "1856": "07585_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Euonymus_obovatus", "1857": "07586_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Euonymus_verrucosus", "1858": "07587_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Parnassia_fimbriata", "1859": "07588_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Parnassia_glauca", "1860": "07589_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Parnassia_palustris", "1861": 
"07590_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Paxistima_myrsinites", "1862": "07591_Plantae_Tracheophyta_Magnoliopsida_Celastrales_Celastraceae_Pterocelastrus_tricuspidatus", "1863": "07592_Plantae_Tracheophyta_Magnoliopsida_Ceratophyllales_Ceratophyllaceae_Ceratophyllum_demersum", "1864": "07593_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_alba", "1865": "07594_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_alternifolia", "1866": "07595_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_amomum", "1867": "07596_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_canadensis", "1868": "07597_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_drummondii", "1869": "07598_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_florida", "1870": "07599_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_mas", "1871": "07600_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_nuttallii", "1872": "07601_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_racemosa", "1873": "07602_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_rugosa", "1874": "07603_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_sanguinea", "1875": "07604_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_sericea", "1876": "07605_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_suecica", "1877": "07606_Plantae_Tracheophyta_Magnoliopsida_Cornales_Cornaceae_Cornus_unalaschkensis", "1878": "07607_Plantae_Tracheophyta_Magnoliopsida_Cornales_Hydrangeaceae_Hydrangea_arborescens", "1879": "07608_Plantae_Tracheophyta_Magnoliopsida_Cornales_Hydrangeaceae_Hydrangea_macrophylla", "1880": "07609_Plantae_Tracheophyta_Magnoliopsida_Cornales_Hydrangeaceae_Hydrangea_quercifolia", "1881": "07610_Plantae_Tracheophyta_Magnoliopsida_Cornales_Hydrangeaceae_Philadelphus_lewisii", "1882": "07611_Plantae_Tracheophyta_Magnoliopsida_Cornales_Loasaceae_Cevallia_sinuata", "1883": "07612_Plantae_Tracheophyta_Magnoliopsida_Cornales_Loasaceae_Eucnide_urens", "1884": "07613_Plantae_Tracheophyta_Magnoliopsida_Cornales_Loasaceae_Mentzelia_hispida", "1885": "07614_Plantae_Tracheophyta_Magnoliopsida_Cornales_Loasaceae_Mentzelia_involucrata", "1886": "07615_Plantae_Tracheophyta_Magnoliopsida_Cornales_Loasaceae_Mentzelia_laevicaulis", "1887": "07616_Plantae_Tracheophyta_Magnoliopsida_Cornales_Loasaceae_Petalonyx_thurberi", "1888": "07617_Plantae_Tracheophyta_Magnoliopsida_Cornales_Nyssaceae_Nyssa_sylvatica", "1889": "07618_Plantae_Tracheophyta_Magnoliopsida_Crossosomatales_Staphyleaceae_Staphylea_trifolia", "1890": "07619_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Coriariaceae_Coriaria_arborea", "1891": "07620_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Corynocarpaceae_Corynocarpus_laevigatus", "1892": "07621_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Brandegea_bigelovii", "1893": "07622_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Bryonia_cretica", "1894": "07623_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Cucurbita_foetidissima", "1895": "07624_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Cucurbita_palmata", "1896": "07625_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Ecballium_elaterium", "1897": "07626_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Echinocystis_lobata", "1898": "07627_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Ibervillea_lindheimeri", "1899": 
"07628_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Marah_fabacea", "1900": "07629_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Marah_macrocarpa", "1901": "07630_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Marah_oregana", "1902": "07631_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Melothria_pendula", "1903": "07632_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Momordica_charantia", "1904": "07633_Plantae_Tracheophyta_Magnoliopsida_Cucurbitales_Cucurbitaceae_Sicyos_angulatus", "1905": "07634_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Centranthus_ruber", "1906": "07635_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Diervilla_lonicera", "1907": "07636_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Dipsacus_fullonum", "1908": "07637_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Dipsacus_laciniatus", "1909": "07638_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Dipsacus_sativus", "1910": "07639_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Knautia_arvensis", "1911": "07640_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Leycesteria_formosa", "1912": "07641_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Linnaea_borealis", "1913": "07642_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_albiflora", "1914": "07643_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_canadensis", "1915": "07644_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_ciliosa", "1916": "07645_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_dioica", "1917": "07646_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_hispidula", "1918": "07647_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_involucrata", "1919": "07648_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_japonica", "1920": "07649_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_maackii", "1921": "07650_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_morrowii", "1922": "07651_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_periclymenum", "1923": "07652_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_sempervirens", "1924": "07653_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_subspicata", "1925": "07654_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_tatarica", "1926": "07655_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Lonicera_xylosteum", "1927": "07656_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Plectritis_congesta", "1928": "07657_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Scabiosa_columbaria", "1929": "07658_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Scabiosa_ochroleuca", "1930": "07659_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Sixalix_atropurpurea", "1931": "07660_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Succisa_pratensis", "1932": "07661_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Symphoricarpos_albus", "1933": "07662_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Symphoricarpos_mollis", "1934": "07663_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Symphoricarpos_occidentalis", "1935": 
"07664_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Symphoricarpos_orbiculatus", "1936": "07665_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Symphoricarpos_rotundifolius", "1937": "07666_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Triosteum_aurantiacum", "1938": "07667_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Valeriana_officinalis", "1939": "07668_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Valeriana_sitchensis", "1940": "07669_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Valerianella_locusta", "1941": "07670_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Caprifoliaceae_Valerianella_radiata", "1942": "07671_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Adoxa_moschatellina", "1943": "07672_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Sambucus_canadensis", "1944": "07673_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Sambucus_cerulea", "1945": "07674_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Sambucus_ebulus", "1946": "07675_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Sambucus_javanica", "1947": "07676_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Sambucus_nigra", "1948": "07677_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Sambucus_racemosa", "1949": "07678_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_acerifolium", "1950": "07679_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_dentatum", "1951": "07680_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_edule", "1952": "07681_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_lantana", "1953": "07682_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_lantanoides", "1954": "07683_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_lentago", "1955": "07684_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_nudum", "1956": "07685_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_opulus", "1957": "07686_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_prunifolium", "1958": "07687_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_rafinesquianum", "1959": "07688_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_rhytidophyllum", "1960": "07689_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_rufidulum", "1961": "07690_Plantae_Tracheophyta_Magnoliopsida_Dipsacales_Viburnaceae_Viburnum_tinus", "1962": "07691_Plantae_Tracheophyta_Magnoliopsida_Ericales_Balsaminaceae_Impatiens_balfourii", "1963": "07692_Plantae_Tracheophyta_Magnoliopsida_Ericales_Balsaminaceae_Impatiens_capensis", "1964": "07693_Plantae_Tracheophyta_Magnoliopsida_Ericales_Balsaminaceae_Impatiens_glandulifera", "1965": "07694_Plantae_Tracheophyta_Magnoliopsida_Ericales_Balsaminaceae_Impatiens_noli-tangere", "1966": "07695_Plantae_Tracheophyta_Magnoliopsida_Ericales_Balsaminaceae_Impatiens_pallida", "1967": "07696_Plantae_Tracheophyta_Magnoliopsida_Ericales_Balsaminaceae_Impatiens_parviflora", "1968": "07697_Plantae_Tracheophyta_Magnoliopsida_Ericales_Balsaminaceae_Impatiens_walleriana", "1969": "07698_Plantae_Tracheophyta_Magnoliopsida_Ericales_Clethraceae_Clethra_alnifolia", "1970": "07699_Plantae_Tracheophyta_Magnoliopsida_Ericales_Cyrillaceae_Cyrilla_racemiflora", "1971": "07700_Plantae_Tracheophyta_Magnoliopsida_Ericales_Diapensiaceae_Diapensia_lapponica", "1972": 
"07701_Plantae_Tracheophyta_Magnoliopsida_Ericales_Diapensiaceae_Galax_urceolata", "1973": "07702_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ebenaceae_Diospyros_texana", "1974": "07703_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ebenaceae_Diospyros_virginiana", "1975": "07704_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Allotropa_virgata", "1976": "07705_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Andromeda_polifolia", "1977": "07706_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arbutus_menziesii", "1978": "07707_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arbutus_unedo", "1979": "07708_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arbutus_xalapensis", "1980": "07709_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arctostaphylos_glauca", "1981": "07710_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arctostaphylos_nevadensis", "1982": "07711_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arctostaphylos_patula", "1983": "07712_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arctostaphylos_pungens", "1984": "07713_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arctostaphylos_uva-ursi", "1985": "07714_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Arctous_alpina", "1986": "07715_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Calluna_vulgaris", "1987": "07716_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Cassiope_mertensiana", "1988": "07717_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Cassiope_tetragona", "1989": "07718_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Ceratiola_ericoides", "1990": "07719_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Chamaedaphne_calyculata", "1991": "07720_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Chimaphila_maculata", "1992": "07721_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Chimaphila_menziesii", "1993": "07722_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Chimaphila_umbellata", "1994": "07723_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Dracophyllum_longifolium", "1995": "07724_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Elliottia_pyroliflora", "1996": "07725_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Empetrum_nigrum", "1997": "07726_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Epacris_impressa", "1998": "07727_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Epigaea_repens", "1999": "07728_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_abietina", "2000": "07729_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_arborea", "2001": "07730_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_cerinthoides", "2002": "07731_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_cinerea", "2003": "07732_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_coccinea", "2004": "07733_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_corifolia", "2005": "07734_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_discolor", "2006": "07735_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_imbricata", "2007": "07736_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_lusitanica", "2008": "07737_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_plukenetii", "2009": "07738_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_tetralix", "2010": "07739_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Erica_viscaria", "2011": 
"07740_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Gaultheria_antipoda", "2012": "07741_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Gaultheria_depressa", "2013": "07742_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Gaultheria_hispidula", "2014": "07743_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Gaultheria_procumbens", "2015": "07744_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Gaultheria_shallon", "2016": "07745_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Gaylussacia_baccata", "2017": "07746_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Kalmia_angustifolia", "2018": "07747_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Kalmia_latifolia", "2019": "07748_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Kalmia_microphylla", "2020": "07749_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Kalmia_polifolia", "2021": "07750_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Kalmia_procumbens", "2022": "07751_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Leptecophylla_juniperina", "2023": "07752_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Leptecophylla_tameiameiae", "2024": "07753_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Leucopogon_fasciculatus", "2025": "07754_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Leucopogon_fraseri", "2026": "07755_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Leucothoe_fontanesiana", "2027": "07756_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Lyonia_lucida", "2028": "07757_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Moneses_uniflora", "2029": "07758_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Monotropa_hypopitys", "2030": "07759_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Monotropa_uniflora", "2031": "07760_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Orthilia_secunda", "2032": "07761_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Oxydendrum_arboreum", "2033": "07762_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pentachondra_pumila", "2034": "07763_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Phyllodoce_breweri", "2035": "07764_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Phyllodoce_empetriformis", "2036": "07765_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pterospora_andromedea", "2037": "07766_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pyrola_americana", "2038": "07767_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pyrola_asarifolia", "2039": "07768_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pyrola_chlorantha", "2040": "07769_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pyrola_elliptica", "2041": "07770_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pyrola_minor", "2042": "07771_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pyrola_picta", "2043": "07772_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Pyrola_rotundifolia", "2044": "07773_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_albiflorum", "2045": "07774_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_calendulaceum", "2046": "07775_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_canadense", "2047": "07776_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_canescens", "2048": "07777_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_columbianum", "2049": "07778_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_ferrugineum", "2050": 
"07779_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_groenlandicum", "2051": "07780_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_lapponicum", "2052": "07781_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_macrophyllum", "2053": "07782_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_maximum", "2054": "07783_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_menziesii", "2055": "07784_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_occidentale", "2056": "07785_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_periclymenoides", "2057": "07786_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Rhododendron_tomentosum", "2058": "07787_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Sarcodes_sanguinea", "2059": "07788_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_angustifolium", "2060": "07789_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_arboreum", "2061": "07790_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_corymbosum", "2062": "07791_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_macrocarpon", "2063": "07792_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_membranaceum", "2064": "07793_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_myrtilloides", "2065": "07794_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_myrtillus", "2066": "07795_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_ovalifolium", "2067": "07796_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_ovatum", "2068": "07797_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_oxycoccos", "2069": "07798_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_parvifolium", "2070": "07799_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_stamineum", "2071": "07800_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_uliginosum", "2072": "07801_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Vaccinium_vitis-idaea", "2073": "07802_Plantae_Tracheophyta_Magnoliopsida_Ericales_Ericaceae_Xylococcus_bicolor", "2074": "07803_Plantae_Tracheophyta_Magnoliopsida_Ericales_Fouquieriaceae_Fouquieria_splendens", "2075": "07804_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Aliciella_latifolia", "2076": "07805_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Collomia_grandiflora", "2077": "07806_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Collomia_heterophylla", "2078": "07807_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Collomia_linearis", "2079": "07808_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Eriastrum_densifolium", "2080": "07809_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Eriastrum_eremicum", "2081": "07810_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Gilia_capitata", "2082": "07811_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Gilia_stellata", "2083": "07812_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Gilia_tricolor", "2084": "07813_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Ipomopsis_aggregata", "2085": "07814_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Ipomopsis_longiflora", "2086": "07815_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Ipomopsis_rubra", "2087": "07816_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Langloisia_setosissima", 
"2088": "07817_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Leptosiphon_bicolor", "2089": "07818_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Leptosiphon_ciliatus", "2090": "07819_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Leptosiphon_nuttallii", "2091": "07820_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Leptosiphon_parviflorus", "2092": "07821_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Linanthus_californicus", "2093": "07822_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Linanthus_dianthiflorus", "2094": "07823_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Linanthus_pungens", "2095": "07824_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Loeselia_mexicana", "2096": "07825_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Loeseliastrum_matthewsii", "2097": "07826_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Microsteris_gracilis", "2098": "07827_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Navarretia_squarrosa", "2099": "07828_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_diffusa", "2100": "07829_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_divaricata", "2101": "07830_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_drummondii", "2102": "07831_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_hoodii", "2103": "07832_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_longifolia", "2104": "07833_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_paniculata", "2105": "07834_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_pilosa", "2106": "07835_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_roemeriana", "2107": "07836_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Phlox_subulata", "2108": "07837_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Polemonium_acutiflorum", "2109": "07838_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Polemonium_caeruleum", "2110": "07839_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Polemonium_californicum", "2111": "07840_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Polemonium_pulcherrimum", "2112": "07841_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Polemonium_reptans", "2113": "07842_Plantae_Tracheophyta_Magnoliopsida_Ericales_Polemoniaceae_Polemonium_viscosum", "2114": "07843_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Androsace_septentrionalis", "2115": "07844_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Ardisia_crenata", "2116": "07845_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Bonellia_macrocarpa", "2117": "07846_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Cyclamen_hederifolium", "2118": "07847_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Cyclamen_purpurascens", "2119": "07848_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_arvensis", "2120": "07849_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_borealis", "2121": "07850_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_ciliata", "2122": "07851_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_clethroides", "2123": "07852_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_europaea", "2124": "07853_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_latifolia", "2125": 
"07854_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_maritima", "2126": "07855_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_nemorum", "2127": "07856_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_nummularia", "2128": "07857_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_punctata", "2129": "07858_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_quadriflora", "2130": "07859_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_quadrifolia", "2131": "07860_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_terrestris", "2132": "07861_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_thyrsiflora", "2133": "07862_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Lysimachia_vulgaris", "2134": "07863_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Myrsine_australis", "2135": "07864_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Myrsine_divaricata", "2136": "07865_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Myrsine_salicina", "2137": "07866_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_clevelandii", "2138": "07867_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_elatior", "2139": "07868_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_farinosa", "2140": "07869_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_hendersonii", "2141": "07870_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_meadia", "2142": "07871_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_parryi", "2143": "07872_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_pauciflora", "2144": "07873_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_veris", "2145": "07874_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Primula_vulgaris", "2146": "07875_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Samolus_parviflorus", "2147": "07876_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Samolus_repens", "2148": "07877_Plantae_Tracheophyta_Magnoliopsida_Ericales_Primulaceae_Soldanella_alpina", "2149": "07878_Plantae_Tracheophyta_Magnoliopsida_Ericales_Sapotaceae_Sideroxylon_inerme", "2150": "07879_Plantae_Tracheophyta_Magnoliopsida_Ericales_Sapotaceae_Sideroxylon_lanuginosum", "2151": "07880_Plantae_Tracheophyta_Magnoliopsida_Ericales_Sarraceniaceae_Darlingtonia_californica", "2152": "07881_Plantae_Tracheophyta_Magnoliopsida_Ericales_Sarraceniaceae_Sarracenia_alata", "2153": "07882_Plantae_Tracheophyta_Magnoliopsida_Ericales_Sarraceniaceae_Sarracenia_flava", "2154": "07883_Plantae_Tracheophyta_Magnoliopsida_Ericales_Sarraceniaceae_Sarracenia_leucophylla", "2155": "07884_Plantae_Tracheophyta_Magnoliopsida_Ericales_Sarraceniaceae_Sarracenia_minor", "2156": "07885_Plantae_Tracheophyta_Magnoliopsida_Ericales_Sarraceniaceae_Sarracenia_purpurea", "2157": "07886_Plantae_Tracheophyta_Magnoliopsida_Ericales_Symplocaceae_Symplocos_tinctoria", "2158": "07887_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Abrus_precatorius", "2159": "07888_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_confusa", "2160": "07889_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_cyclops", "2161": "07890_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_dealbata", "2162": "07891_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_longifolia", "2163": 
"07892_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_mearnsii", "2164": "07893_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_melanoxylon", "2165": "07894_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_myrtifolia", "2166": "07895_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_paradoxa", "2167": "07896_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_pycnantha", "2168": "07897_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acacia_saligna", "2169": "07898_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acmispon_americanus", "2170": "07899_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acmispon_argophyllus", "2171": "07900_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acmispon_glaber", "2172": "07901_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acmispon_rigidus", "2173": "07902_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Acmispon_strigosus", "2174": "07903_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Albizia_julibrissin", "2175": "07904_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Amorpha_canescens", "2176": "07905_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Amorpha_fruticosa", "2177": "07906_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Amphicarpaea_bracteata", "2178": "07907_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Anthyllis_vulneraria", "2179": "07908_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Apios_americana", "2180": "07909_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_alpinus", "2181": "07910_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_canadensis", "2182": "07911_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_cicer", "2183": "07912_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_crassicarpus", "2184": "07913_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_danicus", "2185": "07914_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_glycyphyllos", "2186": "07915_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_lentiginosus", "2187": "07916_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_mollissimus", "2188": "07917_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_nuttallianus", "2189": "07918_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_purshii", "2190": "07919_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Astragalus_trichopodus", "2191": "07920_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Baptisia_alba", "2192": "07921_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Baptisia_australis", "2193": "07922_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Baptisia_bracteata", "2194": "07923_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Baptisia_tinctoria", "2195": "07924_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Bituminaria_bituminosa", "2196": "07925_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Caesalpinia_pulcherrima", "2197": "07926_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Calliandra_eriophylla", "2198": "07927_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Calliandra_houstoniana", "2199": "07928_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Canavalia_rosea", "2200": "07929_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Caragana_arborescens", "2201": "07930_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Carmichaelia_australis", "2202": "07931_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Cassia_fistula", "2203": 
"07932_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Centrosema_virginianum", "2204": "07933_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Ceratonia_siliqua", "2205": "07934_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Cercis_canadensis", "2206": "07935_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Cercis_occidentalis", "2207": "07936_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Chamaecrista_fasciculata", "2208": "07937_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Chamaecrista_nictitans", "2209": "07938_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Chamaecytisus_ruthenicus", "2210": "07939_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Clitoria_mariana", "2211": "07940_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Clitoria_ternatea", "2212": "07941_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Crotalaria_spectabilis", "2213": "07942_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Cytisus_proliferus", "2214": "07943_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Cytisus_scoparius", "2215": "07944_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Dalea_aurea", "2216": "07945_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Dalea_candida", "2217": "07946_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Dalea_formosa", "2218": "07947_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Dalea_purpurea", "2219": "07948_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Delonix_regia", "2220": "07949_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Dermatophyllum_secundiflorum", "2221": "07950_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Desmanthus_illinoensis", "2222": "07951_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Desmodium_canadense", "2223": "07952_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Desmodium_paniculatum", "2224": "07953_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Dichrostachys_cinerea", "2225": "07954_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Ebenopsis_ebano", "2226": "07955_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Enterolobium_cyclocarpum", "2227": "07956_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Erythrina_herbacea", "2228": "07957_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Erythrostemon_gilliesii", "2229": "07958_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Erythrostemon_mexicanus", "2230": "07959_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Eysenhardtia_texana", "2231": "07960_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Galega_officinalis", "2232": "07961_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Genista_monspessulana", "2233": "07962_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Genista_tinctoria", "2234": "07963_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Gleditsia_triacanthos", "2235": "07964_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Gliricidia_sepium", "2236": "07965_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Glycyrrhiza_lepidota", "2237": "07966_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Guilandina_bonduc", "2238": "07967_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Gymnocladus_dioicus", "2239": "07968_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hardenbergia_violacea", "2240": "07969_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hedysarum_alpinum", "2241": "07970_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hedysarum_boreale", "2242": "07971_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hoffmannseggia_glauca", 
"2243": "07972_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hoffmannseggia_microphylla", "2244": "07973_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hosackia_gracilis", "2245": "07974_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hylodesmum_glutinosum", "2246": "07975_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hylodesmum_nudiflorum", "2247": "07976_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Indigofera_miniata", "2248": "07977_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Kennedia_prostrata", "2249": "07978_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Laburnum_anagyroides", "2250": "07979_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_hirsutus", "2251": "07980_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_japonicus", "2252": "07981_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_latifolius", "2253": "07982_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_ochroleucus", "2254": "07983_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_palustris", "2255": "07984_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_pratensis", "2256": "07985_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_sylvestris", "2257": "07986_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_tuberosus", "2258": "07987_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_vernus", "2259": "07988_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_vestitus", "2260": "07989_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_capitata", "2261": "07990_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_cuneata", "2262": "07991_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_virginica", "2263": "07992_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lessertia_frutescens", "2264": "07993_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Leucaena_leucocephala", "2265": "07994_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lotus_corniculatus", "2266": "07995_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lotus_pedunculatus", "2267": "07996_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_albifrons", "2268": "07997_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_angustifolius", "2269": "07998_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_arboreus", "2270": "07999_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_arcticus", "2271": "08000_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_argenteus", "2272": "08001_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_arizonicus", "2273": "08002_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_bicolor", "2274": "08003_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_concinnus", "2275": "08004_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_hirsutissimus", "2276": "08005_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_lepidus", "2277": "08006_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_microcarpus", "2278": "08007_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_nanus", "2279": "08008_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_nootkatensis", "2280": "08009_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_perennis", "2281": "08010_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_polyphyllus", "2282": "08011_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_sparsiflorus", "2283": 
"08012_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_succulentus", "2284": "08013_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_texensis", "2285": "08014_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_truncatus", "2286": "08015_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_atropurpureum", "2287": "08016_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_gibbosifolium", "2288": "08017_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_lathyroides", "2289": "08018_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_arabica", "2290": "08019_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_falcata", "2291": "08020_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_lupulina", "2292": "08021_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_minima", "2293": "08022_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_polymorpha", "2294": "08023_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_sativa", "2295": "08024_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_albus", "2296": "08025_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_indicus", "2297": "08026_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_officinalis", "2298": "08027_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Mimosa_aculeaticarpa", "2299": "08028_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Mimosa_nuttallii", "2300": "08029_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Mimosa_pudica", "2301": "08030_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Mimosa_quadrivalvis", "2302": "08031_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Mimosa_strigillosa", "2303": "08032_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Neptunia_lutea", "2304": "08033_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Neptunia_pubescens", "2305": "08034_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Olneya_tesota", "2306": "08035_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Onobrychis_viciifolia", "2307": "08036_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Ononis_spinosa", "2308": "08037_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Oxytropis_campestris", "2309": "08038_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Oxytropis_lambertii", "2310": "08039_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Paraserianthes_lophantha", "2311": "08040_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Parkinsonia_aculeata", "2312": "08041_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Parkinsonia_florida", "2313": "08042_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Parkinsonia_microphylla", "2314": "08043_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Pediomelum_argophyllum", "2315": "08044_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Pediomelum_tenuiflorum", "2316": "08045_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Pickeringia_montana", "2317": "08046_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Pithecellobium_dulce", "2318": "08047_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Platylobium_obtusangulum", "2319": "08048_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_glandulosa", "2320": "08049_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_laevigata", "2321": "08050_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_pubescens", "2322": "08051_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_velutina", "2323": 
"08052_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_emoryi", "2324": "08053_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_schottii", "2325": "08054_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_spinosus", "2326": "08055_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Pueraria_montana", "2327": "08056_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Robinia_neomexicana", "2328": "08057_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Robinia_pseudoacacia", "2329": "08058_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Rupertia_physodes", "2330": "08059_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Securigera_varia", "2331": "08060_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senegalia_greggii", "2332": "08061_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_alata", "2333": "08062_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_armata", "2334": "08063_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_covesii", "2335": "08064_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_didymobotrya", "2336": "08065_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_hebecarpa", "2337": "08066_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_lindheimeriana", "2338": "08067_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_marilandica", "2339": "08068_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_obtusifolia", "2340": "08069_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_occidentalis", "2341": "08070_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_roemeriana", "2342": "08071_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Sesbania_drummondii", "2343": "08072_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Sesbania_herbacea", "2344": "08073_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Sesbania_punicea", "2345": "08074_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Sesbania_vesicaria", "2346": "08075_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Sophora_affinis", "2347": "08076_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Sophora_microphylla", "2348": "08077_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Spartium_junceum", "2349": "08078_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Strophostyles_helvola", "2350": "08079_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Strophostyles_umbellata", "2351": "08080_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Stylosanthes_biflora", "2352": "08081_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Tamarindus_indica", "2353": "08082_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Tephrosia_virginiana", "2354": "08083_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Thermopsis_californica", "2355": "08084_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Thermopsis_rhombifolia", "2356": "08085_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_angustifolium", "2357": "08086_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_arvense", "2358": "08087_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_aureum", "2359": "08088_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_campestre", "2360": "08089_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_depauperatum", "2361": "08090_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_dubium", "2362": "08091_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_fragiferum", "2363": 
"08092_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_hirtum", "2364": "08093_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_hybridum", "2365": "08094_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_incarnatum", "2366": "08095_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_lupinaster", "2367": "08096_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_medium", "2368": "08097_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_montanum", "2369": "08098_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_pratense", "2370": "08099_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_repens", "2371": "08100_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_resupinatum", "2372": "08101_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_subterraneum", "2373": "08102_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Trifolium_willdenovii", "2374": "08103_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Ulex_europaeus", "2375": "08104_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vachellia_constricta", "2376": "08105_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vachellia_cornigera", "2377": "08106_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vachellia_farnesiana", "2378": "08107_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vachellia_karroo", "2379": "08108_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vachellia_rigidula", "2380": "08109_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vachellia_schaffneri", "2381": "08110_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_americana", "2382": "08111_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_benghalensis", "2383": "08112_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_caroliniana", "2384": "08113_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_cracca", "2385": "08114_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_gigantea", "2386": "08115_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_hirsuta", "2387": "08116_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_ludoviciana", "2388": "08117_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_sativa", "2389": "08118_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_sepium", "2390": "08119_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_sylvatica", "2391": "08120_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_tetrasperma", "2392": "08121_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vicia_villosa", "2393": "08122_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Vigna_luteola", "2394": "08123_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Wisteria_sinensis", "2395": "08124_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Asemeia_grandiflora", "2396": "08125_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Muraltia_heisteria", "2397": "08126_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Muraltia_spinosa", "2398": "08127_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygala_alba", "2399": "08128_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygala_lutea", "2400": "08129_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygala_myrtifolia", "2401": "08130_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygala_nana", "2402": "08131_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygala_polygama", "2403": 
"08132_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygala_rugelii", "2404": "08133_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygala_sanguinea", "2405": "08134_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygala_senega", "2406": "08135_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygaloides_chamaebuxus", "2407": "08136_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Polygaloides_paucifolia", "2408": "08137_Plantae_Tracheophyta_Magnoliopsida_Fabales_Polygalaceae_Rhinotropis_californica", "2409": "08138_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Alnus_alnobetula", "2410": "08139_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Alnus_glutinosa", "2411": "08140_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Alnus_incana", "2412": "08141_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Alnus_rhombifolia", "2413": "08142_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Alnus_rubra", "2414": "08143_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Alnus_serrulata", "2415": "08144_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_alleghaniensis", "2416": "08145_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_lenta", "2417": "08146_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_nana", "2418": "08147_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_nigra", "2419": "08148_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_papyrifera", "2420": "08149_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_pendula", "2421": "08150_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_populifolia", "2422": "08151_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_pubescens", "2423": "08152_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Betula_pumila", "2424": "08153_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Carpinus_betulus", "2425": "08154_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Carpinus_caroliniana", "2426": "08155_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Corylus_americana", "2427": "08156_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Corylus_avellana", "2428": "08157_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Corylus_cornuta", "2429": "08158_Plantae_Tracheophyta_Magnoliopsida_Fagales_Betulaceae_Ostrya_virginiana", "2430": "08159_Plantae_Tracheophyta_Magnoliopsida_Fagales_Casuarinaceae_Casuarina_equisetifolia", "2431": "08160_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Castanea_dentata", "2432": "08161_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Castanea_sativa", "2433": "08162_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Chrysolepis_chrysophylla", "2434": "08163_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Chrysolepis_sempervirens", "2435": "08164_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Fagus_grandifolia", "2436": "08165_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Fagus_sylvatica", "2437": "08166_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Notholithocarpus_densiflorus", "2438": "08167_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_agrifolia", "2439": "08168_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_alba", "2440": "08169_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_berberidifolia", "2441": "08170_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_bicolor", "2442": "08171_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_buckleyi", "2443": 
"08172_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_chrysolepis", "2444": "08173_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_coccifera", "2445": "08174_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_coccinea", "2446": "08175_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_cornelius-mulleri", "2447": "08176_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_douglasii", "2448": "08177_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_durata", "2449": "08178_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_emoryi", "2450": "08179_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_engelmannii", "2451": "08180_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_falcata", "2452": "08181_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_fusiformis", "2453": "08182_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_gambelii", "2454": "08183_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_garryana", "2455": "08184_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_geminata", "2456": "08185_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_ilex", "2457": "08186_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_ilicifolia", "2458": "08187_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_imbricaria", "2459": "08188_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_kelloggii", "2460": "08189_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_laevis", "2461": "08190_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_lobata", "2462": "08191_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_macrocarpa", "2463": "08192_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_marilandica", "2464": "08193_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_michauxii", "2465": "08194_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_montana", "2466": "08195_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_muehlenbergii", "2467": "08196_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_nigra", "2468": "08197_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_palustris", "2469": "08198_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_phellos", "2470": "08199_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_robur", "2471": "08200_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_rotundifolia", "2472": "08201_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_rubra", "2473": "08202_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_shumardii", "2474": "08203_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_sinuata", "2475": "08204_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_stellata", "2476": "08205_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_suber", "2477": "08206_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_turbinella", "2478": "08207_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_velutina", "2479": "08208_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_virginiana", "2480": "08209_Plantae_Tracheophyta_Magnoliopsida_Fagales_Fagaceae_Quercus_wislizeni", "2481": "08210_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Carya_cordiformis", "2482": "08211_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Carya_glabra", "2483": "08212_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Carya_illinoinensis", 
"2484": "08213_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Carya_ovata", "2485": "08214_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Carya_tomentosa", "2486": "08215_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Juglans_californica", "2487": "08216_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Juglans_cinerea", "2488": "08217_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Juglans_nigra", "2489": "08218_Plantae_Tracheophyta_Magnoliopsida_Fagales_Juglandaceae_Juglans_regia", "2490": "08219_Plantae_Tracheophyta_Magnoliopsida_Fagales_Myricaceae_Comptonia_peregrina", "2491": "08220_Plantae_Tracheophyta_Magnoliopsida_Fagales_Myricaceae_Morella_californica", "2492": "08221_Plantae_Tracheophyta_Magnoliopsida_Fagales_Myricaceae_Morella_cerifera", "2493": "08222_Plantae_Tracheophyta_Magnoliopsida_Fagales_Myricaceae_Morella_pensylvanica", "2494": "08223_Plantae_Tracheophyta_Magnoliopsida_Fagales_Myricaceae_Myrica_gale", "2495": "08224_Plantae_Tracheophyta_Magnoliopsida_Fagales_Nothofagaceae_Nothofagus_cliffortioides", "2496": "08225_Plantae_Tracheophyta_Magnoliopsida_Fagales_Nothofagaceae_Nothofagus_menziesii", "2497": "08226_Plantae_Tracheophyta_Magnoliopsida_Garryales_Garryaceae_Garrya_elliptica", "2498": "08227_Plantae_Tracheophyta_Magnoliopsida_Garryales_Garryaceae_Garrya_lindheimeri", "2499": "08228_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Amsonia_ciliata", "2500": "08229_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Amsonia_tabernaemontana", "2501": "08230_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Apocynum_androsaemifolium", "2502": "08231_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Apocynum_cannabinum", "2503": "08232_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Araujia_sericifera", "2504": "08233_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_albicans", "2505": "08234_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_amplexicaulis", "2506": "08235_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_asperula", "2507": "08236_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_californica", "2508": "08237_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_cordifolia", "2509": "08238_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_curassavica", "2510": "08239_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_eriocarpa", "2511": "08240_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_erosa", "2512": "08241_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_exaltata", "2513": "08242_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_fascicularis", "2514": "08243_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_hirtella", "2515": "08244_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_humistrata", "2516": "08245_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_incarnata", "2517": "08246_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_lanceolata", "2518": "08247_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_latifolia", "2519": "08248_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_linaria", "2520": "08249_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_oenotheroides", "2521": 
"08250_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_perennis", "2522": "08251_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_purpurascens", "2523": "08252_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_quadrifolia", "2524": "08253_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_speciosa", "2525": "08254_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_subulata", "2526": "08255_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_subverticillata", "2527": "08256_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_syriaca", "2528": "08257_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_tuberosa", "2529": "08258_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_variegata", "2530": "08259_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_verticillata", "2531": "08260_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_viridiflora", "2532": "08261_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Asclepias_viridis", "2533": "08262_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Calotropis_gigantea", "2534": "08263_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Calotropis_procera", "2535": "08264_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Carissa_bispinosa", "2536": "08265_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Carissa_macrocarpa", "2537": "08266_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Cascabela_thevetia", "2538": "08267_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Catharanthus_roseus", "2539": "08268_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Cynanchum_laeve", "2540": "08269_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Funastrum_cynanchoides", "2541": "08270_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Funastrum_heterophyllum", "2542": "08271_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Funastrum_hirtellum", "2543": "08272_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Gomphocarpus_cancellatus", "2544": "08273_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Gomphocarpus_fruticosus", "2545": "08274_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Gomphocarpus_physocarpus", "2546": "08275_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Gonolobus_suberosus", "2547": "08276_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Matelea_biflora", "2548": "08277_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Matelea_reticulata", "2549": "08278_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Nerium_oleander", "2550": "08279_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Parsonsia_heterophylla", "2551": "08280_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Plumeria_rubra", "2552": "08281_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Trachelospermum_jasminoides", "2553": "08282_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Vinca_major", "2554": "08283_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Vinca_minor", "2555": "08284_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Vincetoxicum_hirundinaria", "2556": "08285_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Vincetoxicum_nigrum", "2557": "08286_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Apocynaceae_Vincetoxicum_rossicum", 
"2558": "08287_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gelsemiaceae_Gelsemium_sempervirens", "2559": "08288_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Blackstonia_perfoliata", "2560": "08289_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Centaurium_erythraea", "2561": "08290_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Centaurium_pulchellum", "2562": "08291_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Chironia_baccifera", "2563": "08292_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Eustoma_exaltatum", "2564": "08293_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Frasera_caroliniensis", "2565": "08294_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Frasera_speciosa", "2566": "08295_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_acaulis", "2567": "08296_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_alba", "2568": "08297_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_algida", "2569": "08298_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_andrewsii", "2570": "08299_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_asclepiadea", "2571": "08300_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_calycosa", "2572": "08301_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_clausa", "2573": "08302_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_cruciata", "2574": "08303_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_linearis", "2575": "08304_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_lutea", "2576": "08305_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_pneumonanthe", "2577": "08306_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_sedifolia", "2578": "08307_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentiana_verna", "2579": "08308_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentianella_amarella", "2580": "08309_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentianopsis_ciliata", "2581": "08310_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Gentianopsis_crinita", "2582": "08311_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Obolaria_virginica", "2583": "08312_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Sabatia_angularis", "2584": "08313_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Sabatia_campestris", "2585": "08314_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Sabatia_stellaris", "2586": "08315_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Zeltnera_texensis", "2587": "08316_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Gentianaceae_Zeltnera_venusta", "2588": "08317_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Loganiaceae_Geniostoma_ligustrifolium", "2589": "08318_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Loganiaceae_Spigelia_marilandica", "2590": "08319_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Bouvardia_ternifolia", "2591": "08320_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Cephalanthus_occidentalis", "2592": "08321_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coffea_arabica", "2593": "08322_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_acerosa", "2594": "08323_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_areolata", "2595": 
"08324_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_crassifolia", "2596": "08325_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_dumosa", "2597": "08326_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_foetidissima", "2598": "08327_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_grandifolia", "2599": "08328_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_lucida", "2600": "08329_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_propinqua", "2601": "08330_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_repens", "2602": "08331_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_rhamnoides", "2603": "08332_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_robusta", "2604": "08333_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Coprosma_rotundifolia", "2605": "08334_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Cruciata_laevipes", "2606": "08335_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Diodia_virginiana", "2607": "08336_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_album", "2608": "08337_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_angustifolium", "2609": "08338_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_aparine", "2610": "08339_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_boreale", "2611": "08340_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_circaezans", "2612": "08341_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_mollugo", "2613": "08342_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_odoratum", "2614": "08343_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_palustre", "2615": "08344_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_triflorum", "2616": "08345_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Galium_verum", "2617": "08346_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Hamelia_patens", "2618": "08347_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Hexasepalum_teres", "2619": "08348_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Houstonia_caerulea", "2620": "08349_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Houstonia_longifolia", "2621": "08350_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Houstonia_procumbens", "2622": "08351_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Houstonia_purpurea", "2623": "08352_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Houstonia_pusilla", "2624": "08353_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Ixora_coccinea", "2625": "08354_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Mitchella_repens", "2626": "08355_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Morinda_citrifolia", "2627": "08356_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Nertera_depressa", "2628": "08357_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Oldenlandia_corymbosa", "2629": "08358_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Paederia_foetida", "2630": "08359_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Psychotria_asiatica", "2631": "08360_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Psychotria_nervosa", "2632": "08361_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Psychotria_poeppigiana", "2633": 
"08362_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Richardia_grandiflora", "2634": "08363_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Sherardia_arvensis", "2635": "08364_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Spermacoce_verticillata", "2636": "08365_Plantae_Tracheophyta_Magnoliopsida_Gentianales_Rubiaceae_Stenaria_nigricans", "2637": "08366_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Erodium_botrys", "2638": "08367_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Erodium_cicutarium", "2639": "08368_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Erodium_moschatum", "2640": "08369_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Erodium_texanum", "2641": "08370_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_caespitosum", "2642": "08371_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_carolinianum", "2643": "08372_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_dissectum", "2644": "08373_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_erianthum", "2645": "08374_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_lucidum", "2646": "08375_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_maculatum", "2647": "08376_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_molle", "2648": "08377_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_palustre", "2649": "08378_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_phaeum", "2650": "08379_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_pratense", "2651": "08380_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_purpureum", "2652": "08381_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_pusillum", "2653": "08382_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_pyrenaicum", "2654": "08383_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_richardsonii", "2655": "08384_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_robertianum", "2656": "08385_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_sanguineum", "2657": "08386_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_sibiricum", "2658": "08387_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_sylvaticum", "2659": "08388_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Geranium_viscosissimum", "2660": "08389_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Pelargonium_capitatum", "2661": "08390_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Pelargonium_cucullatum", "2662": "08391_Plantae_Tracheophyta_Magnoliopsida_Geraniales_Geraniaceae_Pelargonium_triste", "2663": "08392_Plantae_Tracheophyta_Magnoliopsida_Gunnerales_Gunneraceae_Gunnera_tinctoria", "2664": "08393_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Acanthus_mollis", "2665": "08394_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Asystasia_gangetica", "2666": "08395_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Avicennia_germinans", "2667": "08396_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Avicennia_marina", "2668": "08397_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Dicliptera_brachiata", "2669": "08398_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Dyschoriste_linearis", "2670": 
"08399_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Hypoestes_phyllostachya", "2671": "08400_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Justicia_americana", "2672": "08401_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Justicia_californica", "2673": "08402_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Justicia_pilosella", "2674": "08403_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Pachystachys_lutea", "2675": "08404_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Ruellia_blechum", "2676": "08405_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Ruellia_caroliniensis", "2677": "08406_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Ruellia_humilis", "2678": "08407_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Ruellia_nudiflora", "2679": "08408_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Ruellia_simplex", "2680": "08409_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Ruellia_strepens", "2681": "08410_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Thunbergia_alata", "2682": "08411_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Acanthaceae_Thunbergia_grandiflora", "2683": "08412_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Bignonia_capreolata", "2684": "08413_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Campsis_radicans", "2685": "08414_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Catalpa_speciosa", "2686": "08415_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Chilopsis_linearis", "2687": "08416_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Crescentia_cujete", "2688": "08417_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Jacaranda_mimosifolia", "2689": "08418_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Spathodea_campanulata", "2690": "08419_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Tabebuia_rosea", "2691": "08420_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Tecoma_stans", "2692": "08421_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Bignoniaceae_Tecomaria_capensis", "2693": "08422_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Gesneriaceae_Rhabdothamnus_solandri", "2694": "08423_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Agastache_foeniculum", "2695": "08424_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Agastache_urticifolia", "2696": "08425_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Ajuga_chamaepitys", "2697": "08426_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Ajuga_genevensis", "2698": "08427_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Ajuga_reptans", "2699": "08428_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Ballota_nigra", "2700": "08429_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Betonica_officinalis", "2701": "08430_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Blephilia_hirsuta", "2702": "08431_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Callicarpa_americana", "2703": "08432_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Callicarpa_formosana", "2704": "08433_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Clinopodium_acinos", "2705": "08434_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Clinopodium_douglasii", "2706": "08435_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Clinopodium_vulgare", "2707": "08436_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Collinsonia_canadensis", "2708": 
"08437_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Condea_emoryi", "2709": "08438_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Cunila_origanoides", "2710": "08439_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Galeopsis_bifida", "2711": "08440_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Galeopsis_speciosa", "2712": "08441_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Galeopsis_tetrahit", "2713": "08442_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Glechoma_hederacea", "2714": "08443_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Hedeoma_acinoides", "2715": "08444_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Hyptis_alata", "2716": "08445_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lamium_album", "2717": "08446_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lamium_amplexicaule", "2718": "08447_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lamium_galeobdolon", "2719": "08448_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lamium_maculatum", "2720": "08449_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lamium_purpureum", "2721": "08450_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lavandula_stoechas", "2722": "08451_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Leonotis_leonurus", "2723": "08452_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Leonotis_nepetifolia", "2724": "08453_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Leonurus_cardiaca", "2725": "08454_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Leonurus_quinquelobatus", "2726": "08455_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lepechinia_calycina", "2727": "08456_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lycopus_americanus", "2728": "08457_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lycopus_europaeus", "2729": "08458_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Lycopus_uniflorus", "2730": "08459_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Marrubium_vulgare", "2731": "08460_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Melissa_officinalis", "2732": "08461_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Melittis_melissophyllum", "2733": "08462_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Mentha_aquatica", "2734": "08463_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Mentha_arvensis", "2735": "08464_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Mentha_canadensis", "2736": "08465_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Mentha_longifolia", "2737": "08466_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Mentha_pulegium", "2738": "08467_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Mentha_spicata", "2739": "08468_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Monarda_citriodora", "2740": "08469_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Monarda_didyma", "2741": "08470_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Monarda_fistulosa", "2742": "08471_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Monarda_punctata", "2743": "08472_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Monardella_odoratissima", "2744": "08473_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Monardella_villosa", "2745": "08474_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Nepeta_cataria", "2746": "08475_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Origanum_vulgare", "2747": "08476_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Perilla_frutescens", 
"2748": "08477_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Phlomoides_tuberosa", "2749": "08478_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Physostegia_virginiana", "2750": "08479_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Prunella_vulgaris", "2751": "08480_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Pycnanthemum_tenuifolium", "2752": "08481_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Pycnanthemum_virginianum", "2753": "08482_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_apiana", "2754": "08483_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_aurea", "2755": "08484_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_azurea", "2756": "08485_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_coccinea", "2757": "08486_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_columbariae", "2758": "08487_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_dorrii", "2759": "08488_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_farinacea", "2760": "08489_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_glutinosa", "2761": "08490_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_leucophylla", "2762": "08491_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_lyrata", "2763": "08492_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_mellifera", "2764": "08493_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_microphylla", "2765": "08494_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_pratensis", "2766": "08495_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_roemeriana", "2767": "08496_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_rosmarinus", "2768": "08497_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_spathacea", "2769": "08498_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_texana", "2770": "08499_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Salvia_verticillata", "2771": "08500_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Scutellaria_drummondii", "2772": "08501_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Scutellaria_galericulata", "2773": "08502_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Scutellaria_integrifolia", "2774": "08503_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Scutellaria_lateriflora", "2775": "08504_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Scutellaria_mexicana", "2776": "08505_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Scutellaria_parvula", "2777": "08506_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Scutellaria_tuberosa", "2778": "08507_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Stachys_bullata", "2779": "08508_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Stachys_byzantina", "2780": "08509_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Stachys_chamissonis", "2781": "08510_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Stachys_coccinea", "2782": "08511_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Stachys_floridana", "2783": "08512_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Stachys_palustris", "2784": "08513_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Stachys_recta", "2785": "08514_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Stachys_sylvatica", "2786": "08515_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Teucrium_canadense", "2787": 
"08516_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Teucrium_chamaedrys", "2788": "08517_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Teucrium_cubense", "2789": "08518_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Teucrium_scorodonia", "2790": "08519_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Thymus_praecox", "2791": "08520_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Thymus_pulegioides", "2792": "08521_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Thymus_serpyllum", "2793": "08522_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Trichostema_dichotomum", "2794": "08523_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Trichostema_lanatum", "2795": "08524_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Trichostema_lanceolatum", "2796": "08525_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Trichostema_parishii", "2797": "08526_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Vitex_agnus-castus", "2798": "08527_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Vitex_lucens", "2799": "08528_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Vitex_rotundifolia", "2800": "08529_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lamiaceae_Warnockia_scutellarioides", "2801": "08530_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lentibulariaceae_Pinguicula_alpina", "2802": "08531_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lentibulariaceae_Pinguicula_vulgaris", "2803": "08532_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lentibulariaceae_Utricularia_cornuta", "2804": "08533_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lentibulariaceae_Utricularia_gibba", "2805": "08534_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lentibulariaceae_Utricularia_intermedia", "2806": "08535_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lentibulariaceae_Utricularia_macrorhiza", "2807": "08536_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lentibulariaceae_Utricularia_purpurea", "2808": "08537_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Lentibulariaceae_Utricularia_subulata", "2809": "08538_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Linderniaceae_Lindernia_dubia", "2810": "08539_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Linderniaceae_Torenia_crustacea", "2811": "08540_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Martyniaceae_Proboscidea_althaeifolia", "2812": "08541_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Martyniaceae_Proboscidea_louisianica", "2813": "08542_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Mazaceae_Mazus_pumilus", "2814": "08543_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Chionanthus_virginicus", "2815": "08544_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Forestiera_pubescens", "2816": "08545_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Fraxinus_albicans", "2817": "08546_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Fraxinus_americana", "2818": "08547_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Fraxinus_excelsior", "2819": "08548_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Fraxinus_latifolia", "2820": "08549_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Fraxinus_nigra", "2821": "08550_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Fraxinus_pennsylvanica", "2822": "08551_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Fraxinus_velutina", "2823": "08552_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Ligustrum_lucidum", "2824": "08553_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Ligustrum_quihoui", "2825": 
"08554_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Ligustrum_sinense", "2826": "08555_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Ligustrum_vulgare", "2827": "08556_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Olea_europaea", "2828": "08557_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Oleaceae_Syringa_vulgaris", "2829": "08558_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Agalinis_heterophylla", "2830": "08559_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Agalinis_purpurea", "2831": "08560_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Agalinis_tenuifolia", "2832": "08561_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Aphyllon_fasciculatum", "2833": "08562_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Aphyllon_purpureum", "2834": "08563_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Aphyllon_uniflorum", "2835": "08564_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Bartsia_alpina", "2836": "08565_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Bellardia_trixago", "2837": "08566_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Bellardia_viscosa", "2838": "08567_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_affinis", "2839": "08568_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_applegatei", "2840": "08569_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_arvensis", "2841": "08570_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_attenuata", "2842": "08571_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_chromosa", "2843": "08572_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_coccinea", "2844": "08573_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_densiflora", "2845": "08574_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_exserta", "2846": "08575_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_foliolosa", "2847": "08576_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_hispida", "2848": "08577_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_indivisa", "2849": "08578_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_integra", "2850": "08579_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_latifolia", "2851": "08580_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_linariifolia", "2852": "08581_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_miniata", "2853": "08582_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_parviflora", "2854": "08583_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_purpurea", "2855": "08584_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_rhexiifolia", "2856": "08585_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_rubicundula", "2857": "08586_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_sessiliflora", "2858": "08587_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_tenuiflora", "2859": "08588_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Castilleja_wightii", "2860": "08589_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Conopholis_alpina", "2861": "08590_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Conopholis_americana", "2862": 
"08591_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Cordylanthus_rigidus", "2863": "08592_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Epifagus_virginiana", "2864": "08593_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Euphrasia_nemorosa", "2865": "08594_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Hyobanche_sanguinea", "2866": "08595_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Lathraea_squamaria", "2867": "08596_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Melampyrum_arvense", "2868": "08597_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Melampyrum_lineare", "2869": "08598_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Melampyrum_nemorosum", "2870": "08599_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Melampyrum_pratense", "2871": "08600_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Odontites_vulgaris", "2872": "08601_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Orobanche_hederae", "2873": "08602_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Orobanche_minor", "2874": "08603_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_attollens", "2875": "08604_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_bracteosa", "2876": "08605_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_canadensis", "2877": "08606_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_densiflora", "2878": "08607_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_groenlandica", "2879": "08608_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_labradorica", "2880": "08609_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_lanceolata", "2881": "08610_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_racemosa", "2882": "08611_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_semibarbata", "2883": "08612_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_sylvatica", "2884": "08613_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Pedicularis_verticillata", "2885": "08614_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Rhinanthus_alectorolophus", "2886": "08615_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Rhinanthus_minor", "2887": "08616_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Rhinanthus_serotinus", "2888": "08617_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Triphysaria_eriantha", "2889": "08618_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Orobanchaceae_Triphysaria_pusilla", "2890": "08619_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Paulowniaceae_Paulownia_tomentosa", "2891": "08620_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Diplacus_aurantiacus", "2892": "08621_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Diplacus_bigelovii", "2893": "08622_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Diplacus_brevipes", "2894": "08623_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Diplacus_longiflorus", "2895": "08624_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Diplacus_puniceus", "2896": "08625_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Erythranthe_alsinoides", "2897": "08626_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Erythranthe_cardinalis", "2898": "08627_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Erythranthe_guttata", "2899": 
"08628_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Erythranthe_lewisii", "2900": "08629_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Erythranthe_moschata", "2901": "08630_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Erythranthe_primuloides", "2902": "08631_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Mimulus_alatus", "2903": "08632_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Mimulus_ringens", "2904": "08633_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Phrymaceae_Phryma_leptostachya", "2905": "08634_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Antirrhinum_majus", "2906": "08635_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Bacopa_monnieri", "2907": "08636_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Chaenorhinum_minus", "2908": "08637_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Chelone_glabra", "2909": "08638_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Collinsia_heterophylla", "2910": "08639_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Collinsia_parviflora", "2911": "08640_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Collinsia_verna", "2912": "08641_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Cymbalaria_muralis", "2913": "08642_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Digitalis_grandiflora", "2914": "08643_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Digitalis_purpurea", "2915": "08644_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Hippuris_vulgaris", "2916": "08645_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Keckiella_antirrhinoides", "2917": "08646_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Keckiella_cordifolia", "2918": "08647_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Keckiella_ternata", "2919": "08648_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Leucospora_multifida", "2920": "08649_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Linaria_alpina", "2921": "08650_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Linaria_dalmatica", "2922": "08651_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Linaria_purpurea", "2923": "08652_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Linaria_vulgaris", "2924": "08653_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Maurandya_antirrhiniflora", "2925": "08654_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Mecardonia_procumbens", "2926": "08655_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Mohavea_confertiflora", "2927": "08656_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Nuttallanthus_canadensis", "2928": "08657_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Nuttallanthus_texanus", "2929": "08658_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_barbatus", "2930": "08659_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_centranthifolius", "2931": "08660_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_cobaea", "2932": "08661_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_davidsonii", "2933": "08662_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_digitalis", "2934": "08663_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_eatonii", "2935": "08664_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_grandiflorus", "2936": 
"08665_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_heterophyllus", "2937": "08666_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_hirsutus", "2938": "08667_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_newberryi", "2939": "08668_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_palmeri", "2940": "08669_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_procerus", "2941": "08670_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_rostriflorus", "2942": "08671_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_spectabilis", "2943": "08672_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Penstemon_whippleanus", "2944": "08673_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_aristata", "2945": "08674_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_coronopus", "2946": "08675_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_erecta", "2947": "08676_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_lanceolata", "2948": "08677_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_major", "2949": "08678_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_maritima", "2950": "08679_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_media", "2951": "08680_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_ovata", "2952": "08681_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_patagonica", "2953": "08682_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_rugelii", "2954": "08683_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Plantago_virginica", "2955": "08684_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Sairocarpus_coulterianus", "2956": "08685_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Sairocarpus_nuttallianus", "2957": "08686_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Scoparia_dulcis", "2958": "08687_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_americana", "2959": "08688_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_anagallis-aquatica", "2960": "08689_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_arvensis", "2961": "08690_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_beccabunga", "2962": "08691_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_chamaedrys", "2963": "08692_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_elliptica", "2964": "08693_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_filiformis", "2965": "08694_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_hederifolia", "2966": "08695_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_longifolia", "2967": "08696_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_officinalis", "2968": "08697_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_peregrina", "2969": "08698_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_persica", "2970": "08699_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_polita", "2971": "08700_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_salicifolia", "2972": "08701_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_scutellata", "2973": 
"08702_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_serpyllifolia", "2974": "08703_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_spicata", "2975": "08704_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_stricta", "2976": "08705_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_teucrium", "2977": "08706_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronica_wormskjoldii", "2978": "08707_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Plantaginaceae_Veronicastrum_virginicum", "2979": "08708_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Buddleja_cordata", "2980": "08709_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Buddleja_davidii", "2981": "08710_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Leucophyllum_frutescens", "2982": "08711_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Myoporum_laetum", "2983": "08712_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Oftia_africana", "2984": "08713_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Scrophularia_californica", "2985": "08714_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Scrophularia_nodosa", "2986": "08715_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Verbascum_blattaria", "2987": "08716_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Verbascum_lychnitis", "2988": "08717_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Verbascum_nigrum", "2989": "08718_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Verbascum_thapsus", "2990": "08719_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Scrophulariaceae_Verbascum_virgatum", "2991": "08720_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Stilbaceae_Halleria_lucida", "2992": "08721_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Tetrachondraceae_Polypremum_procumbens", "2993": "08722_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Aloysia_gratissima", "2994": "08723_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Duranta_erecta", "2995": "08724_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Glandularia_bipinnatifida", "2996": "08725_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Glandularia_canadensis", "2997": "08726_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Glandularia_pumila", "2998": "08727_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Lantana_camara", "2999": "08728_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Lantana_involucrata", "3000": "08729_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Lantana_montevidensis", "3001": "08730_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Phyla_lanceolata", "3002": "08731_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Phyla_nodiflora", "3003": "08732_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Stachytarpheta_jamaicensis", "3004": "08733_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_bonariensis", "3005": "08734_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_bracteata", "3006": "08735_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_brasiliensis", "3007": "08736_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_halei", "3008": "08737_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_hastata", "3009": "08738_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_lasiostachys", "3010": 
"08739_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_macdougalii", "3011": "08740_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_officinalis", "3012": "08741_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_rigida", "3013": "08742_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_stricta", "3014": "08743_Plantae_Tracheophyta_Magnoliopsida_Lamiales_Verbenaceae_Verbena_urticifolia", "3015": "08744_Plantae_Tracheophyta_Magnoliopsida_Laurales_Atherospermataceae_Laurelia_novae-zelandiae", "3016": "08745_Plantae_Tracheophyta_Magnoliopsida_Laurales_Calycanthaceae_Calycanthus_floridus", "3017": "08746_Plantae_Tracheophyta_Magnoliopsida_Laurales_Calycanthaceae_Calycanthus_occidentalis", "3018": "08747_Plantae_Tracheophyta_Magnoliopsida_Laurales_Lauraceae_Beilschmiedia_tawa", "3019": "08748_Plantae_Tracheophyta_Magnoliopsida_Laurales_Lauraceae_Cassytha_filiformis", "3020": "08749_Plantae_Tracheophyta_Magnoliopsida_Laurales_Lauraceae_Cinnamomum_camphora", "3021": "08750_Plantae_Tracheophyta_Magnoliopsida_Laurales_Lauraceae_Laurus_nobilis", "3022": "08751_Plantae_Tracheophyta_Magnoliopsida_Laurales_Lauraceae_Lindera_benzoin", "3023": "08752_Plantae_Tracheophyta_Magnoliopsida_Laurales_Lauraceae_Persea_americana", "3024": "08753_Plantae_Tracheophyta_Magnoliopsida_Laurales_Lauraceae_Sassafras_albidum", "3025": "08754_Plantae_Tracheophyta_Magnoliopsida_Laurales_Lauraceae_Umbellularia_californica", "3026": "08755_Plantae_Tracheophyta_Magnoliopsida_Laurales_Monimiaceae_Hedycarya_arborea", "3027": "08756_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Annonaceae_Annona_glabra", "3028": "08757_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Annonaceae_Asimina_triloba", "3029": "08758_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Magnoliaceae_Liriodendron_tulipifera", "3030": "08759_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Magnoliaceae_Magnolia_acuminata", "3031": "08760_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Magnoliaceae_Magnolia_fraseri", "3032": "08761_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Magnoliaceae_Magnolia_grandiflora", "3033": "08762_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Magnoliaceae_Magnolia_macrophylla", "3034": "08763_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Magnoliaceae_Magnolia_tripetala", "3035": "08764_Plantae_Tracheophyta_Magnoliopsida_Magnoliales_Magnoliaceae_Magnolia_virginiana", "3036": "08765_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Chrysobalanaceae_Chrysobalanus_icaco", "3037": "08766_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Acalypha_ostryifolia", "3038": "08767_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Acalypha_rhomboidea", "3039": "08768_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Cnidoscolus_stimulosus", "3040": "08769_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Cnidoscolus_texanus", "3041": "08770_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Codiaeum_variegatum", "3042": "08771_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Croton_californicus", "3043": "08772_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Croton_fruticulosus", "3044": "08773_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Croton_glandulosus", "3045": "08774_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Croton_lindheimeri", "3046": "08775_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Croton_monanthogynus", "3047": 
"08776_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Croton_punctatus", "3048": "08777_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Croton_setiger", "3049": "08778_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Ditaxis_lanceolata", "3050": "08779_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_albomarginata", "3051": "08780_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_amygdaloides", "3052": "08781_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_antisyphilitica", "3053": "08782_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_bicolor", "3054": "08783_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_caput-medusae", "3055": "08784_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_characias", "3056": "08785_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_corollata", "3057": "08786_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_cyathophora", "3058": "08787_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_cyparissias", "3059": "08788_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_dentata", "3060": "08789_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_fendleri", "3061": "08790_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_helioscopia", "3062": "08791_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_heterophylla", "3063": "08792_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_hirta", "3064": "08793_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_hypericifolia", "3065": "08794_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_hyssopifolia", "3066": "08795_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_lathyris", "3067": "08796_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_maculata", "3068": "08797_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_marginata", "3069": "08798_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_mauritanica", "3070": "08799_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_melanadenia", "3071": "08800_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_misera", "3072": "08801_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_myrsinites", "3073": "08802_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_nutans", "3074": "08803_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_oblongata", "3075": "08804_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_ophthalmica", "3076": "08805_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_paralias", "3077": "08806_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_peplus", "3078": "08807_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_polycarpa", "3079": "08808_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_prostrata", "3080": "08809_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_serpens", "3081": "08810_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_serpillifolia", "3082": 
"08811_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_spathulata", "3083": "08812_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_terracina", "3084": "08813_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Euphorbia_virgata", "3085": "08814_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Jatropha_dioica", "3086": "08815_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Jatropha_gossypiifolia", "3087": "08816_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Macaranga_tanarius", "3088": "08817_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Mallotus_paniculatus", "3089": "08818_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Manihot_esculenta", "3090": "08819_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Mercurialis_annua", "3091": "08820_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Mercurialis_perennis", "3092": "08821_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Ricinus_communis", "3093": "08822_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Stillingia_sylvatica", "3094": "08823_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Stillingia_texana", "3095": "08824_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Euphorbiaceae_Triadica_sebifera", "3096": "08825_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_androsaemum", "3097": "08826_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_ascyron", "3098": "08827_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_calycinum", "3099": "08828_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_fraseri", "3100": "08829_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_gentianoides", "3101": "08830_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_hypericoides", "3102": "08831_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_kalmianum", "3103": "08832_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_maculatum", "3104": "08833_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_mutilum", "3105": "08834_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_perforatum", "3106": "08835_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_punctatum", "3107": "08836_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Hypericaceae_Hypericum_tetrapetalum", "3108": "08837_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Linaceae_Linum_bienne", "3109": "08838_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Linaceae_Linum_lewisii", "3110": "08839_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Linaceae_Linum_pratense", "3111": "08840_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Linaceae_Linum_rigidum", "3112": "08841_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Malpighiaceae_Byrsonima_crassifolia", "3113": "08842_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_caerulea", "3114": "08843_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_edulis", "3115": "08844_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_foetida", "3116": "08845_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_incarnata", "3117": "08846_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_lutea", "3118": 
"08847_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_suberosa", "3119": "08848_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_tetrandra", "3120": "08849_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_tripartita", "3121": "08850_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Passiflora_vitifolia", "3122": "08851_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Passifloraceae_Turnera_ulmifolia", "3123": "08852_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Phyllanthaceae_Bischofia_javanica", "3124": "08853_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Phyllanthaceae_Phyllanthus_tenellus", "3125": "08854_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Phyllanthaceae_Phyllanthus_urinaria", "3126": "08855_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Rhizophoraceae_Rhizophora_mangle", "3127": "08856_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_alba", "3128": "08857_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_balsamifera", "3129": "08858_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_deltoides", "3130": "08859_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_fremontii", "3131": "08860_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_grandidentata", "3132": "08861_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_nigra", "3133": "08862_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_tremula", "3134": "08863_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_tremuloides", "3135": "08864_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Populus_trichocarpa", "3136": "08865_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_alba", "3137": "08866_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_babylonica", "3138": "08867_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_bebbiana", "3139": "08868_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_caprea", "3140": "08869_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_cinerea", "3141": "08870_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_discolor", "3142": "08871_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_eriocephala", "3143": "08872_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_exigua", "3144": "08873_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_gooddingii", "3145": "08874_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_interior", "3146": "08875_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_laevigata", "3147": "08876_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_lasiolepis", "3148": "08877_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_myrsinifolia", "3149": "08878_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_nigra", "3150": "08879_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_reticulata", "3151": "08880_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_triandra", "3152": "08881_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Salicaceae_Salix_viminalis", "3153": "08882_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Melicytus_alpinus", "3154": "08883_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Melicytus_ramiflorus", "3155": 
"08884_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_adunca", "3156": "08885_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_arvensis", "3157": "08886_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_bicolor", "3158": "08887_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_biflora", "3159": "08888_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_blanda", "3160": "08889_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_canadensis", "3161": "08890_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_canina", "3162": "08891_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_glabella", "3163": "08892_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_hastata", "3164": "08893_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_hirta", "3165": "08894_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_labradorica", "3166": "08895_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_lanceolata", "3167": "08896_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_lobata", "3168": "08897_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_macloskeyi", "3169": "08898_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_mirabilis", "3170": "08899_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_nuttallii", "3171": "08900_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_ocellata", "3172": "08901_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_odorata", "3173": "08902_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_palmata", "3174": "08903_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_pedata", "3175": "08904_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_pedatifida", "3176": "08905_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_pedunculata", "3177": "08906_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_primulifolia", "3178": "08907_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_pubescens", "3179": "08908_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_purpurea", "3180": "08909_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_reichenbachiana", "3181": "08910_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_riviniana", "3182": "08911_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_rostrata", "3183": "08912_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_rotundifolia", "3184": "08913_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_sagittata", "3185": "08914_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_sempervirens", "3186": "08915_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_sororia", "3187": "08916_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_striata", "3188": "08917_Plantae_Tracheophyta_Magnoliopsida_Malpighiales_Violaceae_Viola_tricolor", "3189": "08918_Plantae_Tracheophyta_Magnoliopsida_Malvales_Bixaceae_Bixa_orellana", "3190": "08919_Plantae_Tracheophyta_Magnoliopsida_Malvales_Bixaceae_Cochlospermum_vitifolium", "3191": "08920_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Cistus_albidus", "3192": "08921_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Cistus_creticus", "3193": "08922_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Cistus_ladanifer", "3194": 
"08923_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Cistus_monspeliensis", "3195": "08924_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Cistus_salviifolius", "3196": "08925_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Crocanthemum_canadense", "3197": "08926_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Crocanthemum_scoparium", "3198": "08927_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Helianthemum_nummularium", "3199": "08928_Plantae_Tracheophyta_Magnoliopsida_Malvales_Cistaceae_Hudsonia_tomentosa", "3200": "08929_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Abutilon_fruticosum", "3201": "08930_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Abutilon_theophrasti", "3202": "08931_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Adansonia_digitata", "3203": "08932_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Alcea_rosea", "3204": "08933_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Allowissadula_holosericea", "3205": "08934_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Anoda_cristata", "3206": "08935_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Callirhoe_involucrata", "3207": "08936_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Callirhoe_pedata", "3208": "08937_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Ceiba_pentandra", "3209": "08938_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Entelea_arborescens", "3210": "08939_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Eremalche_rotundifolia", "3211": "08940_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Fremontodendron_californicum", "3212": "08941_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Gossypium_hirsutum", "3213": "08942_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Guazuma_ulmifolia", "3214": "08943_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Herissantia_crispa", "3215": "08944_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Hibiscus_denudatus", "3216": "08945_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Hibiscus_laevis", "3217": "08946_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Hibiscus_moscheutos", "3218": "08947_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Hibiscus_rosa-sinensis", "3219": "08948_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Hibiscus_syriacus", "3220": "08949_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Hibiscus_tiliaceus", "3221": "08950_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Hibiscus_trionum", "3222": "08951_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Hoheria_populnea", "3223": "08952_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Horsfordia_newberryi", "3224": "08953_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Kosteletzkya_pentacarpos", "3225": "08954_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malacothamnus_fasciculatus", "3226": "08955_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malva_arborea", "3227": "08956_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malva_assurgentiflora", "3228": "08957_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malva_moschata", "3229": "08958_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malva_neglecta", "3230": "08959_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malva_parviflora", "3231": "08960_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malva_sylvestris", "3232": "08961_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malva_thuringiaca", "3233": 
"08962_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malvastrum_coromandelianum", "3234": "08963_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malvaviscus_arboreus", "3235": "08964_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Malvella_leprosa", "3236": "08965_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Melochia_pyramidata", "3237": "08966_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Melochia_tomentosa", "3238": "08967_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Modiola_caroliniana", "3239": "08968_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Pachira_aquatica", "3240": "08969_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Pavonia_lasiopetala", "3241": "08970_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Plagianthus_divaricatus", "3242": "08971_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Plagianthus_regius", "3243": "08972_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sida_abutifolia", "3244": "08973_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sida_ciliaris", "3245": "08974_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sida_rhombifolia", "3246": "08975_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sidalcea_malviflora", "3247": "08976_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sidalcea_oregana", "3248": "08977_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sidalcea_sparsifolia", "3249": "08978_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sphaeralcea_ambigua", "3250": "08979_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sphaeralcea_angustifolia", "3251": "08980_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Sphaeralcea_coccinea", "3252": "08981_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Theobroma_cacao", "3253": "08982_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Thespesia_populnea", "3254": "08983_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Tilia_americana", "3255": "08984_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Tilia_cordata", "3256": "08985_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Urena_lobata", "3257": "08986_Plantae_Tracheophyta_Magnoliopsida_Malvales_Malvaceae_Waltheria_indica", "3258": "08987_Plantae_Tracheophyta_Magnoliopsida_Malvales_Muntingiaceae_Muntingia_calabura", "3259": "08988_Plantae_Tracheophyta_Magnoliopsida_Malvales_Thymelaeaceae_Daphne_gnidium", "3260": "08989_Plantae_Tracheophyta_Magnoliopsida_Malvales_Thymelaeaceae_Daphne_laureola", "3261": "08990_Plantae_Tracheophyta_Magnoliopsida_Malvales_Thymelaeaceae_Daphne_mezereum", "3262": "08991_Plantae_Tracheophyta_Magnoliopsida_Malvales_Thymelaeaceae_Dirca_palustris", "3263": "08992_Plantae_Tracheophyta_Magnoliopsida_Malvales_Thymelaeaceae_Pimelea_villosa", "3264": "08993_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Combretaceae_Conocarpus_erectus", "3265": "08994_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Combretaceae_Laguncularia_racemosa", "3266": "08995_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Combretaceae_Terminalia_catappa", "3267": "08996_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Ammannia_coccinea", "3268": "08997_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Cuphea_carthagenensis", "3269": "08998_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Decodon_verticillatus", "3270": "08999_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Lagerstroemia_indica", "3271": "09000_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Lythrum_alatum", "3272": 
"09001_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Lythrum_hyssopifolia", "3273": "09002_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Lythrum_salicaria", "3274": "09003_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Punica_granatum", "3275": "09004_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Lythraceae_Trapa_natans", "3276": "09005_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Melastomataceae_Clidemia_hirta", "3277": "09006_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Melastomataceae_Melastoma_malabathricum", "3278": "09007_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Melastomataceae_Rhexia_alifanus", "3279": "09008_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Melastomataceae_Rhexia_mariana", "3280": "09009_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Melastomataceae_Rhexia_virginica", "3281": "09010_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Eucalyptus_camaldulensis", "3282": "09011_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Eucalyptus_globulus", "3283": "09012_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Leptospermum_laevigatum", "3284": "09013_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Leptospermum_scoparium", "3285": "09014_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Lophomyrtus_bullata", "3286": "09015_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Metrosideros_diffusa", "3287": "09016_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Metrosideros_excelsa", "3288": "09017_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Metrosideros_fulgens", "3289": "09018_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Metrosideros_perforata", "3290": "09019_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Metrosideros_polymorpha", "3291": "09020_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Metrosideros_robusta", "3292": "09021_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Metrosideros_umbellata", "3293": "09022_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Myrtus_communis", "3294": "09023_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Psidium_cattleyanum", "3295": "09024_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Myrtaceae_Psidium_guajava", "3296": "09025_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Camissoniopsis_bistorta", "3297": "09026_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Camissoniopsis_cheiranthifolia", "3298": "09027_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Chamaenerion_angustifolium", "3299": "09028_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Chamaenerion_latifolium", "3300": "09029_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Chylismia_brevipes", "3301": "09030_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Chylismia_claviformis", "3302": "09031_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Circaea_alpina", "3303": "09032_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Circaea_canadensis", "3304": "09033_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Circaea_lutetiana", "3305": "09034_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Clarkia_amoena", "3306": "09035_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Clarkia_concinna", "3307": "09036_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Clarkia_purpurea", "3308": "09037_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Clarkia_rhomboidea", "3309": "09038_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Clarkia_rubicunda", "3310": 
"09039_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Clarkia_unguiculata", "3311": "09040_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Epilobium_brachycarpum", "3312": "09041_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Epilobium_canum", "3313": "09042_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Epilobium_ciliatum", "3314": "09043_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Epilobium_hirsutum", "3315": "09044_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Epilobium_montanum", "3316": "09045_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Eremothera_boothii", "3317": "09046_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Eulobus_californicus", "3318": "09047_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Fuchsia_boliviana", "3319": "09048_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Fuchsia_excorticata", "3320": "09049_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Fuchsia_magellanica", "3321": "09050_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Lopezia_racemosa", "3322": "09051_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Ludwigia_alternifolia", "3323": "09052_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Ludwigia_octovalvis", "3324": "09053_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Ludwigia_palustris", "3325": "09054_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Ludwigia_peploides", "3326": "09055_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Ludwigia_peruviana", "3327": "09056_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_berlandieri", "3328": "09057_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_biennis", "3329": "09058_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_cespitosa", "3330": "09059_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_curtiflora", "3331": "09060_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_deltoides", "3332": "09061_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_drummondii", "3333": "09062_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_elata", "3334": "09063_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_gaura", "3335": "09064_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_glaucifolia", "3336": "09065_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_glazioviana", "3337": "09066_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_hartwegii", "3338": "09067_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_laciniata", "3339": "09068_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_macrocarpa", "3340": "09069_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_rosea", "3341": "09070_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_rubricaulis", "3342": "09071_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_speciosa", "3343": "09072_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_suffrutescens", "3344": "09073_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_suffulta", "3345": "09074_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Oenothera_triloba", "3346": "09075_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Onagraceae_Taraxia_ovata", "3347": "09076_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Penaeaceae_Penaea_mucronata", "3348": 
"09077_Plantae_Tracheophyta_Magnoliopsida_Myrtales_Penaeaceae_Saltera_sarcocolla", "3349": "09078_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Cabombaceae_Brasenia_schreberi", "3350": "09079_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Nymphaeaceae_Nuphar_advena", "3351": "09080_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Nymphaeaceae_Nuphar_lutea", "3352": "09081_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Nymphaeaceae_Nuphar_polysepala", "3353": "09082_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Nymphaeaceae_Nuphar_variegata", "3354": "09083_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Nymphaeaceae_Nymphaea_alba", "3355": "09084_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Nymphaeaceae_Nymphaea_candida", "3356": "09085_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Nymphaeaceae_Nymphaea_nouchali", "3357": "09086_Plantae_Tracheophyta_Magnoliopsida_Nymphaeales_Nymphaeaceae_Nymphaea_odorata", "3358": "09087_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Cunoniaceae_Bauera_rubioides", "3359": "09088_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Cunoniaceae_Weinmannia_racemosa", "3360": "09089_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Elaeocarpaceae_Aristotelia_serrata", "3361": "09090_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Elaeocarpaceae_Elaeocarpus_hookerianus", "3362": "09091_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_acetosella", "3363": "09092_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_articulata", "3364": "09093_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_corniculata", "3365": "09094_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_debilis", "3366": "09095_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_dillenii", "3367": "09096_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_drummondii", "3368": "09097_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_incarnata", "3369": "09098_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_latifolia", "3370": "09099_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_luteola", "3371": "09100_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_montana", "3372": "09101_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_obtusa", "3373": "09102_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_oregana", "3374": "09103_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_pes-caprae", "3375": "09104_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_polyphylla", "3376": "09105_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_purpurea", "3377": "09106_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_stricta", "3378": "09107_Plantae_Tracheophyta_Magnoliopsida_Oxalidales_Oxalidaceae_Oxalis_violacea", "3379": "09108_Plantae_Tracheophyta_Magnoliopsida_Piperales_Aristolochiaceae_Aristolochia_californica", "3380": "09109_Plantae_Tracheophyta_Magnoliopsida_Piperales_Aristolochiaceae_Aristolochia_clematitis", "3381": "09110_Plantae_Tracheophyta_Magnoliopsida_Piperales_Aristolochiaceae_Asarum_canadense", "3382": "09111_Plantae_Tracheophyta_Magnoliopsida_Piperales_Aristolochiaceae_Asarum_caudatum", "3383": "09112_Plantae_Tracheophyta_Magnoliopsida_Piperales_Aristolochiaceae_Asarum_europaeum", "3384": "09113_Plantae_Tracheophyta_Magnoliopsida_Piperales_Aristolochiaceae_Endodeca_serpentaria", "3385": 
"09114_Plantae_Tracheophyta_Magnoliopsida_Piperales_Aristolochiaceae_Hexastylis_arifolia", "3386": "09115_Plantae_Tracheophyta_Magnoliopsida_Piperales_Piperaceae_Piper_excelsum", "3387": "09116_Plantae_Tracheophyta_Magnoliopsida_Piperales_Saururaceae_Anemopsis_californica", "3388": "09117_Plantae_Tracheophyta_Magnoliopsida_Piperales_Saururaceae_Houttuynia_cordata", "3389": "09118_Plantae_Tracheophyta_Magnoliopsida_Piperales_Saururaceae_Saururus_cernuus", "3390": "09119_Plantae_Tracheophyta_Magnoliopsida_Proteales_Nelumbonaceae_Nelumbo_lutea", "3391": "09120_Plantae_Tracheophyta_Magnoliopsida_Proteales_Nelumbonaceae_Nelumbo_nucifera", "3392": "09121_Plantae_Tracheophyta_Magnoliopsida_Proteales_Platanaceae_Platanus_occidentalis", "3393": "09122_Plantae_Tracheophyta_Magnoliopsida_Proteales_Platanaceae_Platanus_racemosa", "3394": "09123_Plantae_Tracheophyta_Magnoliopsida_Proteales_Platanaceae_Platanus_wrightii", "3395": "09124_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Banksia_integrifolia", "3396": "09125_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Banksia_marginata", "3397": "09126_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Hakea_sericea", "3398": "09127_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Knightia_excelsa", "3399": "09128_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Leucadendron_laureolum", "3400": "09129_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Leucadendron_rubrum", "3401": "09130_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Leucadendron_salignum", "3402": "09131_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Leucospermum_conocarpodendron", "3403": "09132_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Leucospermum_cuneiforme", "3404": "09133_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Mimetes_cucullatus", "3405": "09134_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Protea_caffra", "3406": "09135_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Protea_cynaroides", "3407": "09136_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Protea_laurifolia", "3408": "09137_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Protea_neriifolia", "3409": "09138_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Protea_nitida", "3410": "09139_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Protea_repens", "3411": "09140_Plantae_Tracheophyta_Magnoliopsida_Proteales_Proteaceae_Serruria_fasciflora", "3412": "09141_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Achlys_triphylla", "3413": "09142_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Berberis_aquifolium", "3414": "09143_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Berberis_bealei", "3415": "09144_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Berberis_darwinii", "3416": "09145_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Berberis_nervosa", "3417": "09146_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Berberis_repens", "3418": "09147_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Berberis_thunbergii", "3419": "09148_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Berberis_trifoliolata", "3420": "09149_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Berberis_vulgaris", "3421": "09150_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Caulophyllum_giganteum", "3422": 
"09151_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Caulophyllum_thalictroides", "3423": "09152_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Jeffersonia_diphylla", "3424": "09153_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Nandina_domestica", "3425": "09154_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Podophyllum_peltatum", "3426": "09155_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Berberidaceae_Vancouveria_hexandra", "3427": "09156_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Menispermaceae_Cocculus_carolinus", "3428": "09157_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Menispermaceae_Menispermum_canadense", "3429": "09158_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Argemone_albiflora", "3430": "09159_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Argemone_mexicana", "3431": "09160_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Argemone_munita", "3432": "09161_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Argemone_ochroleuca", "3433": "09162_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Argemone_polyanthemos", "3434": "09163_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Capnoides_sempervirens", "3435": "09164_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Chelidonium_majus", "3436": "09165_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Corydalis_aurea", "3437": "09166_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Corydalis_cava", "3438": "09167_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Corydalis_flavula", "3439": "09168_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Corydalis_solida", "3440": "09169_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Dendromecon_rigida", "3441": "09170_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Dicentra_canadensis", "3442": "09171_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Dicentra_cucullaria", "3443": "09172_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Dicentra_formosa", "3444": "09173_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Ehrendorferia_chrysantha", "3445": "09174_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Eschscholzia_caespitosa", "3446": "09175_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Eschscholzia_californica", "3447": "09176_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Eschscholzia_minutiflora", "3448": "09177_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Eschscholzia_parishii", "3449": "09178_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Fumaria_capreolata", "3450": "09179_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Fumaria_muralis", "3451": "09180_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Fumaria_officinalis", "3452": "09181_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Glaucium_flavum", "3453": "09182_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Lamprocapnos_spectabilis", "3454": "09183_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Papaver_cambricum", "3455": "09184_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Papaver_dubium", "3456": "09185_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Papaver_heterophyllum", "3457": "09186_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Papaver_rhoeas", "3458": 
"09187_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Papaver_somniferum", "3459": "09188_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Platystemon_californicus", "3460": "09189_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Pseudofumaria_lutea", "3461": "09190_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Romneya_coulteri", "3462": "09191_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Sanguinaria_canadensis", "3463": "09192_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Papaveraceae_Stylophorum_diphyllum", "3464": "09193_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aconitum_columbianum", "3465": "09194_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aconitum_delphiniifolium", "3466": "09195_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aconitum_lycoctonum", "3467": "09196_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aconitum_napellus", "3468": "09197_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aconitum_septentrionale", "3469": "09198_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Actaea_pachypoda", "3470": "09199_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Actaea_racemosa", "3471": "09200_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Actaea_rubra", "3472": "09201_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Actaea_spicata", "3473": "09202_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Adonis_vernalis", "3474": "09203_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemonastrum_canadense", "3475": "09204_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemone_berlandieri", "3476": "09205_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemone_coronaria", "3477": "09206_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemone_cylindrica", "3478": "09207_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemone_hortensis", "3479": "09208_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemone_multifida", "3480": "09209_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemone_virginiana", "3481": "09210_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemonoides_altaica", "3482": "09211_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemonoides_nemorosa", "3483": "09212_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemonoides_quinquefolia", "3484": "09213_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemonoides_ranunculoides", "3485": "09214_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Anemonoides_sylvestris", "3486": "09215_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aquilegia_atrata", "3487": "09216_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aquilegia_canadensis", "3488": "09217_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aquilegia_chrysantha", "3489": "09218_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aquilegia_coerulea", "3490": "09219_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aquilegia_flavescens", "3491": "09220_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aquilegia_formosa", "3492": "09221_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Aquilegia_vulgaris", "3493": "09222_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Caltha_leptosepala", "3494": 
"09223_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Caltha_palustris", "3495": "09224_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ceratocephala_testiculata", "3496": "09225_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_alpina", "3497": "09226_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_crispa", "3498": "09227_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_drummondii", "3499": "09228_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_lasiantha", "3500": "09229_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_ligusticifolia", "3501": "09230_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_occidentalis", "3502": "09231_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_paniculata", "3503": "09232_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_pitcheri", "3504": "09233_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_terniflora", "3505": "09234_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_texensis", "3506": "09235_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_virginiana", "3507": "09236_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Clematis_vitalba", "3508": "09237_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Consolida_regalis", "3509": "09238_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Coptis_trifolia", "3510": "09239_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Delphinium_cardinale", "3511": "09240_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Delphinium_carolinianum", "3512": "09241_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Delphinium_glaucum", "3513": "09242_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Delphinium_nudicaule", "3514": "09243_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Delphinium_nuttallianum", "3515": "09244_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Delphinium_parishii", "3516": "09245_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Delphinium_tricorne", "3517": "09246_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Enemion_biternatum", "3518": "09247_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Eranthis_hyemalis", "3519": "09248_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ficaria_verna", "3520": "09249_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Helleborus_foetidus", "3521": "09250_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Hepatica_acutiloba", "3522": "09251_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Hepatica_americana", "3523": "09252_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Hepatica_nobilis", "3524": "09253_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Hydrastis_canadensis", "3525": "09254_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Nigella_damascena", "3526": "09255_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Pulsatilla_alpina", "3527": "09256_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Pulsatilla_nuttalliana", "3528": "09257_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Pulsatilla_occidentalis", "3529": "09258_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Pulsatilla_patens", "3530": 
"09259_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_abortivus", "3531": "09260_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_acris", "3532": "09261_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_aquatilis", "3533": "09262_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_auricomus", "3534": "09263_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_bulbosus", "3535": "09264_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_californicus", "3536": "09265_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_cassubicus", "3537": "09266_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_fascicularis", "3538": "09267_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_flammula", "3539": "09268_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_glaberrimus", "3540": "09269_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_hispidus", "3541": "09270_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_muricatus", "3542": "09271_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_occidentalis", "3543": "09272_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_polyanthemos", "3544": "09273_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_recurvatus", "3545": "09274_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_repens", "3546": "09275_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Ranunculus_sceleratus", "3547": "09276_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Thalictrum_aquilegiifolium", "3548": "09277_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Thalictrum_dasycarpum", "3549": "09278_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Thalictrum_dioicum", "3550": "09279_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Thalictrum_fendleri", "3551": "09280_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Thalictrum_lucidum", "3552": "09281_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Thalictrum_minus", "3553": "09282_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Thalictrum_pubescens", "3554": "09283_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Thalictrum_thalictroides", "3555": "09284_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Trollius_asiaticus", "3556": "09285_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Trollius_europaeus", "3557": "09286_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Trollius_laxus", "3558": "09287_Plantae_Tracheophyta_Magnoliopsida_Ranunculales_Ranunculaceae_Xanthorhiza_simplicissima", "3559": "09288_Plantae_Tracheophyta_Magnoliopsida_Rosales_Cannabaceae_Cannabis_sativa", "3560": "09289_Plantae_Tracheophyta_Magnoliopsida_Rosales_Cannabaceae_Celtis_laevigata", "3561": "09290_Plantae_Tracheophyta_Magnoliopsida_Rosales_Cannabaceae_Celtis_occidentalis", "3562": "09291_Plantae_Tracheophyta_Magnoliopsida_Rosales_Cannabaceae_Celtis_pallida", "3563": "09292_Plantae_Tracheophyta_Magnoliopsida_Rosales_Cannabaceae_Celtis_reticulata", "3564": "09293_Plantae_Tracheophyta_Magnoliopsida_Rosales_Cannabaceae_Humulus_japonicus", "3565": "09294_Plantae_Tracheophyta_Magnoliopsida_Rosales_Cannabaceae_Humulus_lupulus", "3566": 
"09295_Plantae_Tracheophyta_Magnoliopsida_Rosales_Cannabaceae_Trema_orientalis", "3567": "09296_Plantae_Tracheophyta_Magnoliopsida_Rosales_Elaeagnaceae_Elaeagnus_angustifolia", "3568": "09297_Plantae_Tracheophyta_Magnoliopsida_Rosales_Elaeagnaceae_Elaeagnus_commutata", "3569": "09298_Plantae_Tracheophyta_Magnoliopsida_Rosales_Elaeagnaceae_Elaeagnus_pungens", "3570": "09299_Plantae_Tracheophyta_Magnoliopsida_Rosales_Elaeagnaceae_Elaeagnus_umbellata", "3571": "09300_Plantae_Tracheophyta_Magnoliopsida_Rosales_Elaeagnaceae_Hippophae_rhamnoides", "3572": "09301_Plantae_Tracheophyta_Magnoliopsida_Rosales_Elaeagnaceae_Shepherdia_canadensis", "3573": "09302_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Artocarpus_altilis", "3574": "09303_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Artocarpus_heterophyllus", "3575": "09304_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Broussonetia_papyrifera", "3576": "09305_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Fatoua_villosa", "3577": "09306_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Ficus_aurea", "3578": "09307_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Ficus_carica", "3579": "09308_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Ficus_microcarpa", "3580": "09309_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Ficus_petiolaris", "3581": "09310_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Ficus_pumila", "3582": "09311_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Ficus_religiosa", "3583": "09312_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Maclura_pomifera", "3584": "09313_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Morus_alba", "3585": "09314_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Morus_indica", "3586": "09315_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Morus_rubra", "3587": "09316_Plantae_Tracheophyta_Magnoliopsida_Rosales_Moraceae_Streblus_heterophyllus", "3588": "09317_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Berchemia_scandens", "3589": "09318_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_americanus", "3590": "09319_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_cordulatus", "3591": "09320_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_crassifolius", "3592": "09321_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_cuneatus", "3593": "09322_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_fendleri", "3594": "09323_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_herbaceus", "3595": "09324_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_integerrimus", "3596": "09325_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_leucodermis", "3597": "09326_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_megacarpus", "3598": "09327_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_oliganthus", "3599": "09328_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_perplexans", "3600": "09329_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_prostratus", "3601": "09330_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_spinosus", "3602": "09331_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_thyrsiflorus", "3603": "09332_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_velutinus", "3604": "09333_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ceanothus_verrucosus", "3605": 
"09334_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Colubrina_texensis", "3606": "09335_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Condalia_hookeri", "3607": "09336_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Discaria_toumatou", "3608": "09337_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Frangula_alnus", "3609": "09338_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Frangula_californica", "3610": "09339_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Frangula_caroliniana", "3611": "09340_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Frangula_purshiana", "3612": "09341_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Paliurus_spina-christi", "3613": "09342_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Rhamnus_alaternus", "3614": "09343_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Rhamnus_alnifolia", "3615": "09344_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Rhamnus_cathartica", "3616": "09345_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Rhamnus_crocea", "3617": "09346_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Rhamnus_humboldtiana", "3618": "09347_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Rhamnus_ilicifolia", "3619": "09348_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Trichocephalus_stipularis", "3620": "09349_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rhamnaceae_Ziziphus_obtusifolia", "3621": "09350_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Acaena_anserinifolia", "3622": "09351_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Acaena_novae-zelandiae", "3623": "09352_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Adenostoma_fasciculatum", "3624": "09353_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Adenostoma_sparsifolium", "3625": "09354_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Agrimonia_eupatoria", "3626": "09355_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Agrimonia_gryposepala", "3627": "09356_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Agrimonia_parviflora", "3628": "09357_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Agrimonia_pilosa", "3629": "09358_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Amelanchier_alnifolia", "3630": "09359_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Amelanchier_arborea", "3631": "09360_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Amelanchier_spicata", "3632": "09361_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Amelanchier_utahensis", "3633": "09362_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Argentina_anserina", "3634": "09363_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Aronia_melanocarpa", "3635": "09364_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Aruncus_dioicus", "3636": "09365_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Cercocarpus_betuloides", "3637": "09366_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Cercocarpus_ledifolius", "3638": "09367_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Cercocarpus_montanus", "3639": "09368_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Chamaebatia_foliolosa", "3640": "09369_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Chamaebatiaria_millefolium", "3641": "09370_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Coleogyne_ramosissima", "3642": "09371_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Comarum_palustre", "3643": "09372_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Crataegus_marshallii", "3644": 
"09373_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Crataegus_monogyna", "3645": "09374_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Dasiphora_fruticosa", "3646": "09375_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Dryas_drummondii", "3647": "09376_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Dryas_octopetala", "3648": "09377_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Drymocallis_arguta", "3649": "09378_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Drymocallis_glandulosa", "3650": "09379_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Eriobotrya_japonica", "3651": "09380_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Fallugia_paradoxa", "3652": "09381_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Filipendula_rubra", "3653": "09382_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Filipendula_ulmaria", "3654": "09383_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Filipendula_vulgaris", "3655": "09384_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Fragaria_chiloensis", "3656": "09385_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Fragaria_vesca", "3657": "09386_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Fragaria_virginiana", "3658": "09387_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Fragaria_viridis", "3659": "09388_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Geum_aleppicum", "3660": "09389_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Geum_canadense", "3661": "09390_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Geum_macrophyllum", "3662": "09391_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Geum_rivale", "3663": "09392_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Geum_rossii", "3664": "09393_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Geum_triflorum", "3665": "09394_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Geum_urbanum", "3666": "09395_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Heteromeles_arbutifolia", "3667": "09396_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Holodiscus_discolor", "3668": "09397_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Kerria_japonica", "3669": "09398_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Luetkea_pectinata", "3670": "09399_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Malus_domestica", "3671": "09400_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Malus_fusca", "3672": "09401_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Malus_pumila", "3673": "09402_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Oemleria_cerasiformis", "3674": "09403_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Petrophytum_caespitosum", "3675": "09404_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Photinia_serratifolia", "3676": "09405_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Physocarpus_capitatus", "3677": "09406_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Physocarpus_opulifolius", "3678": "09407_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_argentea", "3679": "09408_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_canadensis", "3680": "09409_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_erecta", "3681": "09410_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_gracilis", "3682": "09411_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_indica", "3683": "09412_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_norvegica", "3684": 
"09413_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_recta", "3685": "09414_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_reptans", "3686": "09415_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Potentilla_simplex", "3687": "09416_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_americana", "3688": "09417_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_angustifolia", "3689": "09418_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_avium", "3690": "09419_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_caroliniana", "3691": "09420_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_cerasifera", "3692": "09421_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_dulcis", "3693": "09422_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_emarginata", "3694": "09423_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_fasciculata", "3695": "09424_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_fremontii", "3696": "09425_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_ilicifolia", "3697": "09426_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_laurocerasus", "3698": "09427_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_mexicana", "3699": "09428_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_padus", "3700": "09429_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_pensylvanica", "3701": "09430_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_persica", "3702": "09431_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_pumila", "3703": "09432_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_serotina", "3704": "09433_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_spinosa", "3705": "09434_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Prunus_virginiana", "3706": "09435_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Purshia_stansburyana", "3707": "09436_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Purshia_tridentata", "3708": "09437_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Pyracantha_coccinea", "3709": "09438_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Pyrus_calleryana", "3710": "09439_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Pyrus_communis", "3711": "09440_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rhaphiolepis_indica", "3712": "09441_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rhodotypos_scandens", "3713": "09442_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_acicularis", "3714": "09443_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_arkansana", "3715": "09444_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_bracteata", "3716": "09445_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_californica", "3717": "09446_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_canina", "3718": "09447_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_carolina", "3719": "09448_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_gymnocarpa", "3720": "09449_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_majalis", "3721": "09450_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_multiflora", "3722": "09451_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_nutkana", "3723": "09452_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_palustris", "3724": "09453_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_rubiginosa", "3725": 
"09454_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_rugosa", "3726": "09455_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_spinosissima", "3727": "09456_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rosa_woodsii", "3728": "09457_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_allegheniensis", "3729": "09458_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_arcticus", "3730": "09459_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_armeniacus", "3731": "09460_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_bifrons", "3732": "09461_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_caesius", "3733": "09462_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_chamaemorus", "3734": "09463_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_cissoides", "3735": "09464_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_dalibarda", "3736": "09465_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_flagellaris", "3737": "09466_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_hispidus", "3738": "09467_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_idaeus", "3739": "09468_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_laciniatus", "3740": "09469_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_leucodermis", "3741": "09470_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_occidentalis", "3742": "09471_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_odoratus", "3743": "09472_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_parviflorus", "3744": "09473_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_pedatus", "3745": "09474_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_phoenicolasius", "3746": "09475_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_pubescens", "3747": "09476_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_saxatilis", "3748": "09477_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_schmidelioides", "3749": "09478_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_spectabilis", "3750": "09479_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_trivialis", "3751": "09480_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_ulmifolius", "3752": "09481_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Rubus_ursinus", "3753": "09482_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sanguisorba_canadensis", "3754": "09483_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sanguisorba_minor", "3755": "09484_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sanguisorba_officinalis", "3756": "09485_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sibbaldia_procumbens", "3757": "09486_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sibbaldiopsis_tridentata", "3758": "09487_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sorbaria_sorbifolia", "3759": "09488_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sorbus_americana", "3760": "09489_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sorbus_aucuparia", "3761": "09490_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Sorbus_sitchensis", "3762": "09491_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Spiraea_alba", "3763": "09492_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Spiraea_betulifolia", "3764": "09493_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Spiraea_douglasii", "3765": "09494_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Spiraea_japonica", "3766": 
"09495_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Spiraea_splendens", "3767": "09496_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Spiraea_tomentosa", "3768": "09497_Plantae_Tracheophyta_Magnoliopsida_Rosales_Rosaceae_Waldsteinia_fragarioides", "3769": "09498_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_alata", "3770": "09499_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_americana", "3771": "09500_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_crassifolia", "3772": "09501_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_glabra", "3773": "09502_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_laevis", "3774": "09503_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_minor", "3775": "09504_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_parvifolia", "3776": "09505_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_pumila", "3777": "09506_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_rubra", "3778": "09507_Plantae_Tracheophyta_Magnoliopsida_Rosales_Ulmaceae_Ulmus_thomasii", "3779": "09508_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Boehmeria_cylindrica", "3780": "09509_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Elatostema_rugosum", "3781": "09510_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Laportea_canadensis", "3782": "09511_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Parietaria_judaica", "3783": "09512_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Parietaria_pensylvanica", "3784": "09513_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Pilea_microphylla", "3785": "09514_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Pilea_pumila", "3786": "09515_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Urtica_chamaedryoides", "3787": "09516_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Urtica_dioica", "3788": "09517_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Urtica_ferox", "3789": "09518_Plantae_Tracheophyta_Magnoliopsida_Rosales_Urticaceae_Urtica_urens", "3790": "09519_Plantae_Tracheophyta_Magnoliopsida_Santalales_Loranthaceae_Ileostylus_micranthus", "3791": "09520_Plantae_Tracheophyta_Magnoliopsida_Santalales_Santalaceae_Arceuthobium_campylopodum", "3792": "09521_Plantae_Tracheophyta_Magnoliopsida_Santalales_Santalaceae_Colpoon_compressum", "3793": "09522_Plantae_Tracheophyta_Magnoliopsida_Santalales_Santalaceae_Comandra_umbellata", "3794": "09523_Plantae_Tracheophyta_Magnoliopsida_Santalales_Santalaceae_Exocarpos_cupressiformis", "3795": "09524_Plantae_Tracheophyta_Magnoliopsida_Santalales_Santalaceae_Geocaulon_lividum", "3796": "09525_Plantae_Tracheophyta_Magnoliopsida_Santalales_Santalaceae_Phoradendron_californicum", "3797": "09526_Plantae_Tracheophyta_Magnoliopsida_Santalales_Santalaceae_Phoradendron_leucarpum", "3798": "09527_Plantae_Tracheophyta_Magnoliopsida_Santalales_Santalaceae_Viscum_album", "3799": "09528_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Anacardium_occidentale", "3800": "09529_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Cotinus_coggygria", "3801": "09530_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Malosma_laurina", "3802": "09531_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Mangifera_indica", "3803": "09532_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Pistacia_chinensis", "3804": "09533_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Pistacia_lentiscus", "3805": 
"09534_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_aromatica", "3806": "09535_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_copallinum", "3807": "09536_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_glabra", "3808": "09537_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_integrifolia", "3809": "09538_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_lanceolata", "3810": "09539_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_microphylla", "3811": "09540_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_ovata", "3812": "09541_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_typhina", "3813": "09542_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Rhus_virens", "3814": "09543_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Schinus_molle", "3815": "09544_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Schinus_terebinthifolia", "3816": "09545_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Searsia_lucida", "3817": "09546_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Toxicodendron_diversilobum", "3818": "09547_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Toxicodendron_radicans", "3819": "09548_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Toxicodendron_rydbergii", "3820": "09549_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Anacardiaceae_Toxicodendron_vernix", "3821": "09550_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Burseraceae_Bursera_microphylla", "3822": "09551_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Burseraceae_Bursera_simaruba", "3823": "09552_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Meliaceae_Azadirachta_indica", "3824": "09553_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Meliaceae_Dysoxylum_spectabile", "3825": "09554_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Meliaceae_Melia_azedarach", "3826": "09555_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Citrus_trifoliata", "3827": "09556_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Cneoridium_dumosum", "3828": "09557_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Correa_reflexa", "3829": "09558_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Murraya_paniculata", "3830": "09559_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Ptelea_trifoliata", "3831": "09560_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Thamnosma_montana", "3832": "09561_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Zanthoxylum_americanum", "3833": "09562_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Zanthoxylum_clava-herculis", "3834": "09563_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Zanthoxylum_fagara", "3835": "09564_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Rutaceae_Zanthoxylum_hirsutum", "3836": "09565_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_campestre", "3837": "09566_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_circinatum", "3838": "09567_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_floridanum", "3839": "09568_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_ginnala", "3840": "09569_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_glabrum", "3841": "09570_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_grandidentatum", "3842": "09571_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_macrophyllum", "3843": 
"09572_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_negundo", "3844": "09573_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_nigrum", "3845": "09574_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_palmatum", "3846": "09575_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_pensylvanicum", "3847": "09576_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_platanoides", "3848": "09577_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_pseudoplatanus", "3849": "09578_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_rubrum", "3850": "09579_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_saccharinum", "3851": "09580_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_saccharum", "3852": "09581_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_spicatum", "3853": "09582_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Acer_tataricum", "3854": "09583_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Aesculus_californica", "3855": "09584_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Aesculus_glabra", "3856": "09585_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Aesculus_hippocastanum", "3857": "09586_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Aesculus_pavia", "3858": "09587_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Alectryon_excelsus", "3859": "09588_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Cardiospermum_halicacabum", "3860": "09589_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Dodonaea_viscosa", "3861": "09590_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Koelreuteria_paniculata", "3862": "09591_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Sapindus_drummondii", "3863": "09592_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Sapindaceae_Ungnadia_speciosa", "3864": "09593_Plantae_Tracheophyta_Magnoliopsida_Sapindales_Simaroubaceae_Ailanthus_altissima", "3865": "09594_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Altingiaceae_Liquidambar_styraciflua", "3866": "09595_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Cotyledon_orbiculata", "3867": "09596_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Crassula_connata", "3868": "09597_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Crassula_ovata", "3869": "09598_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Dudleya_arizonica", "3870": "09599_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Dudleya_cymosa", "3871": "09600_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Dudleya_edulis", "3872": "09601_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Dudleya_farinosa", "3873": "09602_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Dudleya_lanceolata", "3874": "09603_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Dudleya_pulverulenta", "3875": "09604_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Dudleya_saxosa", "3876": "09605_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Echeveria_gibbiflora", "3877": "09606_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Hylotelephium_telephium", "3878": "09607_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Kalanchoe_delagoensis", "3879": "09608_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Kalanchoe_pinnata", "3880": 
"09609_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Orostachys_spinosa", "3881": "09610_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Rhodiola_integrifolia", "3882": "09611_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Rhodiola_rosea", "3883": "09612_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_acre", "3884": "09613_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_album", "3885": "09614_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_dasyphyllum", "3886": "09615_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_lanceolatum", "3887": "09616_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_nuttallii", "3888": "09617_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_obtusatum", "3889": "09618_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_sexangulare", "3890": "09619_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_spathulifolium", "3891": "09620_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sedum_ternatum", "3892": "09621_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sempervivum_arachnoideum", "3893": "09622_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Sempervivum_tectorum", "3894": "09623_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Tylecodon_paniculatus", "3895": "09624_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Crassulaceae_Umbilicus_rupestris", "3896": "09625_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_americanum", "3897": "09626_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_aureum", "3898": "09627_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_bracteosum", "3899": "09628_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_cereum", "3900": "09629_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_cynosbati", "3901": "09630_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_indecorum", "3902": "09631_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_lacustre", "3903": "09632_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_malvaceum", "3904": "09633_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_nevadense", "3905": "09634_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_nigrum", "3906": "09635_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_roezlii", "3907": "09636_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_rubrum", "3908": "09637_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_sanguineum", "3909": "09638_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_speciosum", "3910": "09639_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_triste", "3911": "09640_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Grossulariaceae_Ribes_uva-crispa", "3912": "09641_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Haloragaceae_Haloragis_erecta", "3913": "09642_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Haloragaceae_Myriophyllum_aquaticum", "3914": "09643_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Haloragaceae_Myriophyllum_spicatum", "3915": "09644_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Hamamelidaceae_Hamamelis_virginiana", "3916": "09645_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Iteaceae_Itea_virginica", 
"3917": "09646_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Paeoniaceae_Paeonia_brownii", "3918": "09647_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Paeoniaceae_Paeonia_californica", "3919": "09648_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Penthoraceae_Penthorum_sedoides", "3920": "09649_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Chrysosplenium_alternifolium", "3921": "09650_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Chrysosplenium_americanum", "3922": "09651_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Darmera_peltata", "3923": "09652_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Heuchera_cylindrica", "3924": "09653_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Heuchera_micrantha", "3925": "09654_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Heuchera_richardsonii", "3926": "09655_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Leptarrhena_pyrolifolia", "3927": "09656_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Lithophragma_affine", "3928": "09657_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Lithophragma_heterophyllum", "3929": "09658_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Lithophragma_parviflorum", "3930": "09659_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Micranthes_californica", "3931": "09660_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Micranthes_ferruginea", "3932": "09661_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Micranthes_pensylvanica", "3933": "09662_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Micranthes_virginiensis", "3934": "09663_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Mitella_diphylla", "3935": "09664_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Mitella_nuda", "3936": "09665_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Saxifraga_aizoides", "3937": "09666_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Saxifraga_bronchialis", "3938": "09667_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Saxifraga_granulata", "3939": "09668_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Saxifraga_oppositifolia", "3940": "09669_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Saxifraga_paniculata", "3941": "09670_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Saxifraga_tricuspidata", "3942": "09671_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Tellima_grandiflora", "3943": "09672_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Tiarella_cordifolia", "3944": "09673_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Tiarella_trifoliata", "3945": "09674_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Tolmiea_menziesii", "3946": "09675_Plantae_Tracheophyta_Magnoliopsida_Saxifragales_Saxifragaceae_Whipplea_modesta", "3947": "09676_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Calystegia_macrostegia", "3948": "09677_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Calystegia_sepium", "3949": "09678_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Calystegia_silvatica", "3950": "09679_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Calystegia_soldanella", "3951": "09680_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Convolvulus_althaeoides", "3952": 
"09681_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Convolvulus_arvensis", "3953": "09682_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Convolvulus_equitans", "3954": "09683_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Cuscuta_californica", "3955": "09684_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Cuscuta_gronovii", "3956": "09685_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Cuscuta_pacifica", "3957": "09686_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Dichondra_carolinensis", "3958": "09687_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Dichondra_repens", "3959": "09688_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Distimake_dissectus", "3960": "09689_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Evolvulus_alsinoides", "3961": "09690_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Evolvulus_sericeus", "3962": "09691_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_alba", "3963": "09692_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_batatas", "3964": "09693_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_cairica", "3965": "09694_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_carnea", "3966": "09695_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_coccinea", "3967": "09696_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_cordatotriloba", "3968": "09697_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_hederacea", "3969": "09698_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_hederifolia", "3970": "09699_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_imperati", "3971": "09700_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_indica", "3972": "09701_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_lacunosa", "3973": "09702_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_obscura", "3974": "09703_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_pandurata", "3975": "09704_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_pes-caprae", "3976": "09705_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_purpurea", "3977": "09706_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_quamoclit", "3978": "09707_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_sagittata", "3979": "09708_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Ipomoea_triloba", "3980": "09709_Plantae_Tracheophyta_Magnoliopsida_Solanales_Convolvulaceae_Jacquemontia_tamnifolia", "3981": "09710_Plantae_Tracheophyta_Magnoliopsida_Solanales_Hydroleaceae_Hydrolea_ovata", "3982": "09711_Plantae_Tracheophyta_Magnoliopsida_Solanales_Montiniaceae_Montinia_caryophyllacea", "3983": "09712_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Alkekengi_officinarum", "3984": "09713_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Atropa_belladonna", "3985": "09714_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Capsicum_annuum", "3986": "09715_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Datura_discolor", "3987": "09716_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Datura_inoxia", "3988": "09717_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Datura_stramonium", "3989": 
"09718_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Datura_wrightii", "3990": "09719_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Hyoscyamus_niger", "3991": "09720_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Lycium_andersonii", "3992": "09721_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Lycium_barbarum", "3993": "09722_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Lycium_carolinianum", "3994": "09723_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Lycium_ferocissimum", "3995": "09724_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Nicandra_physalodes", "3996": "09725_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Nicotiana_glauca", "3997": "09726_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Nicotiana_obtusifolia", "3998": "09727_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Physalis_angulata", "3999": "09728_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Physalis_crassifolia", "4000": "09729_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Physalis_heterophylla", "4001": "09730_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Physalis_peruviana", "4002": "09731_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Quincula_lobata", "4003": "09732_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_americanum", "4004": "09733_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_carolinense", "4005": "09734_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_chenopodioides", "4006": "09735_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_dimidiatum", "4007": "09736_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_diphyllum", "4008": "09737_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_douglasii", "4009": "09738_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_dulcamara", "4010": "09739_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_elaeagnifolium", "4011": "09740_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_emulans", "4012": "09741_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_erianthum", "4013": "09742_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_laciniatum", "4014": "09743_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_lycopersicum", "4015": "09744_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_mauritianum", "4016": "09745_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_nigrum", "4017": "09746_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_pseudocapsicum", "4018": "09747_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_rostratum", "4019": "09748_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_tuberosum", "4020": "09749_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_umbelliferum", "4021": "09750_Plantae_Tracheophyta_Magnoliopsida_Solanales_Solanaceae_Solanum_xanti", "4022": "09751_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Ampelopsis_arborea", "4023": "09752_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Ampelopsis_cordata", "4024": "09753_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Ampelopsis_glandulosa", "4025": "09754_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Cayratia_japonica", "4026": "09755_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Cissus_trifoliata", "4027": 
"09756_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Parthenocissus_inserta", "4028": "09757_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Parthenocissus_quinquefolia", "4029": "09758_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Parthenocissus_tricuspidata", "4030": "09759_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Vitis_aestivalis", "4031": "09760_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Vitis_arizonica", "4032": "09761_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Vitis_californica", "4033": "09762_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Vitis_mustangensis", "4034": "09763_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Vitis_riparia", "4035": "09764_Plantae_Tracheophyta_Magnoliopsida_Vitales_Vitaceae_Vitis_rotundifolia", "4036": "09765_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Krameriaceae_Krameria_bicolor", "4037": "09766_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Krameriaceae_Krameria_erecta", "4038": "09767_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Krameriaceae_Krameria_lanceolata", "4039": "09768_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Zygophyllaceae_Fagonia_laevis", "4040": "09769_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Zygophyllaceae_Guaiacum_angustifolium", "4041": "09770_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Zygophyllaceae_Guaiacum_coulteri", "4042": "09771_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Zygophyllaceae_Kallstroemia_grandiflora", "4043": "09772_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Zygophyllaceae_Larrea_tridentata", "4044": "09773_Plantae_Tracheophyta_Magnoliopsida_Zygophyllales_Zygophyllaceae_Tribulus_terrestris", "4045": "09774_Plantae_Tracheophyta_Pinopsida_Pinales_Araucariaceae_Agathis_australis", "4046": "09775_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Calocedrus_decurrens", "4047": "09776_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Cupressus_macrocarpa", "4048": "09777_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Cupressus_nootkatensis", "4049": "09778_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_ashei", "4050": "09779_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_californica", "4051": "09780_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_communis", "4052": "09781_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_deppeana", "4053": "09782_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_grandis", "4054": "09783_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_horizontalis", "4055": "09784_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_monosperma", "4056": "09785_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_occidentalis", "4057": "09786_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_osteosperma", "4058": "09787_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_oxycedrus", "4059": "09788_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_scopulorum", "4060": "09789_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Juniperus_virginiana", "4061": "09790_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Sequoia_sempervirens", "4062": "09791_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Sequoiadendron_giganteum", "4063": "09792_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Taxodium_ascendens", "4064": "09793_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Taxodium_distichum", "4065": 
"09794_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Taxodium_mucronatum", "4066": "09795_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Thuja_occidentalis", "4067": "09796_Plantae_Tracheophyta_Pinopsida_Pinales_Cupressaceae_Thuja_plicata", "4068": "09797_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Abies_alba", "4069": "09798_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Abies_balsamea", "4070": "09799_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Abies_concolor", "4071": "09800_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Abies_grandis", "4072": "09801_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Abies_lasiocarpa", "4073": "09802_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Abies_sibirica", "4074": "09803_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Larix_decidua", "4075": "09804_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Larix_laricina", "4076": "09805_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Larix_occidentalis", "4077": "09806_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Larix_sibirica", "4078": "09807_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Picea_abies", "4079": "09808_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Picea_engelmannii", "4080": "09809_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Picea_glauca", "4081": "09810_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Picea_mariana", "4082": "09811_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Picea_obovata", "4083": "09812_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Picea_pungens", "4084": "09813_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Picea_rubens", "4085": "09814_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Picea_sitchensis", "4086": "09815_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_albicaulis", "4087": "09816_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_banksiana", "4088": "09817_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_cembroides", "4089": "09818_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_contorta", "4090": "09819_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_coulteri", "4091": "09820_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_echinata", "4092": "09821_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_edulis", "4093": "09822_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_elliottii", "4094": "09823_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_flexilis", "4095": "09824_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_jeffreyi", "4096": "09825_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_lambertiana", "4097": "09826_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_longaeva", "4098": "09827_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_monophylla", "4099": "09828_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_monticola", "4100": "09829_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_nigra", "4101": "09830_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_palustris", "4102": "09831_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_pinaster", "4103": "09832_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_pinea", "4104": "09833_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_ponderosa", "4105": "09834_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_radiata", "4106": "09835_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_resinosa", "4107": "09836_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_rigida", "4108": "09837_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_sabiniana", "4109": 
"09838_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_sibirica", "4110": "09839_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_strobus", "4111": "09840_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_sylvestris", "4112": "09841_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_taeda", "4113": "09842_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_torreyana", "4114": "09843_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pinus_virginiana", "4115": "09844_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Pseudotsuga_menziesii", "4116": "09845_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Tsuga_canadensis", "4117": "09846_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Tsuga_heterophylla", "4118": "09847_Plantae_Tracheophyta_Pinopsida_Pinales_Pinaceae_Tsuga_mertensiana", "4119": "09848_Plantae_Tracheophyta_Pinopsida_Pinales_Podocarpaceae_Dacrycarpus_dacrydioides", "4120": "09849_Plantae_Tracheophyta_Pinopsida_Pinales_Podocarpaceae_Dacrydium_cupressinum", "4121": "09850_Plantae_Tracheophyta_Pinopsida_Pinales_Podocarpaceae_Phyllocladus_alpinus", "4122": "09851_Plantae_Tracheophyta_Pinopsida_Pinales_Podocarpaceae_Phyllocladus_trichomanoides", "4123": "09852_Plantae_Tracheophyta_Pinopsida_Pinales_Podocarpaceae_Podocarpus_laetus", "4124": "09853_Plantae_Tracheophyta_Pinopsida_Pinales_Podocarpaceae_Podocarpus_totara", "4125": "09854_Plantae_Tracheophyta_Pinopsida_Pinales_Podocarpaceae_Prumnopitys_ferruginea", "4126": "09855_Plantae_Tracheophyta_Pinopsida_Pinales_Podocarpaceae_Prumnopitys_taxifolia", "4127": "09856_Plantae_Tracheophyta_Pinopsida_Pinales_Taxaceae_Taxus_baccata", "4128": "09857_Plantae_Tracheophyta_Pinopsida_Pinales_Taxaceae_Taxus_brevifolia", "4129": "09858_Plantae_Tracheophyta_Pinopsida_Pinales_Taxaceae_Taxus_canadensis", "4130": "09859_Plantae_Tracheophyta_Pinopsida_Pinales_Taxaceae_Torreya_californica", "4131": "09860_Plantae_Tracheophyta_Polypodiopsida_Cyatheales_Cyatheaceae_Cyathea_dealbata", "4132": "09861_Plantae_Tracheophyta_Polypodiopsida_Cyatheales_Cyatheaceae_Cyathea_medullaris", "4133": "09862_Plantae_Tracheophyta_Polypodiopsida_Cyatheales_Cyatheaceae_Cyathea_smithii", "4134": "09863_Plantae_Tracheophyta_Polypodiopsida_Cyatheales_Dicksoniaceae_Dicksonia_antarctica", "4135": "09864_Plantae_Tracheophyta_Polypodiopsida_Cyatheales_Dicksoniaceae_Dicksonia_fibrosa", "4136": "09865_Plantae_Tracheophyta_Polypodiopsida_Cyatheales_Dicksoniaceae_Dicksonia_squarrosa", "4137": "09866_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_arvense", "4138": "09867_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_fluviatile", "4139": "09868_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_hyemale", "4140": "09869_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_laevigatum", "4141": "09870_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_pratense", "4142": "09871_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_ramosissimum", "4143": "09872_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_scirpoides", "4144": "09873_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_sylvaticum", "4145": "09874_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_telmateia", "4146": "09875_Plantae_Tracheophyta_Polypodiopsida_Equisetales_Equisetaceae_Equisetum_variegatum", "4147": "09876_Plantae_Tracheophyta_Polypodiopsida_Gleicheniales_Gleicheniaceae_Dicranopteris_linearis", "4148": 
"09877_Plantae_Tracheophyta_Polypodiopsida_Gleicheniales_Gleicheniaceae_Gleichenia_dicarpa", "4149": "09878_Plantae_Tracheophyta_Polypodiopsida_Gleicheniales_Gleicheniaceae_Sticherus_cunninghamii", "4150": "09879_Plantae_Tracheophyta_Polypodiopsida_Hymenophyllales_Hymenophyllaceae_Hymenophyllum_demissum", "4151": "09880_Plantae_Tracheophyta_Polypodiopsida_Hymenophyllales_Hymenophyllaceae_Hymenophyllum_flabellatum", "4152": "09881_Plantae_Tracheophyta_Polypodiopsida_Hymenophyllales_Hymenophyllaceae_Hymenophyllum_nephrophyllum", "4153": "09882_Plantae_Tracheophyta_Polypodiopsida_Hymenophyllales_Hymenophyllaceae_Trichomanes_venosum", "4154": "09883_Plantae_Tracheophyta_Polypodiopsida_Ophioglossales_Ophioglossaceae_Botrychium_lunaria", "4155": "09884_Plantae_Tracheophyta_Polypodiopsida_Ophioglossales_Ophioglossaceae_Botrypus_virginianus", "4156": "09885_Plantae_Tracheophyta_Polypodiopsida_Ophioglossales_Ophioglossaceae_Sceptridium_dissectum", "4157": "09886_Plantae_Tracheophyta_Polypodiopsida_Ophioglossales_Ophioglossaceae_Sceptridium_multifidum", "4158": "09887_Plantae_Tracheophyta_Polypodiopsida_Osmundales_Osmundaceae_Leptopteris_hymenophylloides", "4159": "09888_Plantae_Tracheophyta_Polypodiopsida_Osmundales_Osmundaceae_Osmunda_claytoniana", "4160": "09889_Plantae_Tracheophyta_Polypodiopsida_Osmundales_Osmundaceae_Osmunda_regalis", "4161": "09890_Plantae_Tracheophyta_Polypodiopsida_Osmundales_Osmundaceae_Osmundastrum_cinnamomeum", "4162": "09891_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_appendiculatum", "4163": "09892_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_bulbiferum", "4164": "09893_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_ceterach", "4165": "09894_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_flabellifolium", "4166": "09895_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_flaccidum", "4167": "09896_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_gracillimum", "4168": "09897_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_hookerianum", "4169": "09898_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_montanum", "4170": "09899_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_nidus", "4171": "09900_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_oblongifolium", "4172": "09901_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_platyneuron", "4173": "09902_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_polyodon", "4174": "09903_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_rhizophyllum", "4175": "09904_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_ruta-muraria", "4176": "09905_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_scolopendrium", "4177": "09906_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_trichomanes", "4178": "09907_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Aspleniaceae_Asplenium_viride", "4179": "09908_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Athyriaceae_Athyrium_angustum", "4180": "09909_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Athyriaceae_Athyrium_filix-femina", "4181": "09910_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Athyriaceae_Deparia_acrostichoides", "4182": "09911_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Blechnum_chambersii", 
"4183": "09912_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Blechnum_filiforme", "4184": "09913_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Blechnum_novae-zelandiae", "4185": "09914_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Blechnum_parrisiae", "4186": "09915_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Blechnum_penna-marina", "4187": "09916_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Blechnum_procerum", "4188": "09917_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Cranfillia_fluviatilis", "4189": "09918_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Lomaria_discolor", "4190": "09919_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Struthiopteris_spicant", "4191": "09920_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Telmatoblechnum_serrulatum", "4192": "09921_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Woodwardia_areolata", "4193": "09922_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Woodwardia_fimbriata", "4194": "09923_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Blechnaceae_Woodwardia_virginica", "4195": "09924_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Cystopteridaceae_Cystopteris_bulbifera", "4196": "09925_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Cystopteridaceae_Cystopteris_fragilis", "4197": "09926_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Cystopteridaceae_Cystopteris_protrusa", "4198": "09927_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Cystopteridaceae_Gymnocarpium_dryopteris", "4199": "09928_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Dennstaedtia_punctilobula", "4200": "09929_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Histiopteris_incisa", "4201": "09930_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Hypolepis_ambigua", "4202": "09931_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Paesia_scaberula", "4203": "09932_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_aquilinum", "4204": "09933_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_esculentum", "4205": "09934_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_pinetorum", "4206": "09935_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Diplaziopsidaceae_Homalosorus_pycnocarpos", "4207": "09936_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Cyrtomium_falcatum", "4208": "09937_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_arguta", "4209": "09938_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_carthusiana", "4210": "09939_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_cristata", "4211": "09940_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_expansa", "4212": "09941_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_filix-mas", "4213": "09942_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_fragrans", "4214": "09943_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_intermedia", "4215": "09944_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_marginalis", "4216": "09945_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_acrostichoides", "4217": 
"09946_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_lonchitis", "4218": "09947_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_munitum", "4219": "09948_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_neozelandicum", "4220": "09949_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_vestitum", "4221": "09950_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Rumohra_adiantiformis", "4222": "09951_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Nephrolepidaceae_Nephrolepis_cordifolia", "4223": "09952_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Onocleaceae_Matteuccia_struthiopteris", "4224": "09953_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Onocleaceae_Onoclea_sensibilis", "4225": "09954_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Microsorum_pustulatum", "4226": "09955_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Microsorum_scandens", "4227": "09956_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Notogrammitis_heterophylla", "4228": "09957_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Phlebodium_aureum", "4229": "09958_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Pleopeltis_michauxiana", "4230": "09959_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_californicum", "4231": "09960_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_glycyrrhiza", "4232": "09961_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_scouleri", "4233": "09962_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_virginianum", "4234": "09963_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_vulgare", "4235": "09964_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Pyrrosia_eleagnifolia", "4236": "09965_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Acrostichum_danaeifolium", "4237": "09966_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_aleuticum", "4238": "09967_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_capillus-veneris", "4239": "09968_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_cunninghamii", "4240": "09969_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_hispidulum", "4241": "09970_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_jordanii", "4242": "09971_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_pedatum", "4243": "09972_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Aspidotis_densa", "4244": "09973_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Astrolepis_sinuata", "4245": "09974_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Cryptogramma_acrostichoides", "4246": "09975_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_alabamensis", "4247": "09976_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_aurea", "4248": "09977_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_parryi", "4249": "09978_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_andromedifolia", "4250": "09979_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_atropurpurea", "4251": "09980_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_glabella", "4252": 
"09981_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_mucronata", "4253": "09982_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_rotundifolia", "4254": "09983_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pentagramma_triangularis", "4255": "09984_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_cretica", "4256": "09985_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_macilenta", "4257": "09986_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_tremula", "4258": "09987_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_vittata", "4259": "09988_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Parathelypteris_noveboracensis", "4260": "09989_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Phegopteris_connectilis", "4261": "09990_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Phegopteris_hexagonoptera", "4262": "09991_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Pneumatopteris_pennigera", "4263": "09992_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Thelypteris_palustris", "4264": "09993_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Woodsiaceae_Woodsia_ilvensis", "4265": "09994_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Woodsiaceae_Woodsia_obtusa", "4266": "09995_Plantae_Tracheophyta_Polypodiopsida_Psilotales_Psilotaceae_Psilotum_nudum", "4267": "09996_Plantae_Tracheophyta_Polypodiopsida_Psilotales_Psilotaceae_Tmesipteris_elongata", "4268": "09997_Plantae_Tracheophyta_Polypodiopsida_Salviniales_Salviniaceae_Azolla_filiculoides", "4269": "09998_Plantae_Tracheophyta_Polypodiopsida_Salviniales_Salviniaceae_Salvinia_minima", "4270": "09999_Plantae_Tracheophyta_Polypodiopsida_Schizaeales_Lygodiaceae_Lygodium_japonicum"}}}}], "splits": [{"name": "train", "num_bytes": 25180455748.65, "num_examples": 213550}], "download_size": 7778993107, "dataset_size": 25180455748.65}} | 2023-12-03T02:36:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "inat_2021_train_mini_plantae"
More Information needed | [
"# Dataset Card for \"inat_2021_train_mini_plantae\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"inat_2021_train_mini_plantae\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"inat_2021_train_mini_plantae\"\n\nMore Information needed"
] |
6fba64fd944d80cbf726da09d35b884dd3d1ece1 | A dataset of 2,126 1-turn conversations artificially generated using GPT-4, designed to fit the tone of the Discord bot Mr. Eagle.
This dataset was used to train MrEagle-LoRA.
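If you want to peek at the data before training on it, a minimal loading sketch with the `datasets` library is shown below; the split name "train" and the exact column names are assumptions (they are not documented in this card), so the snippet only prints whatever schema is actually present.
```python
from datasets import load_dataset

# Load the conversations; assumes the default configuration exposes a "train" split.
ds = load_dataset("EagleConsortium/MrEagle-2126", split="train")

# Inspect the schema and one record without relying on specific field names.
print(ds.column_names)
print(ds[0])
```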
| EagleConsortium/MrEagle-2126 | [
"license:wtfpl",
"region:us"
] | 2023-12-02T15:41:41+00:00 | {"license": "wtfpl"} | 2023-12-03T04:01:08+00:00 | [] | [] | TAGS
#license-wtfpl #region-us
| A dataset of 2,126 1-turn conversations artificially generated using GPT-4, designed to fit the tone of the Discord bot Mr. Eagle.
This dataset was used to train MrEagle-LoRA.
| [] | [
"TAGS\n#license-wtfpl #region-us \n"
] | [
14
] | [
"passage: TAGS\n#license-wtfpl #region-us \n"
] |
0bb374e72a184fb2ef4260d941636634dce2f106 |
# Dataset Card for Evaluation run of mrm8488/mistral-7b-ft-h4-no_robots_instructions
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [mrm8488/mistral-7b-ft-h4-no_robots_instructions](https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mrm8488__mistral-7b-ft-h4-no_robots_instructions",
"harness_gsm8k_5",
split="train")
```
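Once loaded, a quick sanity check of the per-example details can look like the following sketch; it assumes the split behaves as a standard `datasets.Dataset`, and the exact field names depend on the evaluation harness version rather than on anything documented here.
```python
# Inspect the loaded details split.
print(data)               # number of rows and the feature schema
print(data.column_names)  # per-example fields logged by the harness
print(data[0])            # first evaluated example (field names vary by harness version)
```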
## Latest results
These are the [latest results from run 2023-12-02T15:43:14.595425](https://huggingface.co/datasets/open-llm-leaderboard/details_mrm8488__mistral-7b-ft-h4-no_robots_instructions/blob/main/results_2023-12-02T15-43-14.595425.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.36694465504169826,
"acc_stderr": 0.013275883047712211
},
"harness|gsm8k|5": {
"acc": 0.36694465504169826,
"acc_stderr": 0.013275883047712211
}
}
```
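The aggregated numbers above can also be fetched programmatically from the "results" configuration mentioned earlier; this is only a sketch, relying on the "latest" split name used throughout this card.
```python
from datasets import load_dataset

# Load the aggregated results instead of the per-example details.
results = load_dataset(
    "open-llm-leaderboard/details_mrm8488__mistral-7b-ft-h4-no_robots_instructions",
    "results",
    split="latest",
)
print(results)     # the aggregated metrics table for the latest run
print(results[0])  # first row of the aggregated metrics
```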
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_mrm8488__mistral-7b-ft-h4-no_robots_instructions | [
"region:us"
] | 2023-12-02T15:42:33+00:00 | {"pretty_name": "Evaluation run of mrm8488/mistral-7b-ft-h4-no_robots_instructions", "dataset_summary": "Dataset automatically created during the evaluation run of model [mrm8488/mistral-7b-ft-h4-no_robots_instructions](https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mrm8488__mistral-7b-ft-h4-no_robots_instructions\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:43:14.595425](https://huggingface.co/datasets/open-llm-leaderboard/details_mrm8488__mistral-7b-ft-h4-no_robots_instructions/blob/main/results_2023-12-02T15-43-14.595425.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.36694465504169826,\n \"acc_stderr\": 0.013275883047712211\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.36694465504169826,\n \"acc_stderr\": 0.013275883047712211\n }\n}\n```", "repo_url": "https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_42_28.726427", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-42-28.726427.parquet"]}, {"split": "2023_12_02T15_42_53.272777", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-42-53.272777.parquet"]}, {"split": "2023_12_02T15_43_07.243379", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-43-07.243379.parquet"]}, {"split": "2023_12_02T15_43_14.595425", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-43-14.595425.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-43-14.595425.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_42_28.726427", "path": ["results_2023-12-02T15-42-28.726427.parquet"]}, {"split": "2023_12_02T15_42_53.272777", "path": ["results_2023-12-02T15-42-53.272777.parquet"]}, {"split": "2023_12_02T15_43_07.243379", "path": ["results_2023-12-02T15-43-07.243379.parquet"]}, {"split": "2023_12_02T15_43_14.595425", "path": ["results_2023-12-02T15-43-14.595425.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-43-14.595425.parquet"]}]}]} | 2023-12-02T15:43:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of mrm8488/mistral-7b-ft-h4-no_robots_instructions
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model mrm8488/mistral-7b-ft-h4-no_robots_instructions on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:43:14.595425 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of mrm8488/mistral-7b-ft-h4-no_robots_instructions",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mrm8488/mistral-7b-ft-h4-no_robots_instructions on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:43:14.595425(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mrm8488/mistral-7b-ft-h4-no_robots_instructions",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mrm8488/mistral-7b-ft-h4-no_robots_instructions on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:43:14.595425(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
29,
31,
178,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mrm8488/mistral-7b-ft-h4-no_robots_instructions## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model mrm8488/mistral-7b-ft-h4-no_robots_instructions on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:43:14.595425(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
069d90f33372b576fa378c58b29bc893202ca859 |
# Dataset Card for Evaluation run of pszemraj/distilgpt2-HC3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/pszemraj/distilgpt2-HC3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [pszemraj/distilgpt2-HC3](https://huggingface.co/pszemraj/distilgpt2-HC3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_pszemraj__distilgpt2-HC3",
"harness_gsm8k_5",
split="train")
```
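Each run is also exposed under its own timestamped split, so a specific evaluation can be pinned instead of following "latest"; the split name below is taken from this card's configuration listing, and the snippet is only a sketch.
```python
from datasets import load_dataset

# Pin the single run recorded for this model rather than the moving "latest" split.
run_split = "2023_12_02T15_42_48.762697"
data = load_dataset(
    "open-llm-leaderboard/details_pszemraj__distilgpt2-HC3",
    "harness_gsm8k_5",
    split=run_split,
)
print(len(data), "evaluated examples in this run")
```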
## Latest results
These are the [latest results from run 2023-12-02T15:42:48.762697](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__distilgpt2-HC3/blob/main/results_2023-12-02T15-42-48.762697.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_pszemraj__distilgpt2-HC3 | [
"region:us"
] | 2023-12-02T15:42:52+00:00 | {"pretty_name": "Evaluation run of pszemraj/distilgpt2-HC3", "dataset_summary": "Dataset automatically created during the evaluation run of model [pszemraj/distilgpt2-HC3](https://huggingface.co/pszemraj/distilgpt2-HC3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_pszemraj__distilgpt2-HC3\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:42:48.762697](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__distilgpt2-HC3/blob/main/results_2023-12-02T15-42-48.762697.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/pszemraj/distilgpt2-HC3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_42_48.762697", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-42-48.762697.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-42-48.762697.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_42_48.762697", "path": ["results_2023-12-02T15-42-48.762697.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-42-48.762697.parquet"]}]}]} | 2023-12-02T15:43:04+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of pszemraj/distilgpt2-HC3
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model pszemraj/distilgpt2-HC3 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:42:48.762697 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of pszemraj/distilgpt2-HC3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model pszemraj/distilgpt2-HC3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:42:48.762697(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of pszemraj/distilgpt2-HC3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model pszemraj/distilgpt2-HC3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:42:48.762697(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of pszemraj/distilgpt2-HC3## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model pszemraj/distilgpt2-HC3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:42:48.762697(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
572538e2c19d57793b122d7b4d27870d78142cf0 |
# Dataset Card for Evaluation run of KevinNi/mistral-class-bio-tutor
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/KevinNi/mistral-class-bio-tutor
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [KevinNi/mistral-class-bio-tutor](https://huggingface.co/KevinNi/mistral-class-bio-tutor) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KevinNi__mistral-class-bio-tutor",
"harness_gsm8k_5",
split="train")
```
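If you are unsure which configurations a details repository exposes, the `datasets` library can enumerate them; a small sketch:
```python
from datasets import get_dataset_config_names

# List the available configurations (the per-task details plus the aggregated "results").
configs = get_dataset_config_names("open-llm-leaderboard/details_KevinNi__mistral-class-bio-tutor")
print(configs)
```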
## Latest results
These are the [latest results from run 2023-12-02T15:48:30.567817](https://huggingface.co/datasets/open-llm-leaderboard/details_KevinNi__mistral-class-bio-tutor/blob/main/results_2023-12-02T15-48-30.567817.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_KevinNi__mistral-class-bio-tutor | [
"region:us"
] | 2023-12-02T15:48:35+00:00 | {"pretty_name": "Evaluation run of KevinNi/mistral-class-bio-tutor", "dataset_summary": "Dataset automatically created during the evaluation run of model [KevinNi/mistral-class-bio-tutor](https://huggingface.co/KevinNi/mistral-class-bio-tutor) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KevinNi__mistral-class-bio-tutor\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:48:30.567817](https://huggingface.co/datasets/open-llm-leaderboard/details_KevinNi__mistral-class-bio-tutor/blob/main/results_2023-12-02T15-48-30.567817.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/KevinNi/mistral-class-bio-tutor", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_48_30.567817", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-48-30.567817.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-48-30.567817.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_48_30.567817", "path": ["results_2023-12-02T15-48-30.567817.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-48-30.567817.parquet"]}]}]} | 2023-12-02T15:48:44+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of KevinNi/mistral-class-bio-tutor
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model KevinNi/mistral-class-bio-tutor on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:48:30.567817 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of KevinNi/mistral-class-bio-tutor",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KevinNi/mistral-class-bio-tutor on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:48:30.567817(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of KevinNi/mistral-class-bio-tutor",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KevinNi/mistral-class-bio-tutor on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:48:30.567817(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
21,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of KevinNi/mistral-class-bio-tutor## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model KevinNi/mistral-class-bio-tutor on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:48:30.567817(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ae3c7e142a8dd69f0c0542497b53bf72fca536ce |
# Dataset Card for Evaluation run of hakurei/mommygpt-3B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/hakurei/mommygpt-3B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [hakurei/mommygpt-3B](https://huggingface.co/hakurei/mommygpt-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_hakurei__mommygpt-3B",
"harness_gsm8k_5",
split="train")
```
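For ad-hoc analysis it can be convenient to convert the loaded split into a pandas DataFrame; this is just a sketch, and the column names depend on the harness output rather than on anything documented here.
```python
# Convert the details split to pandas for quick filtering and aggregation.
df = data.to_pandas()
print(df.shape)
print(df.columns.tolist())
```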
## Latest results
These are the [latest results from run 2023-12-02T15:58:26.242540](https://huggingface.co/datasets/open-llm-leaderboard/details_hakurei__mommygpt-3B/blob/main/results_2023-12-02T15-58-26.242540.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.02122820318423048,
"acc_stderr": 0.003970449129848636
},
"harness|gsm8k|5": {
"acc": 0.02122820318423048,
"acc_stderr": 0.003970449129848636
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_hakurei__mommygpt-3B | [
"region:us"
] | 2023-12-02T15:58:30+00:00 | {"pretty_name": "Evaluation run of hakurei/mommygpt-3B", "dataset_summary": "Dataset automatically created during the evaluation run of model [hakurei/mommygpt-3B](https://huggingface.co/hakurei/mommygpt-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_hakurei__mommygpt-3B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T15:58:26.242540](https://huggingface.co/datasets/open-llm-leaderboard/details_hakurei__mommygpt-3B/blob/main/results_2023-12-02T15-58-26.242540.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.02122820318423048,\n \"acc_stderr\": 0.003970449129848636\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.02122820318423048,\n \"acc_stderr\": 0.003970449129848636\n }\n}\n```", "repo_url": "https://huggingface.co/hakurei/mommygpt-3B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T15_58_26.242540", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-58-26.242540.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T15-58-26.242540.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T15_58_26.242540", "path": ["results_2023-12-02T15-58-26.242540.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T15-58-26.242540.parquet"]}]}]} | 2023-12-02T15:58:39+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of hakurei/mommygpt-3B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model hakurei/mommygpt-3B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T15:58:26.242540 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of hakurei/mommygpt-3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hakurei/mommygpt-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:58:26.242540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of hakurei/mommygpt-3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hakurei/mommygpt-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T15:58:26.242540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
18,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of hakurei/mommygpt-3B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model hakurei/mommygpt-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T15:58:26.242540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c7ce00a5d32198f21a2c3f8e5381a9b6b1819166 |
# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/adonlee/LLaMA_2_13B_SFT_v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [adonlee/LLaMA_2_13B_SFT_v1](https://huggingface.co/adonlee/LLaMA_2_13B_SFT_v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1",
"harness_gsm8k_5",
split="train")
```
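The aggregated scores described above live in the "results" configuration; a minimal sketch of reading its latest snapshot (assuming the same `load_dataset` call and the "latest" split listed in this repository's configuration):

```python
from datasets import load_dataset

# "results" holds the aggregated metrics; the "latest" split points to the
# most recent run of this model.
results = load_dataset("open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1",
	"results",
	split="latest")
print(results[0])
```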
## Latest results
These are the [latest results from run 2023-12-02T16:26:28.271568](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1/blob/main/results_2023-12-02T16-26-28.271568.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.39272175890826383,
"acc_stderr": 0.013451745349586569
},
"harness|gsm8k|5": {
"acc": 0.39272175890826383,
"acc_stderr": 0.013451745349586569
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1 | [
"region:us"
] | 2023-12-02T16:17:13+00:00 | {"pretty_name": "Evaluation run of adonlee/LLaMA_2_13B_SFT_v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [adonlee/LLaMA_2_13B_SFT_v1](https://huggingface.co/adonlee/LLaMA_2_13B_SFT_v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T16:26:28.271568](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1/blob/main/results_2023-12-02T16-26-28.271568.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.39272175890826383,\n \"acc_stderr\": 0.013451745349586569\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.39272175890826383,\n \"acc_stderr\": 0.013451745349586569\n }\n}\n```", "repo_url": "https://huggingface.co/adonlee/LLaMA_2_13B_SFT_v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T16_17_05.792628", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-17-05.792628.parquet"]}, {"split": "2023_12_02T16_26_28.271568", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-26-28.271568.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-26-28.271568.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T16_17_05.792628", "path": ["results_2023-12-02T16-17-05.792628.parquet"]}, {"split": "2023_12_02T16_26_28.271568", "path": ["results_2023-12-02T16-26-28.271568.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T16-26-28.271568.parquet"]}]}]} | 2023-12-02T16:26:37+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model adonlee/LLaMA_2_13B_SFT_v1 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T16:26:28.271568(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_13B_SFT_v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:26:28.271568(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_13B_SFT_v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:26:28.271568(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
27,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_13B_SFT_v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T16:26:28.271568(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5287d8911a1d40cad8b5a6dee0a51a5ecf3b3cae |
# Dataset Card for Evaluation run of fblgit/juanako-7b-UNA
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/fblgit/juanako-7b-UNA
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fblgit__juanako-7b-UNA",
"harness_gsm8k_5",
split="train")
```
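To check which configurations and timestamped splits exist before loading anything, the `datasets` helper functions can be used (a sketch, assuming network access to the Hub):

```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/details_fblgit__juanako-7b-UNA"
# e.g. ['harness_gsm8k_5', 'results']
print(get_dataset_config_names(repo))
# one split per run timestamp, plus a "latest" pointer
print(get_dataset_split_names(repo, "harness_gsm8k_5"))
```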
## Latest results
These are the [latest results from run 2023-12-02T16:19:18.907702](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__juanako-7b-UNA/blob/main/results_2023-12-02T16-19-18.907702.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.44806671721000757,
"acc_stderr": 0.013697992668274525
},
"harness|gsm8k|5": {
"acc": 0.44806671721000757,
"acc_stderr": 0.013697992668274525
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_fblgit__juanako-7b-UNA | [
"region:us"
] | 2023-12-02T16:19:23+00:00 | {"pretty_name": "Evaluation run of fblgit/juanako-7b-UNA", "dataset_summary": "Dataset automatically created during the evaluation run of model [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fblgit__juanako-7b-UNA\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T16:19:18.907702](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__juanako-7b-UNA/blob/main/results_2023-12-02T16-19-18.907702.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.44806671721000757,\n \"acc_stderr\": 0.013697992668274525\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.44806671721000757,\n \"acc_stderr\": 0.013697992668274525\n }\n}\n```", "repo_url": "https://huggingface.co/fblgit/juanako-7b-UNA", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T16_19_18.907702", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-19-18.907702.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-19-18.907702.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T16_19_18.907702", "path": ["results_2023-12-02T16-19-18.907702.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T16-19-18.907702.parquet"]}]}]} | 2023-12-02T16:19:34+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of fblgit/juanako-7b-UNA
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model fblgit/juanako-7b-UNA on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T16:19:18.907702(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of fblgit/juanako-7b-UNA",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model fblgit/juanako-7b-UNA on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:19:18.907702(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of fblgit/juanako-7b-UNA",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model fblgit/juanako-7b-UNA on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:19:18.907702(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of fblgit/juanako-7b-UNA## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model fblgit/juanako-7b-UNA on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T16:19:18.907702(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c607124a9b7f627f7ed6af05aecfb1e7c1c4c0b7 |
# Patch
Threads pulled from 2ch
| zeio/patch | [
"license:apache-2.0",
"region:us"
] | 2023-12-02T16:21:17+00:00 | {"license": "apache-2.0"} | 2024-01-18T20:50:24+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
# Patch
Threads pulled from 2ch
| [
"# Patch\n\nThreads pulled from 2ch"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Patch\n\nThreads pulled from 2ch"
] | [
14,
8
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n# Patch\n\nThreads pulled from 2ch"
] |
a6da538e9b0d7c598c5ba46ea8f7ff8cf6dd893c |
# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1.5
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/adonlee/LLaMA_2_13B_SFT_v1.5
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [adonlee/LLaMA_2_13B_SFT_v1.5](https://huggingface.co/adonlee/LLaMA_2_13B_SFT_v1.5) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1.5",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T16:28:44.620973](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1.5/blob/main/results_2023-12-02T16-28-44.620973.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.35405610310841545,
"acc_stderr": 0.013172728385222557
},
"harness|gsm8k|5": {
"acc": 0.35405610310841545,
"acc_stderr": 0.013172728385222557
}
}
```
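The per-sample records behind these aggregated numbers sit in the details configuration; a sketch for inspecting them as a dataframe (the exact column names depend on what the evaluation harness logged, so treat them as an assumption):

```python
from datasets import load_dataset

details = load_dataset("open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1.5",
	"harness_gsm8k_5",
	split="latest")
df = details.to_pandas()
print(df.columns.tolist())  # see which fields the harness recorded
print(df.head())
```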
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1.5 | [
"region:us"
] | 2023-12-02T16:28:48+00:00 | {"pretty_name": "Evaluation run of adonlee/LLaMA_2_13B_SFT_v1.5", "dataset_summary": "Dataset automatically created during the evaluation run of model [adonlee/LLaMA_2_13B_SFT_v1.5](https://huggingface.co/adonlee/LLaMA_2_13B_SFT_v1.5) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1.5\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T16:28:44.620973](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_13B_SFT_v1.5/blob/main/results_2023-12-02T16-28-44.620973.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.35405610310841545,\n \"acc_stderr\": 0.013172728385222557\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.35405610310841545,\n \"acc_stderr\": 0.013172728385222557\n }\n}\n```", "repo_url": "https://huggingface.co/adonlee/LLaMA_2_13B_SFT_v1.5", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T16_28_44.620973", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-28-44.620973.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-28-44.620973.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T16_28_44.620973", "path": ["results_2023-12-02T16-28-44.620973.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T16-28-44.620973.parquet"]}]}]} | 2023-12-02T16:28:53+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1.5
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model adonlee/LLaMA_2_13B_SFT_v1.5 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T16:28:44.620973(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1.5",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_13B_SFT_v1.5 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:28:44.620973(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1.5",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_13B_SFT_v1.5 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:28:44.620973(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
27,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of adonlee/LLaMA_2_13B_SFT_v1.5## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_13B_SFT_v1.5 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T16:28:44.620973(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
cc3b553fb771383826155fd21eed61d11f094ef2 |
# Dataset Card for Evaluation run of abhishek/ccy0-2g7e-wqsa-0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/abhishek/ccy0-2g7e-wqsa-0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [abhishek/ccy0-2g7e-wqsa-0](https://huggingface.co/abhishek/ccy0-2g7e-wqsa-0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_abhishek__ccy0-2g7e-wqsa-0",
"harness_gsm8k_5",
split="train")
```
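Since this repository contains two runs, an individual timestamped split can be loaded instead of "train"; a sketch using the split names listed in this repository's configuration:

```python
from datasets import load_dataset

repo = "open-llm-leaderboard/details_abhishek__ccy0-2g7e-wqsa-0"
# split names follow the run timestamps declared in the repo's configs
first_run = load_dataset(repo, "harness_gsm8k_5", split="2023_12_02T16_33_02.439769")
second_run = load_dataset(repo, "harness_gsm8k_5", split="2023_12_02T16_46_35.234385")
print(len(first_run), len(second_run))
```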
## Latest results
These are the [latest results from run 2023-12-02T16:46:35.234385](https://huggingface.co/datasets/open-llm-leaderboard/details_abhishek__ccy0-2g7e-wqsa-0/blob/main/results_2023-12-02T16-46-35.234385.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.32221379833206976,
"acc_stderr": 0.01287243548118878
},
"harness|gsm8k|5": {
"acc": 0.32221379833206976,
"acc_stderr": 0.01287243548118878
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_abhishek__ccy0-2g7e-wqsa-0 | [
"region:us"
] | 2023-12-02T16:33:06+00:00 | {"pretty_name": "Evaluation run of abhishek/ccy0-2g7e-wqsa-0", "dataset_summary": "Dataset automatically created during the evaluation run of model [abhishek/ccy0-2g7e-wqsa-0](https://huggingface.co/abhishek/ccy0-2g7e-wqsa-0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_abhishek__ccy0-2g7e-wqsa-0\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T16:46:35.234385](https://huggingface.co/datasets/open-llm-leaderboard/details_abhishek__ccy0-2g7e-wqsa-0/blob/main/results_2023-12-02T16-46-35.234385.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.32221379833206976,\n \"acc_stderr\": 0.01287243548118878\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.32221379833206976,\n \"acc_stderr\": 0.01287243548118878\n }\n}\n```", "repo_url": "https://huggingface.co/abhishek/ccy0-2g7e-wqsa-0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T16_33_02.439769", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-33-02.439769.parquet"]}, {"split": "2023_12_02T16_46_35.234385", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-46-35.234385.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-46-35.234385.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T16_33_02.439769", "path": ["results_2023-12-02T16-33-02.439769.parquet"]}, {"split": "2023_12_02T16_46_35.234385", "path": ["results_2023-12-02T16-46-35.234385.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T16-46-35.234385.parquet"]}]}]} | 2023-12-02T16:46:45+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of abhishek/ccy0-2g7e-wqsa-0
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model abhishek/ccy0-2g7e-wqsa-0 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T16:46:35.234385(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of abhishek/ccy0-2g7e-wqsa-0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model abhishek/ccy0-2g7e-wqsa-0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:46:35.234385(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of abhishek/ccy0-2g7e-wqsa-0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model abhishek/ccy0-2g7e-wqsa-0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:46:35.234385(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
26,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of abhishek/ccy0-2g7e-wqsa-0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model abhishek/ccy0-2g7e-wqsa-0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T16:46:35.234385(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
1fb637591719c215e395dc4189b2ee06d6d712f8 |
# Dataset Card for Evaluation run of kaist-ai/prometheus-13b-v1.0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/kaist-ai/prometheus-13b-v1.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [kaist-ai/prometheus-13b-v1.0](https://huggingface.co/kaist-ai/prometheus-13b-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_kaist-ai__prometheus-13b-v1.0",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T16:47:58.854671](https://huggingface.co/datasets/open-llm-leaderboard/details_kaist-ai__prometheus-13b-v1.0/blob/main/results_2023-12-02T16-47-58.854671.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3169067475360121,
"acc_stderr": 0.012815868296721386
},
"harness|gsm8k|5": {
"acc": 0.3169067475360121,
"acc_stderr": 0.012815868296721386
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_kaist-ai__prometheus-13b-v1.0 | [
"region:us"
] | 2023-12-02T16:48:06+00:00 | {"pretty_name": "Evaluation run of kaist-ai/prometheus-13b-v1.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [kaist-ai/prometheus-13b-v1.0](https://huggingface.co/kaist-ai/prometheus-13b-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kaist-ai__prometheus-13b-v1.0\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T16:47:58.854671](https://huggingface.co/datasets/open-llm-leaderboard/details_kaist-ai__prometheus-13b-v1.0/blob/main/results_2023-12-02T16-47-58.854671.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3169067475360121,\n \"acc_stderr\": 0.012815868296721386\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3169067475360121,\n \"acc_stderr\": 0.012815868296721386\n }\n}\n```", "repo_url": "https://huggingface.co/kaist-ai/prometheus-13b-v1.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T16_47_58.854671", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-47-58.854671.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T16-47-58.854671.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T16_47_58.854671", "path": ["results_2023-12-02T16-47-58.854671.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T16-47-58.854671.parquet"]}]}]} | 2023-12-02T16:48:17+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of kaist-ai/prometheus-13b-v1.0
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model kaist-ai/prometheus-13b-v1.0 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T16:47:58.854671(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of kaist-ai/prometheus-13b-v1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model kaist-ai/prometheus-13b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:47:58.854671(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of kaist-ai/prometheus-13b-v1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model kaist-ai/prometheus-13b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T16:47:58.854671(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
23,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of kaist-ai/prometheus-13b-v1.0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model kaist-ai/prometheus-13b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T16:47:58.854671(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
13b4e163a9d4274252f76d81e6ce02c055e5d584 |
Data is scraped from https://islamqa.info/. The data contains Islamic questions and their corresponding answers from scholars.
Example:
| Question | Full Answer |
| -------- | ----------- |
| What should a person do if one of the areas normally washed during wudu is wounded? | Praise be to Allah. If a person is wounded in one of the areas of wudu and is not able to cover the wound with a bandage, he should perform wudu except for the wounded body part, for which he should then perform ablution with sand (tayammum). 1 (Note that this means one performs normal tayammum, not actually placing the sand on the wound itself.) 1 Al-Mughni ma'a al-Sharh il-Kabeer 1/282. | | minhalvp/islamqa | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
] | 2023-12-02T16:58:14+00:00 | {"license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "Question", "dtype": "string"}, {"name": "Full Answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10259356, "num_examples": 3355}], "download_size": 5425524, "dataset_size": 10259356}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-02T17:26:57+00:00 | [] | [] | TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-1K<n<10K #license-apache-2.0 #region-us
| Data is scraped from URL Data contains islamic questions and its corrseponding answers from scholars.
Example:
| [] | [
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-1K<n<10K #license-apache-2.0 #region-us \n"
] | [
47
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-1K<n<10K #license-apache-2.0 #region-us \n"
] |
74daf841f766f06cfc0ad0be1cae20da9b5a6137 |
# Dataset Card for My Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information) | on1onmangoes/ZED | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-02T17:25:08+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["conversational"], "pretty_name": "ZED", "dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "wav", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "emotion", "sequence": "string"}, {"name": "duration", "dtype": "float32"}, {"name": "emotion_start", "sequence": "float32"}, {"name": "emotion_end", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 30998607.0, "num_examples": 180}], "download_size": 30998412, "dataset_size": 30998607.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-02T21:38:58+00:00 | [] | [
"en"
] | TAGS
#task_categories-conversational #size_categories-n<1K #language-English #license-apache-2.0 #region-us
|
# Dataset Card for My Dataset
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information | [
"# Dataset Card for My Dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information"
] | [
"TAGS\n#task_categories-conversational #size_categories-n<1K #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for My Dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information"
] | [
38,
8,
109
] | [
"passage: TAGS\n#task_categories-conversational #size_categories-n<1K #language-English #license-apache-2.0 #region-us \n# Dataset Card for My Dataset## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information"
] |
8903b70e578eeafaf0a375688c56d4580ee9e5ef | # Dataset Card for "hotpot_qa_json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davidfant/hotpot_qa_json | [
"region:us"
] | 2023-12-02T17:38:03+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "json", "dtype": "string"}, {"name": "label", "dtype": "bool"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7593555, "num_examples": 7500}, {"name": "test", "num_bytes": 986166, "num_examples": 1000}], "download_size": 3854552, "dataset_size": 8579721}} | 2023-12-02T20:42:58+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "hotpot_qa_json"
More Information needed | [
"# Dataset Card for \"hotpot_qa_json\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"hotpot_qa_json\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"hotpot_qa_json\"\n\nMore Information needed"
] |
28b45b7fe197555db5eb39d67e6c86de04e64962 |
# datagouvfr-datasets
This dataset is created from [data.gouv.fr's catalog](https://www.data.gouv.fr/en/datasets/catalogue-des-donnees-de-data-gouv-fr/), i.e. the list of datasets on the French official open data platform.

data.gouv.fr's datasets with `archived` status or with no associated resource are excluded.

The dataset has two configs, described below.
## compact
This config exposes 3 features:
- `id`
- `content`
- `tags`
`content` is the concatenation of the dataset's title and its description, stripped of markdown (markdown converted to plain text).
## raw
All columns from the original catalog, with the preprocess filter described above. Description is still in markdown.
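
For illustration, here is a minimal loading sketch (assuming the standard `datasets` API and that each config exposes a single `train` split):

```python
from datasets import load_dataset

# Load the compact config (id, content, tags); swap "compact" for "raw" to get all catalog columns.
compact = load_dataset("abulte/datagouvfr-datasets", "compact", split="train")

# Peek at one record's concatenated title + description.
print(compact[0]["content"][:200])
```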
| abulte/datagouvfr-datasets | [
"license:etalab-2.0",
"region:us"
] | 2023-12-02T17:44:36+00:00 | {"license": "etalab-2.0", "configs": [{"config_name": "compact", "data_files": "data/datagouvfr-datasets.csv"}, {"config_name": "raw", "data_files": "data/datagouvfr-datasets-raw.csv"}]} | 2023-12-14T09:08:32+00:00 | [] | [] | TAGS
#license-etalab-2.0 #region-us
|
# datagouvfr-datasets
This dataset is created from URL's catalog, i.e. the French official open data platform list of datasets.
URL's datasets with 'archived' status or with no associated ressource are excluded.
This has two configs, described below.
## compact
This config exposes 3 features:
- 'id'
- 'content'
- 'tags'
'content' is the concatenation of the dataset's title and its description, stripped from markdown (markdown to text).
## raw
All columns from the original catalog, with the preprocess filter described above. Description is still in markdown.
| [
"# datagouvfr-datasets\n\nThis dataset is created from URL's catalog, i.e. the French official open data platform list of datasets.\n\nURL's datasets with 'archived' status or with no associated ressource are excluded.\n\nThis has two configs, described below.",
"## compact\n\nThis config exposes 3 features:\n- 'id'\n- 'content'\n- 'tags'\n\n'content' is the concatenation of the dataset's title and its description, stripped from markdown (markdown to text).",
"## raw\n\nAll columns from the original catalog, with the preprocess filter described above. Description is still in markdown."
] | [
"TAGS\n#license-etalab-2.0 #region-us \n",
"# datagouvfr-datasets\n\nThis dataset is created from URL's catalog, i.e. the French official open data platform list of datasets.\n\nURL's datasets with 'archived' status or with no associated ressource are excluded.\n\nThis has two configs, described below.",
"## compact\n\nThis config exposes 3 features:\n- 'id'\n- 'content'\n- 'tags'\n\n'content' is the concatenation of the dataset's title and its description, stripped from markdown (markdown to text).",
"## raw\n\nAll columns from the original catalog, with the preprocess filter described above. Description is still in markdown."
] | [
14,
67,
53,
26
] | [
"passage: TAGS\n#license-etalab-2.0 #region-us \n# datagouvfr-datasets\n\nThis dataset is created from URL's catalog, i.e. the French official open data platform list of datasets.\n\nURL's datasets with 'archived' status or with no associated ressource are excluded.\n\nThis has two configs, described below.## compact\n\nThis config exposes 3 features:\n- 'id'\n- 'content'\n- 'tags'\n\n'content' is the concatenation of the dataset's title and its description, stripped from markdown (markdown to text).## raw\n\nAll columns from the original catalog, with the preprocess filter described above. Description is still in markdown."
] |
d85025b41bba280e7b3d7edb5edeb4982c71cc32 |
Dataset for the ToolBench project: https://github.com/OpenBMB/ToolBench

The files were originally in the project's Google Drive data.zip: https://drive.google.com/drive/folders/1yBUQ732mPu-KclJnuQELEhtKakdXFc3J

These two JSON files are already processed by the original author. Just plug them into the ToolBench repo's DeepSpeed arguments.
```
--data_path ./toolllama_G123_dfs_train.json \
--eval_data_path ./toolllama_G123_dfs_eval.json \
```
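
As a quick sanity check before wiring the paths into training, the files can be inspected directly; a minimal sketch that makes no assumption about the record schema:

```python
import json

# Load the processed training file and report its size and, if it is a list of dicts, the keys of the first record.
with open("toolllama_G123_dfs_train.json", "r", encoding="utf-8") as f:
    train_data = json.load(f)

print(type(train_data).__name__, len(train_data))
if isinstance(train_data, list) and train_data and isinstance(train_data[0], dict):
    print(sorted(train_data[0].keys()))
```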
~~My objective is to tailor the training data down to 1/100 of its size and use it for the LLaMA-Factory project. https://github.com/hiyouga/LLaMA-Factory~~

So that more open-source models can benefit from a function-calling dataset.
## Edit
The objective has since been achieved by using another dataset instead: https://huggingface.co/datasets/Yhyu13/glaive-function-calling-v2-llama-factory-convert
It is smaller and better. | Yhyu13/ToolBench_toolllama_G123_dfs | [
"license:apache-2.0",
"region:us"
] | 2023-12-02T18:45:58+00:00 | {"license": "apache-2.0"} | 2024-01-03T09:58:25+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
Dataset mentioned for ToolBench project URL
They were in the google drive URL URL
These two json are already processed by the original author. Just plugin into the ToolBnech repo deepseed arguments.
~~My objective is to tailer the training data to 1/100 size and used them for the LLaMA-Factory project. URL
So that more open source models could benifit from function calling dataset.
## Edit
The objective is obtained by using another dataset instead: URL
It is smaller and better. | [
"## Edit\n\nThe objective is obtained by using another dataset instead: URL\nIt is smaller and better."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"## Edit\n\nThe objective is obtained by using another dataset instead: URL\nIt is smaller and better."
] | [
14,
21
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n## Edit\n\nThe objective is obtained by using another dataset instead: URL\nIt is smaller and better."
] |
00a94456ffc8b5b1408c04aa1dfeae6b6fc3777a | 
***Muv-Luv Alternative Alpaca-formatted dataset***
# HUGE FUCKIN MEME. DO NOT USE.
Credit to Alpin for the pruned VN dataset.
...
My formatting script sucks. | localfultonextractor/Muv-Luv-Alternative | [
"region:us"
] | 2023-12-02T19:33:08+00:00 | {} | 2024-02-04T20:06:48+00:00 | [] | [] | TAGS
#region-us
| !image/gif
*Muv-Luv Alternative Alpaca-formatted dataset*
# HUGE FUCKIN MEME. DO NOT USE.
Credit to Alpin for the pruned VN dataset.
...
My formatting script sucks. | [
"# HUGE FUCKIN MEME. DO NOT USE.\n\nCredit to Alpin for the pruned VN dataset.\n\n...\n\nMy formatting script sucks."
] | [
"TAGS\n#region-us \n",
"# HUGE FUCKIN MEME. DO NOT USE.\n\nCredit to Alpin for the pruned VN dataset.\n\n...\n\nMy formatting script sucks."
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# HUGE FUCKIN MEME. DO NOT USE.\n\nCredit to Alpin for the pruned VN dataset.\n\n...\n\nMy formatting script sucks."
] |
17523ffb49da0cca67723c8896a3ad22efcdd8ef | This is machine translation from english to russian of a summarization dataset https://huggingface.co/datasets/knkarthick/dialogsum.
Translation was done by google translate, so the quality is sometimes arguable.
Project repository:
https://github.com/Hacker1337/tg_messages_summarizer
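
A minimal loading sketch (assuming the standard `datasets` API; split names follow this repository's configuration):

```python
from datasets import load_dataset

# Splits mirror the original DialogSum layout: train / validation / test.
ds = load_dataset("Hacker1337/ru_dialogsum")
print(ds)

example = ds["train"][0]
print(example["dialogue"][:200])
print(example["summary"])
```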
### Data Fields
- dialogue: text of dialogue.
- summary: summary of the dialogue.
---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
language:
- ru
---
Information about data from the original dataset:
---
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.
## Dataset Structure
### Data Instances
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 12460
- val: 500
- test: 1500
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
Under rich real-life scenarios, including more diverse task-oriented scenarios;
Have clear communication patterns and intents, which is valuable to serve as summarization sources;
Have a reasonable length, which comforts the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
Convey the most salient information;
Be brief;
Preserve important named entities within the conversation;
Be written from an observer perspective;
Be written in formal language.
| Hacker1337/ru_dialogsum | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-12-02T21:04:08+00:00 | {"language": ["ru"], "license": "cc-by-nc-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["summarization", "text2text-generation", "text-generation"], "dataset_info": {"features": [{"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19849333, "num_examples": 12460}, {"name": "validation", "num_bytes": 776937, "num_examples": 500}, {"name": "test", "num_bytes": 2372057, "num_examples": 1500}], "download_size": 10149385, "dataset_size": 22998327}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-02T21:17:57+00:00 | [] | [
"ru"
] | TAGS
#task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #size_categories-10K<n<100K #language-Russian #license-cc-by-nc-sa-4.0 #region-us
| This is machine translation from english to russian of a summarization dataset URL
Translation was done by google translate, so the quality is sometimes arguable.
Project repository:
URL
### Data Fields
- dialogue: text of dialogue.
- summary: summary of the dialogue.
---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
language:
- ru
---
Information about data from the original dataset:
---
## Dataset Description
### Links
- Homepage: URL
- Repository: URL
- Paper: URL
- Point of Contact: URL
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.
## Dataset Structure
### Data Instances
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 12460
- val: 500
- test: 1500
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
Under rich real-life scenarios, including more diverse task-oriented scenarios;
Have clear communication patterns and intents, which is valuable to serve as summarization sources;
Have a reasonable length, which comforts the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
Convey the most salient information;
Be brief;
Preserve important named entities within the conversation;
Be written from an observer perspective;
Be written in formal language.
| [
"### Data Fields\n- dialogue: text of dialogue.\n- summary: summary of the dialogue.\n\n---\nlicense: cc-by-nc-sa-4.0\ntask_categories:\n- summarization\nlanguage:\n- ru\n---\n\nInformation about data from the original dataset:\n---",
"## Dataset Description",
"### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL",
"### Dataset Summary\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.",
"## Dataset Structure",
"### Data Instances\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.\nThe first instance in the training set:\n{'id': 'train_0', 'summary': \"Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.\", 'dialogue': \"#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\\n#Person2#: I found it would be a good idea to get a check-up.\\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\\n#Person2#: Ok.\\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\\n#Person2#: Yes.\\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\\n#Person2#: Ok, thanks doctor.\", 'topic': \"get a check-up}",
"### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- topic: human written topic/one liner of the dialogue.\n- id: unique file id of an example.",
"### Data Splits\n- train: 12460\n- val: 500\n- test: 1500",
"## Dataset Creation",
"### Curation Rationale\nIn paper:\nWe collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.\nCompared with previous datasets, dialogues from DialogSum have distinct characteristics:\nUnder rich real-life scenarios, including more diverse task-oriented scenarios;\nHave clear communication patterns and intents, which is valuable to serve as summarization sources;\nHave a reasonable length, which comforts the purpose of automatic summarization.\nWe ask annotators to summarize each dialogue based on the following criteria:\nConvey the most salient information;\nBe brief;\nPreserve important named entities within the conversation;\nBe written from an observer perspective;\nBe written in formal language."
] | [
"TAGS\n#task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #size_categories-10K<n<100K #language-Russian #license-cc-by-nc-sa-4.0 #region-us \n",
"### Data Fields\n- dialogue: text of dialogue.\n- summary: summary of the dialogue.\n\n---\nlicense: cc-by-nc-sa-4.0\ntask_categories:\n- summarization\nlanguage:\n- ru\n---\n\nInformation about data from the original dataset:\n---",
"## Dataset Description",
"### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL",
"### Dataset Summary\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.",
"## Dataset Structure",
"### Data Instances\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.\nThe first instance in the training set:\n{'id': 'train_0', 'summary': \"Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.\", 'dialogue': \"#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\\n#Person2#: I found it would be a good idea to get a check-up.\\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\\n#Person2#: Ok.\\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\\n#Person2#: Yes.\\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\\n#Person2#: Ok, thanks doctor.\", 'topic': \"get a check-up}",
"### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- topic: human written topic/one liner of the dialogue.\n- id: unique file id of an example.",
"### Data Splits\n- train: 12460\n- val: 500\n- test: 1500",
"## Dataset Creation",
"### Curation Rationale\nIn paper:\nWe collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.\nCompared with previous datasets, dialogues from DialogSum have distinct characteristics:\nUnder rich real-life scenarios, including more diverse task-oriented scenarios;\nHave clear communication patterns and intents, which is valuable to serve as summarization sources;\nHave a reasonable length, which comforts the purpose of automatic summarization.\nWe ask annotators to summarize each dialogue based on the following criteria:\nConvey the most salient information;\nBe brief;\nPreserve important named entities within the conversation;\nBe written from an observer perspective;\nBe written in formal language."
] | [
70,
57,
4,
23,
52,
6,
428,
46,
18,
5,
256
] | [
"passage: TAGS\n#task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #size_categories-10K<n<100K #language-Russian #license-cc-by-nc-sa-4.0 #region-us \n### Data Fields\n- dialogue: text of dialogue.\n- summary: summary of the dialogue.\n\n---\nlicense: cc-by-nc-sa-4.0\ntask_categories:\n- summarization\nlanguage:\n- ru\n---\n\nInformation about data from the original dataset:\n---## Dataset Description### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL### Dataset Summary\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.## Dataset Structure"
] |
8c7cb4e5e22bb27ebdc64edcc9f40f122f318b43 | # Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mnoukhov/test | [
"region:us"
] | 2023-12-02T21:12:25+00:00 | {"dataset_info": {"features": [{"name": "foo", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24, "num_examples": 3}], "download_size": 843, "dataset_size": 24}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-02T21:34:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test"
More Information needed | [
"# Dataset Card for \"test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test\"\n\nMore Information needed"
] | [
6,
11
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"test\"\n\nMore Information needed"
] |
a6c991b8222b706fd0dbdccc3a21d73e88bc311b | # Dataset Card for "stylegan_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tianyi0216/stylegan_data | [
"region:us"
] | 2023-12-02T21:35:52+00:00 | {"dataset_info": {"features": [{"name": "source_img", "dtype": "image"}, {"name": "instruction", "dtype": "string"}, {"name": "target_img", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2910811213.15, "num_examples": 1995}], "download_size": 2964893208, "dataset_size": 2910811213.15}} | 2023-12-03T01:47:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "stylegan_data"
More Information needed | [
"# Dataset Card for \"stylegan_data\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"stylegan_data\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"stylegan_data\"\n\nMore Information needed"
] |
9a1bb551db0c4cb42582fa453b2ac92c6fa0e333 |
# Dataset Card for Evaluation run of qblocks/falcon_7b_norobots
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/qblocks/falcon_7b_norobots
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [qblocks/falcon_7b_norobots](https://huggingface.co/qblocks/falcon_7b_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_qblocks__falcon_7b_norobots",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T22:03:06.046351](https://huggingface.co/datasets/open-llm-leaderboard/details_qblocks__falcon_7b_norobots/blob/main/results_2023-12-02T22-03-06.046351.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.04473085670962851,
"acc_stderr": 0.005693886131407044
},
"harness|gsm8k|5": {
"acc": 0.04473085670962851,
"acc_stderr": 0.005693886131407044
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_qblocks__falcon_7b_norobots | [
"region:us"
] | 2023-12-02T22:03:05+00:00 | {"pretty_name": "Evaluation run of qblocks/falcon_7b_norobots", "dataset_summary": "Dataset automatically created during the evaluation run of model [qblocks/falcon_7b_norobots](https://huggingface.co/qblocks/falcon_7b_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_qblocks__falcon_7b_norobots\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T22:03:06.046351](https://huggingface.co/datasets/open-llm-leaderboard/details_qblocks__falcon_7b_norobots/blob/main/results_2023-12-02T22-03-06.046351.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.04473085670962851,\n \"acc_stderr\": 0.005693886131407044\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04473085670962851,\n \"acc_stderr\": 0.005693886131407044\n }\n}\n```", "repo_url": "https://huggingface.co/qblocks/falcon_7b_norobots", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T22_03_02.492484", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-03-02.492484.parquet"]}, {"split": "2023_12_02T22_03_06.046351", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-03-06.046351.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-03-06.046351.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T22_03_02.492484", "path": ["results_2023-12-02T22-03-02.492484.parquet"]}, {"split": "2023_12_02T22_03_06.046351", "path": ["results_2023-12-02T22-03-06.046351.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T22-03-06.046351.parquet"]}]}]} | 2023-12-02T22:03:13+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of qblocks/falcon_7b_norobots
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model qblocks/falcon_7b_norobots on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T22:03:06.046351(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of qblocks/falcon_7b_norobots",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/falcon_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:03:06.046351(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of qblocks/falcon_7b_norobots",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/falcon_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:03:06.046351(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of qblocks/falcon_7b_norobots## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/falcon_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T22:03:06.046351(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b49aad904f24ff7fdc06bef232f662ff09d43726 |
# Dataset Card for Evaluation run of abhishek/zephyr-beta-math
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/abhishek/zephyr-beta-math
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [abhishek/zephyr-beta-math](https://huggingface.co/abhishek/zephyr-beta-math) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_abhishek__zephyr-beta-math",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T22:04:08.967316](https://huggingface.co/datasets/open-llm-leaderboard/details_abhishek__zephyr-beta-math/blob/main/results_2023-12-02T22-04-08.967316.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5640636846095527,
"acc_stderr": 0.013658968058849159
},
"harness|gsm8k|5": {
"acc": 0.5640636846095527,
"acc_stderr": 0.013658968058849159
}
}
```
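
The aggregated metrics shown above are also stored in the "results" configuration described earlier. As a minimal sketch (the config name "results" and the "latest" split are taken from this repository's configuration list; the exact column layout may differ), they could be loaded like this:

```python
from datasets import load_dataset

# "results" holds the aggregated metrics of each run; the "latest" split
# always tracks the most recent evaluation, per this repo's configs.
results = load_dataset(
    "open-llm-leaderboard/details_abhishek__zephyr-beta-math",
    "results",
    split="latest",
)

print(results[0])  # e.g. the GSM8K accuracy and its standard error
```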
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_abhishek__zephyr-beta-math | [
"region:us"
] | 2023-12-02T22:04:11+00:00 | {"pretty_name": "Evaluation run of abhishek/zephyr-beta-math", "dataset_summary": "Dataset automatically created during the evaluation run of model [abhishek/zephyr-beta-math](https://huggingface.co/abhishek/zephyr-beta-math) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_abhishek__zephyr-beta-math\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T22:04:08.967316](https://huggingface.co/datasets/open-llm-leaderboard/details_abhishek__zephyr-beta-math/blob/main/results_2023-12-02T22-04-08.967316.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5640636846095527,\n \"acc_stderr\": 0.013658968058849159\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5640636846095527,\n \"acc_stderr\": 0.013658968058849159\n }\n}\n```", "repo_url": "https://huggingface.co/abhishek/zephyr-beta-math", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T22_04_08.967316", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-04-08.967316.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-04-08.967316.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T22_04_08.967316", "path": ["results_2023-12-02T22-04-08.967316.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T22-04-08.967316.parquet"]}]}]} | 2023-12-02T22:04:16+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of abhishek/zephyr-beta-math
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model abhishek/zephyr-beta-math on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T22:04:08.967316 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of abhishek/zephyr-beta-math",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model abhishek/zephyr-beta-math on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:04:08.967316(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of abhishek/zephyr-beta-math",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model abhishek/zephyr-beta-math on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:04:08.967316(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of abhishek/zephyr-beta-math## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model abhishek/zephyr-beta-math on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T22:04:08.967316(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4cfbd5e30aff03ecc0c68cb4249502a44803e7e5 |
# Dataset Card for Evaluation run of klyang/MentaLLaMA-chat-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/klyang/MentaLLaMA-chat-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [klyang/MentaLLaMA-chat-7B](https://huggingface.co/klyang/MentaLLaMA-chat-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_klyang__MentaLLaMA-chat-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T22:04:51.845916](https://huggingface.co/datasets/open-llm-leaderboard/details_klyang__MentaLLaMA-chat-7B/blob/main/results_2023-12-02T22-04-51.845916.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.16148597422289612,
"acc_stderr": 0.010135959452134306
},
"harness|gsm8k|5": {
"acc": 0.16148597422289612,
"acc_stderr": 0.010135959452134306
}
}
```
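
For convenience, the aggregated metrics above can also be read back from the "results" configuration; a small sketch (assuming the standard `datasets` API and the "latest" split listed in this repo's configs) could look like:

```python
from datasets import load_dataset

# Load the aggregate metrics of the newest run from the "results" config.
results = load_dataset(
    "open-llm-leaderboard/details_klyang__MentaLLaMA-chat-7B",
    "results",
    split="latest",
)

# Inspect them as a pandas DataFrame.
df = results.to_pandas()
print(df.head())
```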
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_klyang__MentaLLaMA-chat-7B | [
"region:us"
] | 2023-12-02T22:04:54+00:00 | {"pretty_name": "Evaluation run of klyang/MentaLLaMA-chat-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [klyang/MentaLLaMA-chat-7B](https://huggingface.co/klyang/MentaLLaMA-chat-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_klyang__MentaLLaMA-chat-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T22:04:51.845916](https://huggingface.co/datasets/open-llm-leaderboard/details_klyang__MentaLLaMA-chat-7B/blob/main/results_2023-12-02T22-04-51.845916.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.16148597422289612,\n \"acc_stderr\": 0.010135959452134306\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16148597422289612,\n \"acc_stderr\": 0.010135959452134306\n }\n}\n```", "repo_url": "https://huggingface.co/klyang/MentaLLaMA-chat-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T22_04_51.845916", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-04-51.845916.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-04-51.845916.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T22_04_51.845916", "path": ["results_2023-12-02T22-04-51.845916.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T22-04-51.845916.parquet"]}]}]} | 2023-12-02T22:04:59+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of klyang/MentaLLaMA-chat-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model klyang/MentaLLaMA-chat-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T22:04:51.845916 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of klyang/MentaLLaMA-chat-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model klyang/MentaLLaMA-chat-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:04:51.845916(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of klyang/MentaLLaMA-chat-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model klyang/MentaLLaMA-chat-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:04:51.845916(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
21,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of klyang/MentaLLaMA-chat-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model klyang/MentaLLaMA-chat-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T22:04:51.845916(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
875cdaa4a99bdc5c4a2369e890e4ee04fe8c586a |
# Dataset Card for Evaluation run of qblocks/zephyr_7b_norobots
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/qblocks/zephyr_7b_norobots
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [qblocks/zephyr_7b_norobots](https://huggingface.co/qblocks/zephyr_7b_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_qblocks__zephyr_7b_norobots",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T22:10:51.334218](https://huggingface.co/datasets/open-llm-leaderboard/details_qblocks__zephyr_7b_norobots/blob/main/results_2023-12-02T22-10-51.334218.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.20621683093252463,
"acc_stderr": 0.011144364089781441
},
"harness|gsm8k|5": {
"acc": 0.20621683093252463,
"acc_stderr": 0.011144364089781441
}
}
```
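
If you are unsure which configurations this repository exposes, you can enumerate them before loading. This is only a sketch based on the config names listed in this card ("harness_gsm8k_5" and "results"), using the standard `datasets` helpers:

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_qblocks__zephyr_7b_norobots"

# List the available configurations (expected: "harness_gsm8k_5" and "results").
print(get_dataset_config_names(repo))

# Load the per-sample details of the most recent GSM8K run.
details = load_dataset(repo, "harness_gsm8k_5", split="latest")
print(details)
```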
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_qblocks__zephyr_7b_norobots | [
"region:us"
] | 2023-12-02T22:10:54+00:00 | {"pretty_name": "Evaluation run of qblocks/zephyr_7b_norobots", "dataset_summary": "Dataset automatically created during the evaluation run of model [qblocks/zephyr_7b_norobots](https://huggingface.co/qblocks/zephyr_7b_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_qblocks__zephyr_7b_norobots\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T22:10:51.334218](https://huggingface.co/datasets/open-llm-leaderboard/details_qblocks__zephyr_7b_norobots/blob/main/results_2023-12-02T22-10-51.334218.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.20621683093252463,\n \"acc_stderr\": 0.011144364089781441\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.20621683093252463,\n \"acc_stderr\": 0.011144364089781441\n }\n}\n```", "repo_url": "https://huggingface.co/qblocks/zephyr_7b_norobots", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T22_10_51.334218", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-10-51.334218.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-10-51.334218.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T22_10_51.334218", "path": ["results_2023-12-02T22-10-51.334218.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T22-10-51.334218.parquet"]}]}]} | 2023-12-02T22:10:58+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of qblocks/zephyr_7b_norobots
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model qblocks/zephyr_7b_norobots on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T22:10:51.334218 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of qblocks/zephyr_7b_norobots",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/zephyr_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:10:51.334218(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of qblocks/zephyr_7b_norobots",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/zephyr_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:10:51.334218(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of qblocks/zephyr_7b_norobots## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/zephyr_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T22:10:51.334218(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
22680af402b21772822de5d8e6cf4451fd2bcb94 |
# Dataset Card for Evaluation run of harborwater/open-llama-3b-claude-30k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/harborwater/open-llama-3b-claude-30k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [harborwater/open-llama-3b-claude-30k](https://huggingface.co/harborwater/open-llama-3b-claude-30k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_harborwater__open-llama-3b-claude-30k",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T22:19:50.317589](https://huggingface.co/datasets/open-llm-leaderboard/details_harborwater__open-llama-3b-claude-30k/blob/main/results_2023-12-02T22-19-50.317589.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.021986353297952996,
"acc_stderr": 0.004039162758110046
},
"harness|gsm8k|5": {
"acc": 0.021986353297952996,
"acc_stderr": 0.004039162758110046
}
}
```
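
Besides "latest", each run is also available under a split named after its timestamp, as described above. A minimal sketch (the split name below is taken from this repository's configuration list; newer runs would appear under their own timestamps) could be:

```python
from datasets import load_dataset

# Load the per-sample GSM8K details of one specific run via its timestamped split.
details = load_dataset(
    "open-llm-leaderboard/details_harborwater__open-llama-3b-claude-30k",
    "harness_gsm8k_5",
    split="2023_12_02T22_19_50.317589",
)

print(len(details), "evaluated examples")
```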
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_harborwater__open-llama-3b-claude-30k | [
"region:us"
] | 2023-12-02T22:19:53+00:00 | {"pretty_name": "Evaluation run of harborwater/open-llama-3b-claude-30k", "dataset_summary": "Dataset automatically created during the evaluation run of model [harborwater/open-llama-3b-claude-30k](https://huggingface.co/harborwater/open-llama-3b-claude-30k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_harborwater__open-llama-3b-claude-30k\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T22:19:50.317589](https://huggingface.co/datasets/open-llm-leaderboard/details_harborwater__open-llama-3b-claude-30k/blob/main/results_2023-12-02T22-19-50.317589.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.021986353297952996,\n \"acc_stderr\": 0.004039162758110046\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.021986353297952996,\n \"acc_stderr\": 0.004039162758110046\n }\n}\n```", "repo_url": "https://huggingface.co/harborwater/open-llama-3b-claude-30k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_02T22_19_50.317589", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-19-50.317589.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T22-19-50.317589.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_02T22_19_50.317589", "path": ["results_2023-12-02T22-19-50.317589.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T22-19-50.317589.parquet"]}]}]} | 2023-12-02T22:19:58+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of harborwater/open-llama-3b-claude-30k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model harborwater/open-llama-3b-claude-30k on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T22:19:50.317589 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of harborwater/open-llama-3b-claude-30k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model harborwater/open-llama-3b-claude-30k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:19:50.317589(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of harborwater/open-llama-3b-claude-30k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model harborwater/open-llama-3b-claude-30k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T22:19:50.317589(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of harborwater/open-llama-3b-claude-30k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model harborwater/open-llama-3b-claude-30k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T22:19:50.317589(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5a40657b6cff5e03c37932fccaa269262dd32874 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | aksheyd/test2 | [
"region:us"
] | 2023-12-02T22:32:44+00:00 | {} | 2023-12-02T22:33:32+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |