sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
0c88dc959fd721314f8ad736a96057cf1665e852 | # AutoTrain Dataset for project: oaoqoqkaksk
## Dataset Description
This dataset has been automatically processed by AutoTrain for project oaoqoqkaksk.
### Languages
The BCP-47 code for the dataset's language is en2nl.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": "\u00de\u00e6t Sunnanrastere onl\u00edcnescynn",
"source": "The Sun raster image format"
},
{
"target": "Lundon",
"source": "Gordon"
}
]
```
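For illustration, the sample records above can be treated as plain source/target pairs. This is a stdlib-only sketch; the variable names are assumptions of this example, not part of the dataset.

```python
# The two records shown above as plain Python dicts; "\u00de" etc. are the
# Old English characters that appear escaped in the JSON sample.
pairs = [
    {"target": "\u00de\u00e6t Sunnanrastere onl\u00edcnescynn",
     "source": "The Sun raster image format"},
    {"target": "Lundon", "source": "Gordon"},
]

# Split into parallel source/target lists, as a translation pipeline would.
sources = [p["source"] for p in pairs]
targets = [p["target"] for p in pairs]
```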
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "Value(dtype='string', id=None)",
"source": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1528 |
| valid | 383 |
| Tritkoman/ENtoANGGNOME | [
"task_categories:translation",
"language:en",
"language:nl",
"region:us"
] | 2022-10-29T17:30:11+00:00 | {"language": ["en", "nl"], "task_categories": ["translation"]} | 2022-10-29T17:45:18+00:00 |
d00918e71905f1a4f4696d0e61a979cfe8ccee01 | Dfggggvvhg | Zxol/Dfv | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | 2022-10-29T18:46:09+00:00 | {"license": "bigscience-bloom-rail-1.0"} | 2022-10-29T18:46:54+00:00 |
a6326ffae85441615a056dbfea9ce8131b1d67a6 | mariopeng/openIPAseq2seq | [
"license:unlicense",
"region:us"
] | 2022-10-29T18:49:57+00:00 | {"license": "unlicense"} | 2022-10-29T19:11:57+00:00 |
|
b4b871e5d5f20e77218d34aabfd7e09f782fedd0 |
# Dataset Description
## Structure
- Consists of 5 fields
- Each row corresponds to a policy: a sequence of actions from an initial `<START>` state, with corresponding rewards at each step.
## Fields
`steps`, `step_attn_masks`, `rewards`, `actions`, `dones`
## Field descriptions
- `steps` (List of lists of `Int`s) - token IDs for every step in the policy sequence (tokenized with the `roberta-base` tokenizer, since `roberta-base` is used to encode each step of a recipe)
- `step_attn_masks` (List of lists of `Int`s) - attention masks corresponding to `steps`
- `rewards` (List of `Float`s) - sequence of rewards (normalized between 0 and 1) assigned per step
- `actions` (List of lists of `Int`s) - sequence of actions (one-hot encoded, as the action space is discrete). There are `33` possible actions: with a maximum of `16` steps per recipe, an action can range from `-16` to `+16`, and the class label is obtained by adding 16 to the action value
- `dones` (List of `Bool`s) - sequence of flags indicating whether the recipe is complete once that step is reached
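As a sketch of the action encoding described above (the helper names here are assumptions of this example, not part of the dataset):

```python
# Illustrative sketch of the action <-> class-label mapping.
NUM_ACTIONS = 33  # actions range from -16 to +16 inclusive

def action_to_class_label(action: int) -> int:
    """Shift a signed action in [-16, 16] into a class label in [0, 32]."""
    return action + 16

def action_to_one_hot(action: int) -> list[int]:
    """One-hot encode an action over the 33 discrete classes."""
    vec = [0] * NUM_ACTIONS
    vec[action_to_class_label(action)] = 1
    return vec
```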
## Dataset Size
- Number of rows = `2255673`
- Maximum number of steps per row = `16` | AnonymousSub/recipe_RL_data_roberta-base | [
"multilinguality:monolingual",
"language:en",
"region:us"
] | 2022-10-29T20:16:35+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "recipe RL roberta base", "tags": []} | 2022-11-03T15:38:06+00:00 |
d98f91761614aa984340c6ce99a333e4b2cd21b6 |
# Chibi Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by chibi_style"```
Use the (Chibi) tag alongside the embedding for best results
If it is too strong, just add [] around it.
Trained for 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/rXHJyFQ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/eocJJXg.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/8dA3EUO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/mmChRb3.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/sooxpE5.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may redistribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/chibi_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-29T20:44:17+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T20:50:26+00:00 |
008edafee29d0b086ea59c8b94a83fb12cb1aa00 |
# Dataset Card for S&P 500 Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
This dataset was created by combining the daily close prices for each stock in the current (as of 10/29/2022) S&P 500 index, dating back to January 1, 1970. The data comes from a Kaggle dataset (https://www.kaggle.com/datasets/paultimothymooney/stock-market-data) and was aggregated using pandas before being converted to a Hugging Face Dataset.
### Dataset Summary
This dataset has 407 columns: dates and the associated close prices of the S&P 500 stocks whose data could be accessed from the Kaggle dataset above. 94 stocks are missing due to issues loading their data (e.g., stock name changes such as FB to META); these items need further review. There are many NA values for stocks that did not yet exist in 1970.
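A minimal pandas sketch of the kind of aggregation described above, assuming per-ticker frames with `Date` and `Close` columns; the real pipeline over the Kaggle CSVs is not shown here.

```python
import pandas as pd

def combine_close_prices(frames: dict[str, pd.DataFrame]) -> pd.DataFrame:
    """Outer-join each ticker's close prices on Date, one column per ticker."""
    series = [
        df.set_index("Date")["Close"].rename(ticker)
        for ticker, df in frames.items()
    ]
    return pd.concat(series, axis=1).sort_index().reset_index()

# Tiny synthetic example: IBM has no 1970-01-02 row, so that cell becomes NaN,
# mirroring the NA values described above for stocks without early data.
frames = {
    "MMM": pd.DataFrame({"Date": ["1970-01-02", "1970-01-05"],
                         "Close": [10.0, 10.5]}),
    "IBM": pd.DataFrame({"Date": ["1970-01-05"], "Close": [3.2]}),
}
wide = combine_close_prices(frames)
```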
### Supported Tasks and Leaderboards
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
No split has currently been created for the dataset.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
https://www.kaggle.com/datasets/paultimothymooney/stock-market-data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@nick-carroll1](https://github.com/nick-carroll1) for adding this dataset.
---
dataset_info:
features:
- name: MMM
dtype: float64
- name: AOS
dtype: float64
- name: ABT
dtype: float64
- name: ABBV
dtype: float64
- name: ABMD
dtype: float64
- name: ACN
dtype: float64
- name: ATVI
dtype: float64
- name: ADM
dtype: float64
- name: ADBE
dtype: float64
- name: ADP
dtype: float64
- name: AAP
dtype: float64
- name: A
dtype: float64
- name: APD
dtype: float64
- name: AKAM
dtype: float64
- name: ALK
dtype: float64
- name: ALB
dtype: float64
- name: ARE
dtype: float64
- name: ALGN
dtype: float64
- name: ALLE
dtype: float64
- name: LNT
dtype: float64
- name: GOOG
dtype: float64
- name: MO
dtype: float64
- name: AMZN
dtype: float64
- name: AMD
dtype: float64
- name: AEE
dtype: float64
- name: AAL
dtype: float64
- name: AEP
dtype: float64
- name: AXP
dtype: float64
- name: AIG
dtype: float64
- name: AMT
dtype: float64
- name: AWK
dtype: float64
- name: AMP
dtype: float64
- name: ABC
dtype: float64
- name: AME
dtype: float64
- name: AMGN
dtype: float64
- name: APH
dtype: float64
- name: ADI
dtype: float64
- name: AON
dtype: float64
- name: APA
dtype: float64
- name: AAPL
dtype: float64
- name: AMAT
dtype: float64
- name: ANET
dtype: float64
- name: AJG
dtype: float64
- name: AIZ
dtype: float64
- name: T
dtype: float64
- name: ATO
dtype: float64
- name: ADSK
dtype: float64
- name: AZO
dtype: float64
- name: AVB
dtype: float64
- name: AVY
dtype: float64
- name: BAC
dtype: float64
- name: BAX
dtype: float64
- name: BDX
dtype: float64
- name: WRB
dtype: float64
- name: BBY
dtype: float64
- name: BIO
dtype: float64
- name: BIIB
dtype: float64
- name: BLK
dtype: float64
- name: BK
dtype: float64
- name: BA
dtype: float64
- name: BWA
dtype: float64
- name: BXP
dtype: float64
- name: BSX
dtype: float64
- name: BMY
dtype: float64
- name: AVGO
dtype: float64
- name: BR
dtype: float64
- name: BRO
dtype: float64
- name: CHRW
dtype: float64
- name: CDNS
dtype: float64
- name: CZR
dtype: float64
- name: CPT
dtype: float64
- name: CPB
dtype: float64
- name: COF
dtype: float64
- name: CAH
dtype: float64
- name: KMX
dtype: float64
- name: CAT
dtype: float64
- name: CBOE
dtype: float64
- name: CDW
dtype: float64
- name: CNC
dtype: float64
- name: CNP
dtype: float64
- name: CF
dtype: float64
- name: CRL
dtype: float64
- name: SCHW
dtype: float64
- name: CHTR
dtype: float64
- name: CMG
dtype: float64
- name: CB
dtype: float64
- name: CHD
dtype: float64
- name: CINF
dtype: float64
- name: CTAS
dtype: float64
- name: CSCO
dtype: float64
- name: C
dtype: float64
- name: CFG
dtype: float64
- name: CLX
dtype: float64
- name: CME
dtype: float64
- name: CMS
dtype: float64
- name: KO
dtype: float64
- name: CTSH
dtype: float64
- name: CL
dtype: float64
- name: CMCSA
dtype: float64
- name: CAG
dtype: float64
- name: COP
dtype: float64
- name: ED
dtype: float64
- name: COO
dtype: float64
- name: CPRT
dtype: float64
- name: GLW
dtype: float64
- name: CSGP
dtype: float64
- name: COST
dtype: float64
- name: CCI
dtype: float64
- name: CMI
dtype: float64
- name: DHI
dtype: float64
- name: DRI
dtype: float64
- name: DVA
dtype: float64
- name: DE
dtype: float64
- name: DAL
dtype: float64
- name: DVN
dtype: float64
- name: DXCM
dtype: float64
- name: FANG
dtype: float64
- name: DLR
dtype: float64
- name: DFS
dtype: float64
- name: DISH
dtype: float64
- name: DIS
dtype: float64
- name: DG
dtype: float64
- name: DLTR
dtype: float64
- name: D
dtype: float64
- name: DPZ
dtype: float64
- name: DOV
dtype: float64
- name: DOW
dtype: float64
- name: DTE
dtype: float64
- name: DD
dtype: float64
- name: EMN
dtype: float64
- name: ETN
dtype: float64
- name: EBAY
dtype: float64
- name: ECL
dtype: float64
- name: EIX
dtype: float64
- name: EW
dtype: float64
- name: EA
dtype: float64
- name: LLY
dtype: float64
- name: EMR
dtype: float64
- name: ENPH
dtype: float64
- name: EOG
dtype: float64
- name: EPAM
dtype: float64
- name: EFX
dtype: float64
- name: EQIX
dtype: float64
- name: EQR
dtype: float64
- name: ESS
dtype: float64
- name: EL
dtype: float64
- name: RE
dtype: float64
- name: ES
dtype: float64
- name: EXC
dtype: float64
- name: EXPE
dtype: float64
- name: EXPD
dtype: float64
- name: EXR
dtype: float64
- name: XOM
dtype: float64
- name: FFIV
dtype: float64
- name: FDS
dtype: float64
- name: FAST
dtype: float64
- name: FRT
dtype: float64
- name: FDX
dtype: float64
- name: FITB
dtype: float64
- name: FRC
dtype: float64
- name: FE
dtype: float64
- name: FIS
dtype: float64
- name: FISV
dtype: float64
- name: FLT
dtype: float64
- name: FMC
dtype: float64
- name: F
dtype: float64
- name: FTNT
dtype: float64
- name: FBHS
dtype: float64
- name: FOXA
dtype: float64
- name: BEN
dtype: float64
- name: FCX
dtype: float64
- name: GRMN
dtype: float64
- name: IT
dtype: float64
- name: GNRC
dtype: float64
- name: GD
dtype: float64
- name: GE
dtype: float64
- name: GIS
dtype: float64
- name: GM
dtype: float64
- name: GPC
dtype: float64
- name: GILD
dtype: float64
- name: GPN
dtype: float64
- name: HAL
dtype: float64
- name: HIG
dtype: float64
- name: HAS
dtype: float64
- name: HCA
dtype: float64
- name: HSIC
dtype: float64
- name: HSY
dtype: float64
- name: HES
dtype: float64
- name: HPE
dtype: float64
- name: HLT
dtype: float64
- name: HOLX
dtype: float64
- name: HD
dtype: float64
- name: HON
dtype: float64
- name: HRL
dtype: float64
- name: HST
dtype: float64
- name: HPQ
dtype: float64
- name: HUM
dtype: float64
- name: HBAN
dtype: float64
- name: HII
dtype: float64
- name: IBM
dtype: float64
- name: IEX
dtype: float64
- name: IDXX
dtype: float64
- name: ITW
dtype: float64
- name: ILMN
dtype: float64
- name: INCY
dtype: float64
- name: IR
dtype: float64
- name: INTC
dtype: float64
- name: ICE
dtype: float64
- name: IP
dtype: float64
- name: IPG
dtype: float64
- name: IFF
dtype: float64
- name: INTU
dtype: float64
- name: ISRG
dtype: float64
- name: IVZ
dtype: float64
- name: IRM
dtype: float64
- name: JBHT
dtype: float64
- name: JKHY
dtype: float64
- name: JNJ
dtype: float64
- name: JCI
dtype: float64
- name: JPM
dtype: float64
- name: JNPR
dtype: float64
- name: K
dtype: float64
- name: KEY
dtype: float64
- name: KEYS
dtype: float64
- name: KMB
dtype: float64
- name: KIM
dtype: float64
- name: KLAC
dtype: float64
- name: KHC
dtype: float64
- name: KR
dtype: float64
- name: LH
dtype: float64
- name: LRCX
dtype: float64
- name: LVS
dtype: float64
- name: LDOS
dtype: float64
- name: LNC
dtype: float64
- name: LYV
dtype: float64
- name: LKQ
dtype: float64
- name: LMT
dtype: float64
- name: LOW
dtype: float64
- name: LYB
dtype: float64
- name: MRO
dtype: float64
- name: MPC
dtype: float64
- name: MKTX
dtype: float64
- name: MAR
dtype: float64
- name: MMC
dtype: float64
- name: MLM
dtype: float64
- name: MA
dtype: float64
- name: MKC
dtype: float64
- name: MCD
dtype: float64
- name: MCK
dtype: float64
- name: MDT
dtype: float64
- name: MRK
dtype: float64
- name: MET
dtype: float64
- name: MTD
dtype: float64
- name: MGM
dtype: float64
- name: MCHP
dtype: float64
- name: MU
dtype: float64
- name: MSFT
dtype: float64
- name: MAA
dtype: float64
- name: MHK
dtype: float64
- name: MOH
dtype: float64
- name: TAP
dtype: float64
- name: MDLZ
dtype: float64
- name: MPWR
dtype: float64
- name: MNST
dtype: float64
- name: MCO
dtype: float64
- name: MOS
dtype: float64
- name: MSI
dtype: float64
- name: MSCI
dtype: float64
- name: NDAQ
dtype: float64
- name: NTAP
dtype: float64
- name: NFLX
dtype: float64
- name: NWL
dtype: float64
- name: NEM
dtype: float64
- name: NWSA
dtype: float64
- name: NEE
dtype: float64
- name: NI
dtype: float64
- name: NDSN
dtype: float64
- name: NSC
dtype: float64
- name: NTRS
dtype: float64
- name: NOC
dtype: float64
- name: NCLH
dtype: float64
- name: NRG
dtype: float64
- name: NVDA
dtype: float64
- name: NVR
dtype: float64
- name: NXPI
dtype: float64
- name: ORLY
dtype: float64
- name: OXY
dtype: float64
- name: ODFL
dtype: float64
- name: OMC
dtype: float64
- name: OKE
dtype: float64
- name: PCAR
dtype: float64
- name: PKG
dtype: float64
- name: PH
dtype: float64
- name: PAYX
dtype: float64
- name: PAYC
dtype: float64
- name: PNR
dtype: float64
- name: PEP
dtype: float64
- name: PKI
dtype: float64
- name: PFE
dtype: float64
- name: PM
dtype: float64
- name: PSX
dtype: float64
- name: PNW
dtype: float64
- name: PXD
dtype: float64
- name: PNC
dtype: float64
- name: POOL
dtype: float64
- name: PPG
dtype: float64
- name: PFG
dtype: float64
- name: PG
dtype: float64
- name: PLD
dtype: float64
- name: PRU
dtype: float64
- name: PEG
dtype: float64
- name: PTC
dtype: float64
- name: PHM
dtype: float64
- name: QRVO
dtype: float64
- name: PWR
dtype: float64
- name: QCOM
dtype: float64
- name: DGX
dtype: float64
- name: RL
dtype: float64
- name: RJF
dtype: float64
- name: O
dtype: float64
- name: REG
dtype: float64
- name: REGN
dtype: float64
- name: RF
dtype: float64
- name: RSG
dtype: float64
- name: RMD
dtype: float64
- name: RHI
dtype: float64
- name: ROK
dtype: float64
- name: ROL
dtype: float64
- name: ROP
dtype: float64
- name: ROST
dtype: float64
- name: RCL
dtype: float64
- name: CRM
dtype: float64
- name: SBAC
dtype: float64
- name: SLB
dtype: float64
- name: STX
dtype: float64
- name: SEE
dtype: float64
- name: SRE
dtype: float64
- name: NOW
dtype: float64
- name: SHW
dtype: float64
- name: SBNY
dtype: float64
- name: SPG
dtype: float64
- name: SWKS
dtype: float64
- name: SO
dtype: float64
- name: LUV
dtype: float64
- name: SWK
dtype: float64
- name: SBUX
dtype: float64
- name: STT
dtype: float64
- name: SYK
dtype: float64
- name: SIVB
dtype: float64
- name: SYF
dtype: float64
- name: SNPS
dtype: float64
- name: TMUS
dtype: float64
- name: TROW
dtype: float64
- name: TTWO
dtype: float64
- name: TRGP
dtype: float64
- name: TEL
dtype: float64
- name: TDY
dtype: float64
- name: TSLA
dtype: float64
- name: TXN
dtype: float64
- name: TXT
dtype: float64
- name: TMO
dtype: float64
- name: TJX
dtype: float64
- name: TSCO
dtype: float64
- name: TDG
dtype: float64
- name: TRV
dtype: float64
- name: TYL
dtype: float64
- name: TSN
dtype: float64
- name: USB
dtype: float64
- name: UDR
dtype: float64
- name: ULTA
dtype: float64
- name: UNP
dtype: float64
- name: UAL
dtype: float64
- name: UPS
dtype: float64
- name: URI
dtype: float64
- name: UNH
dtype: float64
- name: UHS
dtype: float64
- name: VTR
dtype: float64
- name: VRSN
dtype: float64
- name: VRSK
dtype: float64
- name: VZ
dtype: float64
- name: VRTX
dtype: float64
- name: VFC
dtype: float64
- name: V
dtype: float64
- name: VMC
dtype: float64
- name: WAB
dtype: float64
- name: WBA
dtype: float64
- name: WMT
dtype: float64
- name: WM
dtype: float64
- name: WAT
dtype: float64
- name: WEC
dtype: float64
- name: WFC
dtype: float64
- name: WST
dtype: float64
- name: WDC
dtype: float64
- name: WRK
dtype: float64
- name: WY
dtype: float64
- name: WHR
dtype: float64
- name: WMB
dtype: float64
- name: WTW
dtype: float64
- name: GWW
dtype: float64
- name: WYNN
dtype: float64
- name: XEL
dtype: float64
- name: XYL
dtype: float64
- name: YUM
dtype: float64
- name: ZBRA
dtype: float64
- name: ZBH
dtype: float64
- name: ZION
dtype: float64
- name: ZTS
dtype: float64
- name: Date
dtype: timestamp[ns]
splits:
- name: train
num_bytes: 44121086
num_examples: 13322
download_size: 0
dataset_size: 44121086
---
# Dataset Card for "sp500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nick-carroll1/sp500 | [
"region:us"
] | 2022-10-29T22:20:49+00:00 | {} | 2022-10-29T23:08:46+00:00 |
a45b65f64bc7411bb5d29b5076eb12eb9add8103 | muchojarabe/gato_slider | [
"license:cc",
"region:us"
] | 2022-10-30T04:00:59+00:00 | {"license": "cc"} | 2022-10-30T04:01:23+00:00 |
|
d77d7ad3c624c51030f2f32c83e892b3d620b3d4 |
# Dataset Card for ProsocialDialog Dataset
## Dataset Description
- **Repository:** [Dataset and Model](https://github.com/skywalker023/prosocial-dialog)
- **Paper:** [ProsocialDialog: A Prosocial Backbone for Conversational Agents](https://aclanthology.org/2022.emnlp-main.267/)
- **Point of Contact:** [Hyunwoo Kim](mailto:[email protected])
## Dataset Summary
ProsocialDialog is the first large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K unique RoTs, and 497K dialogue safety labels accompanied by free-form rationales.
## Supported Tasks
* Dialogue response generation
* Dialogue safety prediction
* Rules-of-thumb generation
## Languages
English
## Dataset Structure
### Data Attributes
attribute | type | description
--- | --- | ---
`context` | str | the potentially unsafe utterance
`response` | str | the guiding utterance grounded on rules-of-thumb (`rots`)
`rots` | list of str\|null | the relevant rules-of-thumb for `context`s *not* labeled as \_\_casual\_\_
`safety_label` | str | the final verdict of the context according to `safety_annotations`: {\_\_casual\_\_, \_\_possibly\_needs\_caution\_\_, \_\_probably\_needs\_caution\_\_, \_\_needs\_caution\_\_, \_\_needs\_intervention\_\_}
`safety_annotations` | list of str | raw annotations from three workers: {casual, needs caution, needs intervention}
`safety_annotation_reasons` | list of str | the reasons behind the safety annotations in free-form text from each worker
`source` | str | the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics_amt, ethics_reddit}
`etc` | str\|null | other information
`dialogue_id` | int | the dialogue index
`response_id` | int | the response index
`episode_done` | bool | an indicator of whether it is the end of the dialogue
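Illustrative sketch only: the five safety labels form an ordinal scale, so a simple severity lookup can drive filtering. The field names follow the table above; the helper and threshold are assumptions of this example, not part of the dataset.

```python
# Severity ordering of the five safety labels (an assumption of this sketch,
# following the order they are listed in above).
SEVERITY = {
    "__casual__": 0,
    "__possibly_needs_caution__": 1,
    "__probably_needs_caution__": 2,
    "__needs_caution__": 3,
    "__needs_intervention__": 4,
}

def needs_response_care(record: dict, threshold: int = 1) -> bool:
    """True when the context's safety label reaches the given severity."""
    return SEVERITY[record["safety_label"]] >= threshold
```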
## Dataset Creation
To create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multiturn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2205.12688).
## Additional Information
### Citation
Please cite our work if you found the resources in this repository useful:
```
@inproceedings{kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
``` | allenai/prosocial-dialog | [
"task_categories:conversational",
"task_categories:text-classification",
"task_ids:dialogue-generation",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"source_datasets:original",
"source_datasets:extended|social_bias_frames",
"language:en",
"license:cc-by-4.0",
"dialogue",
"dialogue safety",
"social norm",
"rules-of-thumb",
"arxiv:2205.12688",
"region:us"
] | 2022-10-30T04:24:12+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "machine-generated"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "100K<n<1M"], "source_datasets": ["original", "extended|social_bias_frames"], "task_categories": ["conversational", "text-classification"], "task_ids": ["dialogue-generation", "multi-class-classification"], "pretty_name": "ProsocialDialog", "tags": ["dialogue", "dialogue safety", "social norm", "rules-of-thumb"]} | 2023-02-03T07:58:29+00:00 |
3e9c4eb6eb75d1a72396ab005bcd0abdcf319060 | # Dataset Card for "sroie_document_understanding"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
This dataset is an enriched version of SROIE 2019 dataset with additional labels for line descriptions and line totals for OCR and layout understanding.
## Dataset Structure
```python
DatasetDict({
train: Dataset({
features: ['image', 'ocr'],
num_rows: 652
})
})
```
### Data Fields
```python
{
'image': PIL Image object,
'ocr': [
# text box 1
{
'box': [[float, float], [float, float], [float, float], [float, float]],
'label': str, # "other" | "company" | "address" | "date" | "line_description" | "line_total" | "total"
'text': str
},
...
# text box N
{
'box': [[float, float], [float, float], [float, float], [float, float]],
'label': str,
'text': str,
}
]
}
```
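A hypothetical validator for the schema above (not part of the dataset loader): it checks the four-point polygon, the seven labels, and the text string.

```python
# Labels taken from the schema comment above.
VALID_LABELS = {
    "other", "company", "address", "date",
    "line_description", "line_total", "total",
}

def is_valid_ocr_entry(entry: dict) -> bool:
    """Check one 'ocr' entry: a 4-point box of (x, y) pairs, a known label,
    and a text string."""
    box_ok = (
        len(entry["box"]) == 4
        and all(len(point) == 2 for point in entry["box"])
    )
    return (
        box_ok
        and entry["label"] in VALID_LABELS
        and isinstance(entry["text"], str)
    )
```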
## Dataset Creation
### Source Data
The dataset was obtained from [ICDAR2019 Competition on Scanned Receipt OCR and Information Extraction](https://rrc.cvc.uab.es/?ch=13)
### Annotations
#### Annotation process
Additional labels for receipt line items were added using the open-source [labelme](https://github.com/wkentaro/labelme) tool.
#### Who are the annotators?
Arvind Rajan (added labels to the original text boxes from the source)
## Additional Information
### Licensing Information
MIT License
### Contributions
Thanks to [@arvindrajan92](https://github.com/arvindrajan92) for adding this dataset. | arvindrajan92/sroie_document_understanding | [
"license:mit",
"region:us"
] | 2022-10-30T04:49:57+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ocr", "list": [{"name": "box", "sequence": {"sequence": "float64"}}, {"name": "label", "dtype": "string"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 267317016.0, "num_examples": 652}], "download_size": 217146103, "dataset_size": 267317016.0}} | 2022-10-30T06:30:53+00:00 |
06581f273fd26b82fb36eecb48ddda298564f29f |
Free Fonts for Simplified Chinese, downloaded from [Google Fonts](https://fonts.google.com/?subset=chinese-simplified). | breezedeus/openfonts | [
"license:ofl-1.1",
"region:us"
] | 2022-10-30T06:29:57+00:00 | {"license": "ofl-1.1"} | 2022-10-30T06:37:11+00:00 |
1a267499f05a2ada702cca61e9caf6ce4ed0cd6d | Environmental news | api19750904/efeverde | [
"region:us"
] | 2022-10-30T09:29:19+00:00 | {} | 2022-10-30T09:30:29+00:00 |
a8f3bebe787e1b70a2bc5d3f6025b414a2eb4467 |
# Wlop Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by wlop_style"```
Use the embedding with one of [SirVeggie's](https://huggingface.co/SirVeggie) Wlop models for best results
If it is too strong, just add [] around it.
Trained for 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/ImByEK5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/BndPSqd.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/4cB2B28.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Hw5FMID.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ddwJwoO.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may redistribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/wlop_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-30T09:36:54+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-03T23:34:09+00:00 |
3971d9415584a57e6564fcc83310433c52a7bb82 |
# Torino Artist Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by torino_art"```
If it is too strong, just add [] around it.
Trained for 12800 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/xnRZgRb.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/AcHsCMX.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/egIlKhy.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/nZQh3da.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/V9UFqn2.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may redistribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/torino_art | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-30T09:47:07+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-30T09:53:46+00:00 |
a89cc21ad1f15b9fc5dd53a517e8ab7611315b3e | Foxes/image | [
"license:other",
"region:us"
] | 2022-10-30T09:58:09+00:00 | {"license": "other"} | 2022-10-30T10:03:18+00:00 |
|
1104b20e2e295532383774162647922afd7ae301 | Zakia/test | [
"license:cc-by-4.0",
"doi:10.57967/hf/0074",
"region:us"
] | 2022-10-30T10:14:09+00:00 | {"license": "cc-by-4.0"} | 2022-10-30T10:14:09+00:00 |
|
6892e2e8f10b7b385041ec817f024c8dfa4cbad2 | # Dataset Card for "answerable_tydiqa_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_raw | [
"region:us"
] | 2022-10-30T10:18:47+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "golds", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}, {"name": "context", "dtype": "string"}, {"name": "seq_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21022889, "num_examples": 29868}, {"name": "validation", "num_bytes": 2616173, "num_examples": 3712}], "download_size": 16292808, "dataset_size": 23639062}} | 2022-10-30T10:19:07+00:00 |
5dc06479106fbe781b1d1bb3c5da16ae4f3fdde0 | # Dataset Card for "answerable_tydiqa_raw_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_raw_split | [
"region:us"
] | 2022-10-30T10:19:23+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "seq_id", "dtype": "string"}, {"name": "golds", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 32809511, "num_examples": 129290}, {"name": "validation", "num_bytes": 4034498, "num_examples": 15801}], "download_size": 17092210, "dataset_size": 36844009}} | 2022-10-30T10:19:44+00:00 |
9e80f0e386c0c307eea98787ffa2dc558105cbfb | # Dataset Card for "answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6 | [
"region:us"
] | 2022-10-30T10:23:50+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "golds", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}, {"name": "context", "dtype": "string"}, {"name": "seq_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21022889, "num_examples": 29868}, {"name": "validation", "num_bytes": 2616173, "num_examples": 3712}], "download_size": 16292808, "dataset_size": 23639062}} | 2022-10-30T10:24:10+00:00 |
629ddc29395be3b5f982d8daf6d12731d7364931 | # Dataset Card for "answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4 | [
"region:us"
] | 2022-10-30T10:26:11+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "seq_id", "dtype": "string"}, {"name": "golds", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 32809511, "num_examples": 129290}, {"name": "validation", "num_bytes": 4034498, "num_examples": 15801}], "download_size": 17092210, "dataset_size": 36844009}} | 2022-10-30T10:26:33+00:00 |
6f9b86b4c5141cc8f5b8db89af92ec93ac7ea3d1 | Foxter/1 | [
"region:us"
] | 2022-10-30T11:09:13+00:00 | {} | 2022-10-30T11:12:52+00:00 |
|
c663a7a901ed9bfe086d513ce9de7aa2dbea5680 |
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by assassin_style </em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by assassin_style-6500</em></li>
<li>10,000 steps <em>Usage: art by assassin_style-10000</em> </li>
<li>15,000 steps <em>Usage: art by assassin_style </em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/RhE7Qce.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/wVOH8GU.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/YaBbNNK.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/63HpAf1.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<em> click the image to enlarge</em>
<a href="https://i.imgur.com/nrkCPEf.jpg" target="_blank"><img height="50%" width="50%" src="https://i.imgur.com/nrkCPEf.jpg"></a>
| zZWipeoutZz/assassin_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-30T11:37:45+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-30T13:00:51+00:00 |
f62c99bb7b1c00254d300679172802b400281cfe |
MovieLens 20M data with training and test sets split by userId for GAUC (grouped AUC) evaluation.
More details can be found at:
https://github.com/auxten/edgeRec/blob/main/example/movielens/readme.md
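GAUC (grouped AUC) is the impression-weighted average of per-user AUC, which is why the split keeps all of a given user's ratings on the same side. A minimal pure-Python sketch of the metric (the functions and toy values below are illustrative, not part of this repository):

```python
from collections import defaultdict

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation; ties broken arbitrarily."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank_sum = sum(r + 1 for r, i in enumerate(order) if labels[i] == 1)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    if n_pos == 0 or n_neg == 0:
        return None  # AUC is undefined for a single-class user
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def gauc(user_ids, labels, scores):
    """Average per-user AUC, weighted by each user's impression count.

    Users whose labels are all positive or all negative are skipped,
    which is the usual convention for GAUC.
    """
    by_user = defaultdict(lambda: ([], []))
    for u, y, s in zip(user_ids, labels, scores):
        by_user[u][0].append(y)
        by_user[u][1].append(s)
    num = den = 0.0
    for ys, ss in by_user.values():
        a = auc(ys, ss)
        if a is not None:
            num += len(ys) * a
            den += len(ys)
    return num / den

# Toy example: user 1 is ranked perfectly, user 2 is ranked inversely.
g = gauc([1, 1, 2, 2], [1, 0, 0, 1], [0.9, 0.1, 0.8, 0.2])
```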
## User split
The user split status is stored in the `user` table; see the SQL below:
```sql
create table movies
(
movieId INTEGER,
title TEXT,
genres TEXT
);
create table ratings
(
userId INTEGER,
movieId INTEGER,
rating FLOAT,
timestamp INTEGER
);
create table tags
(
userId INTEGER,
movieId INTEGER,
tag TEXT,
timestamp INTEGER
);
-- import data from csv, do it with any tool
select count(distinct userId) from ratings; -- 138,493 users
create table user as select distinct userId, 0 as is_train from ratings;
-- choose 100000 random user as train user
update user
set is_train = 1
where userId in
(SELECT userId
FROM (select distinct userId from ratings)
ORDER BY RANDOM()
LIMIT 100000);
select count(*) from user where is_train != 1; -- 38,493 test users
-- split train and test set of movielens-20m ratings
create table ratings_train as
select r.userId, movieId, rating, timestamp
from ratings r
left join user u on r.userId = u.userId
where is_train = 1;
create table ratings_test as
select r.userId, movieId, rating, timestamp
from ratings r
left join user u on r.userId = u.userId
where is_train = 0;
select count(*) from ratings_train; --14,393,526
select count(*) from ratings_test; --5,606,737
select count(*) from ratings; --20,000,263
```
## User feature
`user_feature_train` and `user_feature_test` are pre-processed user feature tables;
see the SQL below:
```sql
-- user feature prepare
create table user_feature_train as
select r1.userId, ugenres, avgRating, cntRating
from
(
select userId, avg(rating) as avgRating,
count(rating) cntRating
from ratings_train r1 group by userId
) r1 left join (
select userId,
group_concat(genres) as ugenres
from ratings_train r
left join movies t2 on r.movieId = t2.movieId
where r.rating > 3.5
group by userId
) r2 on r2.userId = r1.userId
-- user feature prepare
create table user_feature_test as
select r1.userId, ugenres, avgRating, cntRating
from
(
select userId, avg(rating) as avgRating,
count(rating) cntRating
from ratings_test r1 group by userId
) r1 left join (
select userId,
group_concat(genres) as ugenres
from ratings_test r
left join movies t2 on r.movieId = t2.movieId
where r.rating > 3.5
group by userId
) r2 on r2.userId = r1.userId
```
## User behavior
```sql
-- per-user ratings ordered by recency (must be created before the ub_* tables)
create table ratings_train_desc as
select r.userId, movieId, rating, timestamp
from ratings_train r order by r.userId, timestamp desc;

create table ratings_test_desc as
select r.userId, movieId, rating, timestamp
from ratings_test r order by r.userId, timestamp desc;

-- concatenated behavior sequences per user; row order comes from the *_desc tables
create table ub_train as
select userId, group_concat(movieId) movieIds, group_concat(timestamp) timestamps
from ratings_train_desc group by userId;

create table ub_test as
select userId, group_concat(movieId) movieIds, group_concat(timestamp) timestamps
from ratings_test_desc group by userId;
```
| auxten/movielens-20m | [
"license:apache-2.0",
"region:us"
] | 2022-10-30T13:47:43+00:00 | {"license": "apache-2.0"} | 2022-10-30T13:57:36+00:00 |
fb223d9a4bb5dc26d0ed573c2318f3174c0b7b06 | KEEPYs/titou | [
"license:openrail",
"region:us"
] | 2022-10-30T14:24:12+00:00 | {"license": "openrail"} | 2022-10-30T14:25:31+00:00 |
|
b5c56fd50f5993b1cebb86586d286981ec05ae72 |
# Dataset Card for "lmqg/qg_annotation"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This data set contains questions generated by different models together with human annotations, used to measure the correlation of automatic metrics with human judgments in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```python
{
"correctness": 1.8,
"grammaticality": 3.0,
"understandability": 2.4,
"prediction": "What trade did the Ming dynasty have a shortage of?",
"Bleu_4": 0.4961682999359617,
"METEOR": 0.3572683356086923,
"ROUGE_L": 0.7272727272727273,
"BERTScore": 0.9142221808433532,
"MoverScore": 0.6782580808848975,
"reference_raw": "What important trade did the Ming Dynasty have with Tibet?",
"answer_raw": "horse trade",
"paragraph_raw": "Some scholars note that Tibetan leaders during the Ming frequently engaged in civil war and conducted their own foreign diplomacy with neighboring states such as Nepal. Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's shortage of horses for warfare and thus the importance of the horse trade with Tibet. Others argue that the significant religious nature of the relationship of the Ming court with Tibetan lamas is underrepresented in modern scholarship. In hopes of reviving the unique relationship of the earlier Mongol leader Kublai Khan (r. 1260\u20131294) and his spiritual superior Drog\u00f6n Ch\u00f6gyal Phagpa (1235\u20131280) of the Sakya school of Tibetan Buddhism, the Yongle Emperor (r. 1402\u20131424) made a concerted effort to build a secular and religious alliance with Deshin Shekpa (1384\u20131415), the Karmapa of the Karma Kagyu school. However, the Yongle Emperor's attempts were unsuccessful.",
"sentence_raw": "Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's shortage of horses for warfare and thus the importance of the horse trade with Tibet.",
"reference_norm": "what important trade did the ming dynasty have with tibet ?",
"model": "T5 Large"
}
```
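Given rows of this shape, the metric-vs-human comparison amounts to correlating a human dimension (e.g. `correctness`) with an automatic metric (e.g. `BERTScore`) across annotated predictions. A minimal Pearson-correlation sketch on invented toy rows that follow the field names above (the real study uses the full annotated set, not these values):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy annotated rows mimicking the dataset schema (values invented).
rows = [
    {"correctness": 1.8, "BERTScore": 0.91},
    {"correctness": 2.6, "BERTScore": 0.96},
    {"correctness": 1.2, "BERTScore": 0.88},
]
r = pearson([x["correctness"] for x in rows],
            [x["BERTScore"] for x in rows])
```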
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qg_annotation | [
"multilinguality:monolingual",
"size_categories:<1K",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
] | 2022-10-30T14:26:50+00:00 | {"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "<1K", "pretty_name": "QG Annotation"} | 2022-10-30T15:08:30+00:00 |
074904da08bd9c88c246cc4108954dd5b9df96ce | karabas/Medal | [
"license:apache-2.0",
"region:us"
] | 2022-10-30T16:51:41+00:00 | {"license": "apache-2.0"} | 2022-10-30T20:02:07+00:00 |
|
66bebf8a6d23d46f11d9528c9b9c01cad0a78d2d | efeverde | api19750904/efeverde_5_cat_lem | [
"region:us"
] | 2022-10-30T17:42:40+00:00 | {} | 2022-10-30T17:43:32+00:00 |
c6f24b11060e96ccd426b767f308acb33cd716fd | karabas/small_medals | [
"license:unlicense",
"doi:10.57967/hf/0076",
"region:us"
] | 2022-10-30T20:03:37+00:00 | {"license": "unlicense"} | 2022-10-30T20:16:56+00:00 |
|
3ce53de6c851bc3fc44e9c2733d69db7e1185fc3 | LVN/photo | [
"license:openrail",
"region:us"
] | 2022-10-30T20:51:36+00:00 | {"license": "openrail"} | 2022-10-30T21:11:15+00:00 |
|
b9dcaee150e77ece89e2a10c197ae823ea27a685 | Andris2067/VPurvitis2 | [
"license:openrail",
"region:us"
] | 2022-10-30T22:31:34+00:00 | {"license": "openrail"} | 2022-10-30T23:14:16+00:00 |
|
339ce0d6a41439bac7b42fd71405e68253ed1dbf |
# Dataset Card for LILA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Tutorial](#tutorial)
- [Working with Taxonomies](#working-with-taxonomies)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lila.science/
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
LILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.
This data set marks the first time that disparate camera trap data sets have been aggregated into a single training environment with a single [taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
This data set consists of only camera trap image data sets, whereas the broader [LILA](https://lila.science/) website also has other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those that want to harness ML for this topic.
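Because the constituent data sets use different label vocabularies, working with the unified taxonomy amounts to mapping each source (data set, label) pair onto a shared name. A minimal sketch with an invented two-entry mapping — the real mapping files are published on lila.science, and the entries below are only illustrative:

```python
# Hypothetical (dataset, source-label) -> shared-taxonomy mapping.
# Invented entries for illustration; the real taxonomy CSVs live on lila.science.
TAXONOMY = {
    ("Caltech Camera Traps", "opossum"): "didelphis virginiana",
    ("WCS Camera Traps", "peccary"): "tayassu pecari",
}

def to_taxon(dataset, label):
    """Map a per-dataset label to the shared taxonomy, or None if unmapped."""
    return TAXONOMY.get((dataset, label.lower()))
```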
See below for information about each specific dataset that LILA contains:
<details>
<summary> Caltech Camera Traps </summary>
This data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.
More information about this data set is available [here](https://beerys.github.io/CaltechCameraTraps/).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [email protected].
If you use this data set, please cite the associated manuscript:
```bibtex
@inproceedings{DBLP:conf/eccv/BeeryHP18,
author = {Sara Beery and
Grant Van Horn and
Pietro Perona},
title = {Recognition in Terra Incognita},
booktitle = {Computer Vision - {ECCV} 2018 - 15th European Conference, Munich,
Germany, September 8-14, 2018, Proceedings, Part {XVI}},
pages = {472--489},
year = {2018},
crossref = {DBLP:conf/eccv/2018-16},
url = {https://doi.org/10.1007/978-3-030-01270-0\_28},
doi = {10.1007/978-3-030-01270-0\_28},
timestamp = {Mon, 08 Oct 2018 17:08:07 +0200},
biburl = {https://dblp.org/rec/bib/conf/eccv/BeeryHP18},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
</details>
<details>
<summary> ENA24 </summary>
This data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are “American Crow”, “American Black Bear”, and “Dog”.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{yousif2019dynamic,
title={Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild},
author={Yousif, Hayder and Kays, Roland and He, Zhihai},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2019},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif]([email protected]).
</details>
<details>
<summary> Missouri Camera Traps </summary>
This data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 × 1080 to 2048 × 1536. Sequence lengths vary from 3 to more than 300 frames.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{zhang2016animal,
title={Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification},
author={Zhang, Zhi and He, Zhihai and Cao, Guitao and Cao, Wenming},
journal={IEEE Transactions on Multimedia},
volume={18},
number={10},
pages={2079--2092},
year={2016},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif]([email protected]) and [Zhi Zhang]([email protected]).
</details>
<details>
<summary> North American Camera Trap Images (NACTI) </summary>
This data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. We have also added bounding box annotations to 8892 images (mostly vehicles and birds).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{tabak2019machine,
title={Machine learning to classify animal species in camera trap images: Applications in ecology},
author={Tabak, Michael A and Norouzzadeh, Mohammad S and Wolfson, David W and Sweeney, Steven J and VerCauteren, Kurt C and Snow, Nathan P and Halseth, Joseph M and Di Salvo, Paul A and Lewis, Jesse S and White, Michael D and others},
journal={Methods in Ecology and Evolution},
volume={10},
number={4},
pages={585--590},
year={2019},
publisher={Wiley Online Library}
}
```
For questions about this data set, contact [[email protected]]([email protected]).
</details>
<details>
<summary> WCS Camera Traps </summary>
This data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the [Wildlife Conservation Society](https://www.wcs.org/). The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.
Sequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so – as is the case with most camera trap data sets – empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set [on the LILA website](https://lila.science/datasets/wcscameratraps).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Wellington Camera Traps </summary>
This data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{anton2018monitoring,
title={Monitoring the mammalian fauna of urban areas using remote cameras and citizen science},
author={Anton, Victor and Hartley, Stephen and Geldenhuis, Andre and Wittmer, Heiko U},
journal={Journal of Urban Ecology},
volume={4},
number={1},
pages={juy002},
year={2018},
publisher={Oxford University Press}
}
```
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [Victor Anton]([email protected]).
</details>
<details>
<summary> Island Conservation Camera Traps </summary>
This data set contains approximately 123,000 camera trap images from 123 camera locations from 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.
The most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented between multiple island sites representing significantly different ecosystems (tropical forest, dry forest, and temperate forests). Additionally, this data set represents data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. We have also included approximately 65,000 bounding box annotations for about 50,000 images.
In general cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.
For questions about this data set, contact [David Will]([email protected]) at Island Conservation.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.
</details>
<details>
<summary> Channel Islands Camera Traps </summary>
This data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent1 (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.
If you use these data in a publication or report, please use the following citation:
The Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.
For questions about this data set, contact [Nathaniel Rindlaub]([email protected]) at The Nature Conservancy.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.
</details>
<details>
<summary> Idaho Camera Traps </summary>
This data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (“deer”, “elk”, and “cattle” are the most common animal classes), but labels also include some state indicators (e.g. “snow on lens”, “foggy lens”). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.
The metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).
Images were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.
</details>
<details>
<summary> Snapshot Serengeti </summary>
This data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the [Snapshot Serengeti project](https://snapshotserengeti.org/) -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.
Labels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomson’s gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshotserengeti-v-2-0/SnapshotSerengeti_S1-11_v2.1.species_list.csv). We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.
The images and species-level labels are described in more detail in the associated manuscript:
```bibtex
@misc{dryad_5pt92,
title = {Data from: Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna},
author = {Swanson, AB and Kosmala, M and Lintott, CJ and Simpson, RJ and Smith, A and Packer, C},
year = {2015},
journal = {Scientific Data},
URL = {https://doi.org/10.5061/dryad.5pt92},
doi = {doi:10.5061/dryad.5pt92},
publisher = {Dryad Digital Repository}
}
```
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Karoo </summary>
This data set contains 14889 sequences of camera trap images, totaling 38074 images, from the [Snapshot Karoo](https://www.zooniverse.org/projects/shuebner729/snapshot-karoo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.
Labels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KAR/SnapshotKaroo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kgalagadi </summary>
This data set contains 3611 sequences of camera trap images, totaling 10222 images, from the [Snapshot Kgalagadi](https://www.zooniverse.org/projects/shuebner729/snapshot-kgalagadi/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari – an arid savanna. This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.
Labels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KGA/SnapshotKgalagadi_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Enonkishu </summary>
This data set contains 13301 sequences of camera trap images, totaling 28544 images, from the [Snapshot Enonkishu](https://www.zooniverse.org/projects/aguthmann/snapshot-enonkishu) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.
Labels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/ENO/SnapshotEnonkishu_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Camdeboo </summary>
This data set contains 12132 sequences of camera trap images, totaling 30227 images, from the [Snapshot Camdeboo](https://www.zooniverse.org/projects/shuebner729/snapshot-camdeboo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa is crucial habitat for many birds on a global scale, with greater than fifty endemic and near-endemic species and many migratory species.
Labels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/CDB/SnapshotCamdeboo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Mountain Zebra </summary>
This data set contains 71688 sequences of camera trap images, totaling 73034 images, from the [Snapshot Mountain Zebra](https://www.zooniverse.org/projects/meredithspalmer/snapshot-mountain-zebra/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape Mountain zebras, ~700 as of 2019 and increasing steadily every year.
Labels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/MTZ/SnapshotMountainZebra_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kruger </summary>
This data set contains 4747 sequences of camera trap images, totaling 10072 images, from the [Snapshot Kruger](https://www.zooniverse.org/projects/shuebner729/snapshot-kruger) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.
Labels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KRU/SnapshotKruger_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> SWG Camera Traps </summary>
This data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Lao, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are “Eurasian Wild Pig”, “Large-antlered Muntjac”, and “Unidentified Murid”). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available here. 101,659 bounding boxes are provided on 88,135 images.
This data set is provided by the Saola Working Group; providers include:
- IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group (SWG)
- Asian Arks
- Wildlife Conservation Society (Lao)
- WWF Lao
- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)
- Center for Environment and Rural Development, Vinh University, Vietnam
If you use these data in a publication or report, please use the following citation:
SWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group. Dataset.
For questions about this data set, contact [email protected].
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Orinoquia Camera Traps </summary>
This data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km2) and Las Unamas (40 km2), located in the Meta department in the Orinoquía region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart from one another, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.
This data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.
The main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review model performance of AI-powered platforms – Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2 and R code for evaluating model performance of these platforms in the accompanying [GitHub repository](https://github.com/julianavelez1/Processing-Camera-Trap-Data-Using-AI).
If you use these data in a publication or report, please use the following citation:
```bibtex
@article{velez2022choosing,
title={Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence},
author={V{\'e}lez, Juliana and Castiblanco-Camacho, Paula J and Tabak, Michael A and Chalmers, Carl and Fergus, Paul and Fieberg, John},
journal={arXiv preprint arXiv:2202.02283},
year={2022}
}
```
For questions about this data set, contact [Juliana Velez Gomez]([email protected]).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
### Supported Tasks and Leaderboards
No leaderboards exist for LILA.
### Languages
The [LILA taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/) is provided in English.
## Dataset Structure
### Data Instances
The data annotations are provided in [COCO Camera Traps](https://github.com/Microsoft/CameraTraps/blob/master/data_management/README.md#coco-cameratraps-format) format.
All of the datasets share a common category taxonomy, which is defined on the [LILA website](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
### Data Fields
Different datasets may have slightly varying fields, which include:
`file_name`: the file name \
`width` and `height`: the dimensions of the image \
`study`: which research study the image was collected as part of \
`location` : the name of the location at which the image was taken \
`annotations`: information about image annotation, which includes the taxonomy information, bounding box/boxes (`bbox`/`bboxes`) if any, as well as any other annotation information. \
`image` : the `path` to download the image and any other information that is available, e.g. its size in `bytes`.
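As a quick illustration of these fields, the sketch below builds a toy record in the shape described above and reads a few values from it. The concrete file names and values are invented for the example, not taken from the dataset:

```python
# Illustrative record in COCO Camera Traps style; field names follow the
# list above, values are made up for the example.
record = {
    "file_name": "loc_0001/image_000042.jpg",
    "width": 2048,
    "height": 1536,
    "location": "loc_0001",
    "annotations": [
        {"taxonomy": {"species": 12}, "bbox": [100.0, 200.0, 300.0, 250.0]},
    ],
    "image": {"path": "https://example.org/image_000042.jpg", "bytes": None},
}

def describe(rec):
    """Return a one-line summary of a record's core fields."""
    n_boxes = sum("bbox" in a for a in rec["annotations"])
    return f"{rec['file_name']} ({rec['width']}x{rec['height']}), {n_boxes} box(es)"

print(describe(record))  # loc_0001/image_000042.jpg (2048x1536), 1 box(es)
```

Real records loaded with `load_dataset` follow the same access pattern, with taxonomy values stored as ClassLabel integers as described below.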
### Data Splits
This dataset does not have a predefined train/test split.
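If you need a held-out set, one minimal approach is to split item ids deterministically yourself; this is only a sketch, and a loaded `datasets.Dataset` also offers a built-in `train_test_split` method:

```python
import random

def make_split(ids, test_frac=0.2, seed=0):
    """Deterministically split a list of item ids into train/test lists.
    Minimal sketch for datasets without a predefined split."""
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    n_test = int(len(ids) * test_frac)
    return ids[n_test:], ids[:n_test]

train_ids, test_ids = make_split(range(100), test_frac=0.2)
print(len(train_ids), len(test_ids))  # 80 20
```

For camera trap data specifically, splitting by `location` rather than by image is often preferable to avoid leakage between near-duplicate frames.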
## Dataset Creation
### Curation Rationale
The datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.
### Source Data
#### Initial data collection and normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
Each dataset has been annotated by the members of the project/organization that provided it.
#### Who are the annotators?
The annotations have been provided by domain experts in fields such as biology and ecology.
### Personal and Sensitive Information
Some of the original data sets included a “human” class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the [LILA maintainers](mailto:[email protected]), since in some cases it will be possible to release those images under an alternative license.
## Considerations for Using the Data
### Social Impact of Dataset
Machine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.
### Discussion of Biases
These datasets do not represent global diversity, but are examples of local ecosystems and animals.
### Other Known Limitations
N/A
## Additional Information
### Tutorial
The [tutorial in this Google Colab notebook](https://colab.research.google.com/drive/17gPOIK-ksxPyX6yP9TaKIimlwf9DYe2R?usp=sharing) demonstrates how to work with this dataset, including filtering by species, collating configurations, and downloading images.
### Working with Taxonomies
All the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed. Strings can likewise be converted to integers as needed, to filter the dataset. In the example below we filter the "Caltech Camera Traps" dataset to find all the entries with a "felis catus" as the species for the first annotation.
```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
# Filters to show only cats
cats = dataset.filter(lambda x: x["annotations"]["taxonomy"][0]["species"] == taxonomy["species"].str2int("felis catus"))
```
The original common names have been saved with their taxonomy mappings in this repository in `common_names_to_tax.json`. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there are a small number of duplicate common names with different taxonomy values, which you will need to disambiguate.
The following example loads the first "sea turtle" in the "Island Conservation Camera Traps" dataset.
```python
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Island Conservation Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
sea_turtle = LILA_COMMON_NAMES_TO_TAXONOMY.loc["sea turtle"].to_dict()
sea_turtle = {k: taxonomy[k].str2int(v) if v is not None else v for k, v in sea_turtle.items()} # Map to ClassLabel integers
sea_turtle_dataset = dataset.filter(lambda x: x["annotations"]["taxonomy"][0] == sea_turtle)
```
The example below selects a random item from the dataset, and then maps from the taxonomy to a common name:
```python
import numpy as np
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
random_entry = dataset.shuffle()[0]
filter_taxonomy = random_entry["annotations"]["taxonomy"][0]
filter_keys = list(map(lambda x: (x[0], taxonomy[x[0]].int2str(x[1])), filter(lambda x: x[1] is not None, list(filter_taxonomy.items()))))
if len(filter_keys) > 0:
print(LILA_COMMON_NAMES_TO_TAXONOMY[np.logical_and.reduce([
LILA_COMMON_NAMES_TO_TAXONOMY[k] == v for k,v in filter_keys
])])
else:
print("No common name found for the item.")
```
### Dataset Curators
LILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.
### Licensing Information
Many, but not all, LILA data sets were released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/). Check the details of the specific dataset you are using in its section above.
### Citation Information
Citations for each dataset (if they exist) are provided in its section above.
### Contributions
Thanks to [@NimaBoscarino](https://github.com/NimaBoscarino/) for adding this dataset.
| society-ethics/lila_camera_traps | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"biodiversity",
"camera trap data",
"wildlife monitoring",
"region:us"
] | 2022-10-30T22:34:29+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["image-classification"], "pretty_name": "LILA Camera Traps", "tags": ["biodiversity", "camera trap data", "wildlife monitoring"]} | 2023-03-07T20:14:40+00:00 |
9d031f0412a79f3f53cfb7b584560cb40775bf33 | oo92/diffusion-data | [
"license:mit",
"region:us"
] | 2022-10-31T03:03:02+00:00 | {"license": "mit"} | 2022-10-31T03:08:58+00:00 |
|
4f352870d3552163c0b4be7ee7195e1cf402f5b3 |
# Dataset Card for openpi_v2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Open PI is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. Our solution is a new task formulation in which just the text is provided, from which a set of state changes (entity, attribute, before, after) is generated for each step, where the entity, attribute, and values must all be predicted from an open vocabulary.
### Supported Tasks and Leaderboards
- `Task 1`: Given paragraph (e.g., with 5 steps), identify entities that change (challenge: implicit entities, some explicit entities that don’t change)
- `Task 3`: Given paragraph, identify the attributes of entity that change (challenge: implicit entities, attributes & many combinations)
- `Task 4`: Given paragraph & an entity, identify the sequence of attribute value changes (challenge: implicit attributes)
- `Task 7`: Given image url, identify the visual attributes of entity and non-visual attributes of entity that change
### Languages
English
## Dataset Structure
### Data Instances
A typical instance in the dataset:
```
{
"goal": "goal1_text",
"steps": [
"step1_text",
"step2_text",
...
],
"topics": "topic1_annotation",
"image_urls": [
"step1_url_text",
"step2_url_text",
...
],
"states": [
{
"answers_openpiv1_metadata": {
"entity": "entity1 | entity2 | ...",
"attribute": "attribute1 | attribute2 | ...",
"answers": [
"before: step1_entity1_before | step1_entity2_before, after: step1_entity1_after | step1_entity2_after",
...
],
"modality": [
"step1_entity1_modality_id | step1_entity2_modality_id",
...
]
},
"entity": "entity1 | entity2 | ...",
"attribute": "attribute1 | attribute2 | ...",
"answers": [
"before: step1_entity1_before_merged | step1_entity2_before_merged, after: step1_entity1_after_merged | step1_entity2_after_merged",
...
]
}
]
}
```
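The pipe-separated `entity`/`attribute` strings and the `before: ..., after: ...` answer strings can be aligned into structured tuples. The parser below is a sketch based only on the format visible in the sample instance above; real records may need more robust handling:

```python
def parse_state_change(entities, attributes, answer):
    """Split the pipe-separated entity/attribute strings and the
    'before: ..., after: ...' answer string into aligned tuples.
    The format is inferred from the sample instance above."""
    ents = [e.strip() for e in entities.split("|")]
    attrs = [a.strip() for a in attributes.split("|")]
    before_part, after_part = answer.split(", after:")
    befores = [b.strip() for b in before_part.replace("before:", "", 1).split("|")]
    afters = [a.strip() for a in after_part.split("|")]
    return list(zip(ents, attrs, befores, afters))

changes = parse_state_change(
    "knife | onion",
    "location | state",
    "before: drawer | whole, after: hand | chopped",
)
print(changes)
# [('knife', 'location', 'drawer', 'hand'), ('onion', 'state', 'whole', 'chopped')]
```

The example entities and values here are invented for illustration; only the delimiter format is taken from the instance shown above.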
### Data Fields
The following is an excerpt from the dataset README:
Within "goal", "steps", "topics", and "image_urls", the fields should be self-explanatory. Listed below is an explanation about those within "states":
#### Fields specific to questions:
### Data Splits
Train, Valid, Dev
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | abhinavk/openpi_v2 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_ids:entity-linking-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-10-31T04:49:26+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["question-answering", "text-classification"], "task_ids": ["entity-linking-classification", "natural-language-inference"], "pretty_name": "openpi_v2", "tags": []} | 2022-11-07T02:23:34+00:00 |
5f678f5408eb19543510a1af2d58797c9366d6c0 | KETI-AIR/aihub_scitech_translation | [
"license:apache-2.0",
"region:us"
] | 2022-10-31T06:11:15+00:00 | {"license": "apache-2.0"} | 2022-10-31T06:33:28+00:00 |
|
aa7407539ed836835ed51916fd092c02ce1dea1b | bobfu/cats | [
"license:cc0-1.0",
"region:us"
] | 2022-10-31T06:24:46+00:00 | {"license": "cc0-1.0"} | 2022-10-31T06:27:06+00:00 |
|
c9369bf40a8f0788c3d438e9998d161d7f183910 | nrajsubramanian/usfaq | [
"license:mit",
"region:us"
] | 2022-10-31T06:56:33+00:00 | {"license": "mit"} | 2022-10-31T06:57:45+00:00 |
|
04700c3f0966cb42f464b4144fe87df4848feb5d | matchbench/dbp15k-fr-en | [
"language:fr",
"language:en",
"region:us"
] | 2022-10-31T07:08:08+00:00 | {"language": ["fr", "en"]} | 2023-01-23T12:28:45+00:00 |
|
c1028c4f0bd26a5af0c414ee8dcabcdafeebf83e | KETI-AIR/aihub_koenzh_food_translation | [
"license:apache-2.0",
"region:us"
] | 2022-10-31T07:24:36+00:00 | {"license": "apache-2.0"} | 2023-04-18T02:36:50+00:00 |
|
442f8c4c00aae04c37fcb44e7ecb44023af2b9ee | KETI-AIR/aihub_scitech20_translation | [
"license:apache-2.0",
"region:us"
] | 2022-10-31T08:12:20+00:00 | {"license": "apache-2.0"} | 2022-10-31T08:12:50+00:00 |
|
0ba2c99b0dde16ac5fe281bba5c99b4203039ea2 | KETI-AIR/aihub_socialtech20_translation | [
"license:apache-2.0",
"region:us"
] | 2022-10-31T08:13:21+00:00 | {"license": "apache-2.0"} | 2022-10-31T08:13:36+00:00 |
|
0ce47b4d95b13204112fea6b36bb35847d690f35 |
# Dataset Card for MyoQuant SDH Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances and Splits](#data-instances-and-splits)
- [Dataset Creation and Annotations](#dataset-creation-and-annotations)
- [Source Data and annotation process](#source-data-and-annotation-process)
- [Who are the annotators ?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases and Limitations](#discussion-of-biases-and-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [The Team Behind this Dataset](#the-team-behind-this-dataset)
- [Partners](#partners)
## Dataset Description
- **Homepage:** https://github.com/lambda-science/MyoQuant
- **Repository:** https://huggingface.co/corentinm7/MyoQuant-SDH-Model
- **Paper:** Yet To Come
- **Leaderboard:** N/A
- **Point of Contact:** [**Corentin Meyer**, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra](https://cmeyer.fr) email: <[email protected]>
### Dataset Summary
<p align="center">
<img src="https://i.imgur.com/mzALgZL.png" alt="MyoQuant Banner" style="border-radius: 25px;" />
</p>
This dataset contains images of individual muscle fibers used to train the [MyoQuant](https://github.com/lambda-science/MyoQuant) SDH model. The goal of these data is to train a tool to classify SDH-stained muscle fibers according to the presence of mitochondria repartition anomalies, a pathological feature useful for diagnosis and classification in patients with congenital myopathies.
## Dataset Structure
### Data Instances and Splits
A total of 16,787 single muscle fiber images are in the dataset, split into three sets: train, validation and test.
See the table for the exact count of images in each category:
| | Train (72%) | Validation (8%) | Test (20%) | TOTAL |
|---------|-------------|-----------------|------------|-------------|
| control | 9165 | 1019 | 2546 | 12730 (76%) |
| sick | 2920 | 325 | 812 | 4057 (24%) |
| TOTAL | 12085 | 1344 | 3358 | 16787 |
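As a sanity check, the row and grand totals in the table above can be recomputed from the per-split counts:

```python
# Counts from the table above.
splits = {
    "control": {"train": 9165, "validation": 1019, "test": 2546},
    "sick": {"train": 2920, "validation": 325, "test": 812},
}

# Per-class totals and the grand total across all splits.
totals = {label: sum(counts.values()) for label, counts in splits.items()}
grand_total = sum(totals.values())

print(totals["control"], totals["sick"], grand_total)  # 12730 4057 16787
print(round(100 * totals["sick"] / grand_total))  # 24 (% of fibers labeled sick)
```

Note the roughly 3:1 class imbalance between control and sick fibers, which is worth accounting for (e.g. with class weights) when training on these data.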
## Dataset Creation and Annotations
### Source Data and annotation process
To create this dataset of single muscle fiber images, whole-slide images of mouse muscle with SDH staining were taken from WT mice (1), BIN1 KO mice (10) and mutated DNM2 mice (7). Cells contained within these slides were manually counted, labeled and classified into two categories, control (no anomaly) or sick (mitochondria anomaly), by two experts/annotators. All single muscle fiber images were then extracted using CellPose to detect each individual cell's boundaries, resulting in 16,787 images from 18 whole-slide images.
### Who are the annotators?
All data in this dataset were generated and manually annotated by two experts:
- [**Quentin GIRAUD, PhD Student**](https://twitter.com/GiraudGiraud20) @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
- **Charlotte GINESTE, Post-Doc** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
A second pass of verification was done by:
- **Bertrand VERNAY, Platform Leader** @ [Light Microscopy Facility, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
### Personal and Sensitive Information
All image data comes from mice, there is no personal nor sensitive information in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The aim of this dataset is to improve congenital myopathies diagnosis by providing tools to automatically quantify specific pathogenic features in muscle fiber histology images.
### Discussion of Biases and Limitations
This dataset has several limitations (non-exhaustive list):
- The images are from mice and thus might not ideally represent the actual mechanisms in human muscle
- The images come from only two mouse models, with mutations in two genes (BIN1, DNM2), while congenital myopathies can be caused by mutations in more than 35 genes
- Only mitochondria anomalies were considered when classifying cells as "sick"; other anomalies were not considered, so control cells might present other anomalies (such as what are called "cores" in congenital myopathies, for example)
## Additional Information
### Licensing Information
This dataset is released under the GNU AFFERO GENERAL PUBLIC LICENSE Version 3, to ensure that what's open-source stays open-source and available to the community.
### Citation Information
MyoQuant publication with model and data is yet to come.
## The Team Behind this Dataset
**The creator, uploader and main maintainer of this dataset, associated model and MyoQuant is:**
- **[Corentin Meyer, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra](https://cmeyer.fr) Email: <[email protected]> Github: [@lambda-science](https://github.com/lambda-science)**
Special thanks to the experts who created the data for this dataset and for all the time they spent counting cells:
- **Quentin GIRAUD, PhD Student** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
- **Charlotte GINESTE, Post-Doc** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
Last but not least thanks to Bertrand Vernay being at the origin of this project:
- **Bertrand VERNAY, Platform Leader** @ [Light Microscopy Facility, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
## Partners
<p align="center">
<img src="https://i.imgur.com/m5OGthE.png" alt="Partner Banner" style="border-radius: 25px;" />
</p>
MyoQuant-SDH-Data was born from the collaboration between the [CSTB Team @ ICube](https://cstb.icube.unistra.fr/en/index.php/Home) led by Julie D. Thompson, the [Morphological Unit of the Institute of Myology of Paris](https://www.institut-myologie.org/en/recherche-2/neuromuscular-investigation-center/morphological-unit/) led by Teresinha Evangelista, the [imagery platform MyoImage of Center of Research in Myology](https://recherche-myologie.fr/technologies/myoimage/) led by Bruno Cadot, [the photonic microscopy platform of the IGBMC](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy) led by Bertrand Vernay, and the [Pathophysiology of neuromuscular diseases team @ IGBMC](https://www.igbmc.fr/en/igbmc/a-propos-de-ligbmc/directory/jocelyn-laporte) led by Jocelyn Laporte.
| corentinm7/MyoQuant-SDH-Data | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"size_categories:10K<n<100K",
"source_datasets:original",
"license:agpl-3.0",
"myology",
"biology",
"histology",
"muscle",
"cells",
"fibers",
"myopathy",
"SDH",
"myoquant",
"region:us"
] | 2022-10-31T08:37:20+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["agpl-3.0"], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "pretty_name": "SDH staining muscle fiber histology images used to train MyoQuant model.", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "control", "1": "sick"}}}}], "config_name": "SDH_16k", "splits": [{"name": "test", "num_bytes": 683067, "num_examples": 3358}, {"name": "train", "num_bytes": 2466024, "num_examples": 12085}, {"name": "validation", "num_bytes": 281243, "num_examples": 1344}], "download_size": 2257836789, "dataset_size": 3430334}, "tags": ["myology", "biology", "histology", "muscle", "cells", "fibers", "myopathy", "SDH", "myoquant"]} | 2022-11-16T18:19:23+00:00 |
bde0b1d8b0c2e02160f37f4a22957e0adfcaad7c | nev/anime-giph | [
"license:other",
"region:us"
] | 2022-10-31T08:48:59+00:00 | {"license": "other"} | 2022-12-05T08:51:48+00:00 |
|
892faabeccc027ec862b3889a6cb232ea04d4558 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-083d71a4-50b6-4074-aa7d-a46eddb83f06-42 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T09:10:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-10-31T09:11:37+00:00 |
50549635611eefc47cc7852b05fa7838e6b32ea3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-fe056b5c-7e36-4094-b3f2-84d1fbaaf77c-53 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T09:25:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-10-31T09:25:45+00:00 |
51ebe9dbdca6c10696c926181cea1f5e339d9aaa | Sombredems/sags | [
"license:other",
"region:us"
] | 2022-10-31T10:37:52+00:00 | {"license": "other"} | 2022-10-31T14:06:08+00:00 |
|
08d5a56fbbfbd8f8e7c6372cfb2f43159388f872 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-6da44258-8968-4823-8933-3375e1cfee89-64 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T10:45:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-10-31T10:45:45+00:00 |
35619762a828711029111dac816e3be6bfb33059 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-0d3aacb2-653b-459b-af2f-2d90d5362791-75 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T11:00:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-10-31T11:00:48+00:00 |
b940c76750ef805c687e0e49d274edcfb00e7214 | bankawat/ASR | [
"license:unknown",
"region:us"
] | 2022-10-31T11:16:12+00:00 | {"license": "unknown"} | 2022-11-01T01:23:00+00:00 |
|
66839876b5ad5337aa11c89d71db04f3e1e2ff15 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-95ce44b7-7684-4cf4-b396-d486367937e4-86 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T11:29:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-10-31T11:29:54+00:00 |
4e0cf3f26014b3ececa0fe89260099593caeb3c0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-f69c187c-a1f8-462d-8272-41a77bd1f8ed-97 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T11:32:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-10-31T11:32:57+00:00 |
c8287a1fdc3bb36bdbc84293a1a34cf4ee5384c5 | # positive-reframing-ptbr-dataset
A pt-BR translation of the dataset from the paper ["Inducing Positive Perspectives with Text Reframing"](https://arxiv.org/abs/2204.02952). Used in the model [positive-reframing-ptbr](https://huggingface.co/dominguesm/positive-reframing-ptbr).
**Citation:**
> Ziems, C., Li, M., Zhang, A., & Yang, D. (2022). Inducing Positive Perspectives with Text Reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_.
**BibTeX:**
```tex
@inproceedings{ziems-etal-2022-positive-frames,
title = "Inducing Positive Perspectives with Text Reframing",
author = "Ziems, Caleb and
Li, Minzhi and
Zhang, Anthony and
Yang, Diyi",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
month = may,
year = "2022",
address = "Online and Dublin, Ireland",
publisher = "Association for Computational Linguistics"
}
``` | dominguesm/positive-reframing-ptbr-dataset | [
"arxiv:2204.02952",
"region:us"
] | 2022-10-31T12:17:25+00:00 | {"dataset_info": {"features": [{"name": "original_text", "dtype": "string"}, {"name": "reframed_text", "dtype": "string"}, {"name": "strategy", "dtype": "string"}, {"name": "strategy_original_text", "dtype": "string"}], "splits": [{"name": "dev", "num_bytes": 318805, "num_examples": 835}, {"name": "test", "num_bytes": 321952, "num_examples": 835}, {"name": "train", "num_bytes": 2586935, "num_examples": 6679}], "download_size": 1845244, "dataset_size": 3227692}} | 2022-10-31T12:43:59+00:00 |
230d88b7e15e1fd2b0df276cb236559be413bff8 | ChristianOrr/mnist | [
"license:apache-2.0",
"region:us"
] | 2022-10-31T12:32:35+00:00 | {"license": "apache-2.0"} | 2022-11-01T13:09:41+00:00 |
|
eb792fb79d79a7e3b3b12eaea26dfb5a6ec23deb | # Dataset Card for "FoodBase"
Dataset for the FoodBase corpus introduced in [this paper](https://academic.oup.com/database/article/doi/10.1093/database/baz121/5611291).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Dizex/FoodBase | [
"region:us"
] | 2022-10-31T12:42:55+00:00 | {"dataset_info": {"features": [{"name": "nltk_tokens", "sequence": "string"}, {"name": "iob_tags", "sequence": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2040036, "num_examples": 600}, {"name": "val", "num_bytes": 662190, "num_examples": 200}], "download_size": 353747, "dataset_size": 2702226}} | 2022-10-31T12:48:53+00:00 |
5f56df48ab1ed088c122e2d73cd696e66e22e8e2 | # Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended for MLM and TSDAE training
Extended version of rufimelo/PortugueseLegalSentences-v1
200000/200000/100000
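Since the card mentions TSDAE, a minimal sketch of the usual TSDAE corruption step may help: a fraction of the input tokens is deleted and the denoising auto-encoder is trained to reconstruct the original sentence. The 0.6 deletion ratio is the common default from the TSDAE paper, not something prescribed by this dataset; the function name and example sentence are illustrative.

```python
import random

# Sketch of TSDAE-style input corruption (token deletion).
# The 0.6 ratio is the usual TSDAE default, assumed here, not
# prescribed by this dataset; names are illustrative.
def tsdae_corrupt(tokens, deletion_ratio=0.6, seed=0):
    """Randomly delete tokens; the denoising auto-encoder must
    reconstruct the original sentence from what remains."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > deletion_ratio]
    return kept or tokens[:1]  # never feed the encoder an empty input

print(tsdae_corrupt("o tribunal considerou o recurso improcedente".split()))
```

The MLM objective, by contrast, masks tokens in place rather than deleting them.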
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
| rufimelo/PortugueseLegalSentences-v2 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-10-31T14:28:04+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2022-11-01T13:14:38+00:00 |
3bddddbe0ef0f314a548753b200ec3e681492a8e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: SiraH/bert-finetuned-squad
* Dataset: subjqa
* Config: grocery
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sushant-joshi](https://huggingface.co/sushant-joshi) for evaluating this model. | autoevaluate/autoeval-eval-subjqa-grocery-9dee2c-1945965520 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T14:45:07+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["subjqa"], "eval_info": {"task": "extractive_question_answering", "model": "SiraH/bert-finetuned-squad", "metrics": [], "dataset_name": "subjqa", "dataset_config": "grocery", "dataset_split": "train", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-31T14:45:47+00:00 |
db95ae658758c7b2337a54a2facabefe3af9698a | # Dataset Card for "cartoon-blip-captions"
| Norod78/cartoon-blip-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-31T14:48:15+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "pretty_name": "Cartoon BLIP captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190959102.953, "num_examples": 3141}], "download_size": 190279356, "dataset_size": 190959102.953}, "tags": []} | 2022-11-09T16:27:57+00:00 |
607724cf2959d50f0a171e8ff42a7233f96dcd19 | LiveEvil/lucyrev1 | [
"license:apache-2.0",
"region:us"
] | 2022-10-31T15:40:56+00:00 | {"license": "apache-2.0"} | 2022-10-31T15:40:56+00:00 |
|
17d5b9dafdaa266f17aedfaa0154fe56411cdb44 | # Dataset Card for "Arabic_SQuAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
---
# Citation
```
@inproceedings{mozannar-etal-2019-neural,
title = "Neural {A}rabic Question Answering",
author = "Mozannar, Hussein and
Maamary, Elie and
El Hajal, Karl and
Hajj, Hazem",
booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-4612",
doi = "10.18653/v1/W19-4612",
pages = "108--118",
abstract = "This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.",
}
```
--- | Mostafa3zazi/Arabic_SQuAD | [
"region:us"
] | 2022-10-31T19:16:37+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "c_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 61868003, "num_examples": 48344}], "download_size": 10512179, "dataset_size": 61868003}} | 2022-10-31T19:32:25+00:00 |
60d116ecea74a9d94acfbebd19dd061ab42f627a | Jirui/testing | [
"license:afl-3.0",
"region:us"
] | 2022-10-31T19:42:52+00:00 | {"license": "afl-3.0"} | 2022-10-31T19:42:52+00:00 |
|
3857e5ae2a3357a65605cce3d8314a3570371cbb |
A collection of regularization / class instance datasets for the [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse) VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename.
There is currently a bug where HuggingFace incorrectly reports that the datasets are pickled. They are not pickled; they are simple ZIP files containing the images.
Currently this repository contains the following datasets (datasets are named after the prompt they used):
Art Styles
* "**artwork style**": 4125 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**artwork style**": 4200 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset.
* "**artwork style**": 2750 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE.
* "**illustration style**": 3050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**erotic photography**": 2760 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**landscape photography**": 2500 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "b&w, text" was also used for this dataset.
People
* "**person**": 2115 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**woman**": 4420 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**guy**": 4820 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**supermodel**": 4411 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**bikini model**": 4260 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy athlete**": 5020 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**femme fatale**": 4725 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy man**": 3505 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy woman**": 3500 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Animals
* "**kitty**": 5100 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**cat**": 2050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Vehicles
* "**fighter jet**": 1600 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**train**": 2669 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**car**": 3150 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Themes
* "**cyberpunk**": 3040 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
I used the "Generate Forever" feature in [AUTOMATIC1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to create thousands of images for each dataset. Every image in a particular dataset uses the exact same settings, with only the seed value being different.
You can use my regularization / class image datasets with: https://github.com/ShivamShrirao/diffusers, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, https://github.com/TheLastBen/fast-stable-diffusion, and any other DreamBooth projects that have support for prior preservation loss.
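Since each dataset is a flat zip of 512x512 PNGs, unpacking one into the class-images folder these DreamBooth projects expect only needs the standard library. A minimal sketch; the function name and paths are illustrative:

```python
import zipfile
from pathlib import Path

def extract_class_images(zip_path, out_dir):
    """Unpack a regularization zip (flat 512x512 PNGs) into out_dir
    and return the sorted image file names."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    return sorted(p.name for p in out.glob("*.png"))
```

Point your trainer's class/regularization image argument at `out_dir` afterwards.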
| ProGamerGov/StableDiffusion-v1-5-Regularization-Images | [
"license:mit",
"image-text-dataset",
"synthetic-dataset",
"region:us"
] | 2022-10-31T22:21:09+00:00 | {"license": "mit", "tags": ["image-text-dataset", "synthetic-dataset"]} | 2023-11-18T20:46:01+00:00 |
3bc134f4be0eb287bca607e529ef11f06b7cea62 | My initial attempt at creating a dataset for training a customized model that includes Ruby. | digiSilk/real_ruby | [
"region:us"
] | 2022-10-31T23:49:43+00:00 | {} | 2022-11-01T00:06:52+00:00 |
4845af940bf5042c1ddd28df29cf32d12c88b1d3 | Onur-Ozbek-Crafty-Apes-VFX/CAVFX-LAION | [
"license:mit",
"region:us"
] | 2022-11-01T01:47:03+00:00 | {"license": "mit"} | 2022-11-01T10:24:40+00:00 |
|
a189a9d3742ff9a42941b536305cd77221d3262b | # BioNLP2021 dataset (Task2)
___
Data fields:
* text (str): source text; sections and articles (train_mul subset only) are separated by `<SAS>`; single documents are separated by `<DOC>`; sentences are separated by `<SS>`
* summ_abs, summ_ext (str): abstractive and extractive summaries, whose sentences are separated by `<SS>`
* question (str): question, whose sentences are separated by `<SS>`
* key (str): key in the origin dataset (for submitting) | nbtpj/BioNLP2021 | [
"region:us"
] | 2022-11-01T01:51:49+00:00 | {} | 2023-01-02T02:11:44+00:00 |
4d005b3e1a5f1e558bf1e53ba4d4c6835c9fc667 | # Dataset Card for "text_summarization_dataset1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/text_summarization_dataset1 | [
"region:us"
] | 2022-11-01T02:13:04+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 129017829, "num_examples": 106525}], "download_size": 43557623, "dataset_size": 129017829}} | 2022-11-01T02:13:08+00:00 |
55b0bfdf562703f905a60e4522bb56547c7406e8 | # Dataset Card for "text_summarization_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/text_summarization_dataset2 | [
"region:us"
] | 2022-11-01T02:14:42+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 125954432, "num_examples": 105252}], "download_size": 42217690, "dataset_size": 125954432}} | 2022-11-01T02:14:47+00:00 |
01b5203a600c3bde5dbf229adee63962608e0714 | # Dataset Card for "text_summarization_dataset3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/text_summarization_dataset3 | [
"region:us"
] | 2022-11-01T02:15:46+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 123296943, "num_examples": 103365}], "download_size": 41220771, "dataset_size": 123296943}} | 2022-11-01T02:15:51+00:00 |
a4910c6c1646eacfcb88f7703e2e0bd7fdee559c | # Dataset Card for "text_summarization_dataset4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/text_summarization_dataset4 | [
"region:us"
] | 2022-11-01T02:16:12+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 111909333, "num_examples": 87633}], "download_size": 38273895, "dataset_size": 111909333}} | 2022-11-01T02:16:16+00:00 |
945ac8484e1efc07ad26996071343822dad8dc3b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MHassanSaleem](https://huggingface.co/MHassanSaleem) for evaluating this model. | autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-cadd10-1947965536 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-01T02:40:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "123tarunanand/roberta-base-finetuned", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-01T02:41:47+00:00 |
5c0ac9c4b877a715105c979c30e06e6e15dd4754 | n1ghtf4l1/super-collider | [
"license:mit",
"region:us"
] | 2022-11-01T04:19:03+00:00 | {"license": "mit"} | 2022-11-01T04:23:41+00:00 |
|
abd91a59bfb0d76131319a2a5288dee5cb26bf58 | Poison413/Installation01 | [
"license:unknown",
"doi:10.57967/hf/0080",
"region:us"
] | 2022-11-01T05:57:44+00:00 | {"license": "unknown"} | 2022-11-01T07:21:12+00:00 |
|
c499f832e3b97fec8889ddd10ec8765f7386474a | hakancam/avats | [
"license:bigscience-openrail-m",
"region:us"
] | 2022-11-01T06:15:11+00:00 | {"license": "bigscience-openrail-m"} | 2022-11-01T06:15:11+00:00 |
|
16a66c3fda4c2dbb68195d70bf51148d3edb86cf |
# CTKFacts dataset for Document retrieval
Czech Natural Language Inference dataset of ~3K *evidence*-*claim* pairs labelled with SUPPORTS, REFUTES or NOT ENOUGH INFO veracity labels. Extracted from a round of fact-checking experiments conducted and described in the [CsFEVER and CTKFacts: Acquiring Czech data for Fact Verification](https://arxiv.org/abs/2201.11115) paper, currently under revision for publication in the LREV journal.
## NLI version
Can be found at https://huggingface.co/datasets/ctu-aic/ctkfacts_nli | ctu-aic/ctkfacts | [
"license:cc-by-sa-3.0",
"arxiv:2201.11115",
"region:us"
] | 2022-11-01T06:36:40+00:00 | {"license": "cc-by-sa-3.0"} | 2022-11-01T06:47:03+00:00 |
31504c14df60081992b939f8acab2762d4fb0ad8 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| Hallalay/TAiPET | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:other-my-multilinguality",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:unknown",
"Wallpaper",
"StableDiffusion",
"img2img",
"region:us"
] | 2022-11-01T08:41:06+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": [], "license": ["unknown"], "multilinguality": ["other-my-multilinguality"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "TAiPET", "tags": ["Wallpaper", "StableDiffusion", "img2img"]} | 2022-11-09T19:59:17+00:00 |
6f8ce801f8cf4cc9d58c08f61f3424ad612f2f67 |
# HoC : Hallmarks of Cancer Corpus
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://s-baker.net/resource/hoc/
- **Repository:** https://github.com/sb895/Hallmarks-of-Cancer
- **Paper:** https://academic.oup.com/bioinformatics/article/32/3/432/1743783
- **Leaderboard:** https://paperswithcode.com/dataset/hoc-1
- **Point of Contact:** [Yanis Labrak](mailto:[email protected])
### Dataset Summary
The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the "labels" directory, while the tokenized text can be found under the "text" directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the [Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/), which classifies all of PubMed according to the HoC taxonomy.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `multi-class-classification`.
### Languages
The corpus consists of PubMed articles in English only:
- `English - United States (en-US)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/HoC")
validation = dataset["validation"]
print("First element of the validation set : ", validation[0])
```
## Dataset Structure
### Data Instances
```json
{
"document_id": "12634122_5",
"text": "Genes that were overexpressed in OM3 included oncogenes , cell cycle regulators , and those involved in signal transduction , whereas genes for DNA repair enzymes and inhibitors of transformation and metastasis were suppressed .",
"label": [9, 5, 0, 6]
}
```
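The `label` field holds zero or more integer ids per sentence. A minimal sketch of decoding them into hallmark abbreviations follows; note that the id-to-hallmark order used here is an assumption based on the hallmark table in the next section, and should be confirmed against the dataset's actual label feature (e.g. `dataset["train"].features`):

```python
# Assumed id -> abbreviation order (taken from the hallmark table in
# this card); confirm against the dataset's label feature before use.
HALLMARKS = ["PS", "GS", "CD", "RI", "A", "IM", "GI", "TPI", "CE", "ID"]

def decode_labels(label_ids):
    """Map the integer label ids of one example to hallmark abbreviations."""
    return [HALLMARKS[i] for i in label_ids]

print(decode_labels([9, 5, 0, 6]))
```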
### Data Fields
`document_id`: Unique identifier of the document.
`text`: Raw text of the PubMed abstracts.
`label`: One of the 10 currently known hallmarks of cancer.
| Hallmark | Search term |
|:-------------------------------------------:|:-------------------------------------------:|
| 1. Sustaining proliferative signaling (PS) | Proliferation Receptor Cancer |
| | 'Growth factor' Cancer |
| | 'Cell cycle' Cancer |
| 2. Evading growth suppressors (GS) | 'Cell cycle' Cancer |
| | 'Contact inhibition' |
| 3. Resisting cell death (CD) | Apoptosis Cancer |
| | Necrosis Cancer |
| | Autophagy Cancer |
| 4. Enabling replicative immortality (RI) | Senescence Cancer |
| | Immortalization Cancer |
| 5. Inducing angiogenesis (A) | Angiogenesis Cancer |
| | 'Angiogenic factor' |
| 6. Activating invasion & metastasis (IM) | Metastasis Invasion Cancer |
| 7. Genome instability & mutation (GI) | Mutation Cancer |
| | 'DNA repair' Cancer |
| | Adducts Cancer |
| | 'Strand breaks' Cancer |
| | 'DNA damage' Cancer |
| 8. Tumor-promoting inflammation (TPI) | Inflammation Cancer |
| | 'Oxidative stress' Cancer |
| | Inflammation 'Immune response' Cancer |
| 9. Deregulating cellular energetics (CE) | Glycolysis Cancer; 'Warburg effect' Cancer |
| 10. Avoiding immune destruction (ID) | 'Immune system' Cancer |
| | Immunosuppression Cancer |
### Data Splits
Distribution of data for the 10 hallmarks:
| **Hallmark** | **No. abstracts** | **No. sentences** |
|:------------:|:-----------------:|:-----------------:|
| 1. PS | 462 | 993 |
| 2. GS | 242 | 468 |
| 3. CD | 430 | 883 |
| 4. RI | 115 | 295 |
| 5. A | 143 | 357 |
| 6. IM | 291 | 667 |
| 7. GI | 333 | 771 |
| 8. TPI | 194 | 437 |
| 9. CE | 105 | 213 |
| 10. ID | 108 | 226 |
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__HoC__: Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna
__Hugging Face__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
GNU General Public License v3.0
```
```plain
Permissions
- Commercial use
- Modification
- Distribution
- Patent use
- Private use
Limitations
- Liability
- Warranty
Conditions
- License and copyright notice
- State changes
- Disclose source
- Same license
```
### Citation Information
We would very much appreciate it if you cite our publications:
[Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783)
```bibtex
@article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
```
[Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer](https://www.repository.cam.ac.uk/bitstream/handle/1810/265268/btx454.pdf?sequence=8&isAllowed=y)
```bibtex
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
```
[Cancer hallmark text classification using convolutional neural networks](https://www.repository.cam.ac.uk/bitstream/handle/1810/270037/BIOTXTM2016.pdf?sequence=1&isAllowed=y)
```bibtex
@article{baker2017cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
```
[Initializing neural networks for hierarchical multi-label text classification](http://www.aclweb.org/anthology/W17-2339)
```bibtex
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
}
```
| qanastek/HoC | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-11-01T10:49:52+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["found"], "language": ["en"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "HoC", "language_bcp47": ["en-US"]} | 2022-11-01T15:03:11+00:00 |
d9197eacfb0afff29d90a2d4e7d0d98a5dfb54bc | # Dataset Card for sova_rudevices
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SOVA RuDevices](https://github.com/sovaai/sova-dataset)
- **Repository:** [SOVA Dataset](https://github.com/sovaai/sova-dataset)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [SOVA.ai](mailto:[email protected])
### Dataset Summary
SOVA Dataset is free public STT/ASR dataset. It consists of several parts, one of them is SOVA RuDevices. This part is an acoustic corpus of approximately 100 hours of 16kHz Russian live speech with manual annotating, prepared by [SOVA.ai team](https://github.com/sovaai).
The original authors do not divide the dataset into train, validation and test subsets, so this split was prepared for this version. The training subset includes more than 82 hours, and the validation and test subsets include approximately 6 hours each.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
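The word error rate mentioned above is the word-level edit distance between a hypothesis and its reference transcription, normalised by the reference length. A minimal self-contained sketch (in practice one would typically use an existing library such as `jiwer` or `evaluate`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming edit distance over word sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev_diag, d[j] = d[j], min(
                d[j] + 1,              # deletion
                d[j - 1] + 1,          # insertion
                prev_diag + (r != h),  # substitution (free if the words match)
            )
    return d[-1] / len(ref)

print(wer("mne poluchshe stalo", "mne poluchshe stalo"))  # -> 0.0 for a perfect transcription
```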
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio` and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided.
```
{'audio': {'path': '/home/bond005/datasets/sova_rudevices/data/train/00003ec0-1257-42d1-b475-db1cd548092e.wav',
'array': array([ 0.00787354, 0.00735474, 0.00714111, ...,
-0.00018311, -0.00015259, -0.00018311]), dtype=float32),
'sampling_rate': 16000},
'transcription': 'мне получше стало'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
### Data Splits
This dataset consists of three splits: training, validation, and test. The split follows the internal structure of SOVA RuDevices (the validation split is based on subdirectory `0` of the original dataset, and the test split on subdirectory `1`); note that recordings from the same speaker may appear in different splits (speaker disjointness is not guaranteed).
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 81607 | 5835 | 5799 |
| hours | 82.4h | 5.9h | 5.8h |
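From the table above one can make a rough back-of-the-envelope estimate of the average clip duration per split (computed from the rounded hour counts, so approximate):

```python
# (number of examples, total hours) per split, taken from the table above.
splits = {"train": (81607, 82.4), "validation": (5835, 5.9), "test": (5799, 5.8)}

for name, (n_examples, hours) in splits.items():
    avg_seconds = hours * 3600 / n_examples
    print(f"{name}: ~{avg_seconds:.2f} s per clip")
# Each split averages roughly 3.6 seconds per recording.
```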
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voice. By using this dataset, you agree not to attempt to determine the identity of the speakers.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Egor Zubarev, Timofey Moskalets, and SOVA.ai team.
### Licensing Information
[Creative Commons BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{sova2021rudevices,
author = {Zubarev, Egor and Moskalets, Timofey and SOVA.ai},
title = {SOVA RuDevices Dataset: free public STT/ASR dataset with manually annotated live speech},
publisher = {GitHub},
journal = {GitHub repository},
year = {2021},
howpublished = {\url{https://github.com/sovaai/sova-dataset}},
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset. | bond005/sova_rudevices | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended",
"language:ru",
"license:cc-by-4.0",
"region:us"
] | 2022-11-01T13:03:55+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ru"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "source_datasets": ["extended"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "pretty_name": "RuDevices"} | 2022-11-01T15:59:30+00:00 |
b8140f24921f6631cea28e0a7aa3a39683eb2fb3 | Deepak2846/name | [
"license:unknown",
"region:us"
] | 2022-11-01T13:05:13+00:00 | {"license": "unknown"} | 2022-11-21T17:11:01+00:00 |
|
d278dfd8a801d43f5f3ce23228118d8d53faca81 | # Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset was created for use with MLM and TSDAE training objectives.
Extended version of rufimelo/PortugueseLegalSentences-v1
400000/50000/50000
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
| rufimelo/PortugueseLegalSentences-v3 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-11-01T13:06:19+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2022-11-01T13:15:47+00:00 |
44c359b77af23165acac3dfe32a092aa7a9c00fb | KETI-AIR/aihub_news_mrc | [
"license:apache-2.0",
"region:us"
] | 2022-11-01T13:18:00+00:00 | {"license": "apache-2.0"} | 2022-11-02T07:43:03+00:00 |
|
9fe1c98602d295a0e7bc5bb628769d1e71e22be7 |
# MNLI Norwegian
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalisation evaluation. There is also a [HuggingFace version](https://huggingface.co/datasets/multi_nli) of the dataset available.
This dataset was machine translated using Google Translate. From this translation, different versions of the dataset were created. Included in the repo is a version specifically suited for training sentence-BERT models; this version includes the triplet: base-entailment-contradiction. The repo also includes a version that mixes English and Norwegian, as well as both csv and json versions. The scripts for generating the datasets are included in this repo.
Please note that there is no test dataset for MNLI, since it is closed. The authors of MNLI inform us that they selected 7500 new contexts for XNLI in the same way as the original MNLI contexts. This makes the English part of the XNLI test sets highly comparable: for each genre, the text is generally in-domain with the original MNLI test set (it comes from the same source and was selected in the same way). In most cases the XNLI test set can therefore be used.
### The following datasets are available in the repo:
* mnli_no_en_for_simcse.csv
* mnli_no_en_small_for_simcse.csv
* mnli_no_for_simcse.csv
* multinli_1.0_dev_matched_no_mt.jsonl
* multinli_1.0_dev_mismatched_no_mt.jsonl
* multinli_1.0_train_no_mt.jsonl
* nli_for_simcse.csv
* xnli_dev_no_mt.jsonl
* xnli_test_no_mt.jsonl
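The `*_for_simcse.csv` files contain base-entailment-contradiction triplets. The sketch below illustrates how such triplets can be assembled from plain (premise, hypothesis, label) NLI pairs; it is illustrative only, the actual generation scripts included in this repo may differ, and the example sentences are invented:

```python
def build_triplets(nli_pairs):
    """Group (premise, hypothesis, label) pairs into (base, entailment, contradiction) triplets.
    Only premises that have both an entailment and a contradiction yield a triplet."""
    by_premise = {}
    for premise, hypothesis, label in nli_pairs:
        by_premise.setdefault(premise, {})[label] = hypothesis
    return [
        (premise, hyps["entailment"], hyps["contradiction"])
        for premise, hyps in by_premise.items()
        if "entailment" in hyps and "contradiction" in hyps
    ]

pairs = [
    ("Katten sover.", "Dyret hviler.", "entailment"),
    ("Katten sover.", "Katten løper.", "contradiction"),
]
print(build_triplets(pairs))  # -> [('Katten sover.', 'Dyret hviler.', 'Katten løper.')]
```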
### Licensing Information
The majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere). The translation and compilation of the Norwegian part is released under the Creative Commons Attribution 3.0 Unported Licenses.
### Citation Information
The datasets are compiled and machine translated by the AiLab at the Norwegian National Library. However, the vast majority of the work related to this dataset is compiling the English version. We therefore suggest that you also cite the original work:
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
| NbAiLab/mnli-norwegian | [
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"language:no",
"language:nob",
"language:en",
"license:apache-2.0",
"norwegian",
"simcse",
"mnli",
"nli",
"sentence",
"region:us"
] | 2022-11-01T14:53:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated", "expert-generated"], "language": ["no", "nob", "en"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["sentence-similarity", "text-classification"], "task_ids": ["natural-language-inference", "semantic-similarity-classification"], "pretty_name": "MNLI Norwegian", "tags": ["norwegian", "simcse", "mnli", "nli", "sentence"]} | 2022-11-23T09:45:12+00:00 |
24d4cac8c5b21c7396382d6cc6952dabe95c8dcb | LiveEvil/TestText | [
"license:openrail",
"region:us"
] | 2022-11-01T18:53:47+00:00 | {"license": "openrail"} | 2022-11-01T18:53:47+00:00 |
|
3859c76db2f6f3d3b9a3863345e3ccdbff75879d | # Dataset Card for "fashion-product-images-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Data was obtained from [here](https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-small) | ashraq/fashion-product-images-small | [
"region:us"
] | 2022-11-01T20:22:50+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "gender", "dtype": "string"}, {"name": "masterCategory", "dtype": "string"}, {"name": "subCategory", "dtype": "string"}, {"name": "articleType", "dtype": "string"}, {"name": "baseColour", "dtype": "string"}, {"name": "season", "dtype": "string"}, {"name": "year", "dtype": "float64"}, {"name": "usage", "dtype": "string"}, {"name": "productDisplayName", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 546202015.44, "num_examples": 44072}], "download_size": 271496441, "dataset_size": 546202015.44}} | 2022-11-01T20:25:52+00:00 |
caf62a8694ff3c9fa6523dc1f74d446569fded46 | 

 | Valentingmz/Repositor | [
"region:us"
] | 2022-11-01T20:28:07+00:00 | {} | 2022-11-01T20:39:51+00:00 |
031c7b7df6f699fdcd5041c2810bab60907dc354 | LiveEvil/mysheet | [
"license:openrail",
"region:us"
] | 2022-11-01T20:54:32+00:00 | {"license": "openrail"} | 2022-11-01T20:54:32+00:00 |
|
a311ec1ad64e5e5a005e8759b8dde88acecc42eb | # AutoTrain Dataset for project: mysheet
## Dataset Description
This dataset has been automatically processed by AutoTrain for project mysheet.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "The term \u201cpseudocode\u201d refers to writing code in a humanly understandable language such as English, and breaking it down to its core concepts.",
"question": "What is pseudocode?",
"answers.text": [
"Pseudocode is breaking down your code in English."
],
"answers.answer_start": [
33
]
},
{
"context": "Python is an interactive programming language designed for API and Machine Learning use.",
"question": "What is Python?",
"answers.text": [
"Python is an interactive programming language."
],
"answers.answer_start": [
0
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3 |
| valid | 1 |
| LiveEvil/autotrain-data-mysheet | [
"language:en",
"region:us"
] | 2022-11-01T20:55:23+00:00 | {"language": ["en"]} | 2022-11-01T20:55:52+00:00 |
36cf8a781bf9396d6b7e7fb536ef635571fbec77 | This is a ParaModeler, for rating hook/grabbers of an introduction paragraph. | LiveEvil/EsCheck-Paragraph | [
"license:openrail",
"region:us"
] | 2022-11-01T21:33:39+00:00 | {"license": "openrail"} | 2022-11-02T15:15:44+00:00 |
3393491a7c997952b11efaa843193f618d82f6cb | learningbot/hadoop | [
"license:gpl-3.0",
"region:us"
] | 2022-11-01T23:24:51+00:00 | {"license": "gpl-3.0"} | 2022-11-01T23:24:51+00:00 |
|
863faca5cd61e14147241f86fb9ffcce538cb800 | To access an image use the following
Bucket URL: https://d26smi9133w0oo.cloudfront.net/
example:
https://d26smi9133w0oo.cloudfront.net/room-7/1670520485-CZk4C72xBr5wPfTpwDAnG6-7648_7008-a-chicken-breaking-through-a-mirrornnotn.webp
**Bucket URL/key**
SQLite
https://huggingface.co/datasets/huggingface-projects/sd-multiplayer-data/blob/main/rooms_data.db
```bash
sqlite> PRAGMA table_info(rooms_data);
0|id|INTEGER|1||1
1|room_id|TEXT|1||0
2|uuid|TEXT|1||0
3|x|INTEGER|1||0
4|y|INTEGER|1||0
5|prompt|TEXT|1||0
6|time|DATETIME|1||0
7|key|TEXT|1||0
$: sqlite3 rooms_data.db
SELECT * FROM rooms_data WHERE room_id = 'room-40';
```
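Putting the pieces together, the full image URL is simply the bucket URL concatenated with a row's `key`, as in the example at the top. A minimal sketch:

```python
BUCKET_URL = "https://d26smi9133w0oo.cloudfront.net/"

def image_url(key: str) -> str:
    """Build the public CDN URL for an image from its bucket key."""
    return BUCKET_URL + key

key = "room-7/1670520485-CZk4C72xBr5wPfTpwDAnG6-7648_7008-a-chicken-breaking-through-a-mirrornnotn.webp"
print(image_url(key))
```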
JSON example
https://huggingface.co/datasets/huggingface-projects/sd-multiplayer-data/blob/main/room-39.json
```json
[
{
"id": 160103269,
"room_id": "room-7",
"uuid": "CZk4C72xBr5wPfTpwDAnG6",
"x": 7648,
"y": 7008,
"prompt": "7648_7008 a chicken breaking through a mirrornnotn webp",
"time": "2022-12-08T17:28:06+00:00",
"key": "room-7/1670520485-CZk4C72xBr5wPfTpwDAnG6-7648_7008-a-chicken-breaking-through-a-mirrornnotn.webp"
}
]
| huggingface-projects/sd-multiplayer-data | [
"region:us"
] | 2022-11-02T00:57:18+00:00 | {} | 2022-12-13T14:37:41+00:00 |
7c394b430826ee4b382c888e833699dffaea5423 | # Dataset Card for "crows_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | henryscheible/crows_pairs | [
"region:us"
] | 2022-11-02T02:25:49+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "test", "num_bytes": 146765.59151193633, "num_examples": 302}, {"name": "train", "num_bytes": 586090.4084880636, "num_examples": 1206}], "download_size": 113445, "dataset_size": 732856.0}} | 2022-11-02T02:25:56+00:00 |
76bf143f6cf6aebfb72f24bc3b9e2d2b5b0a0899 | # `chinese_clean_passages_80m`
包含**8千余万**(88328203)个**纯净**中文段落,不包含任何字母、数字。\
Containing more than **80 million pure \& clean** Chinese passages, without any letters/digits/special tokens.
文本长度大部分介于50\~200个汉字之间。\
The passage length is approximately 50\~200 Chinese characters.
通过`datasets.load_dataset()`下载数据,会产生38个大小约340M的数据包,共约12GB,所以请确保有足够空间。\
Downloading the dataset will result in 38 data shards, each of which is about 340 MB (about 12 GB in total). Make sure there is enough space on your device :)
```
>>>
passage_dataset = load_dataset('beyond/chinese_clean_passages_80m')
<<<
Downloading data: 100%|█| 341M/341M [00:06<00:00, 52.0MB
Downloading data: 100%|█| 342M/342M [00:06<00:00, 54.4MB
Downloading data: 100%|█| 341M/341M [00:06<00:00, 49.1MB
Downloading data: 100%|█| 341M/341M [00:14<00:00, 23.5MB
Downloading data: 100%|█| 341M/341M [00:10<00:00, 33.6MB
Downloading data: 100%|█| 342M/342M [00:07<00:00, 43.1MB
...(38 data shards)
```
本数据集被用于训练[GENIUS模型中文版](https://huggingface.co/spaces/beyond/genius),如果这个数据集对您的研究有帮助,请引用以下论文。
This dataset is created for the pre-training of [GENIUS model](https://huggingface.co/spaces/beyond/genius), if you find this dataset useful, please cite our paper.
```
@article{guo2022genius,
title={GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation},
author={Guo, Biyang and Gong, Yeyun and Shen, Yelong and Han, Songqiao and Huang, Hailiang and Duan, Nan and Chen, Weizhu},
journal={arXiv preprint arXiv:2211.10330},
year={2022}
}
```
---
Acknowledgment:\
数据是基于[CLUE中文预训练语料集](https://github.com/CLUEbenchmark/CLUE)进行处理、过滤得到的。\
This dataset is processed/filtered from the [CLUE pre-training corpus](https://github.com/CLUEbenchmark/CLUE).
原始数据集引用:\
Original dataset citation:
```
@misc{bright_xu_2019_3402023,
author = {Bright Xu},
title = {NLP Chinese Corpus: Large Scale Chinese Corpus for NLP },
month = sep,
year = 2019,
doi = {10.5281/zenodo.3402023},
version = {1.0},
publisher = {Zenodo},
url = {https://doi.org/10.5281/zenodo.3402023}
}
```
| beyond/chinese_clean_passages_80m | [
"region:us"
] | 2022-11-02T02:53:49+00:00 | {"dataset_info": {"features": [{"name": "passage", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18979214734, "num_examples": 88328203}], "download_size": 1025261393, "dataset_size": 18979214734}} | 2022-12-06T07:09:20+00:00 |
0701ea3fa42db65b7237cab8e916a35659c5b845 | # Dataset Card for "animal-crossing-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pseeej/animal-crossing-data | [
"region:us"
] | 2022-11-02T03:30:51+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7209776.0, "num_examples": 389}], "download_size": 7181848, "dataset_size": 7209776.0}} | 2022-11-02T03:31:55+00:00 |
6b06220d4057c9f974c693567958ebc32f764d89 | # Dataset Card for "onset-drums_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gary109/onset-drums_corpora_parliament_processed | [
"region:us"
] | 2022-11-02T03:50:08+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43947, "num_examples": 283}], "download_size": 14691, "dataset_size": 43947}} | 2022-11-22T07:42:46+00:00 |
0179bb2c085b52b01ca23991c7581c136b76e0e6 | # Dataset Card for "goodreads_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dhmeltzer/goodreads_test | [
"region:us"
] | 2022-11-02T04:14:19+00:00 | {"dataset_info": {"features": [{"name": "review_text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 1010427121, "num_examples": 478033}], "download_size": 496736771, "dataset_size": 1010427121}} | 2022-11-02T04:14:57+00:00 |
dfefc099c175c50fa26da17038a2970fc6808171 | # Dataset Card for "goodreads_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dhmeltzer/goodreads_train | [
"region:us"
] | 2022-11-02T04:14:58+00:00 | {"dataset_info": {"features": [{"name": "rating", "dtype": "int64"}, {"name": "review_text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 1893978314, "num_examples": 900000}], "download_size": 928071460, "dataset_size": 1893978314}} | 2022-11-02T04:16:00+00:00 |
f03ddd3203868f65e565b39d1af1cf5e1df228f8 | Harmony22/The-stonks | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-11-02T07:38:24+00:00 | {"license": "cc-by-nc-nd-4.0"} | 2022-11-02T07:38:24+00:00 |
|
6fd649a5748873d108c8a785a38a55ddca291260 | # Dataset Card for "nymemes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | annabelng/nymemes | [
"region:us"
] | 2022-11-02T07:59:23+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3760740114.362, "num_examples": 32933}], "download_size": 4007130292, "dataset_size": 3760740114.362}} | 2022-11-02T08:02:09+00:00 |
1fafac00f14590feb94984ee7dc1adc861179fc7 | # Dataset Card for "music_genres"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/music_genres | [
"region:us"
] | 2022-11-02T10:01:46+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "song_id", "dtype": "int64"}, {"name": "genre_id", "dtype": "int64"}, {"name": "genre", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1978321742.996, "num_examples": 5076}, {"name": "train", "num_bytes": 7844298868.902, "num_examples": 19909}], "download_size": 9793244255, "dataset_size": 9822620611.898}} | 2022-11-02T10:27:30+00:00 |
e92eef786328238456e467d116c53d7d914c1e0e | KETI-AIR/aihub_admin_docs_mrc | [
"license:apache-2.0",
"region:us"
] | 2022-11-02T10:18:43+00:00 | {"license": "apache-2.0"} | 2022-11-02T10:19:12+00:00 |
|
6fd41bb2494326e92dd46a92a1aeff50fbce4fdd |
## About this dataset
The [CAES](http://galvan.usc.es/caes/) [(Parodi, 2015)](https://www.tandfonline.com/doi/full/10.1080/23247797.2015.1084685?cookieSet=1) dataset, also referred to as the “Corpus de Aprendices del Español” (CAES), is a collection of texts created by Spanish L2 learners from Spanish learning centres and universities. These students had different learning levels, different backgrounds (11 native languages) and various levels of experience with the language. We used web scraping techniques to download a portion of the full dataset, since its current website only provides content filtered by categories that have to be selected manually. The readability level of each text in CAES follows the [Common European Framework of Reference for Languages (CEFR)](https://www.coe.int/en/web/common-european-framework-reference-languages). The [raw version](https://huggingface.co/datasets/lmvasque/caes/blob/main/caes.raw.csv) of this corpus also contains information about the learners and the type of assignment they were given to create each text.
We have downloaded this dataset from its original [website](https://galvan.usc.es/caes/search) to make it available to the community. If you use this data, please credit the original author and our work as well (see citations below).
## About the splits
We have uploaded two versions of the CAES corpus:
- **caes.raw.csv**: raw data from the website with no further filtering. It includes information about the learners and the type/topic of their assignments.
- **caes.jsonl**: this data is limited to the text samples, the original readability levels, and our standardised categories derived from them: simple/complex and basic/intermediate/advanced. You can find more details about these splits in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
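As a purely illustrative sketch (the exact level-to-category mapping used for the benchmark is defined in the paper and may differ), CEFR levels could be standardised into the two category schemes like this:

```python
# Hypothetical mapping, for illustration only; consult the paper for the actual scheme.
CEFR_TO_3WAY = {
    "A1": "basic", "A2": "basic",
    "B1": "intermediate", "B2": "intermediate",
    "C1": "advanced", "C2": "advanced",
}
CEFR_TO_2WAY = {
    "A1": "simple", "A2": "simple", "B1": "simple",
    "B2": "complex", "C1": "complex", "C2": "complex",
}

print(CEFR_TO_3WAY["B1"], CEFR_TO_2WAY["B1"])  # -> intermediate simple
```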
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)"
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
We have extracted the CAES corpus from their [website](https://galvan.usc.es/caes/search). If you use this corpus, please also cite their work as follows:
```
@article{Parodi2015,
author = "Giovanni Parodi",
title = "Corpus de aprendices de español (CAES)",
journal = "Journal of Spanish Language Teaching",
volume = "2",
number = "2",
pages = "194-200",
year = "2015",
publisher = "Routledge",
doi = "10.1080/23247797.2015.1084685",
URL = "https://doi.org/10.1080/23247797.2015.1084685",
eprint = "https://doi.org/10.1080/23247797.2015.1084685"
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). | lmvasque/caes | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-02T10:40:31+00:00 | {"license": "cc-by-4.0"} | 2022-11-11T18:09:24+00:00 |
189a95069a1544141fd9c21f638b979b106460f1 | ## About this dataset
The dataset Coh-Metrix-Esp (Cuentos) [(Quispesaravia et al., 2016)](https://aclanthology.org/L16-1745/) is a collection of 100 documents consisting of 50 children's fables (“simple” texts) and 50 stories for adults (“complex” texts) scraped from the web. If you use this data, please credit the original website and our work as well (see citations below).
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
#### Coh-Metrix-Esp (Cuentos)
```
@inproceedings{quispesaravia-etal-2016-coh,
title = "{C}oh-{M}etrix-{E}sp: A Complexity Analysis Tool for Documents Written in {S}panish",
author = "Quispesaravia, Andre and
Perez, Walter and
Sobrevilla Cabezudo, Marco and
Alva-Manchego, Fernando",
booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
month = may,
year = "2016",
address = "Portoro{\v{z}}, Slovenia",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L16-1745",
pages = "4694--4698",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). | lmvasque/coh-metrix-esp | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-02T10:43:02+00:00 | {"license": "cc-by-sa-4.0"} | 2022-11-11T17:44:04+00:00 |
9a9ece7cc079929fb0902994f71e5c63f4284e11 | ## About this dataset
This dataset was collected from [HablaCultura.com](https://hablacultura.com/), a website with resources for Spanish students, whose articles are labeled by instructors following the [Common European Framework of Reference for Languages (CEFR)](https://www.coe.int/en/web/common-european-framework-reference-languages). We have scraped the freely available articles from its original [website](https://hablacultura.com/) to make them available to the community. If you use this data, please credit the original [website](https://hablacultura.com/) and our work as well.
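When per-level data is sparse, CEFR labels are often grouped into coarser readability bands before training. A minimal sketch of one such grouping (the exact banding is an assumption for illustration, not something this dataset prescribes):

```python
# Illustrative sketch: collapsing CEFR labels (A1-C2) into three coarse
# readability bands. The grouping below is an assumption, not part of
# the dataset's annotation scheme.
CEFR_TO_BAND = {
    "A1": "basic", "A2": "basic",
    "B1": "intermediate", "B2": "intermediate",
    "C1": "advanced", "C2": "advanced",
}

def coarse_band(cefr_level):
    """Return the coarse band for a CEFR label, or None if unrecognized."""
    return CEFR_TO_BAND.get(cefr_level.strip().upper())
```

For instance, `coarse_band("b2")` returns `"intermediate"`, and unrecognized labels return `None` so they can be filtered out.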
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). | lmvasque/hablacultura | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-02T10:44:43+00:00 | {"license": "cc-by-4.0"} | 2022-11-11T17:42:13+00:00 |
b8ec1babb569f217a0248fb05f8323539bf90d96 |
## About this dataset
This dataset was collected from [kwiziq.com](https://www.kwiziq.com/), a website dedicated to aiding Spanish learning through automated methods. It also provides articles at different CEFR-based levels. We have scraped the freely available articles from its original [website](https://www.kwiziq.com/) to make them available to the community. If you use this data, please credit the original [website](https://www.kwiziq.com/) and our work as well.
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
| lmvasque/kwiziq | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-02T10:45:55+00:00 | {"license": "cc-by-4.0"} | 2022-11-11T17:40:47+00:00 |