pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0–18.3M) | metadata (stringlengths 2–1.07B) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25) |
---|---|---|---|---|---|---|---|---|
null | null | {} | LJ/koelectra-base-v3-finetuned-korquad-finetuned-korquad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LMS5413/sla | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LN/Test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "afl-3.0"} | lneduchal/FinancialBERT | null | [
"license:afl-3.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | git lfs install
git clone https://huggingface.co/LPM/AI_1 | {} | LPM/AI_1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | LSP/I | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LSP/Kajaj | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LSP/ma | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LTNguyen/stsb_vn | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LTNguyen/stsb_vv | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LULU0X01/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Rick DialoGPT Model
| {"tags": ["conversational"]} | LactoseLegend/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | ### Model information
* Fine-tuning dataset: https://www.kaggle.com/seungguini/bts-youtube-comments
* Base model: GPT2 Small
* Epoch: 5
* API page: [Ainize](https://ainize.ai/teachable-ainize/gpt2-train?branch=train/cv695m9g40av0cdabuqp)
* Demo page: [End-point](https://kubecon-tabtab-ainize-team.endpoint.ainize.ai/?modelUrl=https://train-cv695m9g40av0cdabuqp-gpt2-train-teachable-ainize.endpoint.ainize.ai/predictions/gpt-2-en-small-finetune)
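A minimal usage sketch (not part of the original card; the prompt and decoding settings are illustrative):
```python
from transformers import pipeline

# Sketch: generate a BTS-comment-style continuation with the fine-tuned GPT-2.
generator = pipeline("text-generation", model="Laeyoung/BTS-comments-generator")
print(generator("This song", max_new_tokens=30, do_sample=True)[0]["generated_text"])
```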
### Teachable NLP
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune the model and get an API to use it for free.
* Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
* Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
| {} | Laeyoung/BTS-comments-generator | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Witcher1 Geralt DialoGPT small model | {"tags": ["conversational"]} | Laezor/DialoGPT-small-witcher1 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Yakuza 0 DialoGPT Model | {"tags": ["conversational"]} | Laezor/DialoGPT-small-yakuza_0 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Dialogue From Persona 3 | {"tags": ["conversational"]} | LaiJY/DialoGPTChatbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | LailaAlrajhi/BERT_XAI | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
translation | transformers | ### marianmt-th-zh_cn
* source languages: th
* target languages: zh_cn
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set scores: 15.53
## Training
Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-th-zh_cn](https://wandb.ai/cstorm125/marianmt-th-zh_cn).
```
export WANDB_PROJECT=marianmt-th-zh_cn
python train_model.py --input_fname ../data/v1/Train.csv \
    --output_dir ../models/marianmt-th-zh_cn \
    --source_lang th --target_lang zh \
    --metric_tokenize zh --fp16
```
## Usage
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Lalita/marianmt-th-zh_cn")
model = AutoModelForSeq2SeqLM.from_pretrained("Lalita/marianmt-th-zh_cn").cpu()
src_text = [
'ฉันรักคุณ',
'ฉันอยากกินข้าว',
]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
> ['我爱你', '我想吃饭。']
```
## Requirements
```
transformers==4.6.0
torch==1.8.0
``` | {"tags": ["translation", "torch==1.8.0"], "widget": [{"text": "Inference Unavailable"}]} | Lalita/marianmt-th-zh_cn | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"torch==1.8.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### marianmt-zh_cn-th
* source languages: zh_cn
* target languages: th
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set scores: syllable: 15.95, word: 8.43
## Training
Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-zh_cn-th](https://wandb.ai/cstorm125/marianmt-zh_cn-th).
```
export WANDB_PROJECT=marianmt-zh_cn-th
python train_model.py --input_fname ../data/v1/Train.csv \
    --output_dir ../models/marianmt-zh_cn-th \
    --source_lang zh --target_lang th \
    --metric_tokenize th_syllable --fp16
```
## Usage
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Lalita/marianmt-zh_cn-th")
model = AutoModelForSeq2SeqLM.from_pretrained("Lalita/marianmt-zh_cn-th").cpu()
src_text = [
'我爱你',
'我想吃米饭',
]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
> ['ผมรักคุณนะ', 'ฉันอยากกินข้าว']
```
## Requirements
```
transformers==4.6.0
torch==1.8.0
``` | {"tags": ["translation", "torch==1.8.0"], "widget": [{"text": "Inference Unavailable"}]} | Lalita/marianmt-zh_cn-th | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"torch==1.8.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | LanPham/wav2vec2-base-asr-2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | LanPham/wav2vec2-base-jp | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LanPham/wav2vec2-base-timit-demo | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Lance/mt5 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | speechbrain |
# Speaker Verification with ECAPA-TDNN embeddings on cnceleb
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on cnceleb1 + cnceleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on the cnceleb1-test set (cleaned) is:
| Release | EER(%) | minDCF |
|:-------------:|:--------------:|:--------------:|
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb")
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = classifier.encode_batch(signal)
```
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
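If your audio does not match, a minimal pre-processing sketch building on the snippet above (the file path and original rate are illustrative):
```python
import torchaudio

# Sketch: convert arbitrary audio to the 16 kHz mono input expected by encode_batch.
signal, fs = torchaudio.load('my_recording.wav')
signal = signal.mean(dim=0, keepdim=True)                    # mono channel selection
signal = torchaudio.transforms.Resample(fs, 16000)(signal)   # resample to 16 kHz
embeddings = classifier.encode_batch(signal)
```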
### Perform Speaker Verification
```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb", savedir="pretrained_models/spkrec-ecapa-cnceleb")
score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-cnceleb/example1.wav", "speechbrain/spkrec-ecapa-cnceleb/example2.flac")
```
The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
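For example, building on the snippet above:
```python
# Sketch: same call as before, with the model placed on the GPU via run_opts.
verification = SpeakerRecognition.from_hparams(
    source="LanceaKing/spkrec-ecapa-cnceleb",
    savedir="pretrained_models/spkrec-ecapa-cnceleb",
    run_opts={"device": "cuda"},
)
```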
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/LanceaKing/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/CNCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ | {"language": "zh", "license": "apache-2.0", "tags": ["speechbrain", "embeddings", "Speaker", "Verification", "Identification", "pytorch", "ECAPA", "TDNN"], "datasets": ["cnceleb"], "metrics": ["EER"]} | LanceaKing/spkrec-ecapa-cnceleb | null | [
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ECAPA",
"TDNN",
"zh",
"dataset:cnceleb",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Lancer/I | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Langame/blenderbot2-400M | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Langame/convai-gpt-j-6B-8bit | null | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-starter
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the Langame/starter dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500.0
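A sketch of these settings expressed with the standard `transformers` Trainer API (not from the original card; `output_dir` is illustrative and multi-GPU launch flags are omitted):
```python
from transformers import TrainingArguments

# Sketch: the hyperparameters listed above as TrainingArguments
# (Adam betas/epsilon match the transformers defaults).
args = TrainingArguments(
    output_dir="distilgpt2-starter",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    num_train_epochs=500.0,
)
```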
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 66.67 | 200 | 3.6445 |
| No log | 133.33 | 400 | 4.5703 |
| 1.0101 | 200.0 | 600 | 5.2109 |
| 1.0101 | 266.67 | 800 | 5.5430 |
| 0.0681 | 333.33 | 1000 | 5.7227 |
| 0.0681 | 400.0 | 1200 | 5.8672 |
| 0.0681 | 466.67 | 1400 | 5.9961 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Langame/starter"], "model-index": [{"name": "distilgpt2-starter", "results": []}]} | Langame/distilgpt2-starter | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:Langame/starter",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | Langame/gpt2-starter-2 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Langame/gpt2-starter | null | [
"transformers",
"pytorch",
"tensorboard",
"onnx",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Langame/gpt2-waiting
This fine-tuned model can generate funny waiting messages.
[Langame](https://langa.me) uses these within its platform 😛.
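A minimal usage sketch (the prompt is the card's widget example; decoding settings are illustrative):
```python
from transformers import pipeline

# Sketch: sample a funny waiting message.
generator = pipeline("text-generation", model="Langame/gpt2-waiting")
print(generator("List of funny waiting messages:", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```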
| {"language": ["en"], "license": "mit", "tags": ["text-generation"], "datasets": ["waiting-messages"], "widget": [{"text": "List of funny waiting messages:", "example_title": "Funny waiting messages"}]} | Langame/gpt2-waiting | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"en",
"dataset:waiting-messages",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | # Mengzi-BERT base fin model (Chinese)
Continued training of mengzi-bert-base on 20 GB of financial news and research reports. Masked language modeling (MLM), part-of-speech (POS) tagging, and sentence order prediction (SOP) are used as training tasks.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base-fin")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base-fin")
```
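The MLM head can also be queried directly with the fill-mask pipeline; a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# Sketch: predict the masked token with the financial-domain model.
fill = pipeline("fill-mask", model="Langboat/mengzi-bert-base-fin")
print(fill("今年的经济[MASK]势良好。"))
```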
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-bert-base-fin | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0024",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# Mengzi-BERT base model (Chinese)
Model pretrained on a 300 GB Chinese corpus. Masked language modeling (MLM), part-of-speech (POS) tagging, and sentence order prediction (SOP) are used as training tasks.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base")
```
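A fill-mask sketch using the card's widget example as input:
```python
from transformers import pipeline

# Sketch: predict the masked token (sentence taken from the card's widget).
fill = pipeline("fill-mask", model="Langboat/mengzi-bert-base")
print(fill("生活的真谛是[MASK]。"))
```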
## Scores on nine Chinese tasks (without any data augmentation)
| Model | AFQMC | TNEWS | IFLYTEK | CMNLI | WSC | CSL | CMRC2018 | C3 | CHID |
|-|-|-|-|-|-|-|-|-|-|
|RoBERTa-wwm-ext| 74.30 | 57.51 | 60.80 | 80.70 | 67.20 | 80.67 | 77.59 | 67.06 | 83.78 |
|Mengzi-BERT-base| 74.58 | 57.97 | 60.68 | 82.12 | 87.50 | 85.40 | 78.54 | 71.70 | 84.16 |
RoBERTa-wwm-ext scores are from the CLUE baseline.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0", "widget": [{"text": "\u751f\u6d3b\u7684\u771f\u8c1b\u662f[MASK]\u3002"}]} | Langboat/mengzi-bert-base | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0023",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# Mengzi-oscar-base-caption (Chinese Multi-modal Image Caption model)
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
Mengzi-oscar-base-caption is fine-tuned from the Chinese multi-modal pre-training model [Mengzi-Oscar](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) on the AIC-ICC Chinese image caption dataset.
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-oscar-base-caption | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | # Mengzi-oscar-base-retrieval (Chinese Image-text retrieval model)
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
Mengzi-oscar-base-retrieval is fine-tuned from the Chinese multi-modal pre-training model [Mengzi-Oscar](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) on the COCO-ir dataset.
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-oscar-base-retrieval | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# Mengzi-oscar-base (Chinese Multi-modal pre-training model)
Mengzi-oscar is trained based on the multi-modal pre-training model [Oscar](https://github.com/microsoft/Oscar) and is initialized from [Mengzi-Bert-Base](https://github.com/Langboat/Mengzi). 3.7M image-text pairs were used, including 0.7M Chinese image-caption pairs and 3M Chinese image-question pairs, covering 0.22M distinct images in total.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-oscar-base | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Mengzi-T5 model (Chinese)
Model pretrained on a 300 GB Chinese corpus.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base")
model = T5ForConditionalGeneration.from_pretrained("Langboat/mengzi-t5-base")
```
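A minimal generation sketch building on the snippet above (the prompt and decoding settings are illustrative, and the pretrained model may need task-specific fine-tuning for sensible output):
```python
# Sketch: encode a Chinese prompt and decode the model's continuation.
inputs = tokenizer("中国的首都位于", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```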
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-t5-base | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0025",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Language/Demo | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Language/Demo1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | keras | {} | Language/DemoRepo | null | [
"keras",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Gandalf DialoGPT Model | {"tags": ["conversational"]} | Laptop/DialoGPT-small-gandalf | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Lara/opus-mt-en-cs-finetuned-en-to-cs | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Lara/opus-mt-en-de-finetuned-en-to-de | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LarryMoto/ln_wav2vec2-large-xls-r-300m-tr-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LarryMoto/wav2vec2-large-xls-r-300m-tr-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Lation23/DiabloGPT-small-pettergriffin | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers |
## DeFormer
DeFormer is a model trained to distinguish between `de` and `dem` in Swedish sentences. The model can be tested directly in the panels on the right under **Hosted Inference API** by typing in a sentence and pressing **Compute**.
**Update 2023-05-06:** The model can now also handle dropped t's in de**t**. The new version has been trained to distinguish between de, det, and dem, as well as enda and ända.
**Instructions:**
Use only lowercase de/dem/enda/ända when testing. When training the model, all occurrences of "De" and "Dem" were converted to lowercase.
## Training data
DeFormer was trained on sentences from the European Parliament and Swedish-language Wikimedia, retrieved from [OPUS](https://opus.nlpl.eu/). These sources were chosen because they were assumed to use correct language.
Only sentences containing `de`, `dem`, `det`, `enda`, or `ända` were kept when constructing the training dataset. The table below gives descriptive statistics on the number of sentences kept from each dataset, along with the frequency of each word.
| Data source | Sentences/documents | # De | # Dem | # Det | # Enda | # Ända |
| ----------- | ----------- | ----------- | ----------- | -------------|---------- | --------- |
| [Europaparl sv.txt.gz](https://opus.nlpl.eu/download.php?f=Europarl/v8/mono/sv.txt.gz) | 1150556 | 461305 | 53726 | 824065 | 15553 | 1781 |
| [JRC-Acquis raw.sv.gz](https://opus.nlpl.eu/download.php?f=JRC-Acquis/mono/JRC-Acquis.raw.sv.gz) | 648387 | 399628 | 16539 | 326925 | 5975 | 267 |
| [Wikimedia sv.txt.gz](https://opus.nlpl.eu/download.php?f=wikimedia/v20210402/mono/sv.txt.gz) | 1615505 | 598371 | 38649 | 594038 | 24805 | 7063 |
| [Riksdagens anföranden](https://data.riksdagen.se/data/anforanden/) | 671031 | 497515 | 118069 | 659051 | 25912 | 4917 |
| [Riksdagens motioner (2014-2022)](https://data.riksdagen.se/data/dokument/) | 85124 | 85124 | 11773 | 104526 | 2740 | 453 |
| [SweDN (Superlim 2)](https://spraakbanken.gu.se/en/resources/swedn) | 93026 | 70254 | 16399 | 88087 | 5104 | 1236 |
| **Total** | **4286974** | **2112197** | **255155** | **2596692** | **80089** | **15717** |
During the training of DeFormer, random substitutions were introduced, replacing the words above with the forms they are commonly confused with. The model was then challenged to classify whether a given word belongs to one of the following categories:
1. **`ord`** (all background words that are not de/dem belong to this category)
2. **`DE`**
3. **`DEM`**
4. **`DET`**
5. **`ENDA`**
6. **`ÄNDA`**
Before the observations were fed into model training, `de` was replaced with `det` or `dem` with roughly 50 percent probability, while `dem` was changed to `de` in 40 percent of cases. Similar substitutions were made between `enda` and `ända`.
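As an illustration (a sketch, not from the original card), the model can be queried with the transformers token-classification pipeline; the sentence is one of the card's widget examples:
```python
from transformers import pipeline

# Sketch: classify each token as ord/DE/DEM/DET/ENDA/ÄNDA.
deformer = pipeline("token-classification", model="Lauler/deformer")
print(deformer("dem har sökt upp de för att prata."))
```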
## Accuracy
DeFormer was evaluated on a validation set of 31,200 sentences from the same data sources (Swedish Wikipedia + the European Parliament + JRC) the model was trained on. Random errors were introduced to challenge the model: 47 percent of the occurrences of `de` in the original sentences were changed to `dem`, while 40 percent of the occurrences of `dem` were changed to `de`. The table below shows that DeFormer is highly accurate. The few "incorrect" predictions the model outputs are almost all `de/dem som` constructions with subordinate clauses. Most of these should not actually be considered incorrect, since [both forms are accepted](https://www4.isof.se/cgi-bin/srfl/visasvar.py?sok=dem%20som&svar=79718&log_id=705355).
**Note:** The table below applies to the older variant of DeFormer, which only distinguished between `de` and `dem`.
| | Accuracy |
| ----------- | ----------- |
| de | 99.9\% |
| dem | 98.6\% | | {"widget": [{"text": "dem har s\u00f6kt upp de f\u00f6r att prata.", "example_title": "de/dem exempel 1"}, {"text": "Jag s\u00e5g de komma runt h\u00f6rnet och g\u00e5 i riktning mot dem byggnaderna.", "example_title": "de/dem exempel 2"}, {"text": "de \u00e4r ganska tr\u00e5kigt att de blivit s\u00e5h\u00e4r, men de va de \u00e4nda jag kunde g\u00f6ra", "example_title": "enda/\u00e4nda och de(t)"}]} | Lauler/deformer | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"doi:10.57967/hf/0612",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3793
- Accuracy: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3542 | 1.0 | 125 | 0.3611 | 0.839 |
| 0.2255 | 2.0 | 250 | 0.3793 | 0.8404 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model-index": [{"name": "results", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metrics": [{"type": "accuracy", "value": 0.8404, "name": "Accuracy"}]}]}]} | Lazaro97/results | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Lazaro97/roberta-base-bne-finetuned-amazon_reviews_multi | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Lazycloud/DialoGPT-medium-RickestRick | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers |
# LeBenchmark: wav2vec2 base model trained on 1K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first with more pretrained SSL models and more downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models, which can be found under our HuggingFace organization. Four wav2vec2 architectures (*Light*, *Base*, *Large*, and *xLarge*) are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure is nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification; Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is enabled very simply within SpeechBrain, as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
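Independently of SpeechBrain, the released checkpoints also load with the transformers library for frozen feature extraction; a minimal sketch (assumes a transformers-compatible config in the repo; the audio input is illustrative 16 kHz mono):
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Sketch: extract frozen wav2vec2 features from one second of dummy audio.
model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec2-FR-1K-base")
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)
audio = torch.randn(16000).numpy()  # 1 second at 16 kHz
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # [1, frames, hidden_size]
```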
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-1K-base | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 large model trained on 1K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first with more pretrained SSL models and more downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models, which can be found under our HuggingFace organization. Four wav2vec2 architectures (*Light*, *Base*, *Large*, and *xLarge*) are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure is nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification; Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is enabled very simply within SpeechBrain, as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-1K-large | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 base model trained on 2.6K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first with more pretrained SSL models and more downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models, which can be found under our HuggingFace organization. Four wav2vec2 architectures (*Light*, *Base*, *Large*, and *xLarge*) are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure is nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification; Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is enabled very simply within SpeechBrain, as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-2.6K-base | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 base model trained on 3K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first with more pretrained SSL models and more downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models, which can be found under our HuggingFace organization. Four wav2vec2 architectures (*Light*, *Base*, *Large*, and *xLarge*) are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure is nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification; Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is enabled very simply within SpeechBrain, as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-3K-base | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 large model trained on 3K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first with more pretrained SSL models and more downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models under our HuggingFace organization: four wav2vec2 architectures (*Light*, *Base*, *Large*, and *xLarge*) coupled with our small (1K), medium (3K), large (7K), and extra-large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features may appear depending on the involvement of Fairseq and Hugging Face in this area.
## Integrate with SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models have recently gained popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e., our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples include E2E ASR with CTC+Att+Language Models, Speaker Recognition or Verification, and Source Separation.
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. SpeechBrain makes this very simple, as only a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-3K-large | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 base model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later one (LeBenchmark 2.0) extends the first in both the number of pre-trained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models under our HuggingFace organization: four wav2vec2 architectures (*Light*, *Base*, *Large*, and *xLarge*) coupled with our small (1K), medium (3K), large (7K), and extra-large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features may appear depending on the involvement of Fairseq and Hugging Face in this area.
## Integrate with SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models have recently gained popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e., our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples include E2E ASR with CTC+Att+Language Models, Speaker Recognition or Verification, and Source Separation.
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. SpeechBrain makes this very simple, as only a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-7K-base | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 large model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later one (LeBenchmark 2.0) extends the first in both the number of pre-trained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models under our HuggingFace organization: four wav2vec2 architectures (*Light*, *Base*, *Large*, and *xLarge*) coupled with our small (1K), medium (3K), large (7K), and extra-large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features may appear depending on the involvement of Fairseq and Hugging Face in this area.
## Integrate with SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models have recently gained popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e., our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples include E2E ASR with CTC+Att+Language Models, Speaker Recognition or Verification, and Source Separation.
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. SpeechBrain makes this very simple, as only a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
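As a rough illustration of the first integration option above, the sketch below wraps the 7K large model as a frozen feature extractor inside SpeechBrain. It assumes a SpeechBrain 0.5-era API: the `HuggingFaceWav2Vec2` module path, its arguments, and the `save_path` value are assumptions that may differ in your installed version, so treat this as a sketch rather than a verified recipe.
```
# Hypothetical sketch of option 1 (frozen wav2vec2 features in SpeechBrain).
# The wrapper name and arguments are assumptions based on SpeechBrain 0.5;
# consult the SpeechBrain documentation for your installed version.
import torch
from speechbrain.lobes.models.huggingface_wav2vec import HuggingFaceWav2Vec2

encoder = HuggingFaceWav2Vec2(
    "LeBenchmark/wav2vec2-FR-7K-large",
    save_path="./pretrained_wav2vec2",  # placeholder cache directory
    freeze=True,                        # frozen encoder = pure feature extraction
)

wavs = torch.rand(1, 16000)  # one second of dummy 16 kHz audio
features = encoder(wavs)     # (batch, frames, 1024) for the large architecture
```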
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-7K-large | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | LeBoogle/CibaBot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LeJon/DiabloGPT-small-ultron | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LeNerd46/AnotherNewTestModel | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LeanSapien/DialiGPT-small-rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LeanSapien/DialoGPT-small-rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LeanSapien/DialoGPT-small-rickest | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Leehaiddong/BinaryGame | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Leesin/jhgg | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Legendarysoren/Twitter | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | LegolasTheElf/Wav2Vec2_XLSR_Bengali_1b | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | LegolasTheElf/Wav2Vec2_XLSR_Bengali_V2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | LegolasTheElf/Wav2Vec2_XLSR_Bengali_V3 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Wer: 0.6273
- Cer: 0.2093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.6969 | 9.52 | 400 | 3.3092 | 1.0 | 0.9800 |
| 1.7721 | 19.05 | 800 | 0.7769 | 0.7045 | 0.2367 |
| 0.6384 | 28.57 | 1200 | 0.6567 | 0.6273 | 0.2093 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
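## Reproducing these hyperparameters
For readers who want to set up a comparable run with the transformers `Trainer`, the hyperparameters listed above translate roughly into the following `TrainingArguments`. This is a hypothetical sketch, not the script used for this run; the `output_dir` is a placeholder.
```
# Hypothetical sketch mapping the card's hyperparameters onto
# transformers' TrainingArguments; not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-300m-hi-cv7",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # total train batch size 64
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=35,
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
)
```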
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "Wav2Vec2_xls_r_300m_hi_cv7", "results": []}]} | LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {} | LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7_part2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
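## Example usage
A minimal inference sketch with transformers is shown below. It assumes the repository ships a processor (tokenizer plus feature extractor) configuration, and `hindi.wav` stands in for any 16 kHz mono Hindi recording.
```
# Unofficial inference sketch; "hindi.wav" is a placeholder recording.
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = sf.read("hindi.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```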
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "Openslr Multilingual", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "model-index": [{"name": "Wav2Vec2_xls_r_300m_hi_final", "results": []}]} | LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Openslr Multilingual",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 | {"language": ["hi"], "license": "apache-2.0", "tags": ["Openslr Multilingual", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "Wav2Vec2_xls_r_300m_hi_final", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 34.21, "name": "Test WER"}]}]}]} | LegolasTheElf/Wav2Vec2_xls_r_lm_300m_hi | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Openslr Multilingual",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_openslr_Hi_V2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Harveenchadha/indic-voice](https://huggingface.co/datasets/Harveenchadha/indic-voice) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3184
- Wer: 0.3104
- Cer: 0.0958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:------:|:---------------:|:------:|
| 7.1097 | 0.48 | 300 | 0.9965 | 3.3989 | 1.0 |
| 3.0235 | 0.96 | 600 | 0.3163 | 1.3183 | 0.7977 |
| 1.1419 | 1.44 | 900 | 0.1913 | 0.6416 | 0.5543 |
| 0.8242 | 1.92 | 1200 | 0.1608 | 0.5063 | 0.4804 |
| 0.6876 | 2.56 | 1600 | 0.1387 | 0.4401 | 0.4280 |
| 0.5868 | 3.21 | 2000 | 0.1249 | 0.3940 | 0.3907 |
| 0.5285 | 3.85 | 2400 | 0.1200 | 0.3661 | 0.3763 |
| 0.5           | 4.49  | 2800 | 0.1136 | 0.3528          | 0.3610 |
| 0.4538        | 5.13  | 3200 | 0.1086 | 0.3403          | 0.3485 |
| 0.4165        | 5.77  | 3600 | 0.1062 | 0.3335          | 0.3439 |
| 0.3989        | 6.41  | 4000 | 0.1036 | 0.3264          | 0.3340 |
| 0.3679        | 7.05  | 4400 | 0.1013 | 0.3256          | 0.3287 |
| 0.3517        | 7.69  | 4800 | 0.1002 | 0.3212          | 0.3223 |
| 0.3357        | 8.33  | 5200 | 0.0986 | 0.3173          | 0.3196 |
| 0.3225        | 8.97  | 5600 | 0.0985 | 0.3142          | 0.3177 |
| 0.3057        | 9.62  | 6000 | 0.0975 | 0.3199          | 0.3156 |
| 0.2972        | 10.26 | 6400 | 0.0967 | 0.3139          | 0.3128 |
| 0.2881        | 10.9  | 6800 | 0.0957 | 0.3184          | 0.3107 |
| 0.2791        | 11.54 | 7200 | 0.0958 | 0.3184          | 0.3104 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "Harveenchadha/indic-voice", "generated_from_trainer"], "model-index": [{"name": "Wav2Vec2_xls_r_openslr_Hi_V2", "results": []}]} | LegolasTheElf/Wav2Vec2_xls_r_openslr_Hi_V2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Harveenchadha/indic-voice",
"generated_from_trainer",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {} | LegolasTheElf/Wav2vec2_XLSR_Bengali | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LegolasTheElf/content | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | LegolasTheElf/xls-r-hi-common | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Leisa/distilbert-base-uncased-finetuned-imdb-accelerate | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5561 | 1.0 | 782 | 2.3738 |
| 2.4474 | 2.0 | 1564 | 2.3108 |
| 2.4037 | 3.0 | 2346 | 2.3017 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
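## Example usage
A quick way to try the checkpoint is the fill-mask pipeline; the sentence below is an arbitrary example, not drawn from the training data.
```
# Illustrative fill-mask call; the input sentence is made up.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Leisa/distilbert-base-uncased-finetuned-imdb")
for prediction in unmasker("This movie was a great [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```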
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "model-index": [{"name": "distilbert-base-uncased-finetuned-imdb", "results": []}]} | Leisa/distilbert-base-uncased-finetuned-imdb | null | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | Leisa/dummy-model | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Leisa/marian-finetuned-kde4-en-to-fr-accelerate | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
translation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Bleu: 52.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
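## Example usage
A minimal usage sketch via the translation pipeline; the input sentence is arbitrary and merely chosen to resemble the KDE4 domain.
```
# Illustrative translation call; the sentence is an arbitrary example.
from transformers import pipeline

translator = pipeline("translation", model="Leisa/marian-finetuned-kde4-en-to-fr")
print(translator("This plugin lets you translate web pages.")[0]["translation_text"])
```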
| {"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "datasets": ["kde4"], "metrics": ["bleu"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.94538305859332, "name": "Bleu"}]}]}]} | Leisa/marian-finetuned-kde4-en-to-fr | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Leisa/mt5-small-finetuned-amazon-en-es | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-842h-luxembourgish-4h
## Model description
We continued pre-training a wav2vec 2.0 large XLSR-53 checkpoint on 842h of unlabelled Luxembourgish speech
collected from [RTL.lu](https://www.rtl.lu/). The model was then fine-tuned on 4h of labelled
Luxembourgish speech from the same domain.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
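## Usage
A rough usage sketch via the ASR pipeline is shown below; `recording.wav` is a placeholder for a 16 kHz mono Luxembourgish recording.
```
# Unofficial sketch; "recording.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Lemswasabi/wav2vec2-large-xlsr-53-842h-luxembourgish-4h",
)
print(asr("recording.wav")["text"])
```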
## Citation
This model is a result of our paper `IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS` submitted to the [IEEE SLT 2022 workshop](https://slt2022.org/)
```
@misc{lb-wav2vec2,
author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt.},
keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language},
title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS},
year = {2022},
copyright = {2023 IEEE}
}
``` | {"language": ["lb"], "license": "mit", "tags": ["automatic-speech-recognition", "generated_from_trainer"], "metrics": ["wer"], "pipeline_tag": "automatic-speech-recognition"} | Lemswasabi/wav2vec2-large-xlsr-53-842h-luxembourgish-4h | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"lb",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | LenaBTC67/Che_Prueba_Piloto | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.25 | 1.0 | 1273 | 0.8052 |
| 1.1199 | 2.0 | 2546 | 0.7950 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv", "results": []}]} | LenaSchmidt/distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0325 | 1.0 | 585 | 1.7520 |
| 1.609 | 2.0 | 1170 | 1.7713 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
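## Example usage
A question-answering pipeline sketch is shown below; the question/context pair is invented for illustration.
```
# Illustrative QA call; the question and context are made-up examples.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="LenaSchmidt/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="Which base model was fine-tuned?",
    context="This checkpoint is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], round(result["score"], 3))
```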
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | LenaSchmidt/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
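## Example usage
An illustrative generation sketch; the prompt and sampling settings are arbitrary.
```
# Illustrative generation call; prompt and settings are arbitrary.
from transformers import pipeline

generator = pipeline("text-generation", model="LenaT/distilgpt2-finetuned-wikitext2")
print(generator("The history of the Roman Empire", max_length=40, do_sample=True)[0]["generated_text"])
```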
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned-wikitext2", "results": []}]} | LenaT/distilgpt2-finetuned-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first
This model is a fine-tuned version of [longformer-gottbert-base-8192-aw512](https://huggingface.co/longformer-8192-aw512-gottbert-base) on a 500-million-token subset of the German part of the OSCAR dataset.
It achieves the following results on the custom evaluation set:
- Loss: 1.4981
## Model description
The weights of the model are initialized from the German version of RoBERTa, [gottbert-base](https://huggingface.co/uklfr/gottbert-base).
The local attention windows have a fixed size of 512 tokens across all layers.
The maximum sequence length is 8192.
## Intended uses & limitations
Longformer models enable processing long texts using a mixture of local attention on each subword token and task-specific global attention on a subset of the tokens.
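As an unofficial sketch, a long document can be encoded as follows; placing global attention only on the first token is a common default for Longformer-style models, not a setting prescribed by this card.
```
# Hypothetical encoding sketch; global attention on the first token is a
# common default, not a setting prescribed by this model card.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "LennartKeller/longformer-gottbert-base-8192-aw512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "Ein sehr langer deutscher Text. " * 300  # stands in for a real document
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=8192)

global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the first token

with torch.no_grad():
    outputs = model(**inputs, global_attention_mask=global_attention_mask)
```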
## Training and evaluation data
The [OSCAR](https://oscar-corpus.com) dataset is a freely available corpus of filtered web texts from the Common Crawl in various languages. We used the 2017 version of the dataset.
## Training procedure
The model was trained with masked language modeling for 3 epochs on a custom 500-million-token subset of the German portion of the [OSCAR](https://oscar-corpus.com) dataset.
It was validated using 5% of the original subset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5636 | 0.1 | 500 | 2.2399 |
| 2.0426 | 0.2 | 1000 | 1.8841 |
| 1.9653 | 0.3 | 1500 | 1.7807 |
| 1.9422 | 0.4 | 2000 | 1.7206 |
| 1.9323 | 0.49 | 2500 | 1.6800 |
| 1.7587 | 0.59 | 3000 | 1.6507 |
| 1.7239 | 0.69 | 3500 | 1.6316 |
| 1.7452 | 0.79 | 4000 | 1.6137 |
| 1.7415 | 0.89 | 4500 | 1.5983 |
| 1.7733 | 0.99 | 5000 | 1.5830 |
| 1.7656 | 1.09 | 5500 | 1.5735 |
| 1.6543 | 1.19 | 6000 | 1.5643 |
| 1.7131 | 1.28 | 6500 | 1.5546 |
| 1.6456 | 1.38 | 7000 | 1.5503 |
| 1.716 | 1.48 | 7500 | 1.5422 |
| 1.806 | 1.58 | 8000 | 1.5377 |
| 1.8407 | 1.68 | 8500 | 1.5327 |
| 1.6371 | 1.78 | 9000 | 1.5278 |
| 1.6453 | 1.88 | 9500 | 1.5231 |
| 1.7754 | 1.98 | 10000 | 1.5214 |
| 1.7695 | 2.08 | 10500 | 1.5165 |
| 1.7109 | 2.17 | 11000 | 1.5138 |
| 1.6992 | 2.27 | 11500 | 1.5107 |
| 1.6707 | 2.37 | 12000 | 1.5097 |
| 1.6835 | 2.47 | 12500 | 1.5040 |
| 1.7171 | 2.57 | 13000 | 1.5041 |
| 1.7257 | 2.67 | 13500 | 1.4990 |
| 1.6287 | 2.77 | 14000 | 1.5017 |
| 1.7737 | 2.87 | 14500 | 1.4983 |
| 1.4002 | 2.96 | 15000 | 1.4992 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "first", "results": []}]} | LennartKeller/longformer-gottbert-base-8192-aw512 | null | [
"transformers",
"pytorch",
"safetensors",
"longformer",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first
This model is a fine-tuned version of [nystromformer-gottbert-base-8192](https://huggingface.co/nystromformer-gottbert-base-8192) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7133 | 0.1 | 500 | 6.6155 |
| 2.7876 | 0.2 | 1000 | 2.5542 |
| 2.1831 | 0.3 | 1500 | 2.0356 |
| 2.0316 | 0.4 | 2000 | 1.8793 |
| 2.0678 | 0.49 | 2500 | 1.7954 |
| 1.8182 | 0.59 | 3000 | 1.7473 |
| 1.7393 | 0.69 | 3500 | 1.7081 |
| 1.7586 | 0.79 | 4000 | 1.6787 |
| 1.7417 | 0.89 | 4500 | 1.6563 |
| 1.8256 | 0.99 | 5000 | 1.6370 |
| 1.7957 | 1.09 | 5500 | 1.6219 |
| 1.6876 | 1.19 | 6000 | 1.6084 |
| 1.7172 | 1.28 | 6500 | 1.5941 |
| 1.6564 | 1.38 | 7000 | 1.5881 |
| 1.732 | 1.48 | 7500 | 1.5757 |
| 1.8272 | 1.58 | 8000 | 1.5692 |
| 1.7951 | 1.68 | 8500 | 1.5617 |
| 1.6669 | 1.78 | 9000 | 1.5546 |
| 1.6489 | 1.88 | 9500 | 1.5458 |
| 1.772 | 1.98 | 10000 | 1.5439 |
| 1.7424 | 2.08 | 10500 | 1.5379 |
| 1.7077 | 2.17 | 11000 | 1.5322 |
| 1.6926 | 2.27 | 11500 | 1.5294 |
| 1.656 | 2.37 | 12000 | 1.5274 |
| 1.7002 | 2.47 | 12500 | 1.5201 |
| 1.7102 | 2.57 | 13000 | 1.5197 |
| 1.7158 | 2.67 | 13500 | 1.5162 |
| 1.6081 | 2.77 | 14000 | 1.5169 |
| 1.754 | 2.87 | 14500 | 1.5140 |
| 1.3588 | 2.96 | 15000 | 1.5135 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "first", "results": []}]} | LennartKeller/nystromformer-gottbert-base-8192 | null | [
"transformers",
"pytorch",
"safetensors",
"nystromformer",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Kobayashi DialoGPT Model | {"tags": ["conversational"]} | Lenza/DialoGPT-medium-Kobayashi | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Leo625/DialogGPT-small-Rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
summarization | transformers | ## Hyperparameters
```
{
    "num_train_epochs": 3,
    "seed": 7,
    "summary_column": "output_text",
    "text_column": "text",
    "encoder_max_length": 512,
    "decoder_max_length": 36,
    "batch_size": 256
}
```
## Usage
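A sketch of a title-generation call via the summarization pipeline is shown below. The model id follows this repository, `max_length` mirrors the decoder length above, and the article string is a placeholder.
```
# Illustrative call; the article string is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/beto2beto-cc-news-es-titles")
article = "Texto completo de una noticia en español ..."
print(summarizer(article, min_length=5, max_length=36)[0]["summary_text"])
```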
## Results
| key | value |
| --- | ----- |
| eval_loss | 4.539857387542725 |
| eval_rouge1 | 23.7478 |
| eval_rouge2 | 7.3616 |
| eval_rougeL | 20.6615 |
| eval_rougeLsum | 20.7371 |
| eval_gen_len | 16.1806 |
| test_loss | 4.515065670013428 |
| test_rouge1 | 23.7415 |
| test_rouge2 | 7.3548 |
| test_rougeL | 20.746 |
| test_rougeLsum | 20.8149 |
| test_gen_len | 16.1926 |
| {"language": "es", "license": "apache-2.0", "tags": ["summarization", "spanish", "beto2beto", "encoder-decoder"], "datasets": ["LeoCordoba/CC-NEWS-ES-titles"], "widget": [{"text": "La chocotorta, el tradicional y pr\u00e1ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por cr\u00edticos de restaurants internacionales, a casi 40 a\u00f1os de su creaci\u00f3n. El r\u00e1nking Taste Atlas ubic\u00f3 primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. \u201cEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspir\u00f3 en el famoso tiramis\u00fa italiano. Est\u00e1 elaborado con tres ingredientes b\u00e1sicos argentinos: galletas de chocolate, dulce de leche y queso crema\u201d, explica la p\u00e1gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaci\u00f3n, super\u00f3 tambi\u00e9n a los waffles belgas y el zserb\u00f3 h\u00fangaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompa\u00f1a al list\u00f3n dorado de \u201cpostre n\u00famero uno\u201c, los expertos ense\u00f1an adem\u00e1s c\u00f3mo se hacen las chocotortas, paso por paso. \u201cLas galletas se ablandan en leche y se cubren con una combinaci\u00f3n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, caf\u00e9 o incluso licor de caf\u00e9\u201d, detallan. Por \u00faltimo, adjudican su creaci\u00f3n a una \u201ccampa\u00f1a de m\u00e1rketing\u201d dise\u00f1ada para promover las galletitas ic\u00f3nicas que le dan su nombre. La chocotorta, infaltable en los cumplea\u00f1os argentinos, fue creada en 1982 por una creativa de las agencias m\u00e1s importantes del pa\u00eds, Marit\u00e9 Mabraga\u00f1a."}], "model-index": [{"name": "beto2beto-ccnews-titles-es", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "CCNEWS-ES-titles", "type": "LeoCordoba/CC-NEWS-ES-titles"}, "metrics": [{"type": "rogue-1", "value": 23.7478, "name": "Validation ROGUE-1"}, {"type": "rogue-2", "value": 7.3616, "name": "Validation ROGUE-2"}, {"type": "rogue-l", "value": 20.6615, "name": "Validation ROGUE-L"}, {"type": "rogue-lsum", "value": 20.7371, "name": "Validation ROGUE-Lsum"}, {"type": "rogue-1", "value": 23.7415, "name": "Test ROGUE-1"}, {"type": "rogue-2", "value": 7.3548, "name": "Test ROGUE-2"}, {"type": "rogue-l", "value": 20.746, "name": "Test ROGUE-L"}, {"type": "rogue-lsum", "value": 20.8149, "name": "Test ROGUE-Lsum"}]}]}]} | LeoCordoba/beto2beto-cc-news-es-titles | null | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"spanish",
"beto2beto",
"es",
"dataset:LeoCordoba/CC-NEWS-ES-titles",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
summarization | transformers | ## beto2beto-mlsum
This model was trained on the Spanish section of MLSum: https://paperswithcode.com/sota/abstractive-text-summarization-on-mlsum.
## Hyperparameters
```
{
    "dataset_config": "es",
    "dataset_name": "mlsum",
    "do_eval": true,
    "do_predict": true,
    "do_train": true,
    "fp16": true,
    "max_target_length": 64,
    "num_train_epochs": 10,
    "per_device_eval_batch_size": 4,
    "per_device_train_batch_size": 4,
    "predict_with_generate": true,
    "sagemaker_container_log_level": 20,
    "sagemaker_program": "run_summarization.py",
    "seed": 7,
    "summary_column": "summary",
    "text_column": "text"
}
```
## Usage
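A minimal summarization sketch (unofficial; `max_length` mirrors the `max_target_length` above, and the text is a placeholder Spanish news article):
```
# Illustrative call; the text is a placeholder Spanish news article.
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/beto2beto-mlsum")
text = "Texto completo de una noticia en español ..."
print(summarizer(text, min_length=5, max_length=64)[0]["summary_text"])
```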
## Results
| metric | score |
| --- | ----- |
| validation_loss | 2.5021677017211914 |
| validation_rouge1 | 26.1256 |
| validation_rouge2 | 9.2552 |
| validation_rougeL | 21.4899 |
| validation_rougeLsum | 21.8194 |
| test_loss | 2.57672381401062 |
| test_rouge1 | 25.8639 |
| test_rouge2 | 8.911 |
| test_rougeL | 21.2426 |
| test_rougeLsum | 21.5859 |
| {"language": "es", "license": "apache-2.0", "tags": ["summarization", "spanish", "encoder-decoder", "beto"], "datasets": ["mlsum - es"], "widget": [{"text": "La chocotorta, el tradicional y pr\u00e1ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por cr\u00edticos de restaurants internacionales, a casi 40 a\u00f1os de su creaci\u00f3n. El r\u00e1nking Taste Atlas ubic\u00f3 primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. \u201cEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspir\u00f3 en el famoso tiramis\u00fa italiano. Est\u00e1 elaborado con tres ingredientes b\u00e1sicos argentinos: galletas de chocolate, dulce de leche y queso crema\u201d, explica la p\u00e1gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaci\u00f3n, super\u00f3 tambi\u00e9n a los waffles belgas y el zserb\u00f3 h\u00fangaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompa\u00f1a al list\u00f3n dorado de \u201cpostre n\u00famero uno\", los expertos ense\u00f1an adem\u00e1s c\u00f3mo se hacen las chocotortas, paso por paso. \u201cLas galletas se ablandan en leche y se cubren con una combinaci\u00f3n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, caf\u00e9 o incluso licor de caf\u00e9\u201d, detallan. Por \u00faltimo, adjudican su creaci\u00f3n a una \u201ccampa\u00f1a de m\u00e1rketing\u201d dise\u00f1ada para promover las galletitas ic\u00f3nicas que le dan su nombre. La chocotorta, infaltable en los cumplea\u00f1os argentinos, fue creada en 1982 por una creativa de las agencias m\u00e1s importantes del pa\u00eds, Marit\u00e9 Mabraga\u00f1a."}], "model-index": [{"name": "beto2beto-mlsum", "results": [{"task": {"type": "summarization", "name": "abstractive summarization"}, "dataset": {"name": "mlsum-es", "type": "mlsum", "args": "es"}, "metrics": [{"type": "rouge1", "value": 25.8639, "name": "rouge1"}, {"type": "rouge2", "value": 8.911, "name": "rouge2"}, {"type": "rougeL", "value": 21.2426, "name": "rougeL"}, {"type": "rougeLsum", "value": 21.5859, "name": "rougeLsum"}]}]}]} | LeoCordoba/beto2beto-mlsum | null | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"spanish",
"beto",
"es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | ## beto2beto
Usage example here: https://colab.research.google.com/drive/18a2ZfF1e_Kyyydlv8INQIkJbv294xcAm?usp=sharing
Trained for 3 epochs on CC-NEWS-ES (2019), approximately 68,000 steps. Encoder max length: 40; decoder max length: 128.
## Hyperparameters
## Usage
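Since this is an encoder-decoder checkpoint, the `text2text-generation` pipeline is a reasonable entry point; the prompt below is arbitrary, and `max_length` mirrors the decoder length mentioned above.
```
# Illustrative call; the prompt is arbitrary.
from transformers import pipeline

generator = pipeline("text2text-generation", model="LeoCordoba/beto2beto")
print(generator("La inteligencia artificial", max_length=128)[0]["generated_text"])
```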
## Results
| key | value |
| --- | ----- |
| test_loss | 2.65148806571960452 |
| {"language": "es", "license": "apache-2.0", "tags": ["text-generation", "spanish", "encoder-decoder", "beto"], "datasets": ["LeoCordoba/CC-NEWS-ES"]} | LeoCordoba/beto2beto | null | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"text-generation",
"spanish",
"beto",
"es",
"dataset:LeoCordoba/CC-NEWS-ES",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
summarization | transformers |
## Hyperparameters
```
{
    "max_target_length": 64,
    "model_name_or_path": "google/mt5-small",
    "num_train_epochs": 3,
    "seed": 7,
    "summary_column": "output_text",
    "text_column": "text",
    "encoder_max_length": 512,
    "decoder_max_length": 36,
    "batch_size": 128
}
```
## Usage
```
article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """
# Load the summarization pipeline backed by this checkpoint
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-cc-news-es-titles")
# Returns a list of dicts, e.g. [{"summary_text": "..."}]
summarizer(article, min_length=5, max_length=64)
```
## Results
| metric | score |
| --- | ----- |
| eval_loss | 2.879085063934326 |
| eval_rouge1 | 22.6623 |
| eval_rouge2 | 7.7894 |
| eval_rougeL | 19.8015 |
| eval_rougeLsum | 19.8092 |
| eval_gen_len | 17.1839 |
| test_loss | 2.878429412841797 |
| test_rouge1 | 22.9263 |
| test_rouge2 | 7.9146 |
| test_rougeL | 20.0272 |
| test_rougeLsum | 20.0387 |
| test_gen_len | 17.1696 | | {"language": "es", "license": "apache-2.0", "tags": ["summarization", "mt5", "spanish"], "datasets": ["LeoCordoba/CC-NEWS-ES-titles"], "widget": [{"text": "La chocotorta, el tradicional y pr\u00e1ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por cr\u00edticos de restaurants internacionales, a casi 40 a\u00f1os de su creaci\u00f3n. El r\u00e1nking Taste Atlas ubic\u00f3 primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. \u201cEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspir\u00f3 en el famoso tiramis\u00fa italiano. Est\u00e1 elaborado con tres ingredientes b\u00e1sicos argentinos: galletas de chocolate, dulce de leche y queso crema\u201d, explica la p\u00e1gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaci\u00f3n, super\u00f3 tambi\u00e9n a los waffles belgas y el zserb\u00f3 h\u00fangaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompa\u00f1a al list\u00f3n dorado de \u201cpostre n\u00famero uno\u201c, los expertos ense\u00f1an adem\u00e1s c\u00f3mo se hacen las chocotortas, paso por paso. \u201cLas galletas se ablandan en leche y se cubren con una combinaci\u00f3n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, caf\u00e9 o incluso licor de caf\u00e9\u201d, detallan. Por \u00faltimo, adjudican su creaci\u00f3n a una \u201ccampa\u00f1a de m\u00e1rketing\u201d dise\u00f1ada para promover las galletitas ic\u00f3nicas que le dan su nombre. La chocotorta, infaltable en los cumplea\u00f1os argentinos, fue creada en 1982 por una creativa de las agencias m\u00e1s importantes del pa\u00eds, Marit\u00e9 Mabraga\u00f1a."}], "model-index": [{"name": "mt5-small-ccnews-titles-es", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "CCNEWS-ES-titles", "type": "LeoCordoba/CC-NEWS-ES-titles"}, "metrics": [{"type": "rogue-1", "value": 22.6623, "name": "Validation ROGUE-1"}, {"type": "rogue-2", "value": 7.7894, "name": "Validation ROGUE-2"}, {"type": "rogue-l", "value": 19.8015, "name": "Validation ROGUE-L"}, {"type": "rogue-lsum", "value": 19.8092, "name": "Validation ROGUE-Lsum"}, {"type": "rogue-1", "value": 22.9263, "name": "Test ROGUE-1"}, {"type": "rogue-2", "value": 7.9146, "name": "Test ROGUE-2"}, {"type": "rogue-l", "value": 20.0272, "name": "Test ROGUE-L"}, {"type": "rogue-lsum", "value": 20.0387, "name": "Test ROGUE-Lsum"}]}]}]} | LeoCordoba/mt5-small-cc-news-es-titles | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"spanish",
"es",
"dataset:LeoCordoba/CC-NEWS-ES-titles",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
summarization | transformers | ## mt5-small-mlsum
This model was fine-tuned from mt5-small on the Spanish section of MLSum: https://paperswithcode.com/sota/abstractive-text-summarization-on-mlsum
## Hyperparameters
```
{
  "dataset_config": "es",
  "dataset_name": "mlsum",
  "do_eval": true,
  "do_predict": true,
  "do_train": true,
  "fp16": true,
  "max_target_length": 64,
  "model_name_or_path": "google/mt5-small",
  "num_train_epochs": 10,
  "output_dir": "/opt/ml/checkpoints",
  "per_device_eval_batch_size": 4,
  "per_device_train_batch_size": 4,
  "predict_with_generate": true,
  "sagemaker_container_log_level": 20,
  "sagemaker_program": "run_summarization.py",
  "save_strategy": "epoch",
  "seed": 7,
  "summary_column": "summary",
  "text_column": "text"
}
```
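The `sagemaker_program` and `sagemaker_container_log_level` keys indicate the run was launched as a SageMaker training job. A hedged sketch of how such a job could be configured with the SageMaker Hugging Face estimator; the instance type, framework versions, and script location are assumptions, not values recorded in this card:
```
from sagemaker.huggingface import HuggingFace

estimator = HuggingFace(
    entry_point="run_summarization.py",
    source_dir="./examples/pytorch/summarization",  # hypothetical script location
    role="<your-sagemaker-execution-role>",         # placeholder
    instance_type="ml.p3.2xlarge",                  # assumption: any single-GPU instance
    instance_count=1,
    transformers_version="4.6",                     # assumption
    pytorch_version="1.7",                          # assumption
    py_version="py36",                              # assumption
    hyperparameters={
        "model_name_or_path": "google/mt5-small",
        "dataset_name": "mlsum",
        "dataset_config": "es",
        "num_train_epochs": 10,
        "per_device_train_batch_size": 4,
        "per_device_eval_batch_size": 4,
        "max_target_length": 64,
        "fp16": True,
        "predict_with_generate": True,
        "seed": 7,
        "output_dir": "/opt/ml/checkpoints",
    },
)
estimator.fit()
```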
## Usage
```
article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """
# Load the summarization pipeline backed by this checkpoint
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-mlsum")
summarizer(article, min_length=5, max_length=64)
```
result: [{'summary_text': 'El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche'}]
## Results
| metric | score |
| --- | ----- |
| eval_rouge1 | 26.4352 |
| eval_rouge2 | 8.9293 |
| eval_rougeL | 21.2622 |
| eval_rougeLsum | 21.5518 |
| test_rouge1 | 26.0756 |
| test_rouge2 | 8.4669 |
| test_rougeL | 20.8167 |
| test_rougeLsum | 21.0822 |
| {"language": "es", "license": "apache-2.0", "tags": ["summarization", "sagemaker", "mt5", "spanish"], "datasets": ["mlsum - es"], "widget": [{"text": "La chocotorta, el tradicional y pr\u00e1ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por cr\u00edticos de restaurants internacionales, a casi 40 a\u00f1os de su creaci\u00f3n. El r\u00e1nking Taste Atlas ubic\u00f3 primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. \u201cEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspir\u00f3 en el famoso tiramis\u00fa italiano. Est\u00e1 elaborado con tres ingredientes b\u00e1sicos argentinos: galletas de chocolate, dulce de leche y queso crema\u201d, explica la p\u00e1gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaci\u00f3n, super\u00f3 tambi\u00e9n a los waffles belgas y el zserb\u00f3 h\u00fangaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompa\u00f1a al list\u00f3n dorado de \u201cpostre n\u00famero uno\u201c, los expertos ense\u00f1an adem\u00e1s c\u00f3mo se hacen las chocotortas, paso por paso. \u201cLas galletas se ablandan en leche y se cubren con una combinaci\u00f3n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, caf\u00e9 o incluso licor de caf\u00e9\u201d, detallan. Por \u00faltimo, adjudican su creaci\u00f3n a una \u201ccampa\u00f1a de m\u00e1rketing\u201d dise\u00f1ada para promover las galletitas ic\u00f3nicas que le dan su nombre. La chocotorta, infaltable en los cumplea\u00f1os argentinos, fue creada en 1982 por una creativa de las agencias m\u00e1s importantes del pa\u00eds, Marit\u00e9 Mabraga\u00f1a."}], "model-index": [{"name": "mt5-small-mlsum", "results": [{"task": {"type": "summarization", "name": "abstractive summarization"}, "dataset": {"name": "mlsum-es", "type": "mlsum", "args": "es"}, "metrics": [{"type": "rouge1", "value": 26.0756, "name": "rouge1"}, {"type": "rouge2", "value": 8.4669, "name": "rouge2"}, {"type": "rougeL", "value": 20.8167, "name": "rougeL"}, {"type": "rougeLsum", "value": 21.0822, "name": "rougeLsum"}]}]}]} | LeoCordoba/mt5-small-mlsum | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"sagemaker",
"spanish",
"es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | A document classifier trained on the THUC dataset, supporting 14 categories. | {} | LeoFeng/ChineseSequenceClassification | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
This is Chandler.
Chandler is your friend too. | {"tags": ["conversational"]} | Leonel/DialoGPT-small-chandler | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Leonis/bart-large-cnn-finetuned-med | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |