| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0–18.3M) | metadata (stringlengths 2–1.07B) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) |
|---|---|---|---|---|---|---|---|---|
null | null |
{}
|
cmqzg/first
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
summarization
|
transformers
|
{"license": "mit", "tags": ["summarization"], "datasets": ["kmfoda/booksum"]}
|
cnicu/led-booksum
| null |
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"summarization",
"dataset:kmfoda/booksum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
summarization
|
transformers
|
{"license": "mit", "tags": ["summarization"], "datasets": ["kmfoda/booksum"]}
|
cnicu/pegasus-large-booksum
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"dataset:kmfoda/booksum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{"license": "mit"}
|
cnicu/pegasus-xsum-booksum
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
summarization
|
transformers
|
{"license": "mit", "tags": ["summarization", "summary"], "datasets": ["kmfoda/booksum"]}
|
cnicu/t5-small-booksum
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"summary",
"dataset:kmfoda/booksum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cnrcastroli/njhyj
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8651
- Matthews Correlation: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5233 | 1.0 | 535 | 0.5353 | 0.4004 |
| 0.3497 | 2.0 | 1070 | 0.5165 | 0.5076 |
| 0.2386 | 3.0 | 1605 | 0.6661 | 0.5161 |
| 0.1745 | 4.0 | 2140 | 0.7730 | 0.5406 |
| 0.1268 | 5.0 | 2675 | 0.8651 | 0.5475 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
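As a usage sketch (not part of the auto-generated card), the checkpoint can be queried through the `text-classification` pipeline; the label names (e.g. LABEL_0/LABEL_1) depend on the exported config.
```python
# Hedged usage sketch: score grammatical acceptability (CoLA) with this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cnu/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the author."))
```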
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5474713423103301, "name": "Matthews Correlation"}]}]}]}
|
cnu/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
coala/Art
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from public MiniLMv2 (Wang et al., 2021) checkpoints: the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR            | `en`               |
| `coastalcph/fairlex-scotus-minilm`| SCOTUS           | `en`               |
| `coastalcph/fairlex-fscs-minilm`  | FSCS             | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm`  | CAIL             | `zh`               |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-cail-minlm")
model = AutoModel.from_pretrained("coastalcph/fairlex-cail-minlm")
```
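For a quick qualitative check, the checkpoint can also be queried through the `fill-mask` pipeline. The snippet below is an illustrative sketch rather than part of the original card; the Chinese input sentence is a made-up example in the spirit of the widget text in the metadata.
```python
# Illustrative sketch (not from the original card): top predictions for a
# masked token in a made-up sentence in the style of the legal widget text.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="coastalcph/fairlex-cail-minilm")
for prediction in fill_mask("被告人对上述事实<mask>异议。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```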
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
{"language": "zh", "license": "cc-by-nc-sa-4.0", "tags": ["legal", "fairlex"], "pipeline_tag": "fill-mask", "widget": [{"text": "\u4e0a\u8ff0\u4e8b\u5b9e\uff0c\u88ab\u544a\u4eba\u5728\u5ead\u5ba1\u8fc7\u7a0b\u4e2d\u4ea6\u65e0\u5f02\u8bae\uff0c\u4e14\u6709<mask>\u7684\u9648\u8ff0\uff0c\u73b0\u573a\u8fa8\u8ba4\u7b14\u5f55\u53ca\u7167\u7247\uff0c\u88ab\u544a\u4eba\u7684\u524d\u79d1\u5211\u4e8b\u5224\u51b3\u4e66\uff0c\u91ca\u653e\u8bc1\u660e\u6750\u6599\uff0c\u6293\u83b7\u7ecf\u8fc7\uff0c\u88ab\u544a\u4eba\u7684\u4f9b\u8ff0\u53ca\u8eab\u4efd\u8bc1\u660e\u7b49\u8bc1\u636e\u8bc1\u5b9e\uff0c\u8db3\u4ee5\u8ba4\u5b9a\u3002"}]}
|
coastalcph/fairlex-cail-minilm
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"legal",
"fairlex",
"zh",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from public MiniLMv2 (Wang et al., 2021) checkpoints: the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR            | `en`               |
| `coastalcph/fairlex-scotus-minilm`| SCOTUS           | `en`               |
| `coastalcph/fairlex-fscs-minilm`  | FSCS             | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm`  | CAIL             | `zh`               |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-ecthr-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-ecthr-minilm")
```
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
{"language": "en", "license": "cc-by-nc-sa-4.0", "tags": ["legal", "fairlex"], "pipeline_tag": "fill-mask", "widget": [{"text": "The applicant submitted that her husband was subjected to treatment amounting to <mask> whilst in the custody of Adana Security Directorate"}]}
|
coastalcph/fairlex-ecthr-minilm
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"legal",
"fairlex",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from public MiniLMv2 (Wang et al., 2021) checkpoints: the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR            | `en`               |
| `coastalcph/fairlex-scotus-minilm`| SCOTUS           | `en`               |
| `coastalcph/fairlex-fscs-minilm`  | FSCS             | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm`  | CAIL             | `zh`               |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-fscs-minlm")
model = AutoModel.from_pretrained("coastalcph/fairlex-fscs-minlm")
```
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
{"language": ["de", "fr", "it"], "license": "cc-by-nc-sa-4.0", "tags": ["legal", "fairlex"], "pipeline_tag": "fill-mask", "widget": [{"text": "Aus seinem damaligen strafbaren Verhalten resultierte eine Forderung der Nachlassverwaltung eines <mask>, wor\u00fcber eine aussergerichtliche Vereinbarung \u00fcber Fr. 500'000."}, {"text": " Elle avait pour but social les <mask> dans le domaine des changes, en particulier l'exploitation d'une plateforme internet."}, {"text": "Il Pretore ha accolto la petizione con sentenza 16 luglio 2015, accordando all'attore l'importo <mask>, con interessi di mora a partire dalla notifica del precetto esecutivo, e ha rigettato in tale misura l'opposizione interposta a quest'ultimo."}]}
|
coastalcph/fairlex-fscs-minilm
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"legal",
"fairlex",
"de",
"fr",
"it",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021) using the distilled version of RoBERTa (Liu et al., 2019).
For the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR            | `en`               |
| `coastalcph/fairlex-scotus-minilm`| SCOTUS           | `en`               |
| `coastalcph/fairlex-fscs-minilm`  | FSCS             | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm`  | CAIL             | `zh`               |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-scotus-minlm")
model = AutoModel.from_pretrained("coastalcph/fairlex-scotus-minlm")
```
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
{"language": "en", "license": "cc-by-nc-sa-4.0", "tags": ["legal", "fairlex"], "pipeline_tag": "fill-mask", "widget": [{"text": "Because the Court granted <mask> before judgment, the Court effectively stands in the shoes of the Court of Appeals and reviews the defendants\u2019 appeals."}]}
|
coastalcph/fairlex-scotus-minilm
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"legal",
"fairlex",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Kohaku DialoGPT Model
|
{"tags": ["conversational"]}
|
cocoaclef/DialoGPT-small-kohaku
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Rick Morty DialoGPT Model
|
{"tags": ["conversational"]}
|
codealtgeek/DiabloGPT-medium-rickmorty
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# HIYACCENT: An Improved Nigerian-Accented Speech Recognition System Based on Contrastive Learning
The goal of this research was to develop a more robust model for Nigerian English speakers, whose English pronunciation is heavily affected by their mother tongue. To this end, the Wav2Vec-HIYACCENT model was proposed, which introduces a new layer on top of Facebook's Wav2Vec to capture the disparity between the baseline model and Nigerian English speech. A CTC loss is also inserted on top of the model, which adds flexibility to the speech-text alignment. This resulted in over 20% improvement in performance for Nigerian-accented English (NAE).
The model fine-tunes facebook/wav2vec2-large on English using the UISpeech Corpus. When using this model, make sure that your speech input is sampled at 16kHz.
The script used for training can be found here: https://github.com/amceejay/HIYACCENT-NE-Speech-Recognition-System
## Usage
The model can be used directly (without a language model) as follows.
### Using the ASRecognition library:
```python
from asrecognition import ASREngine

asr = ASREngine("en", model_path="codeceejay/HIYACCENT_Wav2Vec2")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```
### Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "en"
MODEL_ID = "codeceejay/HIYACCENT_Wav2Vec2"
SAMPLES = 10

# You can use common_voice/timit; Nigerian-accented speech can also be found here: https://openslr.org/70/
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the dataset: read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
|
{}
|
codeceejay/HIYACCENT_Wav2Vec2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
# Calbert: a Catalan Language Model
## Introduction
CALBERT is an open-source language model for Catalan pretrained on the ALBERT architecture.
It is now available on Hugging Face in both a `tiny-uncased` and a `base-uncased` version (the one you're looking at), and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).
For further information or requests, please go to the [GitHub repository](https://github.com/codegram/calbert).
## Pre-trained models
| Model | Arch. | Training data |
| ----------------------------------- | -------------- | ---------------------- |
| `codegram` / `calbert-tiny-uncased` | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram` / `calbert-base-uncased` | Base (uncased) | OSCAR (4.3 GB of text) |
## How to use Calbert with HuggingFace
#### Load Calbert and its tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-base-uncased")
model = AutoModel.from_pretrained("codegram/calbert-base-uncased")
model.eval() # disable dropout (or leave in train mode to finetune)
```
#### Filling masks using pipeline
```python
from transformers import pipeline
calbert_fill_mask = pipeline("fill-mask", model="codegram/calbert-base-uncased", tokenizer="codegram/calbert-base-uncased")
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.614592969417572, 'token': 61},
# {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.06058056280016899, 'token': 4867},
# {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.017195818945765495, 'token': 43},
# {'sequence': "[CLS] m'agrada llegir aixo[SEP]", 'score': 0.016321714967489243, 'token': 684},
# {'sequence': "[CLS] m'agrada escriure aixo[SEP]", 'score': 0.012185849249362946, 'token': 1306}]
```
#### Extract contextual embedding features from Calbert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: Can be done in one step: tokenizer.encode("M'és una mica igual")
# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
embeddings.size()
# torch.Size([1, 8, 768])
embeddings.detach()
# tensor([[[-0.0261, 0.1166, -0.1075, ..., -0.0368, 0.0193, 0.0017],
# [ 0.1289, -0.2252, 0.9881, ..., -0.1353, 0.3534, 0.0734],
# [-0.0328, -1.2364, 0.9466, ..., 0.3455, 0.7010, -0.2085],
# ...,
# [ 0.0397, -1.0228, -0.2239, ..., 0.2932, 0.1248, 0.0813],
# [-0.0261, 0.1165, -0.1074, ..., -0.0368, 0.0193, 0.0017],
# [-0.1934, -0.2357, -0.2554, ..., 0.1831, 0.6085, 0.1421]]])
```
## Authors
CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.
<a href="https://huggingface.co/exbert/?model=codegram/calbert-base-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"language": "ca", "license": "mit", "tags": ["masked-lm", "catalan", "exbert"]}
|
codegram/calbert-base-uncased
| null |
[
"transformers",
"pytorch",
"albert",
"masked-lm",
"catalan",
"exbert",
"ca",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
# Calbert: a Catalan Language Model
## Introduction
CALBERT is an open-source language model for Catalan pretrained on the ALBERT architecture.
It is now available on Hugging Face in both a `tiny-uncased` version (the one you're looking at) and a `base-uncased` version, and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).
For further information or requests, please go to the [GitHub repository](https://github.com/codegram/calbert).
## Pre-trained models
| Model | Arch. | Training data |
| ----------------------------------- | -------------- | ---------------------- |
| `codegram` / `calbert-tiny-uncased` | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram` / `calbert-base-uncased` | Base (uncased) | OSCAR (4.3 GB of text) |
## How to use Calbert with HuggingFace
#### Load Calbert and its tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-tiny-uncased")
model = AutoModel.from_pretrained("codegram/calbert-tiny-uncased")
model.eval() # disable dropout (or leave in train mode to finetune)
```
#### Filling masks using pipeline
```python
from transformers import pipeline
calbert_fill_mask = pipeline("fill-mask", model="codegram/calbert-tiny-uncased", tokenizer="codegram/calbert-tiny-uncased")
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.4403671622276306, 'token': 61},
# {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.050061386078596115, 'token': 43},
# {'sequence': "[CLS] m'agrada veure aixo[SEP]", 'score': 0.026286985725164413, 'token': 157},
# {'sequence': "[CLS] m'agrada bastant aixo[SEP]", 'score': 0.022483550012111664, 'token': 2143},
# {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.014491282403469086, 'token': 4867}]
```
#### Extract contextual embedding features from Calbert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: Can be done in one step: tokenizer.encode("M'és una mica igual")
# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
embeddings.size()
# torch.Size([1, 8, 312])
embeddings.detach()
# tensor([[[-0.2726, -0.9855, 0.9643, ..., 0.3511, 0.3499, -0.1984],
# [-0.2824, -1.1693, -0.2365, ..., -3.1866, -0.9386, -1.3718],
# [-2.3645, -2.2477, -1.6985, ..., -1.4606, -2.7294, 0.2495],
# ...,
# [ 0.8800, -0.0244, -3.0446, ..., 0.5148, -3.0903, 1.1879],
# [ 1.1300, 0.2425, 0.2162, ..., -0.5722, -2.2004, 0.4045],
# [ 0.4549, -0.2378, -0.2290, ..., -2.1247, -2.2769, -0.0820]]])
```
## Authors
CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.
<a href="https://huggingface.co/exbert/?model=codegram/calbert-tiny-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"language": "ca", "license": "mit", "tags": ["masked-lm", "catalan", "exbert"]}
|
codegram/calbert-tiny-uncased
| null |
[
"transformers",
"pytorch",
"albert",
"masked-lm",
"catalan",
"exbert",
"ca",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
codename/zxc
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
This model is a paraphraser designed for the Adversarial Paraphrasing Task described and used in this paper: https://aclanthology.org/2021.acl-long.552/.
Please refer to `nap_generation.py` in the GitHub repository for ways to better utilize this model with top-k and top-p sampling. The demo on Hugging Face will output only one sentence, which will most likely be the same as the input sentence, because the model is meant to be used with beam search and sampling to produce multiple diverse outputs.
Github repository: https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt.git
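The following is a minimal sampling sketch, not the `nap_generation.py` script itself; the `paraphrase: ` task prefix and the decoding parameters are assumptions, so consult the repository for the authors' exact settings.
```python
# Minimal sketch (assumptions: the "paraphrase: " prefix and the decoding
# parameters are illustrative, not the authors' exact settings).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "AMHR/T5-for-Adversarial-Paraphrasing"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("paraphrase: The weather is lovely today.", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,          # sample instead of a single beam-search output
        top_k=50,                # top-k sampling
        top_p=0.95,              # nucleus (top-p) sampling
        num_return_sequences=5,  # several candidate paraphrases
        max_length=64,
    )
for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)
```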
Please cite the following if you use this model:
```bib
@inproceedings{nighojkar-licato-2021-improving,
title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task",
author = "Nighojkar, Animesh and
Licato, John",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.552",
pages = "7106--7116",
abstract = "If two sentences have the same meaning, it should follow that they are equivalent in their inferential properties, i.e., each sentence should textually entail the other. However, many paraphrase datasets currently in widespread use rely on a sense of paraphrase based on word overlap and syntax. Can we teach them instead to identify paraphrases in a way that draws on the inferential properties of the sentences, and is not over-reliant on lexical and syntactic similarities of a sentence pair? We apply the adversarial paradigm to this question, and introduce a new adversarial method of dataset creation for paraphrase identification: the Adversarial Paraphrasing Task (APT), which asks participants to generate semantically equivalent (in the sense of mutually implicative) but lexically and syntactically disparate paraphrases. These sentence pairs can then be used both to test paraphrase identification models (which get barely random accuracy) and then improve their performance. To accelerate dataset generation, we explore automation of APT using T5, and show that the resulting dataset also improves accuracy. We discuss implications for paraphrase detection and release our dataset in the hope of making paraphrase detection models better able to detect sentence-level meaning equivalence.",
}
```
|
{}
|
AMHR/T5-for-Adversarial-Paraphrasing
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
This model is a paraphrase detector trained on the Adversarial Paraphrasing datasets described and used in this paper: https://aclanthology.org/2021.acl-long.552/.
Github repository: https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt.git
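A hedged usage sketch (not part of the original card): score a sentence pair with the detector; the index-to-label mapping is read from the exported config at runtime.
```python
# Hedged usage sketch: probability that two sentences are paraphrases.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "AMHR/adversarial-paraphrasing-detector"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

batch = tokenizer("A cat sat on the mat.", "A feline was resting on the rug.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, -1)[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```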
Please cite the following if you use this model:
```bib
@inproceedings{nighojkar-licato-2021-improving,
title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task",
author = "Nighojkar, Animesh and
Licato, John",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.552",
pages = "7106--7116",
abstract = "If two sentences have the same meaning, it should follow that they are equivalent in their inferential properties, i.e., each sentence should textually entail the other. However, many paraphrase datasets currently in widespread use rely on a sense of paraphrase based on word overlap and syntax. Can we teach them instead to identify paraphrases in a way that draws on the inferential properties of the sentences, and is not over-reliant on lexical and syntactic similarities of a sentence pair? We apply the adversarial paradigm to this question, and introduce a new adversarial method of dataset creation for paraphrase identification: the Adversarial Paraphrasing Task (APT), which asks participants to generate semantically equivalent (in the sense of mutually implicative) but lexically and syntactically disparate paraphrases. These sentence pairs can then be used both to test paraphrase identification models (which get barely random accuracy) and then improve their performance. To accelerate dataset generation, we explore automation of APT using T5, and show that the resulting dataset also improves accuracy. We discuss implications for paraphrase detection and release our dataset in the hope of making paraphrase detection models better able to detect sentence-level meaning equivalence.",
}
```
|
{}
|
AMHR/adversarial-paraphrasing-detector
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
{}
|
codesj/empathic-concern
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9272
- Recall: 0.9382
- F1: 0.9327
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2432 | 1.0 | 878 | 0.0689 | 0.9132 | 0.9203 | 0.9168 | 0.9813 |
| 0.0507 | 2.0 | 1756 | 0.0608 | 0.9208 | 0.9346 | 0.9276 | 0.9835 |
| 0.03 | 3.0 | 2634 | 0.0611 | 0.9272 | 0.9382 | 0.9327 | 0.9843 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
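As a usage sketch (not part of the auto-generated card), the checkpoint can be run through the `token-classification` pipeline; `aggregation_strategy="simple"` merges sub-word predictions into whole entity spans.
```python
# Hedged usage sketch: extract CoNLL-2003-style entities with the pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="codingJacob/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```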
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9843042559613643}}]}]}
|
codingJacob/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
{}
|
codingJacob/dummy-model
| null |
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
codingforgood/wav2vec2-base-timit-demo-colab
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
codistai/codeBERT-small-v2
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0605
- Precision: 0.9251
- Recall: 0.9357
- F1: 0.9304
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2402 | 1.0 | 878 | 0.0694 | 0.9168 | 0.9215 | 0.9191 | 0.9814 |
| 0.051 | 2.0 | 1756 | 0.0595 | 0.9249 | 0.9330 | 0.9289 | 0.9833 |
| 0.0302 | 3.0 | 2634 | 0.0605 | 0.9251 | 0.9357 | 0.9304 | 0.9837 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9837323462595516}}]}]}
|
cogito233/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# LaBSE for English and Russian
This is a truncated version of [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE), which is, in turn, a port of [LaBSE](https://tfhub.dev/google/LaBSE/1) by Google.
The current model has only English and Russian tokens left in the vocabulary.
Thus, the vocabulary is 10% of the original, and the number of parameters in the whole model is 27% of the original, without any loss in the quality of English and Russian embeddings.
To get the sentence embeddings, you can use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cointegrated/LaBSE-en-ru")
model = AutoModel.from_pretrained("cointegrated/LaBSE-en-ru")
sentences = ["Hello World", "Привет Мир"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
print(embeddings)
```
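Because these embeddings are L2-normalized, cosine similarity reduces to a dot product. The short follow-up below is an illustrative sketch (not part of the original card) that reuses the `embeddings` tensor from the snippet above.
```python
# Follow-up sketch: with normalized embeddings, cosine similarity is a dot
# product; the off-diagonal entry is the en-ru similarity of the two sentences.
similarity = embeddings @ embeddings.T
print(similarity)
```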
The model has been truncated in [this notebook](https://colab.research.google.com/drive/1dnPRn0-ugj3vZgSpyCC9sgslM2SuSfHy?usp=sharing).
You can adapt it for other languages (like [EIStakovskii/LaBSE-fr-de](https://huggingface.co/EIStakovskii/LaBSE-fr-de)), models or datasets.
## Reference:
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, Wei Wang. [Language-agnostic BERT Sentence Embedding](https://arxiv.org/abs/2007.01852). July 2020
License: [https://tfhub.dev/google/LaBSE/1](https://tfhub.dev/google/LaBSE/1)
|
{"language": ["ru", "en"], "tags": ["feature-extraction", "embeddings", "sentence-similarity"]}
|
cointegrated/LaBSE-en-ru
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"pretraining",
"feature-extraction",
"embeddings",
"sentence-similarity",
"ru",
"en",
"arxiv:2007.01852",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
{}
|
cointegrated/roberta-base-formality
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
This is a RoBERTa-large classifier trained on the CoLA corpus [Warstadt et al., 2019](https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00290),
which contains sentences paired with grammatical acceptability judgments. The model can be used to evaluate fluency of machine-generated English sentences, e.g. for evaluation of text style transfer.
The model was trained in the paper [Krishna et al, 2020. Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700), and its original version is available at [their project page](http://style.cs.umass.edu). We converted this model from Fairseq to Transformers format. All credit goes to the authors of the original paper.
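A hedged usage sketch (not part of the original card): score the acceptability of candidate sentences. Which of the two logits corresponds to "acceptable" depends on the exported config, so inspect `model.config.id2label` before interpreting the scores.
```python
# Hedged usage sketch: fluency / acceptability scoring for generated text.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cointegrated/roberta-large-cola-krishna2020"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

sentences = ["This sentence reads perfectly fine.", "sentence this fine reads not"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, -1)
print(probs)  # one row of class probabilities per input sentence
```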
## Citation
If you found this model useful and refer to it, please cite the original work:
```
@inproceedings{style20,
    author = {Kalpesh Krishna and John Wieting and Mohit Iyyer},
    booktitle = {Empirical Methods in Natural Language Processing},
    year = {2020},
    title = {Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
|
{}
|
cointegrated/roberta-large-cola-krishna2020
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
This is a version of paraphrase detector by DeepPavlov ([details in the documentation](http://docs.deeppavlov.ai/en/master/features/overview.html#ranking-model-docs)) ported to the `Transformers` format.
All credit goes to the authors of DeepPavlov.
The model has been trained on the dataset from http://paraphraser.ru/.
It classifies texts as paraphrases (class 1) or non-paraphrases (class 0).
```python
import torch
from transformers import AutoModelForSequenceClassification, BertTokenizer
model_name = 'cointegrated/rubert-base-cased-dp-paraphrase-detection'
model = AutoModelForSequenceClassification.from_pretrained(model_name).cuda()
tokenizer = BertTokenizer.from_pretrained(model_name)
def compare_texts(text1, text2):
    batch = tokenizer(text1, text2, return_tensors='pt').to(model.device)
    with torch.inference_mode():
        proba = torch.softmax(model(**batch).logits, -1).cpu().numpy()
    return proba[0]  # p(non-paraphrase), p(paraphrase)
print(compare_texts('Сегодня на улице хорошая погода', 'Сегодня на улице отвратительная погода'))
# [0.7056226 0.2943774]
print(compare_texts('Сегодня на улице хорошая погода', 'Отличная погодка сегодня выдалась'))
# [0.16524374 0.8347562 ]
```
P.S. In the DeepPavlov repository, the tokenizer uses `max_seq_length=64`.
This model, however, uses `model_max_length=512`.
Therefore, the results on long texts may be inadequate.
|
{"language": ["ru"], "tags": ["sentence-similarity", "text-classification"], "datasets": ["merionum/ru_paraphraser"]}
|
cointegrated/rubert-base-cased-dp-paraphrase-detection
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentence-similarity",
"ru",
"dataset:merionum/ru_paraphraser",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
zero-shot-classification
|
transformers
|
# RuBERT for NLI (natural language inference)
This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) fine-tuned to predict the logical relationship between two short texts: entailment, contradiction, or neutral.
## Usage
How to run the model for NLI:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_checkpoint = 'cointegrated/rubert-base-cased-nli-threeway'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()
text1 = 'Сократ - человек, а все люди смертны.'
text2 = 'Сократ никогда не умрёт.'
with torch.inference_mode():
    out = model(**tokenizer(text1, text2, return_tensors='pt').to(model.device))
    proba = torch.softmax(out.logits, -1).cpu().numpy()[0]
print({v: proba[k] for k, v in model.config.id2label.items()})
# {'entailment': 0.009525929, 'contradiction': 0.9332064, 'neutral': 0.05726764}
```
You can also use this model for zero-shot short text classification (by labels only), e.g. for sentiment analysis:
```python
def predict_zero_shot(text, label_texts, model, tokenizer, label='entailment', normalize=True):
    tokens = tokenizer([text] * len(label_texts), label_texts, truncation=True, return_tensors='pt', padding=True)
    with torch.inference_mode():
        result = torch.softmax(model(**tokens.to(model.device)).logits, -1)
    proba = result[:, model.config.label2id[label]].cpu().numpy()
    if normalize:
        proba /= sum(proba)
    return proba
classes = ['Я доволен', 'Я недоволен']
predict_zero_shot('Какая гадость эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.05609814, 0.9439019 ], dtype=float32)
predict_zero_shot('Какая вкусная эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.9059292 , 0.09407079], dtype=float32)
```
Alternatively, you can use [Huggingface pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) for inference.
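An illustrative pipeline sketch (not part of the original card); the candidate labels and hypothesis template below mirror the widget example from the model metadata.
```python
# Illustrative sketch: zero-shot classification via the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cointegrated/rubert-base-cased-nli-threeway",
)
result = classifier(
    "Я хочу поехать в Австралию",
    candidate_labels=["спорт", "путешествия", "музыка", "кино", "книги", "наука", "политика"],
    hypothesis_template="Тема текста - {}.",
)
print(result["labels"][0], round(result["scores"][0], 3))
```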
## Sources
The model has been trained on a series of NLI datasets automatically translated to Russian from English.
Most datasets were taken [from the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets):
[JOCI](https://github.com/sheng-z/JOCI),
[MNLI](https://cims.nyu.edu/~sbowman/multinli/),
[MPE](https://aclanthology.org/I17-1011/),
[SICK](http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf),
[SNLI](https://nlp.stanford.edu/projects/snli/).
Some datasets obtained from the original sources:
[ANLI](https://github.com/facebookresearch/anli),
[NLI-style FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md),
[IMPPRES](https://github.com/facebookresearch/Imppres).
## Performance
The table below shows ROC AUC (one class vs rest) for five models on the corresponding *dev* sets:
- [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli): a small BERT predicting entailment vs not_entailment
- [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway): a base-sized BERT predicting entailment vs not_entailment
- [threeway](https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway) (**this model**): a base-sized BERT predicting entailment vs contradiction vs neutral
- [vicgalle-xlm](https://huggingface.co/vicgalle/xlm-roberta-large-xnli-anli): a large multilingual NLI model
- [facebook-bart](https://huggingface.co/facebook/bart-large-mnli): a large multilingual NLI model
|model |add_one_rte|anli_r1|anli_r2|anli_r3|copa|fever|help|iie |imppres|joci|mnli |monli|mpe |scitail|sick|snli|terra|total |
|------------------------|-----------|-------|-------|-------|----|-----|----|-----|-------|----|-----|-----|----|-------|----|----|-----|------|
|n_observations |387 |1000 |1000 |1200 |200 |20474|3355|31232|7661 |939 |19647|269 |1000|2126 |500 |9831|307 |101128|
|tiny/entailment |0.77 |0.59 |0.52 |0.53 |0.53|0.90 |0.81|0.78 |0.93 |0.81|0.82 |0.91 |0.81|0.78 |0.93|0.95|0.67 |0.77 |
|twoway/entailment |0.89 |0.73 |0.61 |0.62 |0.58|0.96 |0.92|0.87 |0.99 |0.90|0.90 |0.99 |0.91|0.96 |0.97|0.97|0.87 |0.86 |
|threeway/entailment |0.91 |0.75 |0.61 |0.61 |0.57|0.96 |0.56|0.61 |0.99 |0.90|0.91 |0.67 |0.92|0.84 |0.98|0.98|0.90 |0.80 |
|vicgalle-xlm/entailment |0.88 |0.79 |0.63 |0.66 |0.57|0.93 |0.56|0.62 |0.77 |0.80|0.90 |0.70 |0.83|0.84 |0.91|0.93|0.93 |0.78 |
|facebook-bart/entailment|0.51 |0.41 |0.43 |0.47 |0.50|0.74 |0.55|0.57 |0.60 |0.63|0.70 |0.52 |0.56|0.68 |0.67|0.72|0.64 |0.58 |
|threeway/contradiction | |0.71 |0.64 |0.61 | |0.97 | | |1.00 |0.77|0.92 | |0.89| |0.99|0.98| |0.85 |
|threeway/neutral | |0.79 |0.70 |0.62 | |0.91 | | |0.99 |0.68|0.86 | |0.79| |0.96|0.96| |0.83 |
For evaluation (and for training of the [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli) and [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) models), some extra datasets were used:
[Add-one RTE](https://cs.brown.edu/people/epavlick/papers/ans.pdf),
[CoPA](https://people.ict.usc.edu/~gordon/copa.html),
[IIE](https://aclanthology.org/I17-1100), and
[SCITAIL](https://allenai.org/data/scitail) taken from [the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets) and translated,
[HELP](https://github.com/verypluming/HELP) and [MoNLI](https://github.com/atticusg/MoNLI) taken from the original sources and translated,
and Russian [TERRa](https://russiansuperglue.com/ru/tasks/task_info/TERRa).
|
{"language": "ru", "tags": ["rubert", "russian", "nli", "rte", "zero-shot-classification"], "datasets": ["cointegrated/nli-rus-translated-v2021"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "\u042f \u0445\u043e\u0447\u0443 \u043f\u043e\u0435\u0445\u0430\u0442\u044c \u0432 \u0410\u0432\u0441\u0442\u0440\u0430\u043b\u0438\u044e", "candidate_labels": "\u0441\u043f\u043e\u0440\u0442,\u043f\u0443\u0442\u0435\u0448\u0435\u0441\u0442\u0432\u0438\u044f,\u043c\u0443\u0437\u044b\u043a\u0430,\u043a\u0438\u043d\u043e,\u043a\u043d\u0438\u0433\u0438,\u043d\u0430\u0443\u043a\u0430,\u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0430", "hypothesis_template": "\u0422\u0435\u043c\u0430 \u0442\u0435\u043a\u0441\u0442\u0430 - {}."}]}
|
cointegrated/rubert-base-cased-nli-threeway
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"rubert",
"russian",
"nli",
"rte",
"zero-shot-classification",
"ru",
"dataset:cointegrated/nli-rus-translated-v2021",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
zero-shot-classification
|
transformers
|
# RuBERT for NLI (natural language inference)
This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.
For more details, see the card for a similar model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
|
{"language": "ru", "tags": ["rubert", "russian", "nli", "rte", "zero-shot-classification"], "datasets": ["cointegrated/nli-rus-translated-v2021"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "\u042f \u0445\u043e\u0447\u0443 \u043f\u043e\u0435\u0445\u0430\u0442\u044c \u0432 \u0410\u0432\u0441\u0442\u0440\u0430\u043b\u0438\u044e", "candidate_labels": "\u0441\u043f\u043e\u0440\u0442,\u043f\u0443\u0442\u0435\u0448\u0435\u0441\u0442\u0432\u0438\u044f,\u043c\u0443\u0437\u044b\u043a\u0430,\u043a\u0438\u043d\u043e,\u043a\u043d\u0438\u0433\u0438,\u043d\u0430\u0443\u043a\u0430,\u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0430", "hypothesis_template": "\u0422\u0435\u043c\u0430 \u0442\u0435\u043a\u0441\u0442\u0430 - {}."}]}
|
cointegrated/rubert-base-cased-nli-twoway
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"rubert",
"russian",
"nli",
"rte",
"zero-shot-classification",
"ru",
"dataset:cointegrated/nli-rus-translated-v2021",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
The model for https://github.com/Lesha17/Punctuation; all credits go to the owner of this repository.
|
{}
|
cointegrated/rubert-base-lesha17-punctuation
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
zero-shot-classification
|
transformers
|
# RuBERT-tiny for NLI (natural language inference)
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.
For more details, see the card for a related model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
|
{"language": "ru", "tags": ["rubert", "russian", "nli", "rte", "zero-shot-classification"], "datasets": ["cointegrated/nli-rus-translated-v2021"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "\u0421\u0435\u0440\u0432\u0438\u0441 \u043e\u0442\u0441\u0442\u043e\u0439\u043d\u044b\u0439, \u043a\u043e\u0440\u043c\u0438\u043b\u0438 \u043d\u0435\u0432\u043a\u0443\u0441\u043d\u043e", "candidate_labels": "\u041c\u043d\u0435 \u043f\u043e\u043d\u0440\u0430\u0432\u0438\u043b\u043e\u0441\u044c, \u041c\u043d\u0435 \u043d\u0435 \u043f\u043e\u043d\u0440\u0430\u0432\u0438\u043b\u043e\u0441\u044c", "hypothesis_template": "{}."}]}
|
cointegrated/rubert-tiny-bilingual-nli
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"rubert",
"russian",
"nli",
"rte",
"zero-shot-classification",
"ru",
"dataset:cointegrated/nli-rus-translated-v2021",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned for sentiment classification of short Russian texts.
The problem is formulated as multiclass classification: `negative` vs `neutral` vs `positive`.
## Usage
The function below estimates the sentiment of the given text:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_checkpoint = 'cointegrated/rubert-tiny-sentiment-balanced'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
model.cuda()
def get_sentiment(text, return_type='label'):
""" Calculate sentiment of a text. `return_type` can be 'label', 'score' or 'proba' """
with torch.no_grad():
inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()[0]
if return_type == 'label':
return model.config.id2label[proba.argmax()]
elif return_type == 'score':
return proba.dot([-1, 0, 1])
return proba
text = 'Какая гадость эта ваша заливная рыба!'
# classify the text
print(get_sentiment(text, 'label')) # negative
# score the text on the scale from -1 (very negative) to +1 (very positive)
print(get_sentiment(text, 'score')) # -0.5894946306943893
# calculate probabilities of all labels
print(get_sentiment(text, 'proba')) # [0.7870447 0.4947824 0.19755007]
```
## Training
We trained the model on [the datasets collected by Smetanin](https://github.com/sismetanin/sentiment-analysis-in-russian). We have converted all training data into a 3-class format and have up- and downsampled the training data to balance both the sources and the classes. The training code is available as [a Colab notebook](https://gist.github.com/avidale/e678c5478086c1d1adc52a85cb2b93e6). The metrics on the balanced test set are the following:
| Source | Macro F1 |
| ----------- | ----------- |
| SentiRuEval2016_banks | 0.83 |
| SentiRuEval2016_tele | 0.74 |
| kaggle_news | 0.66 |
| linis | 0.50 |
| mokoron | 0.98 |
| rureviews | 0.72 |
| rusentiment | 0.67 |
|
{"language": ["ru"], "tags": ["russian", "classification", "sentiment", "multiclass"], "widget": [{"text": "\u041a\u0430\u043a\u0430\u044f \u0433\u0430\u0434\u043e\u0441\u0442\u044c \u044d\u0442\u0430 \u0432\u0430\u0448\u0430 \u0437\u0430\u043b\u0438\u0432\u043d\u0430\u044f \u0440\u044b\u0431\u0430!"}]}
|
cointegrated/rubert-tiny-sentiment-balanced
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"russian",
"classification",
"sentiment",
"multiclass",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned for classification of toxicity and inappropriateness for short informal Russian texts, such as comments in social networks.
The problem is formulated as multilabel classification with the following classes:
- `non-toxic`: the text does NOT contain insults, obscenities, or threats, in the sense of the [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) competition.
- `insult`
- `obscenity`
- `threat`
- `dangerous`: the text is inappropriate, in the sense of [Babakov et al.](https://arxiv.org/abs/2103.05345), i.e. it can harm the reputation of the speaker.
A text can be considered safe if it is BOTH `non-toxic` and NOT `dangerous`.
## Usage
The function below estimates the probability that the text is either toxic OR dangerous:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_checkpoint = 'cointegrated/rubert-tiny-toxicity'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
model.cuda()
def text2toxicity(text, aggregate=True):
""" Calculate toxicity of a text (if aggregate=True) or a vector of toxicity aspects (if aggregate=False)"""
with torch.no_grad():
inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()
if isinstance(text, str):
proba = proba[0]
if aggregate:
return 1 - proba.T[0] * (1 - proba.T[-1])
return proba
print(text2toxicity('я люблю нигеров', True))
# 0.9350118728093193
print(text2toxicity('я люблю нигеров', False))
# [0.9715758 0.0180863 0.0045551 0.00189755 0.9331106 ]
print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], True))
# [0.93501186 0.04156357]
print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], False))
# [[9.7157580e-01 1.8086294e-02 4.5550885e-03 1.8975559e-03 9.3311059e-01]
# [9.9979788e-01 1.9048342e-04 1.5297388e-04 1.7452303e-04 4.1369814e-02]]
```
## Training
The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et al.](https://arxiv.org/abs/2103.05345) with the `Adam` optimizer, a learning rate of `1e-5`, and a batch size of `64` for `15` epochs. A text was considered inappropriate if its inappropriateness score was higher than 0.8, and appropriate if it was lower than 0.2. The per-label ROC AUC on the dev set is:
```
non-toxic : 0.9937
insult : 0.9912
obscenity : 0.9881
threat : 0.9910
dangerous : 0.8295
```
|
{"language": ["ru"], "tags": ["russian", "classification", "toxicity", "multilabel"], "widget": [{"text": "\u0418\u0434\u0438 \u0442\u044b \u043d\u0430\u0444\u0438\u0433!"}]}
|
cointegrated/rubert-tiny-toxicity
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"russian",
"classification",
"toxicity",
"multilabel",
"ru",
"arxiv:2103.05345",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
This is a very small distilled version of the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model for Russian and English (45 MB, 12M parameters). There is also an **updated version of this model**, [rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2), with a larger vocabulary and better quality on practically all Russian NLU tasks.
This model is useful if you want to fine-tune it for a relatively simple Russian task (e.g. NER or sentiment classification), and you care more about speed and size than about accuracy. It is approximately 10x smaller and faster than a base-sized BERT. Its `[CLS]` embeddings can be used as a sentence representation aligned between Russian and English.
It was trained on the [Yandex Translate corpus](https://translate.yandex.ru/corpus), [OPUS-100](https://huggingface.co/datasets/opus100) and [Tatoeba](https://huggingface.co/datasets/tatoeba), using MLM loss (distilled from [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)), translation ranking loss, and `[CLS]` embeddings distilled from [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), [rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence), Laser and USE.
There is a more detailed [description in Russian](https://habr.com/ru/post/562064/).
Sentence embeddings can be produced as follows:
```python
# pip install transformers sentencepiece
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny")
model = AutoModel.from_pretrained("cointegrated/rubert-tiny")
# model.cuda() # uncomment it if you have a GPU
def embed_bert_cls(text, model, tokenizer):
t = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = model(**{k: v.to(model.device) for k, v in t.items()})
embeddings = model_output.last_hidden_state[:, 0, :]
embeddings = torch.nn.functional.normalize(embeddings)
return embeddings[0].cpu().numpy()
print(embed_bert_cls('привет мир', model, tokenizer).shape)
# (312,)
```
|
{"language": ["ru", "en"], "license": "mit", "tags": ["russian", "fill-mask", "pretraining", "embeddings", "masked-lm", "tiny", "feature-extraction", "sentence-similarity"], "widget": [{"text": "\u041c\u0438\u043d\u0438\u0430\u0442\u044e\u0440\u043d\u0430\u044f \u043c\u043e\u0434\u0435\u043b\u044c \u0434\u043b\u044f [MASK] \u0440\u0430\u0437\u043d\u044b\u0445 \u0437\u0430\u0434\u0430\u0447."}], "pipeline_tag": "fill-mask"}
|
cointegrated/rubert-tiny
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"pretraining",
"russian",
"fill-mask",
"embeddings",
"masked-lm",
"tiny",
"feature-extraction",
"sentence-similarity",
"ru",
"en",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
This is the [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) model fine-tuned for classification of emotions in Russian sentences. The task is multilabel classification, because one sentence can contain multiple emotions.
The model was trained on the [CEDR dataset](https://huggingface.co/datasets/cedr) described in the paper ["Data-Driven Model for Emotion Detection in Russian Texts"](https://doi.org/10.1016/j.procs.2021.06.075) by Sboev et al.
The model has been trained with Adam optimizer for 40 epochs with learning rate `1e-5` and batch size 64 [in this notebook](https://colab.research.google.com/drive/1AFW70EJaBn7KZKRClDIdDUpbD46cEsat?usp=sharing).
The quality of the predicted probabilities on the test dataset is the following:
| label | no emotion | joy |sadness |surprise| fear |anger | mean | mean (emotions) |
|----------|------------|--------|--------|--------|--------|--------| --------| ----------------|
| AUC | 0.9286 | 0.9512 | 0.9564 | 0.8908 | 0.8955 | 0.7511 | 0.8956 | 0.8890 |
| F1 micro | 0.8624 | 0.9389 | 0.9362 | 0.9469 | 0.9575 | 0.9261 | 0.9280 | 0.9411 |
| F1 macro | 0.8562 | 0.8962 | 0.9017 | 0.8366 | 0.8359 | 0.6820 | 0.8348 | 0.8305 |
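A minimal usage sketch, assuming the standard `AutoModelForSequenceClassification` interface; because the task is multilabel, a sigmoid rather than a softmax is applied to the logits, and label names are read from `model.config.id2label`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-tiny2-cedr-emotion-detection'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)

def predict_emotions(text, threshold=0.5):
    """Return the emotions whose sigmoid probability exceeds the threshold."""
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt', truncation=True)
        proba = torch.sigmoid(model(**inputs).logits)[0].numpy()
    return {model.config.id2label[i]: float(p) for i, p in enumerate(proba) if p >= threshold}

print(predict_emotions('Как здорово, что все мы здесь сегодня собрались'))
```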
|
{"language": ["ru"], "tags": ["russian", "classification", "sentiment", "emotion-classification", "multiclass"], "datasets": ["cedr"], "widget": [{"text": "\u0411\u0435\u0441\u0438\u0448\u044c \u043c\u0435\u043d\u044f, \u043f\u0430\u0434\u043b\u0430"}, {"text": "\u041a\u0430\u043a \u0437\u0434\u043e\u0440\u043e\u0432\u043e, \u0447\u0442\u043e \u0432\u0441\u0435 \u043c\u044b \u0437\u0434\u0435\u0441\u044c \u0441\u0435\u0433\u043e\u0434\u043d\u044f \u0441\u043e\u0431\u0440\u0430\u043b\u0438\u0441\u044c"}, {"text": "\u041a\u0430\u043a-\u0442\u043e \u0441\u0442\u0440\u0451\u043c\u043d\u043e, \u0434\u0430\u0432\u0430\u0439 \u0441\u0432\u0430\u043b\u0438\u043c \u043e\u0442\u0441\u044e\u0434\u0430?"}, {"text": "\u0413\u0440\u0443\u0441\u0442\u044c-\u0442\u043e\u0441\u043a\u0430 \u043c\u0435\u043d\u044f \u0441\u044a\u0435\u0434\u0430\u0435\u0442"}, {"text": "\u0414\u0430\u043d\u043d\u044b\u0439 \u0444\u0440\u0430\u0433\u043c\u0435\u043d\u0442 \u0442\u0435\u043a\u0441\u0442\u0430 \u043d\u0435 \u0441\u043e\u0434\u0435\u0440\u0436\u0438\u0442 \u0430\u0431\u0441\u043e\u043b\u044e\u0442\u043d\u043e \u043d\u0438\u043a\u0430\u043a\u0438\u0445 \u044d\u043c\u043e\u0446\u0438\u0439"}, {"text": "\u041d\u0438\u0444\u0438\u0433\u0430 \u0441\u0435\u0431\u0435, \u043d\u0435\u0443\u0436\u0435\u043b\u0438 \u0442\u0430\u043a \u0442\u043e\u0436\u0435 \u0431\u044b\u0432\u0430\u0435\u0442!"}]}
|
cointegrated/rubert-tiny2-cedr-emotion-detection
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"russian",
"classification",
"sentiment",
"emotion-classification",
"multiclass",
"ru",
"dataset:cedr",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
sentence-similarity
|
sentence-transformers
|
This is an updated version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny): a small Russian BERT-based encoder with high-quality sentence embeddings. This [post in Russian](https://habr.com/ru/post/669674/) gives more details.
The differences from the previous version include:
- a larger vocabulary: 83828 tokens instead of 29564;
- larger supported sequences: 2048 instead of 512;
- sentence embeddings approximate LaBSE more closely than before;
- meaningful segment embeddings (tuned on the NLI task);
- the model is focused only on Russian.
The model should be used as is to produce sentence embeddings (e.g. for KNN classification of short texts) or fine-tuned for a downstream task.
Sentence embeddings can be produced as follows:
```python
# pip install transformers sentencepiece
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny2")
model = AutoModel.from_pretrained("cointegrated/rubert-tiny2")
# model.cuda() # uncomment it if you have a GPU
def embed_bert_cls(text, model, tokenizer):
t = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = model(**{k: v.to(model.device) for k, v in t.items()})
embeddings = model_output.last_hidden_state[:, 0, :]
embeddings = torch.nn.functional.normalize(embeddings)
return embeddings[0].cpu().numpy()
print(embed_bert_cls('привет мир', model, tokenizer).shape)
# (312,)
```
Alternatively, you can use the model with `sentence_transformers`:
```Python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('cointegrated/rubert-tiny2')
sentences = ["привет мир", "hello world", "здравствуй вселенная"]
embeddings = model.encode(sentences)
print(embeddings)
```
|
{"language": ["ru"], "license": "mit", "tags": ["russian", "fill-mask", "pretraining", "embeddings", "masked-lm", "tiny", "feature-extraction", "sentence-similarity", "sentence-transformers", "transformers"], "pipeline_tag": "sentence-similarity", "widget": [{"text": "\u041c\u0438\u043d\u0438\u0430\u0442\u044e\u0440\u043d\u0430\u044f \u043c\u043e\u0434\u0435\u043b\u044c \u0434\u043b\u044f [MASK] \u0440\u0430\u0437\u043d\u044b\u0445 \u0437\u0430\u0434\u0430\u0447."}]}
|
cointegrated/rubert-tiny2
| null |
[
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"pretraining",
"russian",
"fill-mask",
"embeddings",
"masked-lm",
"tiny",
"feature-extraction",
"sentence-similarity",
"transformers",
"ru",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
summarization
|
transformers
|
This is a model for abstractive Russian summarization, based on [cointegrated/rut5-base-multitask](https://huggingface.co/cointegrated/rut5-base-multitask) and fine-tuned on 4 datasets.
It can be used as follows:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
MODEL_NAME = 'cointegrated/rut5-base-absum'
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model.cuda();
model.eval();
def summarize(
text, n_words=None, compression=None,
max_length=1000, num_beams=3, do_sample=False, repetition_penalty=10.0,
**kwargs
):
"""
Summarize the text
The following parameters are mutually exclusive:
- n_words (int) is an approximate number of words to generate.
- compression (float) is an approximate length ratio of summary and original text.
"""
if n_words:
text = '[{}] '.format(n_words) + text
elif compression:
text = '[{0:.1g}] '.format(compression) + text
x = tokenizer(text, return_tensors='pt', padding=True).to(model.device)
with torch.inference_mode():
out = model.generate(
**x,
max_length=max_length, num_beams=num_beams,
do_sample=do_sample, repetition_penalty=repetition_penalty,
**kwargs
)
return tokenizer.decode(out[0], skip_special_tokens=True)
text = """Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."""
print(summarize(text))
# Эйфелева башня достигла высоты 300 метров.
print(summarize(text, n_words=10))
# Французская Эйфелева башня достигла высоты 300 метров.
```
|
{"language": ["ru"], "license": "mit", "tags": ["russian", "summarization"], "datasets": ["IlyaGusev/gazeta", "csebuetnlp/xlsum", "mlsum", "wiki_lingua"], "widget": [{"text": "\u0412\u044b\u0441\u043e\u0442\u0430 \u0431\u0430\u0448\u043d\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 324 \u043c\u0435\u0442\u0440\u0430 (1063 \u0444\u0443\u0442\u0430), \u043f\u0440\u0438\u043c\u0435\u0440\u043d\u043e \u0442\u0430\u043a\u0430\u044f \u0436\u0435 \u0432\u044b\u0441\u043e\u0442\u0430, \u043a\u0430\u043a \u0443 81-\u044d\u0442\u0430\u0436\u043d\u043e\u0433\u043e \u0437\u0434\u0430\u043d\u0438\u044f, \u0438 \u0441\u0430\u043c\u043e\u0435 \u0432\u044b\u0441\u043e\u043a\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u0432 \u041f\u0430\u0440\u0438\u0436\u0435. \u0415\u0433\u043e \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u0438\u0435 \u043a\u0432\u0430\u0434\u0440\u0430\u0442\u043d\u043e, \u0440\u0430\u0437\u043c\u0435\u0440\u043e\u043c 125 \u043c\u0435\u0442\u0440\u043e\u0432 (410 \u0444\u0443\u0442\u043e\u0432) \u0441 \u043b\u044e\u0431\u043e\u0439 \u0441\u0442\u043e\u0440\u043e\u043d\u044b. \u0412\u043e \u0432\u0440\u0435\u043c\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430 \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u043f\u0440\u0435\u0432\u0437\u043e\u0448\u043b\u0430 \u043c\u043e\u043d\u0443\u043c\u0435\u043d\u0442 \u0412\u0430\u0448\u0438\u043d\u0433\u0442\u043e\u043d\u0430, \u0441\u0442\u0430\u0432 \u0441\u0430\u043c\u044b\u043c \u0432\u044b\u0441\u043e\u043a\u0438\u043c \u0438\u0441\u043a\u0443\u0441\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u043c \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435\u043c \u0432 \u043c\u0438\u0440\u0435, \u0438 \u044d\u0442\u043e\u0442 \u0442\u0438\u0442\u0443\u043b \u043e\u043d\u0430 \u0443\u0434\u0435\u0440\u0436\u0438\u0432\u0430\u043b\u0430 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 41 \u0433\u043e\u0434\u0430 \u0434\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u0432 \u041d\u044c\u044e-\u0419\u043e\u0440\u043a\u0435 \u0432 1930 \u0433\u043e\u0434\u0443. \u042d\u0442\u043e \u043f\u0435\u0440\u0432\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u0434\u043e\u0441\u0442\u0438\u0433\u043b\u043e \u0432\u044b\u0441\u043e\u0442\u044b 300 \u043c\u0435\u0442\u0440\u043e\u0432. \u0418\u0437-\u0437\u0430 \u0434\u043e\u0431\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0432\u0435\u0449\u0430\u0442\u0435\u043b\u044c\u043d\u043e\u0439 \u0430\u043d\u0442\u0435\u043d\u043d\u044b \u043d\u0430 \u0432\u0435\u0440\u0448\u0438\u043d\u0435 \u0431\u0430\u0448\u043d\u0438 \u0432 1957 \u0433\u043e\u0434\u0443 \u043e\u043d\u0430 \u0441\u0435\u0439\u0447\u0430\u0441 \u0432\u044b\u0448\u0435 \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u043d\u0430 5,2 \u043c\u0435\u0442\u0440\u0430 (17 \u0444\u0443\u0442\u043e\u0432). 
\u0417\u0430 \u0438\u0441\u043a\u043b\u044e\u0447\u0435\u043d\u0438\u0435\u043c \u043f\u0435\u0440\u0435\u0434\u0430\u0442\u0447\u0438\u043a\u043e\u0432, \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u044f\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u0442\u043e\u0440\u043e\u0439 \u0441\u0430\u043c\u043e\u0439 \u0432\u044b\u0441\u043e\u043a\u043e\u0439 \u043e\u0442\u0434\u0435\u043b\u044c\u043d\u043e \u0441\u0442\u043e\u044f\u0449\u0435\u0439 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\u043e\u0439 \u0432\u043e \u0424\u0440\u0430\u043d\u0446\u0438\u0438 \u043f\u043e\u0441\u043b\u0435 \u0432\u0438\u0430\u0434\u0443\u043a\u0430 \u041c\u0438\u0439\u043e."}]}
|
cointegrated/rut5-base-absum
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"russian",
"summarization",
"ru",
"dataset:IlyaGusev/gazeta",
"dataset:csebuetnlp/xlsum",
"dataset:mlsum",
"dataset:wiki_lingua",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only some Russian and English embeddings left.
More details are given in a Russian post: https://habr.com/ru/post/581932/
The model has been fine-tuned for several tasks with sentences or short paragraphs:
* Translation (`translate ru-en` and `translate en-ru`)
* Paraphrasing (`paraphrase`)
* Filling gaps in a text (`fill`). The gaps can be denoted as `___` or `_3_`, where `3` is the approximate number of words that should be inserted.
* Restoring the text from a noisy bag of words (`assemble`)
* Simplification of texts (`simplify`)
* Dialogue response generation (`reply` based on fiction and `answer` based on online forums)
* Open-book question answering (`comprehend`)
* Asking questions about a text (`ask`)
* News title generation (`headline`)
For each task, the task name is joined with the input text by the ` | ` separator.
The model can be run with the following code:
```
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-base-multitask")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-base-multitask")
def generate(text, **kwargs):
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(**inputs, num_beams=5, **kwargs)
return tokenizer.decode(hypotheses[0], skip_special_tokens=True)
```
The model can be applied to each of the pretraining tasks:
```
print(generate('translate ru-en | Каждый охотник желает знать, где сидит фазан.'))
# Each hunter wants to know, where he is.
print(generate('paraphrase | Каждый охотник желает знать, где сидит фазан.',
encoder_no_repeat_ngram_size=1, repetition_penalty=0.5, no_repeat_ngram_size=1))
# В любом случае каждый рыбак мечтает познакомиться со своей фермой
print(generate('fill | Каждый охотник _3_, где сидит фазан.'))
# смотрит на озеро
print(generate('assemble | охотник каждый знать фазан сидит'))
# Каждый охотник знает, что фазан сидит.
print(generate('simplify | Местным продуктом-специалитетом с защищённым географическим наименованием по происхождению считается люнебургский степной барашек.', max_length=32))
# Местным продуктом-специалитетом считается люнебургский степной барашек.
print(generate('reply | Помогите мне закадрить девушку'))
# Что я хочу?
print(generate('answer | Помогите мне закадрить девушку'))
# я хочу познакомиться с девушкой!!!!!!!!
print(generate("comprehend | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо. Вопрос: откуда приехал Морган?"))
# из Австралии
print(generate("ask | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32))
# Что разворачивается на фоне земельного конфликта между владельцами овец и ранчеро?
print(generate("headline | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32))
# На фоне земельного конфликта разворачивается история любви овцевода Моргана Лейна и Марии Синглетон
```
However, it is strongly recommended that you fine-tune the model for your own task.
|
{"language": ["ru", "en"], "license": "mit", "tags": ["russian"], "widget": [{"text": "fill | \u041f\u043e\u0447\u0435\u043c\u0443 \u043e\u043d\u0438 \u043d\u0435 ___ \u043d\u0430 \u043c\u0435\u043d\u044f?"}]}
|
cointegrated/rut5-base-multitask
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"t5",
"text2text-generation",
"russian",
"ru",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This is a paraphraser for Russian sentences described [in this Habr post](https://habr.com/ru/post/564916/).
It is recommended to use the model with the `encoder_no_repeat_ngram_size` argument:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
MODEL_NAME = 'cointegrated/rut5-base-paraphraser'
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model.cuda();
model.eval();
def paraphrase(text, beams=5, grams=4, do_sample=False):
x = tokenizer(text, return_tensors='pt', padding=True).to(model.device)
max_size = int(x.input_ids.shape[1] * 1.5 + 10)
out = model.generate(**x, encoder_no_repeat_ngram_size=grams, num_beams=beams, max_length=max_size, do_sample=do_sample)
return tokenizer.decode(out[0], skip_special_tokens=True)
print(paraphrase('Каждый охотник желает знать, где сидит фазан.'))
# Все охотники хотят знать где фазан сидит.
```
|
{"language": ["ru"], "license": "mit", "tags": ["russian", "paraphrasing", "paraphraser", "paraphrase"], "datasets": ["cointegrated/ru-paraphrase-NMT-Leipzig"], "widget": [{"text": "\u041a\u0430\u0436\u0434\u044b\u0439 \u043e\u0445\u043e\u0442\u043d\u0438\u043a \u0436\u0435\u043b\u0430\u0435\u0442 \u0437\u043d\u0430\u0442\u044c, \u0433\u0434\u0435 \u0441\u0438\u0434\u0438\u0442 \u0444\u0430\u0437\u0430\u043d."}]}
|
cointegrated/rut5-base-paraphraser
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"russian",
"paraphrasing",
"paraphraser",
"paraphrase",
"ru",
"dataset:cointegrated/ru-paraphrase-NMT-Leipzig",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{}
|
cointegrated/rut5-base-quiz
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
cointegrated/rut5-base-review
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only Russian and some English embeddings left.
* The original model has 582M parameters, with 384M of them being input and output embeddings.
* After shrinking the `sentencepiece` vocabulary from 250K to 30K (top 10K English and top 20K Russian tokens), the number of model parameters was reduced to 244M, and the model size was reduced from 2.2GB to 0.9GB, i.e. 42% of the original.
The creation of this model is described in the post [How to adapt a multilingual T5 model for a single language](https://cointegrated.medium.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) along with the source code.
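A minimal loading sketch, assuming the same `T5ForConditionalGeneration` interface as the other rut5 models; the parameter count should roughly reproduce the 244M figure:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained('cointegrated/rut5-base')
model = T5ForConditionalGeneration.from_pretrained('cointegrated/rut5-base')
print(f'{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters')  # expected: roughly 244M
```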
|
{"language": ["ru", "en", "multilingual"], "license": "mit", "tags": ["russian"]}
|
cointegrated/rut5-base
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"t5",
"text2text-generation",
"russian",
"ru",
"en",
"multilingual",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This is a version of the [cointegrated/rut5-small](https://huggingface.co/cointegrated/rut5-small) model fine-tuned on some Russian dialogue data. It is not very smart or creative, but it is small and fast, and it can serve as a fallback response generator for a chatbot or be fine-tuned to imitate someone's style.
The input of the model is the previous dialogue utterances separated by `'\n\n'`, and the output is the next utterance.
The model can be used as follows:
```
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-chitchat")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-chitchat")
text = 'Привет! Расскажи, как твои дела?'
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(
**inputs,
do_sample=True, top_p=0.5, num_return_sequences=3,
repetition_penalty=2.5,
max_length=32,
)
for h in hypotheses:
print(tokenizer.decode(h, skip_special_tokens=True))
# Как обычно.
# Сейчас - в порядке.
# Хорошо.
# Wall time: 363 ms
```
|
{"language": "ru", "license": "mit", "tags": ["dialogue", "russian"]}
|
cointegrated/rut5-small-chitchat
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"dialogue",
"russian",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
A version of https://huggingface.co/cointegrated/rut5-small-chitchat that is duller but less toxic.
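A minimal sketch, assuming the same usage pattern as the original chitchat model:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-chitchat2")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-chitchat2")

inputs = tokenizer('Привет! Расскажи, как твои дела?', return_tensors='pt')
with torch.no_grad():
    hypotheses = model.generate(**inputs, do_sample=True, top_p=0.5, max_length=32)
print(tokenizer.decode(hypotheses[0], skip_special_tokens=True))
```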
|
{}
|
cointegrated/rut5-small-chitchat2
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This is a small Russian denoising autoencoder. It can be used for restoring corrupted sentences.
This model was produced by fine-tuning the [rut5-small](https://huggingface.co/cointegrated/rut5-small) model on the task of reconstructing a sentence:
* restoring word positions (after slightly shuffling them)
* restoring dropped words and punctuation marks (after dropping some of them randomly)
* restoring inflection of words (after changing their inflection randomly using [natasha](https://github.com/natasha/natasha) and [pymorphy2](https://github.com/kmike/pymorphy2) packages)
The fine-tuning was performed on a [Leipzig web corpus](https://wortschatz.uni-leipzig.de/en/download/Russian) of Russian sentences.
The model can be applied as follows:
```
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-normalizer")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-normalizer")
text = 'меня тобой не понимать'
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(
**inputs,
do_sample=True, top_p=0.95,
num_return_sequences=5,
repetition_penalty=2.5,
max_length=32,
)
for h in hypotheses:
print(tokenizer.decode(h, skip_special_tokens=True))
```
A possible output is:
```
# Мне тебя не понимать.
# Если бы ты понимаешь меня?
# Я с тобой не понимаю.
# Я тебя не понимаю.
# Я не понимаю о чем ты.
```
|
{"language": "ru", "license": "mit", "tags": ["normalization", "denoising autoencoder", "russian"], "widget": [{"text": "\u043c\u0435\u043d\u044f \u0442\u043e\u0431\u043e\u0439 \u043d\u0435 \u043f\u043e\u043d\u0438\u043c\u0430\u0442\u044c"}]}
|
cointegrated/rut5-small-normalizer
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"t5",
"text2text-generation",
"normalization",
"denoising autoencoder",
"russian",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This is a small Russian paraphraser based on the [google/mt5-small](https://huggingface.co/google/mt5-small) model.
It has rather poor paraphrasing performance, but it can be fine-tuned for this or other tasks.
This model was created by taking the [alenusch/mt5small-ruparaphraser](https://huggingface.co/alenusch/mt5small-ruparaphraser) model and stripping the 96% of its vocabulary that is either unrelated to the Russian language or infrequent.
* The original model has 300M parameters, with 256M of them being input and output embeddings.
* After shrinking the `sentencepiece` vocabulary from 250K to 20K the number of model parameters reduced to 65M parameters, and model size reduced from 1.1GB to 246MB.
* The first 5K tokens in the new vocabulary are taken from the original `mt5-small`.
* The next 15K tokens are the most frequent tokens obtained by tokenizing a Russian web corpus from the [Leipzig corpora collection](https://wortschatz.uni-leipzig.de/en/download/Russian).
The model can be used as follows:
```
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small")
text = 'Ехал Грека через реку, видит Грека в реке рак. '
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(
**inputs,
do_sample=True, top_p=0.95, num_return_sequences=10,
repetition_penalty=2.5,
max_length=32,
)
for h in hypotheses:
print(tokenizer.decode(h, skip_special_tokens=True))
```
|
{"language": "ru", "license": "mit", "tags": ["paraphrasing", "russian"]}
|
cointegrated/rut5-small
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"mt5",
"text2text-generation",
"paraphrasing",
"russian",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
{}
|
coiour/mymodel001
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-address-ner
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1080
- Precision: 0.9664
- Recall: 0.9774
- F1: 0.9719
- Accuracy: 0.9758
## Model description
Given a string of Chinese address information, e.g. from a shipping label: `北京市海淀区西北旺东路10号院(马连洼街道西北旺社区东北方向)`, the model extracts the address by administrative level (7 levels in total) and returns a class for every token (see the usage sketch below). The class meanings are as follows:
| Returned class | BIO scheme | Meaning |
| ----------- | -------- | ---------------------- |
| **LABEL_0** | O | ignored information |
| **LABEL_1** | B-A1 | first-level address (beginning) |
| **LABEL_2** | I-A1 | first-level address (remaining part) |
| ... | ... | ... |
More information needed
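A minimal usage sketch, assuming the standard token-classification pipeline (the exact label inventory should be checked against `model.config.id2label`):
```python
from transformers import pipeline

ner = pipeline('token-classification', model='jiaqianjing/chinese-address-ner')
for token in ner('北京市海淀区西北旺东路10号院(马连洼街道西北旺社区东北方向)'):
    print(token['word'], token['entity'])  # e.g. LABEL_1 / LABEL_2 for the first-level address
```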
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.5055 | 1.0 | 7 | 1.6719 | 0.1977 | 0.2604 | 0.2248 | 0.5649 |
| 1.837 | 2.0 | 14 | 1.0719 | 0.4676 | 0.6 | 0.5256 | 0.7421 |
| 1.0661 | 3.0 | 21 | 0.7306 | 0.6266 | 0.7472 | 0.6816 | 0.8106 |
| 0.8373 | 4.0 | 28 | 0.5197 | 0.6456 | 0.8113 | 0.7191 | 0.8614 |
| 0.522 | 5.0 | 35 | 0.3830 | 0.7667 | 0.8679 | 0.8142 | 0.9001 |
| 0.4295 | 6.0 | 42 | 0.3104 | 0.8138 | 0.8906 | 0.8505 | 0.9178 |
| 0.3483 | 7.0 | 49 | 0.2453 | 0.8462 | 0.9132 | 0.8784 | 0.9404 |
| 0.2471 | 8.0 | 56 | 0.2081 | 0.8403 | 0.9132 | 0.8752 | 0.9428 |
| 0.2299 | 9.0 | 63 | 0.1979 | 0.8419 | 0.9245 | 0.8813 | 0.9420 |
| 0.1761 | 10.0 | 70 | 0.1823 | 0.8830 | 0.9396 | 0.9104 | 0.9500 |
| 0.1434 | 11.0 | 77 | 0.1480 | 0.9036 | 0.9547 | 0.9284 | 0.9629 |
| 0.134 | 12.0 | 84 | 0.1341 | 0.9173 | 0.9623 | 0.9392 | 0.9678 |
| 0.128 | 13.0 | 91 | 0.1365 | 0.9375 | 0.9623 | 0.9497 | 0.9694 |
| 0.0824 | 14.0 | 98 | 0.1159 | 0.9557 | 0.9774 | 0.9664 | 0.9734 |
| 0.0744 | 15.0 | 105 | 0.1092 | 0.9591 | 0.9736 | 0.9663 | 0.9766 |
| 0.0569 | 16.0 | 112 | 0.1117 | 0.9556 | 0.9736 | 0.9645 | 0.9742 |
| 0.0559 | 17.0 | 119 | 0.1040 | 0.9628 | 0.9774 | 0.9700 | 0.9790 |
| 0.0456 | 18.0 | 126 | 0.1052 | 0.9593 | 0.9774 | 0.9682 | 0.9782 |
| 0.0405 | 19.0 | 133 | 0.1133 | 0.9590 | 0.9698 | 0.9644 | 0.9718 |
| 0.0315 | 20.0 | 140 | 0.1060 | 0.9591 | 0.9736 | 0.9663 | 0.9750 |
| 0.0262 | 21.0 | 147 | 0.1087 | 0.9554 | 0.9698 | 0.9625 | 0.9718 |
| 0.0338 | 22.0 | 154 | 0.1183 | 0.9625 | 0.9698 | 0.9662 | 0.9726 |
| 0.0225 | 23.0 | 161 | 0.1080 | 0.9664 | 0.9774 | 0.9719 | 0.9758 |
| 0.028 | 24.0 | 168 | 0.1057 | 0.9591 | 0.9736 | 0.9663 | 0.9742 |
| 0.0202 | 25.0 | 175 | 0.1062 | 0.9628 | 0.9774 | 0.9700 | 0.9766 |
| 0.0168 | 26.0 | 182 | 0.1097 | 0.9664 | 0.9774 | 0.9719 | 0.9758 |
| 0.0173 | 27.0 | 189 | 0.1093 | 0.9628 | 0.9774 | 0.9700 | 0.9774 |
| 0.0151 | 28.0 | 196 | 0.1162 | 0.9628 | 0.9774 | 0.9700 | 0.9766 |
| 0.0135 | 29.0 | 203 | 0.1126 | 0.9483 | 0.9698 | 0.9590 | 0.9758 |
| 0.0179 | 30.0 | 210 | 0.1100 | 0.9449 | 0.9698 | 0.9572 | 0.9774 |
| 0.0161 | 31.0 | 217 | 0.1098 | 0.9449 | 0.9698 | 0.9572 | 0.9766 |
| 0.0158 | 32.0 | 224 | 0.1191 | 0.9483 | 0.9698 | 0.9590 | 0.9734 |
| 0.0151 | 33.0 | 231 | 0.1058 | 0.9483 | 0.9698 | 0.9590 | 0.9750 |
| 0.0121 | 34.0 | 238 | 0.0990 | 0.9593 | 0.9774 | 0.9682 | 0.9790 |
| 0.0092 | 35.0 | 245 | 0.1128 | 0.9519 | 0.9698 | 0.9607 | 0.9774 |
| 0.0097 | 36.0 | 252 | 0.1181 | 0.9627 | 0.9736 | 0.9681 | 0.9766 |
| 0.0118 | 37.0 | 259 | 0.1185 | 0.9591 | 0.9736 | 0.9663 | 0.9782 |
| 0.0118 | 38.0 | 266 | 0.1021 | 0.9557 | 0.9774 | 0.9664 | 0.9823 |
| 0.0099 | 39.0 | 273 | 0.1000 | 0.9559 | 0.9811 | 0.9683 | 0.9815 |
| 0.0102 | 40.0 | 280 | 0.1025 | 0.9559 | 0.9811 | 0.9683 | 0.9815 |
| 0.0068 | 41.0 | 287 | 0.1080 | 0.9522 | 0.9774 | 0.9646 | 0.9807 |
| 0.0105 | 42.0 | 294 | 0.1157 | 0.9449 | 0.9698 | 0.9572 | 0.9766 |
| 0.0083 | 43.0 | 301 | 0.1207 | 0.9380 | 0.9698 | 0.9536 | 0.9766 |
| 0.0077 | 44.0 | 308 | 0.1208 | 0.9483 | 0.9698 | 0.9590 | 0.9766 |
| 0.0077 | 45.0 | 315 | 0.1176 | 0.9483 | 0.9698 | 0.9590 | 0.9774 |
| 0.0071 | 46.0 | 322 | 0.1137 | 0.9483 | 0.9698 | 0.9590 | 0.9790 |
| 0.0075 | 47.0 | 329 | 0.1144 | 0.9483 | 0.9698 | 0.9590 | 0.9782 |
| 0.0084 | 48.0 | 336 | 0.1198 | 0.9483 | 0.9698 | 0.9590 | 0.9766 |
| 0.0103 | 49.0 | 343 | 0.1217 | 0.9519 | 0.9698 | 0.9607 | 0.9766 |
| 0.0087 | 50.0 | 350 | 0.1230 | 0.9519 | 0.9698 | 0.9607 | 0.9766 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.0
- Datasets 1.9.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "chinese-address-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.975825946817083}}]}]}
|
jiaqianjing/chinese-address-ner
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
jiaqianjing/distilbert-base-uncased-finetuned-cola
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0975 | 1.0 | 291 | 1.7060 |
| 1.648 | 2.0 | 582 | 1.4280 |
| 1.4837 | 3.0 | 873 | 1.3980 |
| 1.3978 | 4.0 | 1164 | 1.4040 |
| 1.3314 | 5.0 | 1455 | 1.2032 |
| 1.2954 | 6.0 | 1746 | 1.2814 |
| 1.2448 | 7.0 | 2037 | 1.2635 |
| 1.1983 | 8.0 | 2328 | 1.2071 |
| 1.1849 | 9.0 | 2619 | 1.1675 |
| 1.1414 | 10.0 | 2910 | 1.2095 |
| 1.1314 | 11.0 | 3201 | 1.1858 |
| 1.0943 | 12.0 | 3492 | 1.1658 |
| 1.0838 | 13.0 | 3783 | 1.2336 |
| 1.0733 | 14.0 | 4074 | 1.1606 |
| 1.0627 | 15.0 | 4365 | 1.1188 |
| 1.055 | 16.0 | 4656 | 1.2500 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-issues-128", "results": []}]}
|
coldfir3/bert-base-uncased-issues-128
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8262 | 1.0 | 250 | 0.3073 | 0.904 | 0.9021 |
| 0.2484 | 2.0 | 500 | 0.2175 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.922, "name": "Accuracy"}, {"type": "f1", "value": 0.9222116474112371, "name": "F1"}]}]}]}
|
coldfir3/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1759
- F1: 0.8527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3038 | 1.0 | 835 | 0.1922 | 0.8065 |
| 0.1559 | 2.0 | 1670 | 0.1714 | 0.8422 |
| 0.1002 | 3.0 | 2505 | 0.1759 | 0.8527 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-all", "results": []}]}
|
coldfir3/xlm-roberta-base-finetuned-panx-all
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 |
| 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 |
| 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de-fr", "results": []}]}
|
coldfir3/xlm-roberta-base-finetuned-panx-de-fr
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
coldfir3/xlm-roberta-base-finetuned-panx-de
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3925
- F1: 0.7075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1493 | 1.0 | 50 | 0.5884 | 0.4748 |
| 0.5135 | 2.0 | 100 | 0.4088 | 0.6623 |
| 0.3558 | 3.0 | 150 | 0.3925 | 0.7075 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-en", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.en"}, "metrics": [{"type": "f1", "value": 0.7075365579302588, "name": "F1"}]}]}]}
|
coldfir3/xlm-roberta-base-finetuned-panx-en
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2651
- F1: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5954 | 1.0 | 191 | 0.3346 | 0.7975 |
| 0.2689 | 2.0 | 382 | 0.2900 | 0.8347 |
| 0.1821 | 3.0 | 573 | 0.2651 | 0.8355 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-fr", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.fr"}, "metrics": [{"type": "f1", "value": 0.8354854938789199, "name": "F1"}]}]}]}
|
coldfir3/xlm-roberta-base-finetuned-panx-fr
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2323
- F1: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8126 | 1.0 | 70 | 0.3361 | 0.7231 |
| 0.2995 | 2.0 | 140 | 0.2526 | 0.8079 |
| 0.1865 | 3.0 | 210 | 0.2323 | 0.8228 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-it", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.it"}, "metrics": [{"type": "f1", "value": 0.822805578342904, "name": "F1"}]}]}]}
|
coldfir3/xlm-roberta-base-finetuned-panx-it
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
colinad07/random
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
colochoplay/DialoGTP-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
colon0722/distilbert-base-uncased-finetuned-imdb
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
# BERT base Japanese model
This repository contains a BERT base model trained on a Japanese Wikipedia dataset.
## Training data
The [Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of June 20, 2021, which is released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used for training.
The dataset is split into three subsets: train, valid and test. Both the tokenizer and the model are trained with the train split.
## Model description
The model architecture is the same as the BERT base model (hidden_size: 768, num_hidden_layers: 12, num_attention_heads: 12, max_position_embeddings: 512) except for the vocabulary size.
The vocabulary size is set to 32,000 instead of the original size of 30,522.
For the model, `transformers.BertForPreTraining` is used.
## Tokenizer description
A [SentencePiece](https://github.com/google/sentencepiece) tokenizer is used for this model.
The tokenizer model was trained with 1,000,000 samples extracted from the train split.
The vocabulary size is set to 32,000. The `add_dummy_prefix` option is set to `True` because words are not separated by whitespace in Japanese.
After training, the model is imported into `transformers.DebertaV2Tokenizer` because it supports SentencePiece models and its behavior is consistent whether the `use_fast` option is set to `True` or `False`.
**Note:**
The meaning of "consistent" here is as follows.
For example, ALBERT provides both AlbertTokenizer and AlbertTokenizerFast, and the fast tokenizer is used by default. However, their tokenization behavior differs, and the behavior this model expects is that of the non-fast version.
Although passing `use_fast=False` to AutoTokenizer or pipeline forces the non-fast version of the tokenizer and solves this problem, this option cannot be set in config.json or the model card.
Therefore, unexpected behavior can happen when using the Inference API. To avoid this kind of problem, `transformers.DebertaV2Tokenizer` is used in this model.
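As a small illustration, the tokenizer can therefore be loaded through `transformers.AutoTokenizer` with either `use_fast` setting; the subword split printed below is only illustrative:
```python
from transformers import AutoTokenizer

# Resolves to DebertaV2Tokenizer for this repository, so fast and non-fast behave the same
tokenizer = AutoTokenizer.from_pretrained("colorfulscoop/bert-base-ja", revision="v1.0")
print(tokenizer.tokenize("専門として工学を専攻しています"))
```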
## Training
Training details are as follows.
* gradient updates are applied every 256 samples (batch size: 8, accumulate_grad_batches: 32)
* gradient clip norm is 1.0
* learning rate starts from 0 and is linearly increased to 0.0001 over the first 10,000 steps
* the training set contains around 20M samples; because 80k * 256 ~ 20M, one epoch has around 80k steps
Training was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti.
Training continued until the validation loss got worse. In total, the number of training steps was around 214k.
The test set loss was 2.80.
Training code is available in [a GitHub repository](https://github.com/colorfulscoop/bert-ja).
## Usage
First, install dependencies.
```sh
$ pip install torch==1.8.0 transformers==4.8.2 sentencepiece==0.1.95
```
Then use `transformers.pipeline` to try mask fill task.
```python
>>> import transformers
>>> pipeline = transformers.pipeline("fill-mask", "colorfulscoop/bert-base-ja", revision="v1.0")
>>> pipeline("専門として[MASK]を専攻しています")
[{'sequence': '専門として工学を専攻しています', 'score': 0.03630176931619644, 'token': 3988, 'token_str': '工学'}, {'sequence': '専門として政治学を専攻しています', 'score': 0.03547220677137375, 'token': 22307, 'token_str': '政治学'}, {'sequence': '専門として教育を専攻しています', 'score': 0.03162326663732529, 'token': 414, 'token_str': '教育'}, {'sequence': '専門として経済学を専攻しています', 'score': 0.026036914438009262, 'token': 6814, 'token_str': '経済学'}, {'sequence': '専門として法学を専攻しています', 'score': 0.02561848610639572, 'token': 10810, 'token_str': '法学'}]
```
Note: specifying the `revision` option is recommended for reproducibility when downloading a model via `transformers.pipeline` or `transformers.AutoModel.from_pretrained` .
## License
Copyright (c) 2021 Colorful Scoop
All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
**Disclaimer:** The model may generate texts that are similar to the training data, untrue, or biased. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
---
This model utilizes the following data as training data
* **Name:** ウィキペディア (Wikipedia): フリー百科事典
* **Credit:** https://ja.wikipedia.org/
* **License:** [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/)
* **Link:** https://ja.wikipedia.org/
|
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": "wikipedia", "pipeline_tag": "fill-mask", "widget": [{"text": "\u5f97\u610f\u306a\u79d1\u76ee\u306f[MASK]\u3067\u3059\u3002"}]}
|
colorfulscoop/bert-base-ja
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT-2 small Japanese model
This repository contains a GPT2-small model trained on the Japanese Wikipedia dataset.
## Training data
[Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of Aug 20, 2021, released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used for both the tokenizer and the GPT-2 model.
We split the dataset into three subsets - train, valid and test sets. Both the tokenizer and the model were trained on the train set.
The train set contains around 540M tokens.
## Model description
The model architecture is the same as the GPT-2 small model (n_ctx: 1024, n_embd: 768, n_head: 12, n_layer: 12) except for the vocabulary size.
The vocabulary size is set to 32,000 instead of the original size of 50,257.
`transformers.GPT2LMHeadModel` is used for training.
## Tokenizer description
[SentencePiece](https://github.com/google/sentencepiece) is used as a tokenizer for this model.
We utilized 1,000,000 sentences from the train set.
The vocabulary size was 32,000.
The `add_dummy_prefix` option was set to `True` because Japanese words are not separated by whitespace.
After training, the tokenizer model was imported as `transformers.BertGenerationTokenizer`
because it supports SentencePiece models and does not add any special tokens by default,
which is especially useful for a text generation task.
## Training
The model was trained on the train set for 30 epochs with batch size 32. Each sample contained 1024 tokens.
We utilized the Adam optimizer. The learning rate was linearly increased from `0` to `1e-4` during the first 10,000 steps.
The gradient clip norm was set to `1.0`.
Test set perplexity of the trained model was 29.13.
Please refer to [GitHub](https://github.com/colorfulscoop/gpt-ja) for more training details.
## Usage
First, install dependencies.
```sh
$ pip install transformers==4.10.0 torch==1.8.1 sentencepiece==0.1.96
```
Then use pipeline to generate sentences.
```py
>>> import transformers
>>> pipeline = transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja")
>>> pipeline("統計的機械学習でのニューラルネットワーク", do_sample=True, top_p=0.95, top_k=50, num_return_sequences=3)
```
**Note:** The default model configuration `config.json` sets parameters for text generation with `do_sample=True`, `top_k=50`, `top_p=0.95`.
Please override these parameters when you need different values.
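As an illustration, these defaults can be overridden per call; the parameter values below are arbitrary examples:
```py
import transformers

pipeline = transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja")
# Generation arguments passed here take precedence over the defaults in config.json
output = pipeline(
    "統計的機械学習でのニューラルネットワーク",
    do_sample=True,
    top_p=0.90,
    top_k=40,
    max_length=60,
    num_return_sequences=1,
)
print(output[0]["generated_text"])
```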
## Versions
We recommend specifying `revision` when loading the model, for reproducibility.
| Revision | Date of Wikipedia dump |
| --- | --- |
| 20210820.1.0 | Aug 20, 2021 |
| 20210301.1.0 | March 1, 2021 |
You can specify `revision` as follows.
```py
# Example of pipeline
>>> transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja", revision="20210820.1.0")
# Example of AutoModel
>>> transformers.AutoModel.from_pretrained("colorfulscoop/gpt2-small-ja", revision="20210820.1.0")
```
## License
All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
**Disclaimer:** The model may generate texts that are similar to the training data, untrue, or biased. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
**Author:** Colorful Scoop
|
{"language": "ja", "license": "cc", "datasets": "wikipedia", "widget": [{"text": "\u7d71\u8a08\u7684\u6a5f\u68b0\u5b66\u7fd2\u3067\u306e\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af"}]}
|
colorfulscoop/gpt2-small-ja
| null |
[
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
sentence-similarity
|
sentence-transformers
|
# Sentence BERT base Japanese model
This repository contains a Sentence BERT base model for Japanese.
## Pretrained model
This model utilizes a Japanese BERT model [colorfulscoop/bert-base-ja](https://huggingface.co/colorfulscoop/bert-base-ja) v1.0 released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) as a pretrained model.
## Training data
[Japanese SNLI dataset](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) released under [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/) is used for training.
The original training dataset is split into train/valid sets. Finally, the following data is prepared:
* Train data: 523,005 samples
* Valid data: 10,000 samples
* Test data: 3,916 samples
## Model description
This model utilizes the `SentenceTransformer` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library.
The model details are as follows.
```py
>>> from sentence_transformers import SentenceTransformer
>>> SentenceTransformer("colorfulscoop/sbert-base-ja")
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Training
This model fine-tunes [colorfulscoop/bert-base-ja](https://huggingface.co/colorfulscoop/bert-base-ja) with a softmax classifier over the 3 SNLI labels. The AdamW optimizer with a learning rate of 2e-05, linearly warmed up over the first 10% of the training data, was used. The model was trained for 1 epoch with batch size 8.
Note: in the original [Sentence BERT](https://arxiv.org/abs/1908.10084) paper, the batch size of the model trained on SNLI and Multi-Genre NLI was 16. Here, the dataset is around half the size of the original one, so the batch size was set to 8, half of the original 16.
Training was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti.
After training, the test set accuracy reached 0.8529.
Training code is available in [a GitHub repository](https://github.com/colorfulscoop/sbert-ja).
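A minimal sketch of this fine-tuning setup using the sentence-transformers training API is shown below; the training examples, label mapping and warmup step count are placeholders, and the actual JSNLI data loading lives in the repository linked above.
```py
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the pretrained Japanese BERT model
model = SentenceTransformer("colorfulscoop/bert-base-ja")

# Placeholder NLI examples; the label indices 0-2 for the three NLI classes are illustrative
train_examples = [
    InputExample(texts=["外をランニングするのが好きです", "走るのが趣味です"], label=1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

# Softmax classifier over the 3 SNLI labels, as described above
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,  # the actual run warmed up over the first 10% of the training data
    optimizer_params={"lr": 2e-5},
)
```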
## Usage
First, install dependencies.
```sh
$ pip install sentence-transformers==2.0.0
```
Then initialize `SentenceTransformer` model and use `encode` method to convert to vectors.
```py
>>> from sentence_transformers import SentenceTransformer
>>> model = SentenceTransformer("colorfulscoop/sbert-base-ja")
>>> sentences = ["外をランニングするのが好きです", "海外旅行に行くのが趣味です"]
>>> model.encode(sentences)
```
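The returned embeddings can then be compared, for example with cosine similarity; the snippet below continues from the one above, and the exact value printed depends on the model weights:
```py
>>> import numpy as np
>>> a, b = model.encode(["外をランニングするのが好きです", "海外旅行に行くのが趣味です"])
>>> float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```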
## License
Copyright (c) 2021 Colorful Scoop
All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
**Disclaimer:** Use of this model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
---
This model utilizes the following pretrained model.
* **Name:** bert-base-ja
* **Credit:** (c) 2021 Colorful Scoop
* **License:** [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/)
* **Disclaimer:** The model may generate texts that are similar to the training data, untrue, or biased. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
* **Link:** https://huggingface.co/colorfulscoop/bert-base-ja
---
This model utilizes the following data for fine-tuning.
* **Name:** 日本語SNLI(JSNLI)データセット
* **Credit:** [https://nlp.ist.i.kyoto-u.ac.jp/index.php?日本語SNLI(JSNLI)データセット](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
* **License:** [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
* **Link:** [https://nlp.ist.i.kyoto-u.ac.jp/index.php?日本語SNLI(JSNLI)データセット](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
|
{"language": "ja", "license": "cc-by-sa-4.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity", "widget": {"source_sentence": "\u8d70\u308b\u306e\u304c\u8da3\u5473\u3067\u3059", "sentences": ["\u5916\u3092\u30e9\u30f3\u30cb\u30f3\u30b0\u3059\u308b\u306e\u304c\u597d\u304d\u3067\u3059", "\u904b\u52d5\u306f\u305d\u3053\u305d\u3053\u3067\u3059", "\u8d70\u308b\u306e\u306f\u5acc\u3044\u3067\u3059"]}}
|
colorfulscoop/sbert-base-ja
| null |
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"ja",
"arxiv:1908.10084",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
{}
|
comacrae/roberta-eda-and-parav3
| null |
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
question-answering
|
transformers
|
{}
|
comacrae/roberta-edav3
| null |
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
question-answering
|
transformers
|
{}
|
comacrae/roberta-paraphrasev3
| null |
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
question-answering
|
transformers
|
{}
|
comacrae/roberta-unaugmentedv3
| null |
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
question-answering
|
transformers
|
{}
|
comacrae/roberta-unaugv3
| null |
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
automatic-speech-recognition
|
transformers
|
# Czech wav2vec2-xls-r-300m-cs-250
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset as well as other datasets listed below.
It achieves the following results on the evaluation set:
- Loss: 0.1271
- Wer: 0.1475
- Cer: 0.0329
The `eval.py` script results using an LM are:
- WER: 0.07274312090176113
- CER: 0.021207369275558875
## Model description
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-250 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training, as well as the following datasets:
- Šmídl, Luboš and Pražák, Aleš, 2013, OVM – Otázky Václava Moravce, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-000D-EC98-3.
- Pražák, Aleš and Šmídl, Luboš, 2012, Czech Parliament Meetings, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-0005-CF9C-4.
- Plátek, Ondřej; Dušek, Ondřej and Jurčíček, Filip, 2016, Vystadial 2016 – Czech data, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11234/1-1740.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.4203 | 0.16 | 800 | 3.3148 | 1.0 | 1.0 |
| 2.8151 | 0.32 | 1600 | 0.8508 | 0.8938 | 0.2345 |
| 0.9411 | 0.48 | 2400 | 0.3335 | 0.3723 | 0.0847 |
| 0.7408 | 0.64 | 3200 | 0.2573 | 0.2840 | 0.0642 |
| 0.6516 | 0.8 | 4000 | 0.2365 | 0.2581 | 0.0595 |
| 0.6242 | 0.96 | 4800 | 0.2039 | 0.2433 | 0.0541 |
| 0.5754 | 1.12 | 5600 | 0.1832 | 0.2156 | 0.0482 |
| 0.5626 | 1.28 | 6400 | 0.1827 | 0.2091 | 0.0463 |
| 0.5342 | 1.44 | 7200 | 0.1744 | 0.2033 | 0.0468 |
| 0.4965 | 1.6 | 8000 | 0.1705 | 0.1963 | 0.0444 |
| 0.5047 | 1.76 | 8800 | 0.1604 | 0.1889 | 0.0422 |
| 0.4814 | 1.92 | 9600 | 0.1604 | 0.1827 | 0.0411 |
| 0.4471 | 2.09 | 10400 | 0.1566 | 0.1822 | 0.0406 |
| 0.4509 | 2.25 | 11200 | 0.1619 | 0.1853 | 0.0432 |
| 0.4415 | 2.41 | 12000 | 0.1513 | 0.1764 | 0.0397 |
| 0.4313 | 2.57 | 12800 | 0.1515 | 0.1739 | 0.0392 |
| 0.4163 | 2.73 | 13600 | 0.1445 | 0.1695 | 0.0377 |
| 0.4142 | 2.89 | 14400 | 0.1478 | 0.1699 | 0.0385 |
| 0.4184 | 3.05 | 15200 | 0.1430 | 0.1669 | 0.0376 |
| 0.3886 | 3.21 | 16000 | 0.1433 | 0.1644 | 0.0374 |
| 0.3795 | 3.37 | 16800 | 0.1426 | 0.1648 | 0.0373 |
| 0.3859 | 3.53 | 17600 | 0.1357 | 0.1604 | 0.0361 |
| 0.3762 | 3.69 | 18400 | 0.1344 | 0.1558 | 0.0349 |
| 0.384 | 3.85 | 19200 | 0.1379 | 0.1576 | 0.0359 |
| 0.3762 | 4.01 | 20000 | 0.1344 | 0.1539 | 0.0346 |
| 0.3559 | 4.17 | 20800 | 0.1339 | 0.1525 | 0.0351 |
| 0.3683 | 4.33 | 21600 | 0.1315 | 0.1518 | 0.0342 |
| 0.3572 | 4.49 | 22400 | 0.1307 | 0.1507 | 0.0342 |
| 0.3494 | 4.65 | 23200 | 0.1294 | 0.1491 | 0.0335 |
| 0.3476 | 4.81 | 24000 | 0.1287 | 0.1491 | 0.0336 |
| 0.3475 | 4.97 | 24800 | 0.1271 | 0.1475 | 0.0329 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["cs"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week"], "datasets": ["mozilla-foundation/common_voice_8_0", "ovm", "pscr", "vystadial2016"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "Czech comodoro Wav2Vec2 XLSR 300M 250h data", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "cs"}, "metrics": [{"type": "wer", "value": 7.3, "name": "Test WER"}, {"type": "cer", "value": 2.1, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 43.44, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 38.5, "name": "Test WER"}]}]}]}
|
comodoro/wav2vec2-xls-r-300m-cs-250
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"cs",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:ovm",
"dataset:pscr",
"dataset:vystadial2016",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-cs-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset.
It achieves the following results on the evaluation set while training:
- Loss: 0.2327
- Wer: 0.1608
- Cer: 0.0376
The `eval.py` script results using an LM are:
- WER: 0.10281503199350225
- CER: 0.02622802241689026
## Model description
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-cv8")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training
## Training procedure
### Training hyperparameters
The following hyperparameters were used during first stage of training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
The following hyperparameters were used during second stage of training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 7.2926 | 8.06 | 250 | 3.8497 | 1.0 | 1.0 |
| 3.417 | 16.13 | 500 | 3.2852 | 1.0 | 0.9857 |
| 2.0264 | 24.19 | 750 | 0.7099 | 0.7342 | 0.1768 |
| 0.4018 | 32.25 | 1000 | 0.6188 | 0.6415 | 0.1551 |
| 0.2444 | 40.32 | 1250 | 0.6632 | 0.6362 | 0.1600 |
| 0.1882 | 48.38 | 1500 | 0.6070 | 0.5783 | 0.1388 |
| 0.153 | 56.44 | 1750 | 0.6425 | 0.5720 | 0.1377 |
| 0.1214 | 64.51 | 2000 | 0.6363 | 0.5546 | 0.1337 |
| 0.1011 | 72.57 | 2250 | 0.6310 | 0.5222 | 0.1224 |
| 0.0879 | 80.63 | 2500 | 0.6353 | 0.5258 | 0.1253 |
| 0.0782 | 88.7 | 2750 | 0.6078 | 0.4904 | 0.1127 |
| 0.0709 | 96.76 | 3000 | 0.6465 | 0.4960 | 0.1154 |
| 0.0661 | 104.82 | 3250 | 0.6622 | 0.4945 | 0.1166 |
| 0.0616 | 112.89 | 3500 | 0.6440 | 0.4786 | 0.1104 |
| 0.0579 | 120.95 | 3750 | 0.6815 | 0.4887 | 0.1144 |
| 0.0549 | 129.03 | 4000 | 0.6603 | 0.4780 | 0.1105 |
| 0.0527 | 137.09 | 4250 | 0.6652 | 0.4749 | 0.1090 |
| 0.0506 | 145.16 | 4500 | 0.6958 | 0.4846 | 0.1133 |
Further fine-tuning with slightly different architecture and higher learning rate:
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.576 | 8.06 | 250 | 0.2411 | 0.2340 | 0.0502 |
| 0.2564 | 16.13 | 500 | 0.2305 | 0.2097 | 0.0492 |
| 0.2018 | 24.19 | 750 | 0.2371 | 0.2059 | 0.0494 |
| 0.1549 | 32.25 | 1000 | 0.2298 | 0.1844 | 0.0435 |
| 0.1224 | 40.32 | 1250 | 0.2288 | 0.1725 | 0.0407 |
| 0.1004 | 48.38 | 1500 | 0.2327 | 0.1608 | 0.0376 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["cs"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Czech comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "cs"}, "metrics": [{"type": "wer", "value": 10.3, "name": "Test WER"}, {"type": "cer", "value": 2.6, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 54.29, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 44.55, "name": "Test WER"}]}]}]}
|
comodoro/wav2vec2-xls-r-300m-cs-cv8
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"cs",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice 6.1.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\/\"\“\„\%\”\�\–\'\`\«\»\—\’\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed dataset and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 22.20 %
## Training
The Common Voice `train` and `validation` datasets were used for training
# TODO The script used for training can be found [here](...)
|
{"language": ["cs"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "Czech comodoro Wav2Vec2 XLSR 300M CV6.1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 6.1", "type": "common_voice", "args": "cs"}, "metrics": [{"type": "wer", "value": 22.2, "name": "Test WER"}, {"type": "cer", "value": 5.1, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 66.78, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 57.52, "name": "Test WER"}]}]}]}
|
comodoro/wav2vec2-xls-r-300m-cs
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"xlsr-fine-tuning-week",
"cs",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Upper Sorbian wav2vec2-xls-r-300m-hsb-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9643
- Wer: 0.5037
- Cer: 0.1278
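## Usage
The model can be used directly (without a language model), analogously to the other models in this series. The sketch below assumes a local recording at a placeholder path and resamples it to the required 16 kHz:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-hsb-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-hsb-cv8")

# Load a local recording (placeholder path) and resample it to 16 kHz
speech_array, sampling_rate = torchaudio.load("example.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```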
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-hsb-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config hsb
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 4.3121 | 19.35 | 1200 | 3.2059 | 1.0 | 1.0 |
| 2.6525 | 38.71 | 2400 | 1.1324 | 0.9387 | 0.3204 |
| 1.3644 | 58.06 | 3600 | 0.8767 | 0.8099 | 0.2271 |
| 1.093 | 77.42 | 4800 | 0.8739 | 0.7603 | 0.2090 |
| 0.9546 | 96.77 | 6000 | 0.8454 | 0.6983 | 0.1882 |
| 0.8554 | 116.13 | 7200 | 0.8197 | 0.6484 | 0.1708 |
| 0.775 | 135.48 | 8400 | 0.8452 | 0.6345 | 0.1681 |
| 0.7167 | 154.84 | 9600 | 0.8551 | 0.6241 | 0.1631 |
| 0.6609 | 174.19 | 10800 | 0.8442 | 0.5821 | 0.1531 |
| 0.616 | 193.55 | 12000 | 0.8892 | 0.5864 | 0.1527 |
| 0.5815 | 212.9 | 13200 | 0.8839 | 0.5772 | 0.1503 |
| 0.55 | 232.26 | 14400 | 0.8905 | 0.5665 | 0.1436 |
| 0.5173 | 251.61 | 15600 | 0.8995 | 0.5471 | 0.1417 |
| 0.4969 | 270.97 | 16800 | 0.8633 | 0.5325 | 0.1334 |
| 0.4803 | 290.32 | 18000 | 0.9074 | 0.5253 | 0.1352 |
| 0.4596 | 309.68 | 19200 | 0.9159 | 0.5146 | 0.1294 |
| 0.4415 | 329.03 | 20400 | 0.9055 | 0.5189 | 0.1314 |
| 0.434 | 348.39 | 21600 | 0.9435 | 0.5208 | 0.1314 |
| 0.4199 | 367.74 | 22800 | 0.9199 | 0.5136 | 0.1290 |
| 0.4008 | 387.1 | 24000 | 0.9342 | 0.5174 | 0.1303 |
| 0.4051 | 406.45 | 25200 | 0.9436 | 0.5132 | 0.1292 |
| 0.3861 | 425.81 | 26400 | 0.9417 | 0.5084 | 0.1283 |
| 0.3738 | 445.16 | 27600 | 0.9573 | 0.5079 | 0.1299 |
| 0.3768 | 464.52 | 28800 | 0.9682 | 0.5062 | 0.1289 |
| 0.3647 | 483.87 | 30000 | 0.9643 | 0.5037 | 0.1278 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["hsb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "Upper Sorbian comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hsb"}, "metrics": [{"type": "wer", "value": 56.3, "name": "Test WER"}, {"type": "cer", "value": 14.3, "name": "Test CER"}]}]}]}
|
comodoro/wav2vec2-xls-r-300m-hsb-cv8
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"hsb",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# wav2vec2-xls-r-300m-pl-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset.
It achieves the following results on the evaluation set while training:
- Loss: 0.1716
- Wer: 0.1697
- Cer: 0.0385
The `eval.py` script results are:
- WER: 0.16970531733661967
- CER: 0.03839135416519316
## Model description
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "pl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-pl-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-pl-cv8")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-pl-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config pl
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training
## Training procedure
### Training hyperparameters
The following hyperparameters were used:
- learning_rate: 1e-4
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
The training was interrupted after 3250 steps.
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["pl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "Polish comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "pl"}, "metrics": [{"type": "wer", "value": 17.0, "name": "Test WER"}, {"type": "cer", "value": 3.8, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "pl"}, "metrics": [{"type": "wer", "value": 38.97, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "pl"}, "metrics": [{"type": "wer", "value": 46.05, "name": "Test WER"}]}]}]}
|
comodoro/wav2vec2-xls-r-300m-pl-cv8
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"pl",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# wav2vec2-xls-r-300m-sk-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset.
It achieves the following results on the evaluation set:
- WER: 0.49575384615384616
- CER: 0.13333333333333333
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "sk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-sk-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-sk-cv8")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-sk-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config sk
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-4
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["sk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "Slovak comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sk"}, "metrics": [{"type": "wer", "value": 49.6, "name": "Test WER"}, {"type": "cer", "value": 13.3, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sk"}, "metrics": [{"type": "wer", "value": 81.7, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sk"}, "metrics": [{"type": "wer", "value": 80.26, "name": "Test WER"}]}]}]}
|
comodoro/wav2vec2-xls-r-300m-sk-cv8
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"sk",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Serbian wav2vec2-xls-r-300m-sr-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7302
- Wer: 0.4825
- Cer: 0.1847
Evaluation on mozilla-foundation/common_voice_8_0 gave the following results:
- WER: 0.48530097993467103
- CER: 0.18413288165227845
Evaluation on speech-recognition-community-v2/dev_data gave the following results:
- WER: 0.9718373107518604
- CER: 0.8302740620263108
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-sr-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config sr
```
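## Usage
The model can be used directly (without a language model) in the same way as the other models in this series; the snippet below mirrors the usage sections of the related model cards, and the exact transcriptions are not guaranteed:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "sr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-sr-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-sr-cv8")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Read the audio files as arrays and resample them to 16 kHz
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```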
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 5.6536 | 15.0 | 1200 | 2.9744 | 1.0 | 1.0 |
| 2.7935 | 30.0 | 2400 | 1.6613 | 0.8998 | 0.4670 |
| 1.6538 | 45.0 | 3600 | 0.9248 | 0.6918 | 0.2699 |
| 1.2446 | 60.0 | 4800 | 0.9151 | 0.6452 | 0.2398 |
| 1.0766 | 75.0 | 6000 | 0.9110 | 0.5995 | 0.2207 |
| 0.9548 | 90.0 | 7200 | 1.0273 | 0.5921 | 0.2149 |
| 0.8919 | 105.0 | 8400 | 0.9929 | 0.5646 | 0.2117 |
| 0.8185 | 120.0 | 9600 | 1.0850 | 0.5483 | 0.2069 |
| 0.7692 | 135.0 | 10800 | 1.1001 | 0.5394 | 0.2055 |
| 0.7249 | 150.0 | 12000 | 1.1018 | 0.5380 | 0.1958 |
| 0.6786 | 165.0 | 13200 | 1.1344 | 0.5114 | 0.1941 |
| 0.6432 | 180.0 | 14400 | 1.1516 | 0.5054 | 0.1905 |
| 0.6009 | 195.0 | 15600 | 1.3149 | 0.5324 | 0.1991 |
| 0.5773 | 210.0 | 16800 | 1.2468 | 0.5124 | 0.1903 |
| 0.559 | 225.0 | 18000 | 1.2186 | 0.4956 | 0.1922 |
| 0.5298 | 240.0 | 19200 | 1.4483 | 0.5333 | 0.2085 |
| 0.5136 | 255.0 | 20400 | 1.2871 | 0.4802 | 0.1846 |
| 0.4824 | 270.0 | 21600 | 1.2891 | 0.4974 | 0.1885 |
| 0.4669 | 285.0 | 22800 | 1.3283 | 0.4942 | 0.1878 |
| 0.4511 | 300.0 | 24000 | 1.4502 | 0.5002 | 0.1994 |
| 0.4337 | 315.0 | 25200 | 1.4714 | 0.5035 | 0.1911 |
| 0.4221 | 330.0 | 26400 | 1.4971 | 0.5124 | 0.1962 |
| 0.3994 | 345.0 | 27600 | 1.4473 | 0.5007 | 0.1920 |
| 0.3892 | 360.0 | 28800 | 1.3904 | 0.4937 | 0.1887 |
| 0.373 | 375.0 | 30000 | 1.4971 | 0.4946 | 0.1902 |
| 0.3657 | 390.0 | 31200 | 1.4208 | 0.4900 | 0.1821 |
| 0.3559 | 405.0 | 32400 | 1.4648 | 0.4895 | 0.1835 |
| 0.3476 | 420.0 | 33600 | 1.4848 | 0.4946 | 0.1829 |
| 0.3276 | 435.0 | 34800 | 1.5597 | 0.4979 | 0.1873 |
| 0.3193 | 450.0 | 36000 | 1.7329 | 0.5040 | 0.1980 |
| 0.3078 | 465.0 | 37200 | 1.6379 | 0.4937 | 0.1882 |
| 0.3058 | 480.0 | 38400 | 1.5878 | 0.4942 | 0.1921 |
| 0.2987 | 495.0 | 39600 | 1.5590 | 0.4811 | 0.1846 |
| 0.2931 | 510.0 | 40800 | 1.6001 | 0.4825 | 0.1849 |
| 0.276 | 525.0 | 42000 | 1.7388 | 0.4942 | 0.1918 |
| 0.2702 | 540.0 | 43200 | 1.7037 | 0.4839 | 0.1866 |
| 0.2619 | 555.0 | 44400 | 1.6704 | 0.4755 | 0.1840 |
| 0.262 | 570.0 | 45600 | 1.6042 | 0.4751 | 0.1865 |
| 0.2528 | 585.0 | 46800 | 1.6402 | 0.4821 | 0.1865 |
| 0.2442 | 600.0 | 48000 | 1.6693 | 0.4886 | 0.1862 |
| 0.244 | 615.0 | 49200 | 1.6203 | 0.4765 | 0.1792 |
| 0.2388 | 630.0 | 50400 | 1.6829 | 0.4830 | 0.1828 |
| 0.2362 | 645.0 | 51600 | 1.8100 | 0.4928 | 0.1888 |
| 0.2224 | 660.0 | 52800 | 1.7746 | 0.4932 | 0.1899 |
| 0.2218 | 675.0 | 54000 | 1.7752 | 0.4946 | 0.1901 |
| 0.2201 | 690.0 | 55200 | 1.6775 | 0.4788 | 0.1844 |
| 0.2147 | 705.0 | 56400 | 1.7085 | 0.4844 | 0.1851 |
| 0.2103 | 720.0 | 57600 | 1.7624 | 0.4848 | 0.1864 |
| 0.2101 | 735.0 | 58800 | 1.7213 | 0.4783 | 0.1835 |
| 0.1983 | 750.0 | 60000 | 1.7452 | 0.4848 | 0.1856 |
| 0.2015 | 765.0 | 61200 | 1.7525 | 0.4872 | 0.1869 |
| 0.1969 | 780.0 | 62400 | 1.7443 | 0.4844 | 0.1852 |
| 0.2043 | 795.0 | 63600 | 1.7302 | 0.4825 | 0.1847 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["sr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0", {"name": "Serbian comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"name": "Automatic Speech Recognition", "type": "automatic-speech-recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sr"}, "metrics": [{"name": "Test WER", "type": "wer", "value": 48.5}, {"name": "Test CER", "type": "cer", "value": 18.4}]}]}], "model-index": [{"name": "wav2vec2-xls-r-300m-sr-cv8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "sr"}, "metrics": [{"type": "wer", "value": 48.53, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sr"}, "metrics": [{"type": "wer", "value": 97.43, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sr"}, "metrics": [{"type": "wer", "value": 96.69, "name": "Test WER"}]}]}]}
|
comodoro/wav2vec2-xls-r-300m-sr-cv8
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"sr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# wav2vec2-xls-r-300m-west-slavic-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. Training and validation sets were concatenated and shuffled.
The evaluation set used during training was concatenated from the respective test sets and shuffled, limiting each language to at most 2000 samples. During training, a WER of approximately 70 was achieved on this set.
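### Usage
The model is intended for speech in any of the five languages listed above. The following sketch assumes a local recording at a placeholder path and resamples it to the required 16 kHz:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "comodoro/wav2vec2-xls-r-300m-west-slavic-cv8"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder path to a recording in Czech, Slovak, Polish, Slovenian or Upper Sorbian
speech_array, sampling_rate = torchaudio.load("recording.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print("Prediction:", processor.batch_decode(torch.argmax(logits, dim=-1)))
```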
### Evaluation script
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-west-slavic-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config {lang}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["cs", "hsb", "pl", "sk", "sl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-west-slavic-cv8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "cs"}, "metrics": [{"type": "wer", "value": 53.5, "name": "Test WER"}, {"type": "cer", "value": 14.7, "name": "Test CER"}, {"type": "wer", "value": 81.7, "name": "Test WER"}, {"type": "cer", "value": 21.2, "name": "Test CER"}, {"type": "wer", "value": 60.2, "name": "Test WER"}, {"type": "cer", "value": 15.6, "name": "Test CER"}, {"type": "wer", "value": 69.6, "name": "Test WER"}, {"type": "cer", "value": 20.7, "name": "Test CER"}, {"type": "wer", "value": 73.2, "name": "Test WER"}, {"type": "cer", "value": 23.2, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 84.11, "name": "Test WER"}, {"type": "wer", "value": 65.3, "name": "Test WER"}, {"type": "wer", "value": 88.37, "name": "Test WER"}, {"type": "wer", "value": 87.69, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 75.99, "name": "Test WER"}, {"type": "wer", "value": 72.0, "name": "Test WER"}, {"type": "wer", "value": 89.08, "name": "Test WER"}, {"type": "wer", "value": 87.89, "name": "Test WER"}]}]}]}
|
comodoro/wav2vec2-xls-r-300m-west-slavic-cv8
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"cs",
"hsb",
"pl",
"sk",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{}
|
congcongwang/bart-base-en-zh
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
congcongwang/distilgpt2_fine_tuned_coder
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
congcongwang/gpt2_medium_fine_tuned_coder
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
congcongwang/t5-base-fine-tuned-wnut-2020-task3
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
congcongwang/t5-large-fine-tuned-wnut-2020-task3
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
congpt/wav2vec2-large-xlsr-vietnamese
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-toxic
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2768
## Model description
More information needed
## Intended uses & limitations
More information needed
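The original card leaves this section empty. As a minimal, hedged sketch (the hub id is taken from this repository's name, and the intended use is not documented here), the checkpoint can presumably be loaded for masked-token prediction like any RoBERTa-style fill-mask model:
```python
from transformers import pipeline

# Hypothetical usage sketch; the model id is assumed from the repository name.
unmasker = pipeline("fill-mask", model="conjuring92/distilroberta-base-finetuned-toxic")
print(unmasker("This comment section is full of <mask> remarks."))
```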
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
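For reference, a hedged sketch of how these hyperparameters might map onto a `TrainingArguments` configuration is shown below; the dataset and model setup are omitted because the card does not document them, and `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above (Adam betas/epsilon are the library defaults).
training_args = TrainingArguments(
    output_dir="distilroberta-base-finetuned-toxic",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```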
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5338 | 1.0 | 313 | 2.3127 |
| 2.4482 | 2.0 | 626 | 2.2985 |
| 2.4312 | 3.0 | 939 | 2.2411 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0
- Datasets 1.18.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-finetuned-toxic", "results": []}]}
|
conjuring92/distilroberta-base-finetuned-toxic
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Snape DialoGPT Model
|
{"tags": ["conversational"]}
|
conniezyj/DialoGPT-small-snape
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
Named-entity recognition model trained on the I2B2 training data set for PHI.
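A minimal usage sketch follows. It assumes the checkpoint ships standard token-classification labels for the I2B2 PHI categories (the label set is not documented on this card), and the example sentence is invented:
```python
from transformers import pipeline

# Hedged sketch: model id taken from this repository; label names depend on the checkpoint config.
ner = pipeline("token-classification", model="connorboyle/bert-ner-i2b2", aggregation_strategy="simple")
print(ner("Patient John Smith was admitted to Mercy Hospital on 01/02/2010."))
```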
|
{}
|
connorboyle/bert-ner-i2b2
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{"license": "mit"}
|
conrizzo/dialogue_summarization_with_BART
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
conrizzo/my-awesome-model
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
hello
|
{}
|
conversify/response-score
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
# LIMIT-BERT
Code and model for the *EMNLP 2020 Findings* paper:
[LIMIT-BERT: Linguistic Informed Multi-task BERT](https://arxiv.org/abs/1910.14296)
## Contents
1. [Requirements](#Requirements)
2. [Training](#Training)
## Requirements
* Python 3.6 or higher.
* Cython 0.25.2 or any compatible version.
* [PyTorch](http://pytorch.org/) 1.0.0+.
* [EVALB](http://nlp.cs.nyu.edu/evalb/). Before starting, run `make` inside the `EVALB/` directory to compile an `evalb` executable. This will be called from Python for evaluation.
* [pytorch-transformers](https://github.com/huggingface/pytorch-transformers) 1.0.0+ or any compatible version.
#### Pre-trained Models (PyTorch)
The following pre-trained models are available for download from Google Drive:
* [`LIMIT-BERT`](https://drive.google.com/open?id=1fm0cK2A91iLG3lCpwowCCQSALnWS2X4i):
PyTorch version, using the same settings as BERT-Large-WWM; load the model with [pytorch-transformers](https://github.com/huggingface/pytorch-transformers).
## How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cooelf/limitbert")
model = AutoModel.from_pretrained("cooelf/limitbert")
```
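As a short follow-up sketch, the loaded checkpoint can be used as a plain encoder to extract contextual representations (this assumes only the base encoder is exposed through `AutoModel`; the multi-task heads from the paper live in the original repo):
```python
import torch

# Continues the snippet above: encode a sentence and inspect the contextual embeddings.
inputs = tokenizer("LIMIT-BERT learns linguistic tasks jointly.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```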
Please see our original repo for the training scripts.
https://github.com/cooelf/LIMIT-BERT
## Training
To train LIMIT-BERT, simply run:
```
sh run_limitbert.sh
```
### Evaluation Instructions
To test after setting model path:
```
sh test_bert.sh
```
## Citation
```
@article{zhou2019limit,
title={{LIMIT-BERT}: Linguistic informed multi-task {BERT}},
author={Zhou, Junru and Zhang, Zhuosheng and Zhao, Hai},
journal={arXiv preprint arXiv:1910.14296},
year={2019}
}
```
|
{}
|
cooelf/limitbert
| null |
[
"transformers",
"pytorch",
"arxiv:1910.14296",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# Cicero-Similis
## Model description
A Latin Language Model, trained on Latin texts and evaluated using the corpus of Cicero, as described in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,
published in Ciceroniana On Line, Vol. V, #2.
## Intended uses & limitations
#### How to use
Normalize text using JV Replacement and tokenize using CLTK to separate enclitics such as "-que", then:
```
from transformers import BertForMaskedLM, AutoTokenizer, FillMaskPipeline
tokenizer = AutoTokenizer.from_pretrained("cook/cicero-similis")
model = BertForMaskedLM.from_pretrained("cook/cicero-similis")
fill_mask = FillMaskPipeline(model=model, tokenizer=tokenizer, top_k=10_000)
# Cicero, De Re Publica, VI, 32, 2
# "animal" is found in A, Q, PhD manuscripts
# 'anima' H^1 Macr. et codd. Tusc.
results = fill_mask("inanimum est enim omne quod pulsu agitatur externo; quod autem est [MASK],")
```
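As a small follow-up to the example above (assuming the pipeline returns the usual list of dictionaries with `token_str` and `score` keys), the top candidates can be inspected like this:
```python
# Print the ten best fill-in candidates with their scores.
for prediction in results[:10]:
    print(f"{prediction['token_str']}\t{prediction['score']:.4f}")
```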
#### Limitations and bias
Currently the model training data excludes modern and 19th century texts, but that weakness is the model's strength; it's not aimed to be a one-size-fits-all model.
## Training data
Trained on the corpora Phi5, Tesserae, Thomas Aquinas, and Patrologia Latina.
## Training procedure
5 epochs, masked language modeling with a masking probability of 0.15, effective batch size 32.
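A hedged sketch of the masking setup these numbers imply (the original training script is not published on this card):
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("cook/cicero-similis")
# Mask 15% of tokens, matching the masking probability stated above.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```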
## Eval results
A novel evaluation metric is proposed in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,
published in Ciceroniana On Line, Vol. V, #2.
### BibTeX entry and citation info
TODO
_What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,
published in Ciceroniana On Line, Vol. V, #2.
|
{"language": ["la"], "license": "apache-2.0", "tags": ["language model"], "datasets": ["Tesserae", "Phi5", "Thomas Aquinas", "Patrologia Latina"]}
|
cook/cicero-similis
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"language model",
"la",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
cook/test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Joreyar DialoGPT Model
|
{"tags": ["conversational"]}
|
cookirei/DialoGPT-medium-Joreyar
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{"license": "mit"}
|
coolzude/Landmark_Detection
| null |
[
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
{}
|
copenlu/citebert-cite-only
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
This is the SciBERT pretrained language model further fine-tuned on masked language modeling and cite-worthiness detection on the [CiteWorth](https://github.com/copenlu/cite-worth) dataset. Note that this model should be used for further fine-tuning on downstream scientific document understanding tasks.
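A minimal sketch of setting the checkpoint up for downstream fine-tuning; the task head and `num_labels` below are placeholders that depend on your downstream dataset:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("copenlu/citebert")
# num_labels is a placeholder; the classification head is newly initialized and must be fine-tuned.
model = AutoModelForSequenceClassification.from_pretrained("copenlu/citebert", num_labels=2)
```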
|
{}
|
copenlu/citebert
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Uzbek news category classifier (based on UzBERT)
UzBERT fine-tuned to classify news articles into one of the following
categories:
- дунё (world)
- жамият (society)
- жиноят (crime)
- иқтисодиёт (economy)
- маданият (culture)
- реклама (advertising)
- саломатлик (health)
- сиёсат (politics)
- спорт (sports)
- фан ва техника (science and technology)
- шоу-бизнес (show business)
## How to use
```python
>>> from transformers import pipeline
>>> classifier = pipeline('text-classification', model='coppercitylabs/uzbek-news-category-classifier')
>>> text = """Маҳоратли пара-енгил атлетикачимиз Ҳусниддин Норбеков Токио-2020 Паралимпия ўйинларида ғалаба қозониб, делегациямиз ҳисобига навбатдаги олтин медални келтирди. Бу ҳақда МОҚ хабар берди.
Норбеков ҳозиргина ядро улоқтириш дастурида ўз ғалабасини тантана қилди. Ушбу машқда вакилимиз 16:13 метр натижа билан энг яхши кўрсаткични қайд этди.
Шу тариқа, делегациямиз ҳисобидаги медаллар сони 16 (6 та олтин, 4 та кумуш ва 6 та бронза) тага етди. Кейинги кун дастурларида иштирок этадиган ҳамюртларимизга омад тилаб қоламиз!"""
>>> classifier(text)
[{'label': 'спорт', 'score': 0.9865401983261108}]
```
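As a follow-up sketch, the pipeline also accepts a list of texts, and on recent versions of `transformers` passing `top_k=None` returns scores for every category (older versions use `return_all_scores=True` instead):
```python
>>> # Placeholder strings; replace with real Uzbek news article texts.
>>> texts = ["...", "..."]
>>> classifier(texts, top_k=None)
```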
## Fine-tuning data
Fine-tuned on ~60K news articles for 3 epochs.
|
{"language": "uz", "license": "mit", "tags": ["uzbek", "cyrillic", "news category classifier"], "datasets": ["webcrawl"]}
|
coppercitylabs/uzbek-news-category-classifier
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"uzbek",
"cyrillic",
"news category classifier",
"uz",
"dataset:webcrawl",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# UzBERT base model (uncased)
Pretrained model on the Uzbek language (Cyrillic script) using masked
language modeling and next sentence prediction objectives.
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='coppercitylabs/uzbert-base-uncased')
>>> unmasker("Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг [MASK], мутафаккири ва давлат арбоби бўлган.")
[
{
'token_str': 'шоири',
'token': 13587,
'score': 0.7974384427070618,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг шоири, мутафаккир ##и ва давлат арбоби бўлган.'
},
{
'token_str': 'олими',
'token': 18500,
'score': 0.09166576713323593,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг олими, мутафаккир ##и ва давлат арбоби бўлган.'
},
{
'token_str': 'асосчиси',
'token': 7469,
'score': 0.02451123297214508,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг асосчиси, мутафаккир ##и ва давлат арбоби бўлган.'
},
{
'token_str': 'ёзувчиси',
'token': 22439,
'score': 0.017601722851395607,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг ёзувчиси, мутафаккир ##и ва давлат арбоби бўлган.'
},
{
'token_str': 'устози',
'token': 11494,
'score': 0.010115668177604675,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг устози, мутафаккир ##и ва давлат арбоби бўлган.'
}
]
```
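Beyond fill-mask, the checkpoint can also be used as a plain encoder. A hedged sketch follows; mean pooling is just one common choice for a sentence representation, not something this card prescribes:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("coppercitylabs/uzbert-base-uncased")
model = AutoModel.from_pretrained("coppercitylabs/uzbert-base-uncased")

inputs = tokenizer("алишер навоий – улуғ ўзбек шоири.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Simple mean pooling over token embeddings as a sentence representation.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # (1, hidden_size)
```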
## Training data
The UzBERT model was pretrained on ~625K news articles (~142M words).
## BibTeX entry and citation info
```bibtex
@misc{mansurov2021uzbert,
title={{UzBERT: pretraining a BERT model for Uzbek}},
author={B. Mansurov and A. Mansurov},
year={2021},
eprint={2108.09814},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "uz", "license": "mit", "tags": ["uzbert", "uzbek", "bert", "cyrillic"], "datasets": ["webcrawl"]}
|
coppercitylabs/uzbert-base-uncased
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"uzbert",
"uzbek",
"cyrillic",
"uz",
"dataset:webcrawl",
"arxiv:2108.09814",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|