| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (listlengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (listlengths, 0–201) | languages (listlengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (listlengths, 0–722) | processed_texts (listlengths, 1–723) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text2text-generation
|
transformers
|
[Google's mT5](https://github.com/google-research/multilingual-t5)
This is a model for generating questions from Thai texts. It was fine-tuned on the NSC2018 corpus.
```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qg")
model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qg")
text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน ทำให้กรุงเทพมหานครเป็นเอกนคร (Primate City) จัด มีผู้กล่าวว่า กรุงเทพมหานครเป็น 'เอกนครที่สุดในโลก' เพราะมีประชากรมากกว่านครที่มีประชากรมากเป็นอันดับ 2 ถึง 40 เท่า[3]"
input_ids = tokenizer.encode(text, return_tensors='pt')
beam_output = model.generate(
input_ids,
max_length=50,
num_beams=5,
early_stopping=True
)
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
>> <extra_id_0>ของกรุงเทพมหานครเป็นเมืองหลวงของประเทศใด
```
|
{"language": ["thai", "th"], "license": "mit", "tags": ["question-generation"], "datasets": ["NSC2018"]}
|
Pollawat/mt5-small-thai-qg
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question-generation",
"dataset:NSC2018",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"thai",
"th"
] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #question-generation #dataset-NSC2018 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's mT5
This is a model for generating questions from Thai texts. It was fine-tuned on the NSC2018 corpus.
|
[] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #question-generation #dataset-NSC2018 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
Shrek, with all 4 scripts!
|
{"tags": ["conversational"]}
|
Poly-Pixel/shrek-medium-full
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Shrek, with all 4 scripts!
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
Shrek
|
{"tags": ["conversational"]}
|
Poly-Pixel/shrek-medium
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Shrek
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Shrek Small DialoGPT Model
|
{"tags": ["conversational"]}
|
Poly-Pixel/shrek-test-small
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Shrek Small DialoGPT Model
|
[
"# Shrek Small DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Shrek Small DialoGPT Model"
] |
text-generation
|
transformers
|
This model generates time-shift texts for Norbit Company and, like the base GPT model, can also generate continuations for arbitrary phrases.
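The card does not document an input format, so the following is only a hedged sketch of how a GPT-2 checkpoint like this one can be queried through the `transformers` pipeline API; the prompt is an illustrative placeholder.

```python
from transformers import pipeline

# Load the text-generation pipeline for this checkpoint
generator = pipeline("text-generation", model="PolyakovMaxim/ModelGptTS")

# Illustrative prompt; the model continues it in the style of its training data
outputs = generator("Shift report:", max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```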
|
{}
|
PolyakovMaxim/ModelGptTS
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This model generates time-shift texts for Norbit Company and, like the base GPT model, can also generate continuations for arbitrary phrases.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3857
- Wer: 0.3874
## Model description
More information needed
## Intended uses & limitations
More information needed
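Pending proper documentation, here is a minimal inference sketch, assuming the repository ships the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` layout inherited from `facebook/wav2vec2-base`; the audio path and the `librosa` loading step are placeholders (any mono 16 kHz array works).

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Prasadi/wav2vec2-base-timit-demo-colab-1"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a mono 16 kHz clip (placeholder path)
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```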
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4285 | 2.01 | 500 | 1.4732 | 0.9905 |
| 0.7457 | 4.02 | 1000 | 0.5278 | 0.4960 |
| 0.3463 | 6.02 | 1500 | 0.4245 | 0.4155 |
| 0.2034 | 8.03 | 2000 | 0.3857 | 0.3874 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab-1", "results": []}]}
|
Prasadi/wav2vec2-base-timit-demo-colab-1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-timit-demo-colab-1
================================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3857
* Wer: 0.3874
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9575
- Mae: 0.5488
## Model description
More information needed
## Intended uses & limitations
More information needed
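Pending proper documentation, a hedged usage sketch with the `transformers` pipeline API; the review text is illustrative, and the returned label names depend on how the classification head was configured (e.g. `LABEL_0` to `LABEL_4` for the 1-5 star MARC ratings).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Pratibha/xlm-roberta-base-finetuned-marc-en",
)

# Score an illustrative English Amazon-style review
print(classifier("The product stopped working after two days, very disappointing."))
```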
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1253 | 1.0 | 235 | 0.9960 | 0.5366 |
| 0.9708 | 2.0 | 470 | 0.9575 | 0.5488 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]}
|
Pratibha/xlm-roberta-base-finetuned-marc-en
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
xlm-roberta-base-finetuned-marc-en
==================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9575
* Mae: 0.5488
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# ALBERT-base for QA
## Overview
**Language model:** albert-base </br>
**Language:** English </br>
**Downstream-task:** Extractive QA </br>
**Training data:** SQuAD 2.0 </br>
**Eval data:** SQuAD 2.0 </br>
**Code:** <TBD> </br>
## Env Information
`transformers` version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=32
n_epochs=3
base_LM_model = "albert-base-v2"
learning_rate=3e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=300
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
## Performance
```
"exact": 78.253
"f1": 81.523
"total": 11873
"HasAns_exact": 73.616
"HasAns_f1": 80.165
"HasAns_total": 5928
"NoAns_exact": 82.876
"NoAns_f1": 82.876
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "PremalMatalia/albert-base-best-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Which name is also used to describe the Amazon rainforest in English?',
'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Premal Matalia
|
{"datasets": ["squad_v2"]}
|
PremalMatalia/albert-base-best-squad2
| null |
[
"transformers",
"pytorch",
"albert",
"question-answering",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #albert #question-answering #dataset-squad_v2 #endpoints_compatible #region-us
|
# ALBERT-base for QA
## Overview
Language model: albert-base </br>
Language: English </br>
Downstream-task: Extractive QA </br>
Training data: SQuAD 2.0 </br>
Eval data: SQuAD 2.0 </br>
Code: <TBD> </br>
## Env Information
'transformers' version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Premal Matalia
|
[
"# ALBERT-base for QA",
"## Overview\nLanguage model: albert-base </br>\nLanguage: English </br>\nDownstream-task: Extractive QA </br>\nTraining data: SQuAD 2.0 </br>\nEval data: SQuAD 2.0 </br>\nCode: <TBD> </br>",
"## Env Information\n'transformers' version: 4.9.1 </br>\nPlatform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>\nPython version: 3.7.11 </br>\nPyTorch version (GPU?): 1.9.0+cu102 (False)</br>\nTensorflow version (GPU?): 2.5.0 (False)</br>",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\nPremal Matalia"
] |
[
"TAGS\n#transformers #pytorch #albert #question-answering #dataset-squad_v2 #endpoints_compatible #region-us \n",
"# ALBERT-base for QA",
"## Overview\nLanguage model: albert-base </br>\nLanguage: English </br>\nDownstream-task: Extractive QA </br>\nTraining data: SQuAD 2.0 </br>\nEval data: SQuAD 2.0 </br>\nCode: <TBD> </br>",
"## Env Information\n'transformers' version: 4.9.1 </br>\nPlatform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>\nPython version: 3.7.11 </br>\nPyTorch version (GPU?): 1.9.0+cu102 (False)</br>\nTensorflow version (GPU?): 2.5.0 (False)</br>",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\nPremal Matalia"
] |
question-answering
|
transformers
|
# ELECTRA-base for QA
## Overview
**Language model:** electra-base </br>
**Language:** English </br>
**Downstream-task:** Extractive QA </br>
**Training data:** SQuAD 2.0 </br>
**Eval data:** SQuAD 2.0 </br>
**Code:** <TBD> </br>
## Env Information
`transformers` version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=8
n_epochs=2
base_LM_model = "google/electra-base-discriminator"
learning_rate=1.5e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=100
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
##### There is a special threshold value CLS_threshold=-3 used to more accurately identify unanswerable questions (the logic will be made available in the GitHub repo [TBD]).
## Performance
```
"exact": 79.331256
"f1": 83.232347\t
"total": 11873
"HasAns_exact": 76.501350
"HasAns_f1": 84.314719
"HasAns_total": 5928
"NoAns_exact": 82.153070
"NoAns_f1": 82.153070
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "PremalMatalia/electra-base-best-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Which name is also used to describe the Amazon rainforest in English?',
'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Premal Matalia
|
{"datasets": ["squad_v2"]}
|
PremalMatalia/electra-base-best-squad2
| null |
[
"transformers",
"pytorch",
"electra",
"question-answering",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #electra #question-answering #dataset-squad_v2 #endpoints_compatible #region-us
|
# ELECTRA-base for QA
## Overview
Language model: electra-base </br>
Language: English </br>
Downstream-task: Extractive QA </br>
Training data: SQuAD 2.0 </br>
Eval data: SQuAD 2.0 </br>
Code: <TBD> </br>
## Env Information
'transformers' version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
##### There is a special threshold value CLS_threshold=-3 used to more accurately identify no answers [Logic will be available in GitHub Repo [TBD]
## Performance
## Usage
### In Transformers
## Authors
Premal Matalia
|
[
"# ELECTRA-base for QA",
"## Overview\nLanguage model: electra-base </br>\nLanguage: English </br>\nDownstream-task: Extractive QA </br>\nTraining data: SQuAD 2.0 </br>\nEval data: SQuAD 2.0 </br>\nCode: <TBD> </br>",
"## Env Information\n'transformers' version: 4.9.1 </br>\nPlatform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>\nPython version: 3.7.11 </br>\nPyTorch version (GPU?): 1.9.0+cu102 (False)</br>\nTensorflow version (GPU?): 2.5.0 (False)</br>",
"## Hyperparameters",
"##### There is a special threshold value CLS_threshold=-3 used to more accurately identify no answers [Logic will be available in GitHub Repo [TBD]",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\nPremal Matalia"
] |
[
"TAGS\n#transformers #pytorch #electra #question-answering #dataset-squad_v2 #endpoints_compatible #region-us \n",
"# ELECTRA-base for QA",
"## Overview\nLanguage model: electra-base </br>\nLanguage: English </br>\nDownstream-task: Extractive QA </br>\nTraining data: SQuAD 2.0 </br>\nEval data: SQuAD 2.0 </br>\nCode: <TBD> </br>",
"## Env Information\n'transformers' version: 4.9.1 </br>\nPlatform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>\nPython version: 3.7.11 </br>\nPyTorch version (GPU?): 1.9.0+cu102 (False)</br>\nTensorflow version (GPU?): 2.5.0 (False)</br>",
"## Hyperparameters",
"##### There is a special threshold value CLS_threshold=-3 used to more accurately identify no answers [Logic will be available in GitHub Repo [TBD]",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\nPremal Matalia"
] |
question-answering
|
transformers
|
# RoBERTa-base for QA
## Overview
**Language model:** 'roberta-base' </br>
**Language:** English </br>
**Downstream-task:** Extractive QA </br>
**Training data:** SQuAD 2.0 </br>
**Eval data:** SQuAD 2.0 </br>
**Code:** <TBD> </br>
## Env Information
`transformers` version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=8
n_epochs=6
base_LM_model = "roberta-base"
learning_rate=1.5e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=100
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
##### There is a special threshold value CLS_threshold=-3 used to more accurately identify unanswerable questions (the logic will be made available in the GitHub repo [TBD]).
## Performance
```
"exact": 81.192622
"f1": 83.95408
"total": 11873
"HasAns_exact": 74.190283
"HasAns_f1": 79.721119
"HasAns_total": 5928
"NoAns_exact": 88.174937
"NoAns_f1": 88.174937
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "PremalMatalia/roberta-base-best-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Which name is also used to describe the Amazon rainforest in English?',
'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Premal Matalia
|
{"datasets": ["squad_v2"]}
|
PremalMatalia/roberta-base-best-squad2
| null |
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #question-answering #dataset-squad_v2 #endpoints_compatible #region-us
|
# RoBERTa-base for QA
## Overview
Language model: 'roberta-base' </br>
Language: English </br>
Downstream-task: Extractive QA </br>
Training data: SQuAD 2.0 </br>
Eval data: SQuAD 2.0 </br>
Code: <TBD> </br>
## Env Information
'transformers' version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
##### There is a special threshold value CLS_threshold=-3 used to more accurately identify no answers [Logic will be available in GitHub Repo [TBD]
## Performance
## Usage
### In Transformers
## Authors
Premal Matalia
|
[
"# RoBERTa-base for QA",
"## Overview\nLanguage model: 'roberta-base' </br>\nLanguage: English </br>\nDownstream-task: Extractive QA </br>\nTraining data: SQuAD 2.0 </br>\nEval data: SQuAD 2.0 </br>\nCode: <TBD> </br>",
"## Env Information\n'transformers' version: 4.9.1 </br>\nPlatform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>\nPython version: 3.7.11 </br>\nPyTorch version (GPU?): 1.9.0+cu102 (False)</br>\nTensorflow version (GPU?): 2.5.0 (False)</br>",
"## Hyperparameters",
"##### There is a special threshold value CLS_threshold=-3 used to more accurately identify no answers [Logic will be available in GitHub Repo [TBD]",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\nPremal Matalia"
] |
[
"TAGS\n#transformers #pytorch #roberta #question-answering #dataset-squad_v2 #endpoints_compatible #region-us \n",
"# RoBERTa-base for QA",
"## Overview\nLanguage model: 'roberta-base' </br>\nLanguage: English </br>\nDownstream-task: Extractive QA </br>\nTraining data: SQuAD 2.0 </br>\nEval data: SQuAD 2.0 </br>\nCode: <TBD> </br>",
"## Env Information\n'transformers' version: 4.9.1 </br>\nPlatform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>\nPython version: 3.7.11 </br>\nPyTorch version (GPU?): 1.9.0+cu102 (False)</br>\nTensorflow version (GPU?): 2.5.0 (False)</br>",
"## Hyperparameters",
"##### There is a special threshold value CLS_threshold=-3 used to more accurately identify no answers [Logic will be available in GitHub Repo [TBD]",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\nPremal Matalia"
] |
null | null |
https://github.com/Prim9000/Thai_TTS
|
{}
|
Prim9000/try
| null |
[
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
question-answering
|
transformers
|
# BART-Squad2
## Model description
BART for extractive (span-based) question answering, trained on Squad 2.0.
F1 score of 87.4.
## Intended uses & limitations
Unfortunately, the Huggingface auto-inference API won't run this model, so if you're attempting to try it through the input box above and it complains, don't be discouraged!
#### How to use
Here's a quick way to get question answering running locally:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Primer/bart-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("Primer/bart-squad2")
model.to('cuda'); model.eval()
def answer(question, text):
    seq = '<s>' + question + ' </s> </s> ' + text + ' </s>'
    tokens = tokenizer.encode_plus(seq, return_tensors='pt', padding='max_length', max_length=1024)
    input_ids = tokens['input_ids'].to('cuda')
    attention_mask = tokens['attention_mask'].to('cuda')
    start, end, _ = model(input_ids, attention_mask=attention_mask)
    start_idx = int(start.argmax().int())
    end_idx = int(end.argmax().int())
    print(tokenizer.decode(input_ids[0, start_idx:end_idx]).strip())
    # ^^ it will be an empty string if the model decided "unanswerable"
>>> question = "Where does Tom live?"
>>> context = "Tom is an engineer in San Francisco."
>>> answer(question, context)
San Francisco
```
(Just drop the `.to('cuda')` stuff if running on CPU).
#### Limitations and bias
Unknown, no further evaluation has been performed. In a technical sense one big limitation is that it's 1.6G 😬
## Training procedure
`run_squad.py` with:
|param|value|
|---|---|
|batch size|8|
|max_seq_length|1024|
|learning rate|1e-5|
|epochs|2|
Modified to freeze shared parameters and encoder embeddings.
|
{"language": "en"}
|
primer-ai/bart-squad2
| null |
[
"transformers",
"pytorch",
"bart",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bart #question-answering #en #endpoints_compatible #region-us
|
BART-Squad2
===========
Model description
-----------------
BART for extractive (span-based) question answering, trained on Squad 2.0.
F1 score of 87.4.
Intended uses & limitations
---------------------------
Unfortunately, the Huggingface auto-inference API won't run this model, so if you're attempting to try it through the input box above and it complains, don't be discouraged!
#### How to use
Here's a quick way to get question answering running locally:
(Just drop the '.to('cuda')' stuff if running on CPU).
#### Limitations and bias
Unknown, no further evaluation has been performed. In a technical sense one big limitation is that it's 1.6G
Training procedure
------------------
'run\_squad.py' with:
Modified to freeze shared parameters and encoder embeddings.
|
[
"#### How to use\n\n\nHere's a quick way to get question answering running locally:\n\n\n(Just drop the '.to('cuda')' stuff if running on CPU).",
"#### Limitations and bias\n\n\nUnknown, no further evaluation has been performed. In a technical sense one big limitation is that it's 1.6G\n\n\nTraining procedure\n------------------\n\n\n'run\\_squad.py' with:\n\n\n\nModified to freeze shared parameters and encoder embeddings."
] |
[
"TAGS\n#transformers #pytorch #bart #question-answering #en #endpoints_compatible #region-us \n",
"#### How to use\n\n\nHere's a quick way to get question answering running locally:\n\n\n(Just drop the '.to('cuda')' stuff if running on CPU).",
"#### Limitations and bias\n\n\nUnknown, no further evaluation has been performed. In a technical sense one big limitation is that it's 1.6G\n\n\nTraining procedure\n------------------\n\n\n'run\\_squad.py' with:\n\n\n\nModified to freeze shared parameters and encoder embeddings."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 248.1278
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
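The hyperparameters above map roughly onto the `TrainingArguments` API as in the hedged sketch below; model, dataset, and data-collator wiring are omitted, the output directory is a placeholder, and the Adam betas/epsilon are left at the Trainer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls-r-ab-test",        # placeholder output directory
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2.0,
    fp16=True,                         # "Native AMP" mixed precision
)
```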
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["hi"], "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
Priyajay/xls-r-ab-test
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hi #dataset-common_voice #endpoints_compatible #region-us
|
#
This model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 248.1278
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
[
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - HI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 248.1278\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hi #dataset-common_voice #endpoints_compatible #region-us \n",
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - HI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 248.1278\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 26.7866
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
Priyajay/xls-r-kn-test
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hi #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
#
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 26.7866
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
[
"# \n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the COMMON_VOICE - HI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 26.7866\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hi #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# \n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the COMMON_VOICE - HI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 26.7866\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3011
- Accuracy: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
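Pending proper documentation, a minimal sketch that scores a Spanish review explicitly; the example sentence is illustrative, and the printed label names come from whatever `id2label` mapping was saved with the classification head.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Proggleb/roberta-base-bne-finetuned-amazon_reviews_multi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# "The product arrived broken and the seller did not reply."
inputs = tokenizer("El producto llegó roto y el vendedor no respondió.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```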
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2427 | 1.0 | 125 | 0.2109 | 0.919 |
| 0.0986 | 2.0 | 250 | 0.3011 | 0.9185 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model_index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9185}}]}]}
|
Proggleb/roberta-base-bne-finetuned-amazon_reviews_multi
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-bne-finetuned-amazon\_reviews\_multi
=================================================
This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3011
* Accuracy: 0.9185
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
null | null |
# ***LegalNLP*** - Natural Language Processing Methods for the Brazilian Legal Language ⚖️
### The library of Natural Language Processing for Brazilian legal language, *LegalNLP*, was born in a partnership between Brazilian researchers and the legal tech [Tikal Tech](https://www.tikal.tech) based in São Paulo, Brazil. Besides containing pre-trained language models for the Brazilian legal language, ***LegalNLP*** provides functions that can facilitate the manipulation of legal texts in Portuguese and demonstration/tutorials to help people in their own work.
You can access our paper by clicking [**here**](https://arxiv.org/abs/2110.15709).
If you use our library in your academic work, please cite us in the following way:

    @article{polo2021legalnlp,
    title={LegalNLP--Natural Language Processing methods for the Brazilian Legal Language},
    author={Polo, Felipe Maia and Mendon{\c{c}}a, Gabriel Caiaffa Floriano and Parreira, Kau{\^e} Capellato J and Gianvechio, Lucka and Cordeiro, Peterson and Ferreira, Jonathan Batista and de Lima, Leticia Maria Paz and Maia, Ant{\^o}nio Carlos do Amaral and Vicente, Renato},
    journal={arXiv preprint arXiv:2110.15709},
    year={2021}
    }
--------------
## Summary
0. [Accessing the Language Models](#0)
1. [ Introduction / Installing package](#1)
2. [ Language Models (Details / How to use)](#2)
1. [ Word2Vec/Doc2Vec ](#2.1)
3. [ Demonstrations / Tutorials](#3)
4. [ References](#4)
--------------
<a name="0"></a>
## 0\. Accessing the Language Models
All our models can be found [here](https://drive.google.com/drive/folders/1tCccOXPLSEAEUQtcWXvED3YaNJi3p7la?usp=sharing).
Please contact *[email protected]* if you have any problem accessing the language models.
--------------
<a name="1"></a>
## 1\. Introduction / Installing package
*LegalNLP* is promising given the scarcity of Natural Language Processing resources focused on the Brazilian legal language. It is worth mentioning that our library was made for Python, one of the most well-known programming languages for machine learning.
You first need to install the `huggingface_hub` library by running the following command in a terminal:
```sh
$ pip install huggingface_hub
```
Import `hf_hub_download`:
```python
from huggingface_hub import hf_hub_download
```
You can then download our Word2Vec(SG)/Doc2Vec(DBOW) and Word2Vec(CBOW)/Doc2Vec(DM) models with the following commands:
```python
w2v_sg_d2v_dbow = hf_hub_download(repo_id = "Projeto/LegalNLP", filename = "w2v_d2v_dbow_size_100_window_15_epochs_20")
w2v_cbow_d2v_dm = hf_hub_download(repo_id = "Projeto/LegalNLP", filename = "w2v_d2v_dm_size_100_window_15_epochs_20")
```
--------------
<a name="2"></a>
## 2\. Model Languages
<a name="3.2"></a>
### 3.2\. Word2Vec/Doc2Vec
Our first models for generating vector representations for tokens and
texts (embeddings) are variations of the Word2Vec [1,
2] and Doc2Vec [3] methods. In short, the
Word2Vec methods generate embeddings for tokens that somehow capture
the meaning of the various textual elements, based on the contexts in which these
elements appear. Doc2Vec methods are extensions/modifications of Word2Vec
for generating whole-text representations.
Remember to at least make all letters lowercase. Please check our paper or [Gensim page](https://radimrehurek.com/gensim_3.8.3/models/doc2vec.html) for more details. Preferably use Gensim version 3.8.3.
Below we have a summary table with some important information about the trained models:
| Filenames | Doc2Vec | Word2Vec | Size | Window |
|:-------------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| `w2v_d2v_dm*` | Distributed Memory (DM) | Continuous Bag-of-Words (CBOW) | 100, 200, 300 | 15 |
| `w2v_d2v_dbow*` | Distributed Bag-of-Words (DBOW) | Skip-Gram (SG) | 100, 200, 300 | 15 |
Here we make both models available with size 100 and window 15.
#### Using *Word2Vec*
Installing Gensim
```python
!pip install gensim==3.8.3
```
Loading W2V:
```python
from gensim.models import KeyedVectors
#Loading a W2V model
w2v=KeyedVectors.load(w2v_cbow_d2v_dm)
w2v=w2v.wv
```
Viewing the first 10 entries of 'juiz' vector
```python
w2v['juiz'][:10]
```
    array([ 6.570131  , -1.262787  ,  5.156106  , -8.943866  , -5.884408  ,
           -7.717058  ,  1.8819941 , -8.02803   , -0.66901577,  6.7223144 ],
          dtype=float32)
Viewing closest tokens to 'juiz'
```python
w2v.most_similar('juiz')
```
    [('juíza', 0.8210258483886719),
     ('juiza', 0.7306275367736816),
     ('juíz', 0.691645085811615),
     ('juízo', 0.6605231165885925),
     ('magistrado', 0.6213295459747314),
     ('mmª_juíza', 0.5510469675064087),
     ('juizo', 0.5494943261146545),
     ('desembargador', 0.5313084721565247),
     ('mmjuiz', 0.5277603268623352),
     ('fabíola_melo_feijão_juíza', 0.5043971538543701)]
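As a small aside (not from the original card): because the vocabulary is lowercased, queries should be lowercased before lookup. A tiny sketch using the `w2v` object loaded above:

```python
query = "Juiz"
token = query.lower()                  # the embeddings were trained on lowercased text
if token in w2v:                       # vocabulary membership check on the KeyedVectors
    print(w2v.most_similar(token)[:3])
```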
#### Using *Doc2Vec*
Installing Gensim
```python
!pip install gensim==3.8.3
```
Loading D2V
```python
from gensim.models import Doc2Vec
#Loading a D2V model
d2v=Doc2Vec.load(w2v_cbow_d2v_dm)
```
Inferring vector for a text
```python
txt='direito do consumidor origem : bangu regional xxix juizado especial civel ação : [processo] - - recte : fundo de investimento em direitos creditórios'
tokens=txt.split()
txt_vec=d2v.infer_vector(tokens, epochs=20)
txt_vec[:10]
```
    array([ 0.02626514, -0.3876521 , -0.24873355, -0.0318402 ,  0.3343679 ,
           -0.21307918,  0.07193747,  0.02030687,  0.407305  ,  0.20065512],
          dtype=float32)
--------------
<a name="4"></a>
## 4\. Demonstrations
For a better understanding of the application of these models, below are the links to notebooks where we apply them to a legal dataset using various classification models such as Logistic Regression and CatBoost:
- **BERT notebook**: [Open in Colab](https://colab.research.google.com/github/felipemaiapolo/legalnlp/blob/main/demo/BERT/BERT_TUTORIAL.ipynb)
- **Word2Vec notebook**: [Open in Colab](https://colab.research.google.com/github/felipemaiapolo/legalnlp/blob/main/demo/Word2Vec/Word2Vec_TUTORIAL.ipynb)
- **Doc2Vec notebook**: [Open in Colab](https://colab.research.google.com/github/felipemaiapolo/legalnlp/blob/main/demo/Doc2Vec/Doc2Vec_TUTORIAL.ipynb)
--------------
<a name="5"></a>
## 5\. References
[1] Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.

[2] Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

[3] Le, Q. and Mikolov, T. (2014). Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188–1196. PMLR.

[4] Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146.

[5] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

[6] Souza, F., Nogueira, R., and Lotufo, R. (2020). BERTimbau: pretrained BERT models for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent Systems, BRACIS, Rio Grande do Sul, Brazil, October 20–23.
|
{"language": "pt-br", "license": "mit", "tags": ["LegalNLP", "NLP", "legal field", "python", "word2vec", "doc2vec"]}
|
Projeto/LegalNLP
| null |
[
"LegalNLP",
"NLP",
"legal field",
"python",
"word2vec",
"doc2vec",
"arxiv:2110.15709",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"2110.15709"
] |
[
"pt-br"
] |
TAGS
#LegalNLP #NLP #legal field #python #word2vec #doc2vec #arxiv-2110.15709 #license-mit #region-us
|
*LegalNLP* - Natural Language Processing Methods for the Brazilian Legal Language ️
===================================================================================
### The library of Natural Language Processing for Brazilian legal language, *LegalNLP*, was born in a partnership between Brazilian researchers and the legal tech Tikal Tech based in São Paulo, Brazil. Besides containing pre-trained language models for the Brazilian legal language, *LegalNLP* provides functions that can facilitate the manipulation of legal texts in Portuguese and demonstration/tutorials to help people in their own work.
You can access our paper by clicking here.
If you use our library in your academic work, please cite us in the following way
```
@article{polo2021legalnlp,
title={LegalNLP--Natural Language Processing methods for the Brazilian Legal Language},
author={Polo, Felipe Maia and Mendon{\c{c}}a, Gabriel Caiaffa Floriano and Parreira, Kau{\^e} Capellato J and Gianvechio, Lucka and Cordeiro, Peterson and Ferreira, Jonathan Batista and de Lima, Leticia Maria Paz and Maia, Ant{\^o}nio Carlos do Amaral and Vicente, Renato},
journal={arXiv preprint arXiv:2110.15709},
year={2021}
}
```
---
Summary
-------
0. Accessing the Language Models
1. Introduction / Installing package
2. Language Models (Details / How to use)
1. Word2Vec/Doc2Vec
3. Demonstrations / Tutorials
4. References
---
0. Accessing the Language Models
--------------------------------
All our models can be found here.
Please contact *felipemaiapolo@URL* if you have any problem accessing the language models.
---
1. Introduction / Installing package
------------------------------------
*LegalNLP* is promising given the scarcity of Natural Language Processing resources focused on the Brazilian legal language. It is worth mentioning that our library was made for Python, one of the most well-known programming languages for machine learning.
You first need to install the Hugging Face Hub library by running the following command in a terminal
Import 'hf\_hub\_download':
And then you can download our Word2Vec(SG)/Doc2Vec(DBOW) and Word2Vec(CBOW)/Doc2Vec(DM) by the following commands:
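A minimal sketch of these steps follows; the repository id and file names are placeholders, not the actual values, so take them from the LegalNLP model page:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Placeholders: replace repo_id and filename with the values listed for the LegalNLP models.
w2v_sg_d2v_dbow = hf_hub_download(repo_id="<legalnlp-repo>", filename="<w2v_sg_d2v_dbow_file>")
w2v_cbow_d2v_dm = hf_hub_download(repo_id="<legalnlp-repo>", filename="<w2v_cbow_d2v_dm_file>")
```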
---
2. Language Models
------------------
### 3.2. Word2Vec/Doc2Vec
Our first models for generating vector representation for tokens and
texts (embeddings) are variations of the Word2Vec [1,
2] and Doc2Vec [3] methods. In short, the
Word2Vec methods generate embeddings for tokens that somehow capture
the meaning of the various textual elements, based on the contexts in which these
elements appear. Doc2Vec methods are extensions/modifications of Word2Vec
for generating whole text representations.
Remember to at least make all letters lowercase. Please check our paper or Gensim page for more details. Preferably use Gensim version 3.8.3.
Below we have a summary table with some important information about the trained models:
Here we make available both models, with embedding size 100 and window size 15.
#### Using *Word2Vec*
Installing Gensim
Loading W2V:
Viewing the first 10 entries of 'juiz' vector
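A minimal sketch of these steps, assuming Gensim 3.8.3 and a local path to the downloaded Word2Vec(SG) model (the path is a placeholder); its outputs are reproduced below:

```python
# Sketch only: "<path-to-w2v-sg-model>" stands for the file downloaded from the model hub.
# pip install gensim==3.8.3
from gensim.models import Word2Vec

w2v = Word2Vec.load("<path-to-w2v-sg-model>")   # load the trained Word2Vec model
print(w2v.wv['juiz'][:10])                      # first 10 entries of the 'juiz' vector
print(w2v.wv.most_similar('juiz'))              # closest tokens to 'juiz'
```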
```
array([ 6.570131 , -1.262787 , 5.156106 , -8.943866 , -5.884408 ,
-7.717058 , 1.8819941 , -8.02803 , -0.66901577, 6.7223144 ],
dtype=float32)
```
Viewing closest tokens to 'juiz'
```
[('juíza', 0.8210258483886719),
('juiza', 0.7306275367736816),
('juíz', 0.691645085811615),
('juízo', 0.6605231165885925),
('magistrado', 0.6213295459747314),
('mmª_juíza', 0.5510469675064087),
('juizo', 0.5494943261146545),
('desembargador', 0.5313084721565247),
('mmjuiz', 0.5277603268623352),
('fabíola_melo_feijão_juíza', 0.5043971538543701)]
```
#### Using *Doc2Vec*
Installing Gensim
Loading D2V
Inferring vector for a text
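A minimal sketch of these steps, again assuming Gensim 3.8.3; the model path and the example text are placeholders:

```python
# Sketch only: "<path-to-d2v-model>" stands for the downloaded Doc2Vec model file.
from gensim.models import Doc2Vec

d2v = Doc2Vec.load("<path-to-d2v-model>")            # load the trained Doc2Vec model
tokens = "o juiz julgou o caso".lower().split()      # lowercase and tokenize the text
vec = d2v.infer_vector(tokens)                       # infer an embedding for the text
print(vec[:10])
```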
```
array([ 0.02626514, -0.3876521 , -0.24873355, -0.0318402 , 0.3343679 ,
-0.21307918, 0.07193747, 0.02030687, 0.407305 , 0.20065512],
dtype=float32)
```
---
4. Demonstrations
-----------------
For a better understanding of the application of these models, below are the links to notebooks where we apply them to a legal dataset using various classification models such as Logistic Regression and CatBoost:
* BERT notebook
* Word2Vec notebook
* Doc2Vec notebook

---

5. References
-------------

[1] Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b).
Distributed representations of words and phrases and their compositionality.
In Advances in neural information processing systems, pages 3111–3119.
[2] Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of
word representations in vector space. arXiv preprint arXiv:1301.3781.
[3] Le, Q. and Mikolov, T. (2014). Distributed representations of sentences and
documents. In International conference on machine learning, pages 1188–1196.
PMLR.
[4] Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching
word vectors with subword information. Transactions of the Association for
Computational Linguistics, 5:135–146.
[5] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training
of deep bidirectional transformers for language understanding. arXiv preprint
arXiv:1810.04805.
[6] Souza, F., Nogueira, R., and Lotufo, R. (2020). BERTimbau: pretrained BERT
models for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent
Systems, BRACIS, Rio Grande do Sul, Brazil, October 20-23
|
[
"### The library of Natural Language Processing for Brazilian legal language, *LegalNLP*, was born in a partnership between Brazilian researchers and the legal tech Tikal Tech based in São Paulo, Brazil. Besides containing pre-trained language models for the Brazilian legal language, *LegalNLP* provides functions that can facilitate the manipulation of legal texts in Portuguese and demonstration/tutorials to help people in their own work.\n\n\nYou can access our paper by clicking here.\n\n\nIf you use our library in your academic work, please cite us in the following way\n\n\n\n```\n@article{polo2021legalnlp,\n title={LegalNLP--Natural Language Processing methods for the Brazilian Legal Language},\n author={Polo, Felipe Maia and Mendon{\\c{c}}a, Gabriel Caiaffa Floriano and Parreira, Kau{\\^e} Capellato J and Gianvechio, Lucka and Cordeiro, Peterson and Ferreira, Jonathan Batista and de Lima, Leticia Maria Paz and Maia, Ant{\\^o}nio Carlos do Amaral and Vicente, Renato},\n journal={arXiv preprint arXiv:2110.15709},\n year={2021}\n}\n\n```\n\n\n\n---\n\n\nSummary\n-------\n\n\n0. Accessing the Language Models\n1. Introduction / Installing package\n2. Language Models (Details / How to use)\n\t1. Word2Vec/Doc2Vec\n3. Demonstrations / Tutorials\n4. References\n\n\n\n\n---\n\n\n\n0. Accessing the Language Models\n--------------------------------\n\n\nAll our models can be found here.\n\n\nPlease contact *felipemaiapolo@URL* if you have any problem accessing the language models.\n\n\n\n\n---\n\n\n\n1. Introduction / Installing package\n------------------------------------\n\n\n*LegalNLP* is promising given the scarcity of Natural Language Processing resources focused on the Brazilian legal language. It is worth mentioning that our library was made for Python, one of the most well-known programming languages for machine learning.\n\n\nYou first need to install the HuggingFaceHub library running the following command on terminal\n\n\nImport 'hf\\_hub\\_download':\n\n\nAnd then you can download our Word2Vec(SG)/Doc2Vec(DBOW) and Word2Vec(CBOW)/Doc2Vec(DM) by the following commands:\n\n\n\n\n---\n\n\n\n2. Model Languages\n------------------",
"### 3.2. Word2Vec/Doc2Vec\n\n\nOur first models for generating vector representation for tokens and\ntexts (embeddings) are variations of the Word2Vec [1,\n2] and Doc2Vec [3] methods. In short, the\nWord2Vec methods generate embeddings for tokens5 and that somehow capture\nthe meaning of the various textual elements, based on the contexts in which these\nelements appear. Doc2Vec methods are extensions/modifications of Word2Vec\nfor generating whole text representations.\n\n\nRemember to at least make all letters lowercase. Please check our paper or Gensim page for more details. Preferably use Gensim version 3.8.3.\n\n\nBelow we have a summary table with some important information about the trained models:\n\n\n\nHere we made available both models with 100 size and 15 window.",
"#### Using *Word2Vec*\n\n\nInstalling Gensim\n\n\nLoading W2V:\n\n\nViewing the first 10 entries of 'juiz' vector\n\n\n\n```\narray([ 6.570131 , -1.262787 , 5.156106 , -8.943866 , -5.884408 ,\n -7.717058 , 1.8819941 , -8.02803 , -0.66901577, 6.7223144 ],\n dtype=float32)\n\n```\n\nViewing closest tokens to 'juiz'\n\n\n\n```\n[('juíza', 0.8210258483886719),\n ('juiza', 0.7306275367736816),\n ('juíz', 0.691645085811615),\n ('juízo', 0.6605231165885925),\n ('magistrado', 0.6213295459747314),\n ('mmª_juíza', 0.5510469675064087),\n ('juizo', 0.5494943261146545),\n ('desembargador', 0.5313084721565247),\n ('mmjuiz', 0.5277603268623352),\n ('fabíola_melo_feijão_juíza', 0.5043971538543701)]\n\n```",
"#### Using *Doc2Vec*\n\n\nInstalling Gensim\n\n\nLoading D2V\n\n\nInferring vector for a text\n\n\n\n```\narray([ 0.02626514, -0.3876521 , -0.24873355, -0.0318402 , 0.3343679 ,\n -0.21307918, 0.07193747, 0.02030687, 0.407305 , 0.20065512],\n dtype=float32)\n\n```\n\n\n\n---\n\n\n\n4. Demonstrations\n-----------------\n\n\nFor a better understanding of the application of these models, below are the links to notebooks where we apply them to a legal dataset using various classification models such as Logistic Regression and CatBoost:\n\n\n* BERT notebook :\n.\nDistributed representations of words and phrases and their compositionality.\nIn Advances in neural information processing systems, pages 3111–3119.\n\n\n[2] Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of\nword representations in vector space. arXiv preprint arXiv:1301.3781.\n\n\n[3] Le, Q. and Mikolov, T. (2014). Distributed representations of sentences and\ndocuments. In International conference on machine learning, pages 1188–1196.\nPMLR.\n\n\n[4] Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching\nword vectors with subword information. Transactions of the Association for\nComputational Linguistics, 5:135–146.\n\n\n[5] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training\nof deep bidirectional transformers for language understanding. arXiv preprint\narXiv:1810.04805.\n\n\n[6] Souza, F., Nogueira, R., and Lotufo, R. (2020). BERTimbau: pretrained BERT\nmodels for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent\nSystems, BRACIS, Rio Grande do Sul, Brazil, October 20-23"
] |
[
"TAGS\n#LegalNLP #NLP #legal field #python #word2vec #doc2vec #arxiv-2110.15709 #license-mit #region-us \n",
"### The library of Natural Language Processing for Brazilian legal language, *LegalNLP*, was born in a partnership between Brazilian researchers and the legal tech Tikal Tech based in São Paulo, Brazil. Besides containing pre-trained language models for the Brazilian legal language, *LegalNLP* provides functions that can facilitate the manipulation of legal texts in Portuguese and demonstration/tutorials to help people in their own work.\n\n\nYou can access our paper by clicking here.\n\n\nIf you use our library in your academic work, please cite us in the following way\n\n\n\n```\n@article{polo2021legalnlp,\n title={LegalNLP--Natural Language Processing methods for the Brazilian Legal Language},\n author={Polo, Felipe Maia and Mendon{\\c{c}}a, Gabriel Caiaffa Floriano and Parreira, Kau{\\^e} Capellato J and Gianvechio, Lucka and Cordeiro, Peterson and Ferreira, Jonathan Batista and de Lima, Leticia Maria Paz and Maia, Ant{\\^o}nio Carlos do Amaral and Vicente, Renato},\n journal={arXiv preprint arXiv:2110.15709},\n year={2021}\n}\n\n```\n\n\n\n---\n\n\nSummary\n-------\n\n\n0. Accessing the Language Models\n1. Introduction / Installing package\n2. Language Models (Details / How to use)\n\t1. Word2Vec/Doc2Vec\n3. Demonstrations / Tutorials\n4. References\n\n\n\n\n---\n\n\n\n0. Accessing the Language Models\n--------------------------------\n\n\nAll our models can be found here.\n\n\nPlease contact *felipemaiapolo@URL* if you have any problem accessing the language models.\n\n\n\n\n---\n\n\n\n1. Introduction / Installing package\n------------------------------------\n\n\n*LegalNLP* is promising given the scarcity of Natural Language Processing resources focused on the Brazilian legal language. It is worth mentioning that our library was made for Python, one of the most well-known programming languages for machine learning.\n\n\nYou first need to install the HuggingFaceHub library running the following command on terminal\n\n\nImport 'hf\\_hub\\_download':\n\n\nAnd then you can download our Word2Vec(SG)/Doc2Vec(DBOW) and Word2Vec(CBOW)/Doc2Vec(DM) by the following commands:\n\n\n\n\n---\n\n\n\n2. Model Languages\n------------------",
"### 3.2. Word2Vec/Doc2Vec\n\n\nOur first models for generating vector representation for tokens and\ntexts (embeddings) are variations of the Word2Vec [1,\n2] and Doc2Vec [3] methods. In short, the\nWord2Vec methods generate embeddings for tokens5 and that somehow capture\nthe meaning of the various textual elements, based on the contexts in which these\nelements appear. Doc2Vec methods are extensions/modifications of Word2Vec\nfor generating whole text representations.\n\n\nRemember to at least make all letters lowercase. Please check our paper or Gensim page for more details. Preferably use Gensim version 3.8.3.\n\n\nBelow we have a summary table with some important information about the trained models:\n\n\n\nHere we made available both models with 100 size and 15 window.",
"#### Using *Word2Vec*\n\n\nInstalling Gensim\n\n\nLoading W2V:\n\n\nViewing the first 10 entries of 'juiz' vector\n\n\n\n```\narray([ 6.570131 , -1.262787 , 5.156106 , -8.943866 , -5.884408 ,\n -7.717058 , 1.8819941 , -8.02803 , -0.66901577, 6.7223144 ],\n dtype=float32)\n\n```\n\nViewing closest tokens to 'juiz'\n\n\n\n```\n[('juíza', 0.8210258483886719),\n ('juiza', 0.7306275367736816),\n ('juíz', 0.691645085811615),\n ('juízo', 0.6605231165885925),\n ('magistrado', 0.6213295459747314),\n ('mmª_juíza', 0.5510469675064087),\n ('juizo', 0.5494943261146545),\n ('desembargador', 0.5313084721565247),\n ('mmjuiz', 0.5277603268623352),\n ('fabíola_melo_feijão_juíza', 0.5043971538543701)]\n\n```",
"#### Using *Doc2Vec*\n\n\nInstalling Gensim\n\n\nLoading D2V\n\n\nInferring vector for a text\n\n\n\n```\narray([ 0.02626514, -0.3876521 , -0.24873355, -0.0318402 , 0.3343679 ,\n -0.21307918, 0.07193747, 0.02030687, 0.407305 , 0.20065512],\n dtype=float32)\n\n```\n\n\n\n---\n\n\n\n4. Demonstrations\n-----------------\n\n\nFor a better understanding of the application of these models, below are the links to notebooks where we apply them to a legal dataset using various classification models such as Logistic Regression and CatBoost:\n\n\n* BERT notebook :\n.\nDistributed representations of words and phrases and their compositionality.\nIn Advances in neural information processing systems, pages 3111–3119.\n\n\n[2] Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of\nword representations in vector space. arXiv preprint arXiv:1301.3781.\n\n\n[3] Le, Q. and Mikolov, T. (2014). Distributed representations of sentences and\ndocuments. In International conference on machine learning, pages 1188–1196.\nPMLR.\n\n\n[4] Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching\nword vectors with subword information. Transactions of the Association for\nComputational Linguistics, 5:135–146.\n\n\n[5] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training\nof deep bidirectional transformers for language understanding. arXiv preprint\narXiv:1810.04805.\n\n\n[6] Souza, F., Nogueira, R., and Lotufo, R. (2020). BERTimbau: pretrained BERT\nmodels for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent\nSystems, BRACIS, Rio Grande do Sul, Brazil, October 20-23"
] |
text-classification
|
transformers
|
# Prompsit/paraphrase-bert-en
This model allows you to evaluate paraphrases for a given phrase.
We have fine-tuned this model from the pretrained "bert-base-uncased" model.
The model was built under the TSI-100905-2019-4 project, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: is "phrase B" a paraphrase of "phrase A"?
Please note that we consider phrases rather than full sentences, so the model does not expect punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "may be addressed" and a candidate paraphrase like "could be included", you can use the model like this:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-bert-en")
model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-bert-en")

# Encode the pair ("phrase A", "phrase B") as a single sequence-pair input
input = tokenizer('may be addressed','could be included',return_tensors='pt')

# Softmax over the two classes: index 0 = not a paraphrase, index 1 = paraphrase
logits = model(**input).logits
soft = torch.nn.Softmax(dim=1)
print(soft(logits))
```
Code output is:
```
tensor([[0.1592, 0.8408]], grad_fn=<SoftmaxBackward>)
```
As the probability of 1 (=It's a paraphrase) is 0.84 and the probability of 0 (=It is not a paraphrase) is 0.15, we can conclude, for our previous example, that "could be included" is a paraphrase of "may be addressed".
# Evaluation results
We used a test dataset of 16,500 human-tagged pairs of phrases.
Metrics obtained are:
```
metrics={
'test_loss': 0.5660144090652466,
'test_accuracy': 0.8170742794799527,
'test_precision': 0.7043977055449331,
'test_recall': 0.5978578383641675,
'test_f1': 0.6467696629213483,
'test_matthews_correlation': 0.5276716223607356,
'test_runtime': 19.3345,
'test_samples_per_second': 568.88,
'test_steps_per_second': 17.792
}
```
|
{"language": "en", "tags": ["transformers"], "pipeline_tag": "text-classification", "inference": false}
|
Prompsit/paraphrase-bert-en
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #en #autotrain_compatible #region-us
|
# Prompsit/paraphrase-bert-en
This model allows you to evaluate paraphrases for a given phrase.
We have fine-tuned this model from the pretrained "bert-base-uncased" model.
The model was built under the TSI-100905-2019-4 project, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: is "phrase B" a paraphrase of "phrase A"?
Please note that we consider phrases rather than full sentences, so the model does not expect punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "may be addressed" and a candidate paraphrase like "could be included", you can use the model like this:
Code output is:
As the probability of 1 (=It's a paraphrase) is 0.84 and the probability of 0 (=It is not a paraphrase) is 0.15, we can conclude, for our previous example, that "could be included" is a paraphrase of "may be addressed".
# Evaluation results
We used a test dataset of 16,500 human-tagged pairs of phrases.
Metrics obtained are:
|
[
"# Prompsit/paraphrase-bert-en\n\nThis model allows to evaluate paraphrases for a given phrase. \nWe have fine-tuned this model from pretrained \"bert-base-uncased\".\n\nModel built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain.",
"# How to use it\n\nThe model answer the following question: Is \"phrase B\" a paraphrase of \"phrase A\".\nPlease note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.\n\nResulting probabilities correspond to classes: \n* 0: Not a paraphrase\n* 1: It's a paraphrase\n\n\n\nSo, considering the phrase \"may be addressed\" and a candidate paraphrase like \"could be included\", you can use the model like this:\n\n\n\nCode output is:\n \n\nAs the probability of 1 (=It's a paraphrase) is 0.84 and the probability of 0 (=It is not a paraphrase) is 0.15, we can conclude, for our previous example, that \"could be included\" is a paraphrase of \"may be addressed\".",
"# Evaluation results\n\nWe have used as test dataset 16500 pairs of phrases human tagged. \n\nMetrics obtained are:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #en #autotrain_compatible #region-us \n",
"# Prompsit/paraphrase-bert-en\n\nThis model allows to evaluate paraphrases for a given phrase. \nWe have fine-tuned this model from pretrained \"bert-base-uncased\".\n\nModel built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain.",
"# How to use it\n\nThe model answer the following question: Is \"phrase B\" a paraphrase of \"phrase A\".\nPlease note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.\n\nResulting probabilities correspond to classes: \n* 0: Not a paraphrase\n* 1: It's a paraphrase\n\n\n\nSo, considering the phrase \"may be addressed\" and a candidate paraphrase like \"could be included\", you can use the model like this:\n\n\n\nCode output is:\n \n\nAs the probability of 1 (=It's a paraphrase) is 0.84 and the probability of 0 (=It is not a paraphrase) is 0.15, we can conclude, for our previous example, that \"could be included\" is a paraphrase of \"may be addressed\".",
"# Evaluation results\n\nWe have used as test dataset 16500 pairs of phrases human tagged. \n\nMetrics obtained are:"
] |
text-classification
|
transformers
|
# Prompsit/paraphrase-bert-pt
This model allows you to evaluate paraphrases for a given phrase.
We have fine-tuned this model from the pretrained "neuralmind/bert-base-portuguese-cased" model.
The model was built under the TSI-100905-2019-4 project, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: is "phrase B" a paraphrase of "phrase A"?
Please note that we consider phrases rather than full sentences, so the model does not expect punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "logo após o homicídio" and a candidate paraphrase like "pouco depois do assassinato", you can use the model like this:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-bert-pt")
model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-bert-pt")
input = tokenizer('logo após o homicídio','pouco depois do assassinato',return_tensors='pt')
logits = model(**input).logits
soft = torch.nn.Softmax(dim=1)
print(soft(logits))
```
Code output is:
```
tensor([[0.2137, 0.7863]], grad_fn=<SoftmaxBackward>)
```
As the probability of 1 (=It's a paraphrase) is 0.7863 and the probability of 0 (=It is not a paraphrase) is 0.2137, we can conclude, for our previous example, that "pouco depois do assassinato" is a paraphrase of "logo após o homicídio".
# Evaluation results
We used a test dataset of 16,500 human-tagged pairs of phrases.
Metrics obtained are:
```
metrics={
'test_loss': 0.6074697375297546,
'test_accuracy': 0.7809,
'test_precision': 0.7157638466220329,
'test_recall': 0.40551724137931033,
'test_f1': 0.5177195685670262,
'test_matthews_correlation': 0.41603913834665324,
'test_runtime': 16.4585,
'test_samples_per_second': 607.587,
'test_steps_per_second': 19.017
}
```
|
{"language": "pt", "tags": ["transformers"], "pipeline_tag": "text-classification", "inference": false}
|
Prompsit/paraphrase-bert-pt
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"pt",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"pt"
] |
TAGS
#transformers #pytorch #bert #text-classification #pt #autotrain_compatible #region-us
|
# Prompsit/paraphrase-bert-pt
This model allows you to evaluate paraphrases for a given phrase.
We have fine-tuned this model from the pretrained "neuralmind/bert-base-portuguese-cased" model.
The model was built under the TSI-100905-2019-4 project, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: is "phrase B" a paraphrase of "phrase A"?
Please note that we consider phrases rather than full sentences, so the model does not expect punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "logo após o homicídio" and a candidate paraphrase like "pouco depois do assassinato", you can use the model like this:
Code output is:
As the probability of 1 (=It's a paraphrase) is 0.7863 and the probability of 0 (=It is not a paraphrase) is 0.2137, we can conclude, for our previous example, that "pouco depois do assassinato" is a paraphrase of "logo após o homicídio".
# Evaluation results
We used a test dataset of 16,500 human-tagged pairs of phrases.
Metrics obtained are:
|
[
"# Prompsit/paraphrase-bert-pt\n\nThis model allows to evaluate paraphrases for a given phrase. \n\nWe have fine-tuned this model from pretrained \"neuralmind/bert-base-portuguese-cased\".\n\nModel built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain.",
"# How to use it\n\nThe model answer the following question: Is \"phrase B\" a paraphrase of \"phrase A\".\n\nPlease note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.\n\nResulting probabilities correspond to classes: \n\n* 0: Not a paraphrase\n* 1: It's a paraphrase\n\nSo, considering the phrase \"logo após o homicídio\" and a candidate paraphrase like \"pouco depois do assassinato\", you can use the model like this:\n\n\n\nCode output is:\n\n \n\nAs the probability of 1 (=It's a paraphrase) is 0.7863 and the probability of 0 (=It is not a paraphrase) is 0.2137, we can conclude, for our previous example, that \"pouco depois do assassinato\" is a paraphrase of \"logo após o homicidio\".",
"# Evaluation results\n\nWe have used as test dataset 16500 pairs of phrases human tagged. \n\nMetrics obtained are:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #pt #autotrain_compatible #region-us \n",
"# Prompsit/paraphrase-bert-pt\n\nThis model allows to evaluate paraphrases for a given phrase. \n\nWe have fine-tuned this model from pretrained \"neuralmind/bert-base-portuguese-cased\".\n\nModel built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain.",
"# How to use it\n\nThe model answer the following question: Is \"phrase B\" a paraphrase of \"phrase A\".\n\nPlease note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.\n\nResulting probabilities correspond to classes: \n\n* 0: Not a paraphrase\n* 1: It's a paraphrase\n\nSo, considering the phrase \"logo após o homicídio\" and a candidate paraphrase like \"pouco depois do assassinato\", you can use the model like this:\n\n\n\nCode output is:\n\n \n\nAs the probability of 1 (=It's a paraphrase) is 0.7863 and the probability of 0 (=It is not a paraphrase) is 0.2137, we can conclude, for our previous example, that \"pouco depois do assassinato\" is a paraphrase of \"logo após o homicidio\".",
"# Evaluation results\n\nWe have used as test dataset 16500 pairs of phrases human tagged. \n\nMetrics obtained are:"
] |
text-classification
|
transformers
|
# Prompsit/paraphrase-roberta-es
This model allows you to evaluate paraphrases for a given phrase.
We have fine-tuned this model from the pretrained "PlanTL-GOB-ES/roberta-base-bne" model.
The model was built under the TSI-100905-2019-4 project, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: is "phrase B" a paraphrase of "phrase A"?
Please note that we consider phrases rather than full sentences, so the model does not expect punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "se buscarán acuerdos" and a candidate paraphrase like "se deberá obtener el acuerdo", you can use the model like this:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-roberta-es")
model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-roberta-es")
input = tokenizer('se buscarán acuerdos','se deberá obtener el acuerdo',return_tensors='pt')
logits = model(**input).logits
soft = torch.nn.Softmax(dim=1)
print(soft(logits))
```
Code output is:
```
tensor([[0.2266, 0.7734]], grad_fn=<SoftmaxBackward>)
```
As the probability of 1 (=It's a paraphrase) is 0.77 and the probability of 0 (=It is not a paraphrase) is 0.22, we can conclude, for our previous example, that "se deberá obtener el acuerdo" is a paraphrase of "se buscarán acuerdos".
# Evaluation results
We used a test dataset of 16,500 human-tagged pairs of phrases.
Metrics obtained are:
```
metrics={
'test_loss': 0.4869941473007202,
'test_accuracy': 0.8003636363636364,
'test_precision': 0.6692456479690522,
'test_recall': 0.5896889646357052,
'test_f1': 0.6269535673839184,
'test_matthews_correlation': 0.49324489316659575,
'test_runtime': 27.1537,
'test_samples_per_second': 607.652,
'test_steps_per_second': 19.003
}
```
|
{"language": "es", "tags": ["transformers"], "pipeline_tag": "text-classification", "inference": false}
|
Prompsit/paraphrase-roberta-es
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"es",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #roberta #text-classification #es #autotrain_compatible #region-us
|
# Prompsit/paraphrase-roberta-es
This model allows you to evaluate paraphrases for a given phrase.
We have fine-tuned this model from the pretrained "PlanTL-GOB-ES/roberta-base-bne" model.
The model was built under the TSI-100905-2019-4 project, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: is "phrase B" a paraphrase of "phrase A"?
Please note that we consider phrases rather than full sentences, so the model does not expect punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "se buscarán acuerdos" and a candidate paraphrase like "se deberá obtener el acuerdo", you can use the model like this:
Code output is:
As the probability of 1 (=It's a paraphrase) is 0.77 and the probability of 0 (=It is not a paraphrase) is 0.22, we can conclude, for our previous example, that "se deberá obtener el acuerdo" is a paraphrase of "se buscarán acuerdos".
# Evaluation results
We used a test dataset of 16,500 human-tagged pairs of phrases.
Metrics obtained are:
|
[
"# Prompsit/paraphrase-roberta-es\n\nThis model allows to evaluate paraphrases for a given phrase. \n\nWe have fine-tuned this model from pretrained \"PlanTL-GOB-ES/roberta-base-bne\".\n\nModel built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain.",
"# How to use it\n\nThe model answer the following question: Is \"phrase B\" a paraphrase of \"phrase A\".\n\nPlease note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.\n\nResulting probabilities correspond to classes: \n\n* 0: Not a paraphrase\n* 1: It's a paraphrase\n\nSo, considering the phrase \"se buscarán acuerdos\" and a candidate paraphrase like \"se deberá obtener el acuerdo\", you can use the model like this:\n\n\n\nCode output is:\n\n \n\nAs the probability of 1 (=It's a paraphrase) is 0.77 and the probability of 0 (=It is not a paraphrase) is 0.22, we can conclude, for our previous example, that \"se deberá obtener el acuerdo\" is a paraphrase of \"se buscarán acuerdos\".",
"# Evaluation results\n\nWe have used as test dataset 16500 pairs of phrases human tagged. \nMetrics obtained are:"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #es #autotrain_compatible #region-us \n",
"# Prompsit/paraphrase-roberta-es\n\nThis model allows to evaluate paraphrases for a given phrase. \n\nWe have fine-tuned this model from pretrained \"PlanTL-GOB-ES/roberta-base-bne\".\n\nModel built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain.",
"# How to use it\n\nThe model answer the following question: Is \"phrase B\" a paraphrase of \"phrase A\".\n\nPlease note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.\n\nResulting probabilities correspond to classes: \n\n* 0: Not a paraphrase\n* 1: It's a paraphrase\n\nSo, considering the phrase \"se buscarán acuerdos\" and a candidate paraphrase like \"se deberá obtener el acuerdo\", you can use the model like this:\n\n\n\nCode output is:\n\n \n\nAs the probability of 1 (=It's a paraphrase) is 0.77 and the probability of 0 (=It is not a paraphrase) is 0.22, we can conclude, for our previous example, that \"se deberá obtener el acuerdo\" is a paraphrase of \"se buscarán acuerdos\".",
"# Evaluation results\n\nWe have used as test dataset 16500 pairs of phrases human tagged. \nMetrics obtained are:"
] |
text-classification
|
transformers
|
FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financial corpus and thereby fine-tuning it for financial sentiment classification. [Financial PhraseBank](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts) by Malo et al. (2014) is used for fine-tuning. For more details, please see the paper [FinBERT: Financial Sentiment Analysis with Pre-trained Language Models](https://arxiv.org/abs/1908.10063) and our related [blog post](https://medium.com/prosus-ai-tech-blog/finbert-financial-sentiment-analysis-with-bert-b277a3607101) on Medium.
The model will give softmax outputs for three labels: positive, negative or neutral.
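A minimal usage sketch, assuming the standard Transformers sequence-classification API; the example sentence is only an illustration:

```python
# Hedged sketch: load FinBERT and print the softmax scores for the three sentiment labels.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert")
model = AutoModelForSequenceClassification.from_pretrained("ProsusAI/finbert")

inputs = tokenizer("Stocks rallied and the British pound gained.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```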
---
About Prosus
Prosus is a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities. For more information, please visit www.prosus.com.
Contact information
Please contact Dogu Araci dogu.araci[at]prosus[dot]com and Zulkuf Genc zulkuf.genc[at]prosus[dot]com about any FinBERT related issues and questions.
|
{"language": "en", "tags": ["financial-sentiment-analysis", "sentiment-analysis"], "widget": [{"text": "Stocks rallied and the British pound gained."}]}
|
ProsusAI/finbert
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"en",
"arxiv:1908.10063",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"1908.10063"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #bert #text-classification #financial-sentiment-analysis #sentiment-analysis #en #arxiv-1908.10063 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financial corpus and thereby fine-tuning it for financial sentiment classification. Financial PhraseBank by Malo et al. (2014) is used for fine-tuning. For more details, please see the paper FinBERT: Financial Sentiment Analysis with Pre-trained Language Models and our related blog post on Medium.
The model will give softmax outputs for three labels: positive, negative or neutral.
---
About Prosus
Prosus is a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities. For more information, please visit URL.
Contact information
Please contact Dogu Araci URL[at]prosus[dot]com and Zulkuf Genc URL[at]prosus[dot]com about any FinBERT related issues and questions.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #financial-sentiment-analysis #sentiment-analysis #en #arxiv-1908.10063 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation
|
transformers
|
# Shrek DialoGPT Model
|
{"tags": ["conversational"]}
|
Pupihed/DialoGPT-small-shrek
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Shrek DialoGPT Model
|
[
"# Shrek DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Shrek DialoGPT Model"
] |
text-generation
|
transformers
|
# Jarvis DialoGPT Model
|
{"tags": ["conversational"]}
|
PurpleJacketGuy/My_Jarvis
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jarvis DialoGPT Model
|
[
"# Jarvis DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jarvis DialoGPT Model"
] |
text-generation
|
transformers
|
# Jarvis DialoGPT Model
|
{"tags": ["conversational"]}
|
PurpleJacketGuy/My_Jarvis_2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jarvis DialoGPT Model
|
[
"# Jarvis DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jarvis DialoGPT Model"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-gv
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7837
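A minimal usage sketch, assuming the standard fill-mask pipeline; the Dutch prompt is only an illustration:

```python
# Hedged sketch: query the fine-tuned model through the generic fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="Pyjay/bert-base-dutch-cased-finetuned-gv")
for pred in fill("Amsterdam is de [MASK] van Nederland."):
    print(pred["token_str"], round(pred["score"], 3))
```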
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4741 | 1.0 | 2603 | 1.8404 |
| 1.2384 | 2.0 | 5206 | 1.8457 |
| 1.2121 | 3.0 | 7809 | 1.7837 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "bert-base-dutch-cased-finetuned-gv", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]}
|
Pyjay/bert-base-dutch-cased-finetuned-gv
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-dutch-cased-finetuned-gv
==================================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7837
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.9.0
* Pytorch 1.9.0+cu102
* Datasets 1.10.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-dutch-finetuned-text-generation
This model is a fine-tuned version of [GroNLP/gpt2-medium-dutch-embeddings](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9268
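A minimal usage sketch, assuming the standard text-generation pipeline; the Dutch prompt is only an illustration:

```python
# Hedged sketch: generate a short Dutch continuation with the generic text-generation pipeline.
from transformers import pipeline

gen = pipeline("text-generation", model="Pyjay/gpt2-medium-dutch-finetuned-text-generation")
print(gen("Het was een donkere avond en", max_length=40)[0]["generated_text"])
```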
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 394 | 4.0144 |
| 3.3633 | 2.0 | 788 | 3.9379 |
| 2.7108 | 3.0 | 1182 | 3.9268 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model_index": [{"name": "gpt2-medium-dutch-finetuned-text-generation", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
Pyjay/gpt2-medium-dutch-finetuned-text-generation
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gpt2-medium-dutch-finetuned-text-generation
===========================================
This model is a fine-tuned version of GroNLP/gpt2-medium-dutch-embeddings on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.9268
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.9.0
* Pytorch 1.9.0+cu102
* Datasets 1.10.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
sentence-similarity
|
sentence-transformers
|
# Pyjay/sentence-transformers-multilingual-snli-v2-500k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
model = AutoModel.from_pretrained('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Pyjay/sentence-transformers-multilingual-snli-v2-500k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15604 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 72,
"weight_decay": 0.01
}
```
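A hedged sketch of how a configuration like the one above is typically assembled with sentence-transformers; the base checkpoint and the training example are placeholders, not the data actually used for this model:

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Placeholder NLI-style training pair; the real training data is not part of this card.
train_examples = [InputExample(texts=["A soccer game.", "A sports event."], label=1)]

model = SentenceTransformer("xlm-roberta-base")  # placeholder base checkpoint
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=72,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```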
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
Pyjay/sentence-transformers-multilingual-snli-v2-500k
| null |
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# Pyjay/sentence-transformers-multilingual-snli-v2-500k
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15604 with parameters:
Loss:
'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss'
DataLoader:
'URL.dataloader.DataLoader' of length 180 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# Pyjay/sentence-transformers-multilingual-snli-v2-500k\r\n\r\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\r\n\r\nUsing this model becomes easy when you have sentence-transformers installed:\r\n\r\n\r\n\r\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\r\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\r\n\r\n\r\n\r\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\r\nThe model was trained with the parameters:\r\n\r\nDataLoader:\r\n\r\n'URL.dataloader.DataLoader' of length 15604 with parameters:\r\n\r\n\r\nLoss:\r\n\r\n'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss' \r\n\r\nDataLoader:\r\n\r\n'URL.dataloader.DataLoader' of length 180 with parameters:\r\n\r\n\r\nLoss:\r\n\r\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \r\n\r\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# Pyjay/sentence-transformers-multilingual-snli-v2-500k\r\n\r\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\r\n\r\nUsing this model becomes easy when you have sentence-transformers installed:\r\n\r\n\r\n\r\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\r\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\r\n\r\n\r\n\r\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\r\nThe model was trained with the parameters:\r\n\r\nDataLoader:\r\n\r\n'URL.dataloader.DataLoader' of length 15604 with parameters:\r\n\r\n\r\nLoss:\r\n\r\n'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss' \r\n\r\nDataLoader:\r\n\r\n'URL.dataloader.DataLoader' of length 180 with parameters:\r\n\r\n\r\nLoss:\r\n\r\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \r\n\r\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text2text-generation
|
transformers
|
This model was fine-tuned by Qichang Zheng (Pyke) from BART on a patent-abstract dataset (7 million records), with 'facebook/bart-base' as the tokenizer and base model. The input is the same as the output: the patent abstract.
The model was fine-tuned to serve as a reference for the research Qichang is involved in.
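A minimal usage sketch, assuming the standard BART sequence-to-sequence API; the example abstract is an illustration only:

```python
# Hedged sketch: encode a patent abstract and let the fine-tuned model reconstruct it.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("Pyke/bart-finetuned-with-patent")

abstract = "A method and apparatus for cooling a battery pack in an electric vehicle."
inputs = tokenizer(abstract, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```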
|
{}
|
Pyke/bart-finetuned-with-patent
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
|
This model was fine-tuned by Qichang Zheng (Pyke) from BART on a patent-abstract dataset (7 million records), with 'facebook/bart-base' as the tokenizer and base model. The input is the same as the output: the patent abstract.
The model was fine-tuned to serve as a reference for the research Qichang is involved in.
|
[] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
Propaganda Techniques Analysis BERT
----
This model is a BERT-based model that predicts propaganda techniques in
English news articles. The model is described in
[this paper](https://propaganda.qcri.org/papers/EMNLP_2019__Fine_Grained_Propaganda_Detection.pdf).
## Model description
Please find propaganda definition here:
https://propaganda.qcri.org/annotations/definitions.html
You can also try the model in action here: https://www.tanbih.org/prta
### How to use
```python
>>> import torch
>>> from transformers import BertTokenizerFast
>>> # Note: BertForTokenAndSequenceJointClassification is the custom joint-classification
>>> # head distributed with this repository; the relative import assumes the snippet is
>>> # run from inside that package.
>>> from .model import BertForTokenAndSequenceJointClassification
>>>
>>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
>>> model = BertForTokenAndSequenceJointClassification.from_pretrained(
>>> "QCRI/PropagandaTechniquesAnalysis-en-BERT",
>>> revision="v0.1.0",
>>> )
>>>
>>> inputs = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> sequence_class_index = torch.argmax(outputs.sequence_logits, dim=-1)
>>> sequence_class = model.sequence_tags[sequence_class_index[0]]
>>> token_class_index = torch.argmax(outputs.token_logits, dim=-1)
>>> tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0][1:-1])
>>> tags = [model.token_tags[i] for i in token_class_index[0].tolist()[1:-1]]
```
### BibTeX entry and citation info
```bibtex
@inproceedings{da-san-martino-etal-2019-fine,
title = "Fine-Grained Analysis of Propaganda in News Article",
author = "Da San Martino, Giovanni and
Yu, Seunghak and
Barr{\'o}n-Cede{\~n}o, Alberto and
Petrov, Rostislav and
Nakov, Preslav",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1565",
doi = "10.18653/v1/D19-1565",
pages = "5636--5646",
abstract = "Propaganda aims at influencing people{'}s mindset with the purpose of advancing a specific agenda. Previous work has addressed propaganda detection at document level, typically labelling all articles from a propagandistic news outlet as propaganda. Such noisy gold labels inevitably affect the quality of any learning system trained on them. A further issue with most existing systems is the lack of explainability. To overcome these limitations, we propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type. In particular, we create a corpus of news articles manually annotated at fragment level with eighteen propaganda techniques and propose a suitable evaluation measure. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.",
}
```
|
{"language": "en", "license": "MIT", "tags": ["propaganda", "bert"], "datasets": [], "metrics": [], "thumbnail": "https://pbs.twimg.com/profile_images/1092721745994440704/d6R-AHzj_400x400.jpg"}
|
QCRI/PropagandaTechniquesAnalysis-en-BERT
| null |
[
"transformers",
"pytorch",
"bert",
"propaganda",
"en",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #propaganda #en #endpoints_compatible #has_space #region-us
|
Propaganda Techniques Analysis BERT
----
This model is a BERT based model to make predictions of propaganda techniques in
news articles in English. The model is described in
this paper.
## Model description
Please find propaganda definition here:
URL
You can also try the model in action here: URL
### How to use
### BibTeX entry and citation info
|
[
"## Model description\n\nPlease find propaganda definition here:\nURL\n\nYou can also try the model in action here: URL",
"### How to use",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #bert #propaganda #en #endpoints_compatible #has_space #region-us \n",
"## Model description\n\nPlease find propaganda definition here:\nURL\n\nYou can also try the model in action here: URL",
"### How to use",
"### BibTeX entry and citation info"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36769078
- CO2 Emissions (in grams): 23.42719853096565
## Validation Metrics
- Loss: 0.15959647297859192
- Accuracy: 0.9817757009345794
- Precision: 0.980411361410382
- Recall: 0.9813725490196078
- AUC: 0.9982379201680672
- F1: 0.9808917197452229
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Qinghui/autonlp-fake-covid-news-36769078
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Qinghui/autonlp-fake-covid-news-36769078", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Qinghui/autonlp-fake-covid-news-36769078", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["Qinghui/autonlp-data-fake-covid-news"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 23.42719853096565}
|
Qinghui/autonlp-fake-covid-news-36769078
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:Qinghui/autonlp-data-fake-covid-news",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #roberta #text-classification #autonlp #unk #dataset-Qinghui/autonlp-data-fake-covid-news #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36769078
- CO2 Emissions (in grams): 23.42719853096565
## Validation Metrics
- Loss: 0.15959647297859192
- Accuracy: 0.9817757009345794
- Precision: 0.980411361410382
- Recall: 0.9813725490196078
- AUC: 0.9982379201680672
- F1: 0.9808917197452229
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 36769078\n- CO2 Emissions (in grams): 23.42719853096565",
"## Validation Metrics\n\n- Loss: 0.15959647297859192\n- Accuracy: 0.9817757009345794\n- Precision: 0.980411361410382\n- Recall: 0.9813725490196078\n- AUC: 0.9982379201680672\n- F1: 0.9808917197452229",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #unk #dataset-Qinghui/autonlp-data-fake-covid-news #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 36769078\n- CO2 Emissions (in grams): 23.42719853096565",
"## Validation Metrics\n\n- Loss: 0.15959647297859192\n- Accuracy: 0.9817757009345794\n- Precision: 0.980411361410382\n- Recall: 0.9813725490196078\n- AUC: 0.9982379201680672\n- F1: 0.9808917197452229",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
token-classification
|
transformers
|
# Punctuator for Uncased English
The model is fine-tuned with `DistilBertForTokenClassification` to add punctuation to plain, uncased English text.
## Usage
```python
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast
model = DistilBertForTokenClassification.from_pretrained("Qishuai/distilbert_punctuator_en")
tokenizer = DistilBertTokenizerFast.from_pretrained("Qishuai/distilbert_punctuator_en")
```
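A minimal inference sketch (the exact post-processing used for restoring punctuation is not documented here, so the label handling below is an assumption based on `model.config.id2label`):

```python
import torch
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast

model = DistilBertForTokenClassification.from_pretrained("Qishuai/distilbert_punctuator_en")
tokenizer = DistilBertTokenizerFast.from_pretrained("Qishuai/distilbert_punctuator_en")

text = "how are you today i hope you are doing well"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-word token to its predicted punctuation tag.
label_ids = torch.argmax(logits, dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, label_ids):
    print(token, model.config.id2label[label_id])
```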
## Model Overview
### Training data
Combination of the following three datasets:
- BBC news: From BBC news website corresponding to stories in five topical areas from 2004-2005. [Reference](https://www.kaggle.com/hgultekin/bbcnewsarchive)
- News articles: 20000 samples of short news articles scraped from Hindu, Indian times and Guardian between Feb 2017 and Aug 2017 [Reference](https://www.kaggle.com/sunnysai12345/news-summary?select=news_summary_more.csv)
- Ted talks: transcripts of over 4,000 TED talks between 2004 and 2019 [Reference](https://www.kaggle.com/miguelcorraljr/ted-ultimate-dataset)
### Model Performance
- Validation with 500 samples of a dataset scraped from the https://www.thenews.com.pk website. [Reference](https://www.kaggle.com/asad1m9a9h6mood/news-articles)
- Metrics Report:
| | precision | recall | f1-score | support |
|:--------------:|:---------:|:------:|:--------:|:-------:|
| COMMA | 0.66 | 0.55 | 0.60 | 7064 |
| EXLAMATIONMARK | 1.00 | 0.00 | 0.00 | 5 |
| PERIOD | 0.73 | 0.63 | 0.68 | 6573 |
| QUESTIONMARK | 0.54 | 0.41 | 0.47 | 17 |
| micro avg | 0.69 | 0.59 | 0.64 | 13659 |
| macro avg | 0.73 | 0.40 | 0.44 | 13659 |
| weighted avg | 0.69 | 0.59 | 0.64 | 13659 |
- Validation with 86 TED talks from 2020 that are not included in the training dataset [Reference](https://www.kaggle.com/thegupta/ted-talk)
- Metrics Report:
| | precision | recall | f1-score | support |
|:--------------:|:---------:|:------:|:--------:|:-------:|
| COMMA | 0.71 | 0.56 | 0.63 | 10712 |
| EXLAMATIONMARK | 0.45 | 0.07 | 0.12 | 75 |
| PERIOD | 0.75 | 0.65 | 0.70 | 7921 |
| QUESTIONMARK | 0.73 | 0.67 | 0.70 | 827 |
| micro avg | 0.73 | 0.60 | 0.66 | 19535 |
| macro avg | 0.66 | 0.49 | 0.53 | 19535 |
| weighted avg | 0.73 | 0.60 | 0.66 | 19535 |
|
{}
|
Qishuai/distilbert_punctuator_en
| null |
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #distilbert #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Punctuator for Uncased English
==============================
The model is fine-tuned based on 'DistilBertForTokenClassification' for adding punctuations to plain text (uncased English)
Usage
-----
Model Overview
--------------
### Training data
Combination of following three dataset:
* BBC news: From BBC news website corresponding to stories in five topical areas from 2004-2005. Reference
* News articles: 20000 samples of short news articles scraped from Hindu, Indian times and Guardian between Feb 2017 and Aug 2017 Reference
* Ted talks: transcripts of over 4,000 TED talks between 2004 and 2019 Reference
### Model Performance
* Validation with 500 samples of dataset scraped from URL website. Reference
* Metrics Report:
* Validation with 86 news ted talks of 2020 which are not included in training dataset Reference
* Metrics Report:
|
[
"### Training data\n\n\nCombination of following three dataset:\n\n\n* BBC news: From BBC news website corresponding to stories in five topical areas from 2004-2005. Reference\n* News articles: 20000 samples of short news articles scraped from Hindu, Indian times and Guardian between Feb 2017 and Aug 2017 Reference\n* Ted talks: transcripts of over 4,000 TED talks between 2004 and 2019 Reference",
"### Model Performance\n\n\n* Validation with 500 samples of dataset scraped from URL website. Reference\n* Metrics Report:\n* Validation with 86 news ted talks of 2020 which are not included in training dataset Reference\n* Metrics Report:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #distilbert #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training data\n\n\nCombination of following three dataset:\n\n\n* BBC news: From BBC news website corresponding to stories in five topical areas from 2004-2005. Reference\n* News articles: 20000 samples of short news articles scraped from Hindu, Indian times and Guardian between Feb 2017 and Aug 2017 Reference\n* Ted talks: transcripts of over 4,000 TED talks between 2004 and 2019 Reference",
"### Model Performance\n\n\n* Validation with 500 samples of dataset scraped from URL website. Reference\n* Metrics Report:\n* Validation with 86 news ted talks of 2020 which are not included in training dataset Reference\n* Metrics Report:"
] |
token-classification
|
transformers
|
# Punctuator for Simplified Chinese
The model adds punctuation to plain simplified-Chinese text. It is fine-tuned with `DistilBertForTokenClassification` from a distilled `bert-base-chinese` model.
## Usage
```python
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast
model = DistilBertForTokenClassification.from_pretrained("Qishuai/distilbert_punctuator_zh")
tokenizer = DistilBertTokenizerFast.from_pretrained("Qishuai/distilbert_punctuator_zh")
```
## Model Overview
### Training data
The following dataset was used:
- News articles of People's Daily 2014. [Reference](https://github.com/InsaneLife/ChineseNLPCorpus)
### Model Performance
- Validation with MSRA training dataset. [Reference](https://github.com/InsaneLife/ChineseNLPCorpus/tree/master/NER/MSRA)
- Metrics Report:
| | precision | recall | f1-score | support |
|:----------------:|:---------:|:------:|:--------:|:-------:|
| C_COMMA | 0.67 | 0.59 | 0.63 | 91566 |
| C_DUNHAO | 0.50 | 0.37 | 0.42 | 21013 |
| C_EXLAMATIONMARK | 0.23 | 0.06 | 0.09 | 399 |
| C_PERIOD | 0.84 | 0.99 | 0.91 | 44258 |
| C_QUESTIONMARK | 0.00 | 1.00 | 0.00 | 0 |
| micro avg | 0.71 | 0.67 | 0.69 | 157236 |
| macro avg | 0.45 | 0.60 | 0.41 | 157236 |
| weighted avg | 0.69 | 0.67 | 0.68 | 157236 |
|
{}
|
Qishuai/distilbert_punctuator_zh
| null |
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #distilbert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
Punctuator for Simplified Chinese
=================================
The model is fine-tuned based on 'DistilBertForTokenClassification' for adding punctuations to plain text (simplified Chinese). The model is fine-tuned based on distilled model 'bert-base-chinese'.
Usage
-----
Model Overview
--------------
### Training data
Combination of following three dataset:
* News articles of People's Daily 2014. Reference
### Model Performance
* Validation with MSRA training dataset. Reference
* Metrics Report:
|
[
"### Training data\n\n\nCombination of following three dataset:\n\n\n* News articles of People's Daily 2014. Reference",
"### Model Performance\n\n\n* Validation with MSRA training dataset. Reference\n* Metrics Report:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #distilbert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training data\n\n\nCombination of following three dataset:\n\n\n* News articles of People's Daily 2014. Reference",
"### Model Performance\n\n\n* Validation with MSRA training dataset. Reference\n* Metrics Report:"
] |
text2text-generation
|
transformers
|
Testing PPO-trainer
|
{}
|
QuickRead/PPO_training
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
|
Testing PPO-trainer
|
[] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-Pegasus
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3242
- Rouge1: 17.993
- Rouge2: 2.9392
- Rougel: 12.313
- Rougelsum: 13.3091
- Gen Len: 67.0552
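A minimal summarization sketch (not part of the original training code; it assumes the checkpoint loads with the standard Pegasus classes):

```python
from transformers import AutoTokenizer, PegasusForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("QuickRead/fine-tune-Pegasus")
model = PegasusForConditionalGeneration.from_pretrained("QuickRead/fine-tune-Pegasus")

article = "Replace this placeholder with the news article you want to summarise."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```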
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["xsum"], "metrics": ["rouge"], "model-index": [{"name": "fine-tune-Pegasus", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "xsum", "type": "xsum", "args": "default"}, "metrics": [{"type": "rouge", "value": 17.993, "name": "Rouge1"}]}]}]}
|
QuickRead/fine-tune-Pegasus
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #dataset-xsum #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# fine-tune-Pegasus
This model is a fine-tuned version of google/pegasus-large on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3242
- Rouge1: 17.993
- Rouge2: 2.9392
- Rougel: 12.313
- Rougelsum: 13.3091
- Gen Len: 67.0552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# fine-tune-Pegasus\n\nThis model is a fine-tuned version of google/pegasus-large on the xsum dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.3242\n- Rouge1: 17.993\n- Rouge2: 2.9392\n- Rougel: 12.313\n- Rougelsum: 13.3091\n- Gen Len: 67.0552",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 6.35e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.1\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #dataset-xsum #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# fine-tune-Pegasus\n\nThis model is a fine-tuned version of google/pegasus-large on the xsum dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.3242\n- Rouge1: 17.993\n- Rouge2: 2.9392\n- Rougel: 12.313\n- Rougelsum: 13.3091\n- Gen Len: 67.0552",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 6.35e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.1\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-reddit
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the reddit dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3329
- Rouge1: 23.967
- Rouge2: 5.0032
- Rougel: 15.3267
- Rougelsum: 18.5905
- Gen Len: 69.2193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["reddit"], "metrics": ["rouge"], "model-index": [{"name": "pegasus-reddit", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "reddit", "type": "reddit", "args": "default"}, "metrics": [{"type": "rouge", "value": 23.967, "name": "Rouge1"}]}]}]}
|
QuickRead/pegasus-reddit
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:reddit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #dataset-reddit #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# pegasus-reddit
This model is a fine-tuned version of google/pegasus-large on the reddit dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3329
- Rouge1: 23.967
- Rouge2: 5.0032
- Rougel: 15.3267
- Rougelsum: 18.5905
- Gen Len: 69.2193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# pegasus-reddit\n\nThis model is a fine-tuned version of google/pegasus-large on the reddit dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.3329\n- Rouge1: 23.967\n- Rouge2: 5.0032\n- Rougel: 15.3267\n- Rougelsum: 18.5905\n- Gen Len: 69.2193",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 6.35e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.1\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #dataset-reddit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# pegasus-reddit\n\nThis model is a fine-tuned version of google/pegasus-large on the reddit dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.3329\n- Rouge1: 23.967\n- Rouge2: 5.0032\n- Rougel: 15.3267\n- Rougelsum: 18.5905\n- Gen Len: 69.2193",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 6.35e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.1\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-et-lm-1B
This model was fine-tuned on the Estonian (et) split of mozilla-foundation/common_voice_8_0, using the train+other+validation splits.
It achieves the following results on the test set
(loss reported at the last evaluation step during training, step 2000/2040):
- Loss: 0.2150
- Wer: 0.2012
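A minimal transcription sketch with greedy CTC decoding (an assumption: it skips the external language model suggested by the model name and expects a 16 kHz mono clip):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("RASMUS/wav2vec2-xlsr-1b-et")
model = Wav2Vec2ForCTC.from_pretrained("RASMUS/wav2vec2-xlsr-1b-et")

speech, sample_rate = torchaudio.load("sample_et.wav")  # hypothetical 16 kHz mono recording
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```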
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": "et", "tags": ["generated_from_trainer", "mozilla-foundation/common_voice_8_0", "audio", "automatic-speech-recognition", "speech", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLS-R 1B Wav2Vec2 Estonian by Rasmus Toivanen", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "et"}, "metrics": [{"type": "wer", "value": 20.12, "name": "Test WER"}, {"type": "cer", "value": 3.82, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "et"}, "metrics": [{"type": "wer", "value": 40.77, "name": "Test WER"}, {"type": "cer", "value": 12.32, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "et"}, "metrics": [{"type": "wer", "value": 41.97, "name": "Test WER"}]}]}]}
|
RASMUS/wav2vec2-xlsr-1b-et
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"mozilla-foundation/common_voice_8_0",
"audio",
"speech",
"robust-speech-event",
"hf-asr-leaderboard",
"et",
"dataset:mozilla-foundation/common_voice_8_0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"et"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #mozilla-foundation/common_voice_8_0 #audio #speech #robust-speech-event #hf-asr-leaderboard #et #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us
|
# wav2vec2-xlsr-et-lm-1B
This model was finetuned with mozilla_foundation/common_voice_8_0 et with train+other+validation splits.
It achieves the following results on the test set:
(Loss reported with last eval step at step 2000/2040 during training)
- Loss: 0.2150
- Wer: 0.2012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# wav2vec2-xlsr-et-lm-1B\n\nThis model was finetuned with mozilla_foundation/common_voice_8_0 et with train+other+validation splits.\nIt achieves the following results on the test set:\n(Loss reported with last eval step at step 2000/2040 during training)\n- Loss: 0.2150 \n- Wer: 0.2012",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 1\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #mozilla-foundation/common_voice_8_0 #audio #speech #robust-speech-event #hf-asr-leaderboard #et #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n",
"# wav2vec2-xlsr-et-lm-1B\n\nThis model was finetuned with mozilla_foundation/common_voice_8_0 et with train+other+validation splits.\nIt achieves the following results on the test set:\n(Loss reported with last eval step at step 2000/2040 during training)\n- Loss: 0.2150 \n- Wer: 0.2012",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 1\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-ru
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- Wer: 0.0971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5462 | 0.35 | 500 | 0.4027 | 0.3575 |
| 0.498 | 0.69 | 1000 | 0.2588 | 0.2513 |
| 0.4279 | 1.04 | 1500 | 0.2265 | 0.2204 |
| 0.4099 | 1.38 | 2000 | 0.2189 | 0.1979 |
| 0.4688 | 1.73 | 2500 | 0.2100 | 0.1920 |
| 0.2241 | 2.07 | 3000 | 0.1980 | 0.1767 |
| 0.2056 | 2.42 | 3500 | 0.2020 | 0.1683 |
| 0.3423 | 2.76 | 4000 | 0.1862 | 0.1606 |
| 0.2478 | 3.11 | 4500 | 0.1787 | 0.1563 |
| 0.3079 | 3.45 | 5000 | 0.1759 | 0.1555 |
| 0.2477 | 3.8 | 5500 | 0.1713 | 0.1423 |
| 0.1718 | 4.14 | 6000 | 0.1695 | 0.1391 |
| 0.1675 | 4.49 | 6500 | 0.1677 | 0.1372 |
| 0.1631 | 4.83 | 7000 | 0.1652 | 0.1333 |
| 0.1429 | 5.18 | 7500 | 0.1605 | 0.1308 |
| 0.1505 | 5.52 | 8000 | 0.1612 | 0.1245 |
| 0.1385 | 5.87 | 8500 | 0.1487 | 0.1225 |
| 0.1285 | 6.22 | 9000 | 0.1526 | 0.1201 |
| 0.1153 | 6.56 | 9500 | 0.1464 | 0.1172 |
| 0.1159 | 6.91 | 10000 | 0.1505 | 0.1143 |
| 0.1061 | 7.25 | 10500 | 0.1444 | 0.1106 |
| 0.1016 | 7.6 | 11000 | 0.1427 | 0.1075 |
| 0.1125 | 7.94 | 11500 | 0.1386 | 0.1045 |
| 0.0937 | 8.29 | 12000 | 0.1403 | 0.1022 |
| 0.1059 | 8.63 | 12500 | 0.1406 | 0.1022 |
| 0.0857 | 8.98 | 13000 | 0.1372 | 0.0992 |
| 0.0901 | 9.32 | 13500 | 0.1380 | 0.0977 |
| 0.0913 | 9.67 | 14000 | 0.1352 | 0.0971 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": "ru", "tags": ["audio", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "speech"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLS-R 1B Wav2Vec2 Russian by Rasmus Toivanen", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ru"}, "metrics": [{"type": "wer", "value": 10.83, "name": "Test WER"}, {"type": "cer", "value": 2.41, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 37.71, "name": "Test WER"}, {"type": "cer", "value": 12.98, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 31.89, "name": "Test WER"}]}]}]}
|
RASMUS/wav2vec2-xlsr-1b-ru
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"speech",
"ru",
"dataset:mozilla-foundation/common_voice_8_0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #audio #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #speech #ru #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xlsr-1b-ru
===================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1352
* Wer: 0.0971
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #audio #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #speech #ru #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-fi-lm-1B
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common voice train/dev/other datasets.
It achieves the following results on the evaluation set without language model:
- Loss: 0.1853
- Wer: 0.2205
With language model:
- Wer: 0.1026
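A decoding sketch that uses the language model through `Wav2Vec2ProcessorWithLM` (an assumption: it presumes the repository ships a pyctcdecode-compatible n-gram model next to the acoustic model):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("RASMUS/wav2vec2-xlsr-fi-lm-1B")
model = Wav2Vec2ForCTC.from_pretrained("RASMUS/wav2vec2-xlsr-fi-lm-1B")

speech, _ = torchaudio.load("sample_fi.wav")  # hypothetical 16 kHz mono recording
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Beam-search decoding with the n-gram language model (pyctcdecode under the hood).
print(processor.batch_decode(logits.numpy()).text[0])
```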
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8158 | 0.67 | 400 | 0.4835 | 0.6310 |
| 0.5679 | 1.33 | 800 | 0.4806 | 0.5538 |
| 0.6055 | 2.0 | 1200 | 0.3888 | 0.5083 |
| 0.5353 | 2.67 | 1600 | 0.3258 | 0.4365 |
| 0.4883 | 3.33 | 2000 | 0.3313 | 0.4204 |
| 0.4513 | 4.0 | 2400 | 0.2924 | 0.3904 |
| 0.3753 | 4.67 | 2800 | 0.2593 | 0.3608 |
| 0.3478 | 5.33 | 3200 | 0.2832 | 0.3551 |
| 0.3796 | 6.0 | 3600 | 0.2495 | 0.3402 |
| 0.2556 | 6.67 | 4000 | 0.2342 | 0.3106 |
| 0.229 | 7.33 | 4400 | 0.2181 | 0.2812 |
| 0.205 | 8.0 | 4800 | 0.2041 | 0.2523 |
| 0.1654 | 8.67 | 5200 | 0.2015 | 0.2416 |
| 0.152 | 9.33 | 5600 | 0.1942 | 0.2294 |
| 0.1569 | 10.0 | 6000 | 0.1853 | 0.2205 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["fi"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "model-index": [{"name": "wav2vec2-xlsr-fi-lm-1B", "results": []}]}
|
RASMUS/wav2vec2-xlsr-fi-lm-1B
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"fi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"fi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fi #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xlsr-fi-lm-1B
======================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the common voice train/dev/other datasets.
It achieves the following results on the evaluation set without language model:
* Loss: 0.1853
* Wer: 0.2205
With language model:
* Wer: 0.1026
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fi #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-fi-train-aug-lm-1B
This model was fine-tuned on Finnish Common Voice 7.0 data (mozilla-foundation/common_voice_7_0).
It achieves the following results on the evaluation set:
- Loss: 0.1499
- Wer: 0.1955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6473 | 0.29 | 400 | 0.2857 | 0.3825 |
| 0.6039 | 0.58 | 800 | 0.2459 | 0.3476 |
| 0.4757 | 0.87 | 1200 | 0.2338 | 0.3274 |
| 0.4473 | 1.15 | 1600 | 0.2246 | 0.3128 |
| 0.4322 | 1.44 | 2000 | 0.1962 | 0.2805 |
| 0.3961 | 1.73 | 2400 | 0.2070 | 0.2797 |
| 0.3642 | 2.02 | 2800 | 0.1790 | 0.2473 |
| 0.3561 | 2.31 | 3200 | 0.1769 | 0.2375 |
| 0.282 | 2.6 | 3600 | 0.1672 | 0.2263 |
| 0.2978 | 2.89 | 4000 | 0.1636 | 0.2192 |
| 0.2722 | 3.17 | 4400 | 0.1637 | 0.2102 |
| 0.2924 | 3.46 | 4800 | 0.1506 | 0.2021 |
| 0.2631 | 3.75 | 5200 | 0.1499 | 0.1955 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": "fi", "tags": ["generated_from_trainer", "mozilla-foundation/common_voice_7_0", "audio", "automatic-speech-recognition", "speech"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"]}
|
RASMUS/wav2vec2-xlsr-fi-train-aug-bigLM-1B
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"mozilla-foundation/common_voice_7_0",
"audio",
"speech",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"fi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #mozilla-foundation/common_voice_7_0 #audio #speech #fi #dataset-mozilla-foundation/common_voice_7_0 #endpoints_compatible #region-us
|
wav2vec2-xlsr-fi-train-aug-lm-1B
================================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1499
* Wer: 0.1955
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #mozilla-foundation/common_voice_7_0 #audio #speech #fi #dataset-mozilla-foundation/common_voice_7_0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-fi-train-aug-lm-1B
This model was fine-tuned on Finnish Common Voice 7.0 data (mozilla-foundation/common_voice_7_0).
It achieves the following results on the evaluation set:
- Loss: 0.1499
- Wer: 0.1955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6473 | 0.29 | 400 | 0.2857 | 0.3825 |
| 0.6039 | 0.58 | 800 | 0.2459 | 0.3476 |
| 0.4757 | 0.87 | 1200 | 0.2338 | 0.3274 |
| 0.4473 | 1.15 | 1600 | 0.2246 | 0.3128 |
| 0.4322 | 1.44 | 2000 | 0.1962 | 0.2805 |
| 0.3961 | 1.73 | 2400 | 0.2070 | 0.2797 |
| 0.3642 | 2.02 | 2800 | 0.1790 | 0.2473 |
| 0.3561 | 2.31 | 3200 | 0.1769 | 0.2375 |
| 0.282 | 2.6 | 3600 | 0.1672 | 0.2263 |
| 0.2978 | 2.89 | 4000 | 0.1636 | 0.2192 |
| 0.2722 | 3.17 | 4400 | 0.1637 | 0.2102 |
| 0.2924 | 3.46 | 4800 | 0.1506 | 0.2021 |
| 0.2631 | 3.75 | 5200 | 0.1499 | 0.1955 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": "fi", "tags": ["generated_from_trainer", "mozilla-foundation/common_voice_7_0", "audio", "automatic-speech-recognition", "speech", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLS-R 1B Wav2Vec2 Finnish by Rasmus Toivanen", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 10.96, "name": "Test WER"}, {"type": "cer", "value": 2.81, "name": "Test CER"}]}]}]}
|
RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"mozilla-foundation/common_voice_7_0",
"audio",
"speech",
"robust-speech-event",
"hf-asr-leaderboard",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"fi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #mozilla-foundation/common_voice_7_0 #audio #speech #robust-speech-event #hf-asr-leaderboard #fi #dataset-mozilla-foundation/common_voice_7_0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xlsr-fi-train-aug-lm-1B
================================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1499
* Wer: 0.1955
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #mozilla-foundation/common_voice_7_0 #audio #speech #robust-speech-event #hf-asr-leaderboard #fi #dataset-mozilla-foundation/common_voice_7_0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
RAhul03/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-generation
|
transformers
|
# chatbot
|
{"tags": ["conversational"]}
|
REAP3R/Chat-bot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# chatbot
|
[
"# chatbot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# chatbot"
] |
text-generation
|
transformers
|
# Saitama DialoGPT Model
|
{"tags": ["conversational"]}
|
REZERO/DialoGPT-medium-saitama
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Saitama DialoGPT Model
|
[
"# Saitama DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Saitama DialoGPT Model"
] |
null | null |
RICH双子
|
{}
|
RICH/rui-test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#region-us
|
RICH双子
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
this is a test by rui
|
{}
|
RICH/test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#region-us
|
this is a test by rui
|
[] |
[
"TAGS\n#region-us \n"
] |
token-classification
|
transformers
|
Try the test sentence:
<i>The woman said "my name is Sarah [and] I live in London."</i>
The model should tag the tokens in the sentence with information about whether or not they are contained within a compound clause. If you find the model useful, please cite my thesis which presents the dataset used for finetuning:
Evans, R. (2020) Sentence Simplification for Text Processing. Doctoral thesis. University of Wolverhampton. Wolverhampton, UK. (http://rgcl.wlv.ac.uk/~richard/Evans2020_SentenceSimplificationForTextProcessing.pdf)
There you will find more information about the tagging scheme.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
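The card does not include loading code; the following is a minimal usage sketch, assuming the standard `transformers` token-classification pipeline (only the model id comes from this repo, the label names are read from the model's own config):
```python
# Rough sketch only -- the pipeline settings are illustrative, not from the original card.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="RJ3vans/CCVspanTagger",
    tokenizer="RJ3vans/CCVspanTagger",
)

sentence = 'The woman said "my name is Sarah [and] I live in London."'
for tok in tagger(sentence):
    print(tok["word"], tok["entity"], round(tok["score"], 3))
```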
|
{}
|
RJ3vans/CCVspanTagger
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
Try the test sentence:
<i>The woman said "my name is Sarah [and] I live in London."</i>
The model should tag the tokens in the sentence with information about whether or not they are contained within a compound clause. If you find the model useful, please cite my thesis which presents the dataset used for finetuning:
Evans, R. (2020) Sentence Simplification for Text Processing. Doctoral thesis. University of Wolverhampton. Wolverhampton, UK. (URL
There you will find more information about the tagging scheme.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
This model identifies compound nouns in input sentences.
Try the test sentence:
I love apples [and] potatoes.
Accuracy is best when you place square brackets around the coordinating conjunction.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
{}
|
RJ3vans/CLNspanTagger
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
This model identifies compound nouns in input sentences.
Try the test sentence:
I love apples [and] potatoes.
Accuracy is best when you place square brackets around the coordinating conjunction.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
This model identifies compound noun phrases in an input sentence.
Try the test sentence:
The inquiry, which continues, will recall John Smith [and] Peter Montgomery next month for further questioning.
Note that you need square brackets around the conjunction coordinating the NPs.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
{}
|
RJ3vans/CMN1spanTagger
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
This model identifies compound noun phrases in an input sentence.
Try the test sentence:
The inquiry, which continues, will recall John Smith [and] Peter Montgomery next month for further questioning.
Note that you need square brackets around the conjunction coordinating the NPs.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
This model identifies compound verb phrases (including conjoins and coordinators) in an input sentence.
Try the test sentence:
John kicked the ball [and] chased after it.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
{}
|
RJ3vans/CMV1spanTagger
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
This model identifies compound verb phrases (including conjoins and coordinators) in an input sentence.
Try the test sentence:
John kicked the ball [and] chased after it.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
Try the test sentences:
<i>My name is Sarah and I live in London[, which] is the largest city in the UK.</i>
<i>John thought that that was a strange idea.</i>
<i>It was on Tuesdays when Peter took Tess for a walk.</i>
<i>John was so large that he had to crouch to fit through the front door.</i>
The model should tag the tokens in the sentence with information about whether or not they are contained within particular types of syntactic constituents.
If you find the model useful, please cite my thesis which presents the dataset used for finetuning:
Evans, R. (2020) Sentence Simplification for Text Processing. Doctoral thesis. University of Wolverhampton. Wolverhampton, UK. (http://rgcl.wlv.ac.uk/~richard/Evans2020_SentenceSimplificationForTextProcessing.pdf)
There you will find more information about the tagging scheme.
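As with the other taggers in this family, a minimal usage sketch is given below (an assumption based on the standard `AutoModelForTokenClassification` loading path; the tag set itself is taken from the model config at run time rather than hard-coded):
```python
# Rough sketch only; labels are looked up in the model's config.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "RJ3vans/13.05.2022.SSCCVspanTagger"
model = AutoModelForTokenClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

sentence = "My name is Sarah and I live in London[, which] is the largest city in the UK."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=2)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[i]) for tok, i in zip(tokens, pred_ids)])
```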
|
{}
|
RJ3vans/13.05.2022.SSCCVspanTagger
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
Try the test sentences:
<i>My name is Sarah and I live in London[, which] is the largest city in the UK.</i>
<i>John thought that that was a strange idea.</i>
<i>It was on Tuesdays when Peter took Tess for a walk.</i>
<i>John was so large that he had to crouch to fit through the front door.</i>
The model should tag the tokens in the sentence with information about whether or not they are contained within particular types of syntactic constituents.
If you find the model useful, please cite my thesis which presents the dataset used for finetuning:
Evans, R. (2020) Sentence Simplification for Text Processing. Doctoral thesis. University of Wolverhampton. Wolverhampton, UK. (URL
There you will find more information about the tagging scheme.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
This model identifies complex NPs modified by non-finite nominal clauses ("appositives") in the input sentence.
Try the test sentence:
My name is Sarah and I live in London[,] the capital of England.
Note that accuracy is greatly improved if you place square brackets around the left boundary of the non-finite nominal clause.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
{}
|
RJ3vans/SSMNspanTagger
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
This model identifies complex NPs modified by non-finite nominal clauses ("appositives") in the input sentence.
Try the test sentence:
My name is Sarah and I live in London[,] the capital of England.
Note that accuracy is greatly improved if you place square brackets around the left boundary of the non-finite nominal clause.
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
This model is used to tag the tokens in an input sequence with information about the different signs of syntactic complexity that they contain. For more details, please see Chapters 2 and 3 of my thesis (http://rgcl.wlv.ac.uk/~richard/Evans2020_SentenceSimplificationForTextProcessing.pdf).
It was derived using code written by Dr. Le An Ha at the University of Wolverhampton.
To use this model, the following code snippet may help:
======================================================================
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
SignTaggingModel = AutoModelForTokenClassification.from_pretrained('RJ3vans/SignTagger')
SignTaggingTokenizer = AutoTokenizer.from_pretrained('RJ3vans/SignTagger')
label_list = ["M:N_CCV", "M:N_CIN", "M:N_CLA", "M:N_CLAdv", "M:N_CLN", "M:N_CLP", # This could be obtained from the config file
"M:N_CLQ", "M:N_CLV", "M:N_CMA1", "M:N_CMAdv", "M:N_CMN1",
"M:N_CMN2", "M:N_CMN3", "M:N_CMN4", "M:N_CMP", "M:N_CMP2",
"M:N_CMV1", "M:N_CMV2", "M:N_CMV3", "M:N_COMBINATORY", "M:N_CPA",
"M:N_ESAdvP", "M:N_ESCCV", "M:N_ESCM", "M:N_ESMA", "M:N_ESMAdvP",
"M:N_ESMI", "M:N_ESMN", "M:N_ESMP", "M:N_ESMV", "M:N_HELP",
"M:N_SPECIAL", "M:N_SSCCV", "M:N_SSCM", "M:N_SSMA", "M:N_SSMAdvP",
"M:N_SSMI", "M:N_SSMN", "M:N_SSMP", "M:N_SSMV", "M:N_STQ",
"M:N_V", "M:N_nan", "M:Y_CCV", "M:Y_CIN", "M:Y_CLA", "M:Y_CLAdv",
"M:Y_CLN", "M:Y_CLP", "M:Y_CLQ", "M:Y_CLV", "M:Y_CMA1",
"M:Y_CMAdv", "M:Y_CMN1", "M:Y_CMN2", "M:Y_CMN4", "M:Y_CMP",
"M:Y_CMP2", "M:Y_CMV1", "M:Y_CMV2", "M:Y_CMV3",
"M:Y_COMBINATORY", "M:Y_CPA", "M:Y_ESAdvP", "M:Y_ESCCV",
"M:Y_ESCM", "M:Y_ESMA", "M:Y_ESMAdvP", "M:Y_ESMI", "M:Y_ESMN",
"M:Y_ESMP", "M:Y_ESMV", "M:Y_HELP", "M:Y_SPECIAL", "M:Y_SSCCV",
"M:Y_SSCM", "M:Y_SSMA", "M:Y_SSMAdvP", "M:Y_SSMI", "M:Y_SSMN",
"M:Y_SSMP", "M:Y_SSMV", "M:Y_STQ"]
sentence = 'The County Court in Nottingham heard that Roger Gedge, 30, had his leg amputated following the incident outside a rock festival in Wollaton Park, Nottingham, five years ago.'
tokens = SignTaggingTokenizer.tokenize(SignTaggingTokenizer.decode(SignTaggingTokenizer.encode(sentence)))
inputs = SignTaggingTokenizer.encode(sentence, return_tensors="pt")
outputs = SignTaggingModel(inputs)[0]
predictions = torch.argmax(outputs, dim=2)
print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
======================================================================
|
{}
|
RJ3vans/SignTagger
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
This model is used to tag the tokens in an input sequence with information about the different signs of syntactic complexity that they contain. For more details, please see Chapters 2 and 3 of my thesis (URL
It was derived using code written by Dr. Le An Ha at the University of Wolverhampton.
To use this model, the following code snippet may help:
======================================================================
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
SignTaggingModel = AutoModelForTokenClassification.from_pretrained('RJ3vans/SignTagger')
SignTaggingTokenizer = AutoTokenizer.from_pretrained('RJ3vans/SignTagger')
label_list = ["M:N_CCV", "M:N_CIN", "M:N_CLA", "M:N_CLAdv", "M:N_CLN", "M:N_CLP", # This could be obtained from the config file
"M:N_CLQ", "M:N_CLV", "M:N_CMA1", "M:N_CMAdv", "M:N_CMN1",
"M:N_CMN2", "M:N_CMN3", "M:N_CMN4", "M:N_CMP", "M:N_CMP2",
"M:N_CMV1", "M:N_CMV2", "M:N_CMV3", "M:N_COMBINATORY", "M:N_CPA",
"M:N_ESAdvP", "M:N_ESCCV", "M:N_ESCM", "M:N_ESMA", "M:N_ESMAdvP",
"M:N_ESMI", "M:N_ESMN", "M:N_ESMP", "M:N_ESMV", "M:N_HELP",
"M:N_SPECIAL", "M:N_SSCCV", "M:N_SSCM", "M:N_SSMA", "M:N_SSMAdvP",
"M:N_SSMI", "M:N_SSMN", "M:N_SSMP", "M:N_SSMV", "M:N_STQ",
"M:N_V", "M:N_nan", "M:Y_CCV", "M:Y_CIN", "M:Y_CLA", "M:Y_CLAdv",
"M:Y_CLN", "M:Y_CLP", "M:Y_CLQ", "M:Y_CLV", "M:Y_CMA1",
"M:Y_CMAdv", "M:Y_CMN1", "M:Y_CMN2", "M:Y_CMN4", "M:Y_CMP",
"M:Y_CMP2", "M:Y_CMV1", "M:Y_CMV2", "M:Y_CMV3",
"M:Y_COMBINATORY", "M:Y_CPA", "M:Y_ESAdvP", "M:Y_ESCCV",
"M:Y_ESCM", "M:Y_ESMA", "M:Y_ESMAdvP", "M:Y_ESMI", "M:Y_ESMN",
"M:Y_ESMP", "M:Y_ESMV", "M:Y_HELP", "M:Y_SPECIAL", "M:Y_SSCCV",
"M:Y_SSCM", "M:Y_SSMA", "M:Y_SSMAdvP", "M:Y_SSMI", "M:Y_SSMN",
"M:Y_SSMP", "M:Y_SSMV", "M:Y_STQ"]
sentence = 'The County Court in Nottingham heard that Roger Gedge, 30, had his leg amputated following the incident outside a rock festival in Wollaton Park, Nottingham, five years ago.'
tokens = SignTaggingTokenizer.tokenize(URL(URL(sentence)))
inputs = URL(sentence, return_tensors="pt")
outputs = SignTaggingModel(inputs)[0]
predictions = URL(outputs, dim=2)
print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
======================================================================
|
[
"# This could be obtained from the config file\n \"M:N_CLQ\", \"M:N_CLV\", \"M:N_CMA1\", \"M:N_CMAdv\", \"M:N_CMN1\", \n \"M:N_CMN2\", \"M:N_CMN3\", \"M:N_CMN4\", \"M:N_CMP\", \"M:N_CMP2\", \n \"M:N_CMV1\", \"M:N_CMV2\", \"M:N_CMV3\", \"M:N_COMBINATORY\", \"M:N_CPA\", \n \"M:N_ESAdvP\", \"M:N_ESCCV\", \"M:N_ESCM\", \"M:N_ESMA\", \"M:N_ESMAdvP\", \n \"M:N_ESMI\", \"M:N_ESMN\", \"M:N_ESMP\", \"M:N_ESMV\", \"M:N_HELP\", \n \"M:N_SPECIAL\", \"M:N_SSCCV\", \"M:N_SSCM\", \"M:N_SSMA\", \"M:N_SSMAdvP\",\n \"M:N_SSMI\", \"M:N_SSMN\", \"M:N_SSMP\", \"M:N_SSMV\", \"M:N_STQ\", \n \"M:N_V\", \"M:N_nan\", \"M:Y_CCV\", \"M:Y_CIN\", \"M:Y_CLA\", \"M:Y_CLAdv\", \n \"M:Y_CLN\", \"M:Y_CLP\", \"M:Y_CLQ\", \"M:Y_CLV\", \"M:Y_CMA1\", \n \"M:Y_CMAdv\", \"M:Y_CMN1\", \"M:Y_CMN2\", \"M:Y_CMN4\", \"M:Y_CMP\", \n \"M:Y_CMP2\", \"M:Y_CMV1\", \"M:Y_CMV2\", \"M:Y_CMV3\", \n \"M:Y_COMBINATORY\", \"M:Y_CPA\", \"M:Y_ESAdvP\", \"M:Y_ESCCV\", \n \"M:Y_ESCM\", \"M:Y_ESMA\", \"M:Y_ESMAdvP\", \"M:Y_ESMI\", \"M:Y_ESMN\", \n \"M:Y_ESMP\", \"M:Y_ESMV\", \"M:Y_HELP\", \"M:Y_SPECIAL\", \"M:Y_SSCCV\", \n \"M:Y_SSCM\", \"M:Y_SSMA\", \"M:Y_SSMAdvP\", \"M:Y_SSMI\", \"M:Y_SSMN\", \n \"M:Y_SSMP\", \"M:Y_SSMV\", \"M:Y_STQ\"]\n \nsentence = 'The County Court in Nottingham heard that Roger Gedge, 30, had his leg amputated following the incident outside a rock festival in Wollaton Park, Nottingham, five years ago.'\n\ntokens = SignTaggingTokenizer.tokenize(URL(URL(sentence)))\ninputs = URL(sentence, return_tensors=\"pt\")\n\noutputs = SignTaggingModel(inputs)[0]\npredictions = URL(outputs, dim=2)\n\nprint([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())]) \n\n \n======================================================================"
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# This could be obtained from the config file\n \"M:N_CLQ\", \"M:N_CLV\", \"M:N_CMA1\", \"M:N_CMAdv\", \"M:N_CMN1\", \n \"M:N_CMN2\", \"M:N_CMN3\", \"M:N_CMN4\", \"M:N_CMP\", \"M:N_CMP2\", \n \"M:N_CMV1\", \"M:N_CMV2\", \"M:N_CMV3\", \"M:N_COMBINATORY\", \"M:N_CPA\", \n \"M:N_ESAdvP\", \"M:N_ESCCV\", \"M:N_ESCM\", \"M:N_ESMA\", \"M:N_ESMAdvP\", \n \"M:N_ESMI\", \"M:N_ESMN\", \"M:N_ESMP\", \"M:N_ESMV\", \"M:N_HELP\", \n \"M:N_SPECIAL\", \"M:N_SSCCV\", \"M:N_SSCM\", \"M:N_SSMA\", \"M:N_SSMAdvP\",\n \"M:N_SSMI\", \"M:N_SSMN\", \"M:N_SSMP\", \"M:N_SSMV\", \"M:N_STQ\", \n \"M:N_V\", \"M:N_nan\", \"M:Y_CCV\", \"M:Y_CIN\", \"M:Y_CLA\", \"M:Y_CLAdv\", \n \"M:Y_CLN\", \"M:Y_CLP\", \"M:Y_CLQ\", \"M:Y_CLV\", \"M:Y_CMA1\", \n \"M:Y_CMAdv\", \"M:Y_CMN1\", \"M:Y_CMN2\", \"M:Y_CMN4\", \"M:Y_CMP\", \n \"M:Y_CMP2\", \"M:Y_CMV1\", \"M:Y_CMV2\", \"M:Y_CMV3\", \n \"M:Y_COMBINATORY\", \"M:Y_CPA\", \"M:Y_ESAdvP\", \"M:Y_ESCCV\", \n \"M:Y_ESCM\", \"M:Y_ESMA\", \"M:Y_ESMAdvP\", \"M:Y_ESMI\", \"M:Y_ESMN\", \n \"M:Y_ESMP\", \"M:Y_ESMV\", \"M:Y_HELP\", \"M:Y_SPECIAL\", \"M:Y_SSCCV\", \n \"M:Y_SSCM\", \"M:Y_SSMA\", \"M:Y_SSMAdvP\", \"M:Y_SSMI\", \"M:Y_SSMN\", \n \"M:Y_SSMP\", \"M:Y_SSMV\", \"M:Y_STQ\"]\n \nsentence = 'The County Court in Nottingham heard that Roger Gedge, 30, had his leg amputated following the incident outside a rock festival in Wollaton Park, Nottingham, five years ago.'\n\ntokens = SignTaggingTokenizer.tokenize(URL(URL(sentence)))\ninputs = URL(sentence, return_tensors=\"pt\")\n\noutputs = SignTaggingModel(inputs)[0]\npredictions = URL(outputs, dim=2)\n\nprint([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())]) \n\n \n======================================================================"
] |
text-generation
| null |
# My Awesome Model
|
{"tags": ["conversational"]}
|
RTM/ChatBot
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#conversational #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#conversational #region-us \n",
"# My Awesome Model"
] |
text-generation
| null |
# Lucky
|
{"tags": ["conversational"]}
|
RTM/Lucky
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#conversational #region-us
|
# Lucky
|
[
"# Lucky"
] |
[
"TAGS\n#conversational #region-us \n",
"# Lucky"
] |
text-generation
|
transformers
|
# TIMBOT DialoGPT model
|
{"tags": ["conversational"]}
|
RTurk/DialoGPT-small-TIMBOT
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# TIMBOT DialoGPT model
|
[
"# TIMBOT DialoGPT model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# TIMBOT DialoGPT model"
] |
fill-mask
|
transformers
|
!!!
At the moment, the model is distilled; a version from one of the first checkpoints is available for download.
We plan to post the full model in the next few days.
!!!
This is a distilled HRBert model for an MLM task.
Sentence embeddings can be produced as follows:
```python
# pip install transformers
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model='RabotaRu/HRBert-mini',
tokenizer='RabotaRu/HRBert-mini'
)
fill_mask('<mask> на склад')
```
|
{"language": ["ru", "en", "be", "bg", "uk", "ro", "kz", "tg", "tat", "sv", "sl", "sr", "uz", "es", "fi"], "license": "mit", "tags": ["russian", "fill-mask", "pretraining", "embeddings", "masked-lm"], "widget": [{"text": "<mask> \u043d\u0430 \u0441\u043a\u043b\u0430\u0434"}]}
|
RabotaRu/HRBert-mini
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"russian",
"pretraining",
"embeddings",
"masked-lm",
"ru",
"en",
"be",
"bg",
"uk",
"ro",
"kz",
"tg",
"tat",
"sv",
"sl",
"sr",
"uz",
"es",
"fi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"ru",
"en",
"be",
"bg",
"uk",
"ro",
"kz",
"tg",
"tat",
"sv",
"sl",
"sr",
"uz",
"es",
"fi"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #russian #pretraining #embeddings #masked-lm #ru #en #be #bg #uk #ro #kz #tg #tat #sv #sl #sr #uz #es #fi #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
!!!
At the moment, the model is distilled; a version from one of the first checkpoints is available for download.
We plan to post the full model in the next few days.
!!!
This is a distilled HRBert model for an MLM task.
Sentence embeddings can be produced as follows:
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #russian #pretraining #embeddings #masked-lm #ru #en #be #bg #uk #ro #kz #tg #tat #sv #sl #sr #uz #es #fi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation
|
transformers
|
### T5 for question-generation
This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
You can play with the model using the inference API, just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example
`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
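For local use outside the inference API, a minimal sketch follows (the generation settings such as beam count and maximum length are illustrative choices, not values from this card; the input format mirrors the examples above):
```python
# Rough local-usage sketch.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "Rachneet/t5-base-qg-hl-squadv2"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

text = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
input_ids = tokenizer.encode(text, return_tensors="pt")

output_ids = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```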
|
{"license": "mit", "tags": ["question-generation"], "datasets": ["squad"], "widget": [{"text": "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"}, {"text": "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"}, {"text": "Although <hl> practicality <hl> beats purity </s>"}]}
|
Rachneet/t5-base-qg-hl-squadv2
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"question-generation",
"dataset:squad",
"arxiv:1910.10683",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"1910.10683"
] |
[] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #question-generation #dataset-squad #arxiv-1910.10683 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
### T5 for question-generation
This is a t5-base model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
You can play with the model using the inference API, just highlight the answer spans with '<hl>' tokens and end the text with '</s>'. For example
'<hl> 42 <hl> is the answer to life, the universe and everything. </s>'
For more details, see this repo.
|
[
"### T5 for question-generation\r\nThis is t5-base model trained for answer aware question generation task. The answer spans are highlighted within the text with special highlight tokens. \r\n\r\nYou can play with the model using the inference API, just highlight the answer spans with '<hl>' tokens and end the text with '</s>'. For example\r\n\r\n'<hl> 42 <hl> is the answer to life, the universe and everything. </s>'\r\n\r\nFor more deatils see this repo."
] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #question-generation #dataset-squad #arxiv-1910.10683 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### T5 for question-generation\r\nThis is t5-base model trained for answer aware question generation task. The answer spans are highlighted within the text with special highlight tokens. \r\n\r\nYou can play with the model using the inference API, just highlight the answer spans with '<hl>' tokens and end the text with '</s>'. For example\r\n\r\n'<hl> 42 <hl> is the answer to life, the universe and everything. </s>'\r\n\r\nFor more deatils see this repo."
] |
text-generation
|
transformers
|
# radical DialoGPT Model
|
{"tags": ["conversational"]}
|
Radicalkiddo/DialoGPT-small-Radical
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# radical DialoGPT Model
|
[
"# radical DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# radical DialoGPT Model"
] |
text2text-generation
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 14502562
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP", "parameters":{"max_length":1000}}' https://api-inference.huggingface.co/Radvian/autonlp-indo_summarization-14502562
```
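A local alternative to the cURL call, sketched with the `transformers` summarization pipeline (the `max_length` value mirrors the request above; everything else is an assumption):
```python
# Rough local-usage sketch mirroring the cURL example.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Radvian/t5_liputan6_finetuned_indonesia_summarization",
)
print(summarizer("I love AutoNLP", max_length=1000))
```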
|
{"language": "unk", "tags": "autonlp", "datasets": ["Radvian/autonlp-data-indo_summarization"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
Radvian/t5_liputan6_finetuned_indonesia_summarization
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:Radvian/autonlp-data-indo_summarization",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autonlp #unk #dataset-Radvian/autonlp-data-indo_summarization #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 14502562
## Usage
You can use cURL to access this model:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 14502562",
"## Usage\n\nYou can use cURL to access this model:"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autonlp #unk #dataset-Radvian/autonlp-data-indo_summarization #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 14502562",
"## Usage\n\nYou can use cURL to access this model:"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4229
- Wer: 0.2386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
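These values correspond roughly to a `TrainingArguments` configuration like the following (a sketch only; the original training script is not part of this card, and `output_dir` is a placeholder):
```python
# Approximate mapping of the listed hyperparameters to TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```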
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5486 | 4.0 | 500 | 2.1672 | 0.9876 |
| 0.6819 | 8.0 | 1000 | 0.4502 | 0.3301 |
| 0.2353 | 12.0 | 1500 | 0.4352 | 0.2841 |
| 0.1427 | 16.0 | 2000 | 0.4237 | 0.2584 |
| 0.0945 | 20.0 | 2500 | 0.4409 | 0.2545 |
| 0.0671 | 24.0 | 3000 | 0.4257 | 0.2413 |
| 0.0492 | 28.0 | 3500 | 0.4229 | 0.2386 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
Rafat/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4229
* Wer: 0.2386
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4526
- Wer: 0.3411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7503 | 4.0 | 500 | 2.4125 | 1.0006 |
| 0.9595 | 8.0 | 1000 | 0.4833 | 0.4776 |
| 0.3018 | 12.0 | 1500 | 0.4333 | 0.4062 |
| 0.1751 | 16.0 | 2000 | 0.4474 | 0.3697 |
| 0.1288 | 20.0 | 2500 | 0.4445 | 0.3558 |
| 0.1073 | 24.0 | 3000 | 0.4695 | 0.3464 |
| 0.0816 | 28.0 | 3500 | 0.4526 | 0.3411 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
Raintree/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4526
* Wer: 0.3411
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-sports-titles
This is a pegasus model fine-tuned on **sports news articles scraped from the internet (for educational purposes only)**. The model can generate titles for sports articles. Try it out using the inference API.
## Model description
A Pegasus model tuned on generating scientific titles has been further fine-tuned to generate titles for sports articles. While training, articles on **Tennis, Football (Soccer), Cricket, Athletics and Rugby** were used to generate titles. I experimented with training the tokenizer from scratch, but it did not give good results compared to the pre-trained tokenizer.
## Usage
```python
from transformers import pipeline
#Feel free to play around with the generation parameters.
#Reduce the beam width for faster inference
#Note that the maximum length for the generated titles is 64
gen_kwargs = {"length_penalty": 0.6, "num_beams":4, "num_return_sequences": 4,"num_beam_groups":4,"diversity_penalty":2.0}
pipe = pipeline("summarization", model="RajSang/pegasus-sports-titles")
#Change the article according to your wish
article="""
Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home
his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response.
First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent
cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net.
The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener.
Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November.
Gerrard is not at Villa to learn how to avoid relegation.
His demands remain as high as they were as a player and Coutinho's arrival is an example of that.
Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game.
The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees.
Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away.
When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution.
However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him.
"""
result=pipe(article, **gen_kwargs)[0]["summary_text"]
print(result)
''' Output
Title 1 :
Coutinho's arrival sparks Villa comeback
Title 2 :
Philippe Coutinho marked his debut for Aston Villa with a goal and an assist as Steven Gerrard's side came from two goals down to draw with Manchester United.
Title 3 :
Steven Gerrard's first game in charge of Aston Villa ended in a dramatic draw against Manchester United - but it was the arrival of Philippe Coutinho that marked the night.
Title 4 :
Liverpool loanee Philippe Coutinho marked his first appearance for Aston Villa with two goals as Steven Gerrard's side came from two goals down to draw 2-2.'''
```
## Training procedure
While training, **short titles were combined with the subtitles for the articles to improve the quality of the generated titles, and the subtitles were removed from the main body of the articles.**
## Limitations
In rare cases, if the opening few lines of a passage/article are descriptive enough, the model often just copies these lines instead of looking for information further down the article, which may not be desirable in some cases.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
**Rouge1: 38.2315**
**Rouge2: 18.6598**
**RougeL: 31.7393**
**RougeLsum: 31.7086**
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"language": "en", "tags": ["generated_from_trainer"], "widget": [{"text": "Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response. First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net. The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener. Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November. Gerrard is not at Villa to learn how to avoid relegation. His demands remain as high as they were as a player and Coutinho's arrival is an example of that. Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game. The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees. Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away. When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution. However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him."}]}
|
RajSang/pegasus-sports-titles
| null |
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #en #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# pegasus-sports-titles
This is a pegasus model fine-tuned on sports news articles scraped from the internet (for educational purposes only). The model can generate titles for sports articles. Try it out using the inference API.
## Model description
A Pegasus model tuned on generating scientific titles has been further fine-tuned to generate titles for sports articles. While training, articles on Tennis, Football (Soccer), Cricket, Athletics and Rugby were used to generate titles. I experimented with training the tokenizer from scratch, but it did not give good results compared to the pre-trained tokenizer.
## Usage
## Training procedure
While training, short titles were combined with the subtitles for the articles to improve the quality of the generated titles, and the subtitles were removed from the main body of the articles.
## Limitations
In rare cases, if the opening few lines of a passage/article are descriptive enough, the model often just copies these lines instead of looking for information further down the article, which may not be desirable in some cases.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
Rouge1: 38.2315
Rouge2: 18.6598
RougeL: 31.7393
RougeLsum: 31.7086
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# pegasus-sports-titles\n\nThis model is a fine-tuned pegasus on some sports news articles scraped from the internet. (For educational purposes only). The model can generate titles for sports articles. Try it out using the inference API.",
"## Model description\n\nA Pegasus model tuned on generating scientific titles has been further fine-tuned to generate titles for sports articles. While training articles on Tennis, Football (Soccer), Cricket , Athletics and Rugby were used to generate titles. I experimented training the Tokenizer from scratch but it did not give good results compared to the pre-trained tokenizer.",
"## Usage",
"## Training procedure\nWhile training, short titles were combined with the subtitles for the articles to improve the quality of the generated titles and the subtitles were removed from the main body of the articles.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 2",
"### Training results\n\nRouge1:38.2315\n\nRouge2: 18.6598\n\nRougueL: 31.7393\n\nRougeLsum: 31.7086",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# pegasus-sports-titles\n\nThis model is a fine-tuned pegasus on some sports news articles scraped from the internet. (For educational purposes only). The model can generate titles for sports articles. Try it out using the inference API.",
"## Model description\n\nA Pegasus model tuned on generating scientific titles has been further fine-tuned to generate titles for sports articles. While training articles on Tennis, Football (Soccer), Cricket , Athletics and Rugby were used to generate titles. I experimented training the Tokenizer from scratch but it did not give good results compared to the pre-trained tokenizer.",
"## Usage",
"## Training procedure\nWhile training, short titles were combined with the subtitles for the articles to improve the quality of the generated titles and the subtitles were removed from the main body of the articles.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 2",
"### Training results\n\nRouge1:38.2315\n\nRouge2: 18.6598\n\nRougueL: 31.7393\n\nRougeLsum: 31.7086",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# NepaliBERT(Phase 1)
NEPALIBERT is a state-of-the-art language model for Nepali based on the BERT model. The model is trained using a masked language modeling (MLM) objective.
# Loading the model and tokenizer
1. clone the model repo
```
git lfs install
git clone https://huggingface.co/Rajan/NepaliBERT
```
2. Loading the Tokenizer
```
from transformers import BertTokenizer
vocab_file_dir = './NepaliBERT/'
tokenizer = BertTokenizer.from_pretrained(vocab_file_dir,
strip_accents=False,
clean_text=False )
```
3. Loading the model:
```
from transformers import BertForMaskedLM
model = BertForMaskedLM.from_pretrained('./NepaliBERT')
```
The easiest way to check whether our language model is learning anything interesting is via the `FillMaskPipeline`.
Pipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one will let you input a sequence containing a masked token (here, [mask]) and return a list of the most probable filled sequences, with their probabilities.
```
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
For more info visit the [GITHUB🤗](https://github.com/R4j4n/NepaliBERT)
|
{}
|
Rajan/NepaliBERT
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
# NepaliBERT(Phase 1)
NEPALIBERT is a state-of-the-art language model for Nepali based on the BERT model. The model is trained using a masked language modeling (MLM) objective.
# Loading the model and tokenizer
1. clone the model repo
2. Loading the Tokenizer
3. Loading the model:
The easiest way to check whether our language model is learning anything interesting is via the .
Pipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one will let you input a sequence containing a masked token (here, [mask]) and return a list of the most probable filled sequences, with their probabilities.
For more info visit the GITHUB
|
[
"# NepaliBERT(Phase 1) \nNEPALIBERT is a state-of-the-art language model for Nepali based on the BERT model. The model is trained using a masked language modeling (MLM).",
"# Loading the model and tokenizer \n1. clone the model repo \n\n2. Loading the Tokenizer \n\n3. Loading the model:\n\n\nThe easiest way to check whether our language model is learning anything interesting is via the .\n\nPipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one will let you input a sequence containing a masked token (here, [mask]) and return a list of the most probable filled sequences, with their probabilities.\n\n\nFor more info visit the GITHUB"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# NepaliBERT(Phase 1) \nNEPALIBERT is a state-of-the-art language model for Nepali based on the BERT model. The model is trained using a masked language modeling (MLM).",
"# Loading the model and tokenizer \n1. clone the model repo \n\n2. Loading the Tokenizer \n\n3. Loading the model:\n\n\nThe easiest way to check whether our language model is learning anything interesting is via the .\n\nPipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one will let you input a sequence containing a masked token (here, [mask]) and return a list of the most probable filled sequences, with their probabilities.\n\n\nFor more info visit the GITHUB"
] |
null | null | ERROR: type should be string, got "\r\nhttps://github.com/R4j4n/Nepali-Word2Vec-from-scratch\r\n\r\nHow to clone : \r\n```\r\ngit lfs install\r\ngit clone https://huggingface.co/Rajan/Nepali_Word2Vec\r\n```" |
{"license": "mit"}
|
Rajan/Nepali_Word2Vec
| null |
[
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
URL
How to clone :
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
image-classification
|
transformers
|
# metrics:
# - accuracy
# model-index:
# - name: FacialEmoRecog
# results:
# - task:
# name: Image Classification
# type: image-classification
# - metrics:
# name: Accuracy
# type: accuracy
# value: 0.9189583659172058
# FacialEmoRecog
Create your own image classifier for **anything** by running this repo
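For inference with this checkpoint, a minimal sketch using the image-classification pipeline is shown below (the file name is a placeholder, not an image shipped with this repo):
```python
# Rough inference sketch; "face.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline("image-classification", model="Rajaram1996/FacialEmoRecog")
print(classifier("face.jpg"))  # returns a list of {"label": ..., "score": ...} dicts
```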
## Example Images
|
{"language": ["en"], "license": "mit", "tags": ["image CLassification", "pytorch"], "datasets": ["Jeneral/fer2013"], "metrics": ["accuracy"], "inference": true, "pipeline_tag": "image-classification"}
|
Rajaram1996/FacialEmoRecog
| null |
[
"transformers",
"pytorch",
"vit",
"image-classification",
"image CLassification",
"en",
"dataset:Jeneral/fer2013",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #vit #image-classification #image CLassification #en #dataset-Jeneral/fer2013 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# metrics:
# - accuracy
# model-index:
# - name: FacialEmoRecog
# results:
# - task:
# name: Image Classification
# type: image-classification
# - metrics:
# name: Accuracy
# type: accuracy
# value: 0.9189583659172058
# FacialEmoRecog
Create your own image classifier for anything by running this repo
## Example Images
|
[
"# metrics:",
"# - accuracy",
"# model-index:",
"# - name: FacialEmoRecog",
"# results:\n # - task:\n # name: Image Classification\n # type: image-classification\n # - metrics:\n # name: Accuracy\n # type: accuracy\n # value: 0.9189583659172058",
"# FacialEmoRecog \nCreate your own image classifier for anything by running this repo \n\n ## Example Images"
] |
[
"TAGS\n#transformers #pytorch #vit #image-classification #image CLassification #en #dataset-Jeneral/fer2013 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# metrics:",
"# - accuracy",
"# model-index:",
"# - name: FacialEmoRecog",
"# results:\n # - task:\n # name: Image Classification\n # type: image-classification\n # - metrics:\n # name: Accuracy\n # type: accuracy\n # value: 0.9189583659172058",
"# FacialEmoRecog \nCreate your own image classifier for anything by running this repo \n\n ## Example Images"
] |
audio-classification
|
transformers
|
Working example of using the pretrained model to predict the emotion in a local audio file
```
def predict_emotion_hubert(audio_file):
""" inspired by an example from https://github.com/m3hrdadfi/soxan """
from audio_models import HubertForSpeechClassification
from transformers import Wav2Vec2FeatureExtractor, AutoConfig
import torch.nn.functional as F
import torch
import numpy as np
from pydub import AudioSegment
model = HubertForSpeechClassification.from_pretrained("Rajaram1996/Hubert_emotion") # Downloading: 362M
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-base-ls960")
sampling_rate=16000 # defined by the model; must convert mp3 to this rate.
config = AutoConfig.from_pretrained("Rajaram1996/Hubert_emotion")
def speech_file_to_array(path, sampling_rate):
# using torchaudio...
# speech_array, _sampling_rate = torchaudio.load(path)
# resampler = torchaudio.transforms.Resample(_sampling_rate, sampling_rate)
# speech = resampler(speech_array).squeeze().numpy()
sound = AudioSegment.from_file(path)
sound = sound.set_frame_rate(sampling_rate)
sound_array = np.array(sound.get_array_of_samples())
return sound_array
sound_array = speech_file_to_array(audio_file, sampling_rate)
inputs = feature_extractor(sound_array, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to("cpu").float() for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{
"emo": config.id2label[i],
"score": round(score * 100, 1)}
for i, score in enumerate(scores)
]
return [row for row in sorted(outputs, key=lambda x:x["score"], reverse=True) if row["score"] != 0.0][:2]
```
```
result = predict_emotion_hubert("male-crying.mp3")
>>> result
[{'emo': 'male_sad', 'score': 91.0}, {'emo': 'male_fear', 'score': 4.8}]
```
|
{"tags": ["speech", "audio", "HUBert"], "inference": true, "pipeline_tag": "audio-classification"}
|
Rajaram1996/Hubert_emotion
| null |
[
"transformers",
"pytorch",
"hubert",
"speech",
"audio",
"HUBert",
"audio-classification",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #hubert #speech #audio #HUBert #audio-classification #endpoints_compatible #has_space #region-us
|
Working example of using the pretrained model to predict the emotion in a local audio file
|
[] |
[
"TAGS\n#transformers #pytorch #hubert #speech #audio #HUBert #audio-classification #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference on batches of the test set
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 69.76 %
|
{"language": ["ta"], "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "Rajaram1996/wav2vec2-large-xlsr-53-tamil", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 69.76, "name": "Test WER"}]}]}]}
|
Rajaram1996/wav2vec2-large-xlsr-53-tamil
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"ta"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hf-asr-leaderboard #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-tamil
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
Test Result: 69.76 %
|
[
"# Wav2Vec2-Large-XLSR-53-tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Tamil using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\n\n\nTest Result: 69.76 %"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hf-asr-leaderboard #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Tamil using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\n\n\nTest Result: 69.76 %"
] |
question-answering
|
transformers
|
# Model Card for roberta-base-on-cuad
# Model Details
## Model Description
- **Developed by:** Mohammed Rakib
- **Shared by [Optional]:** More information needed
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** MIT
- **Related Models:**
- **Parent Model:** RoBERTa
- **Resources for more information:**
- GitHub Repo: [defactolaw](https://github.com/afra-tech/defactolaw)
- Associated Paper: [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/)
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
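As a minimal, hedged sketch of that use case (the question and contract sentence below are invented placeholders, not CUAD examples):

```python
# Hedged sketch: question and context are made-up placeholders.
from transformers import pipeline

qa = pipeline("question-answering", model="Rakib/roberta-base-on-cuad")
result = qa(
    question="Which state's law governs this agreement?",
    context="This Agreement shall be governed by and construed in accordance with the laws of the State of New York.",
)
print(result["answer"], round(result["score"], 3))
```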
# Training Details
Read: [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/)
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
Used V100/P100 from Google Colab Pro
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@inproceedings{nawar-etal-2022-open,
title = "An Open Source Contractual Language Understanding Application Using Machine Learning",
author = "Nawar, Afra and
Rakib, Mohammed and
Hai, Salma Abdul and
Haq, Sanaulla",
booktitle = "Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lateraisse-1.6",
pages = "42--50",
abstract = "Legal field is characterized by its exclusivity and non-transparency. Despite the frequency and relevance of legal dealings, legal documents like contracts remains elusive to non-legal professionals for the copious usage of legal jargon. There has been little advancement in making legal contracts more comprehensible. This paper presents how Machine Learning and NLP can be applied to solve this problem, further considering the challenges of applying ML to the high length of contract documents and training in a low resource environment. The largest open-source contract dataset so far, the Contract Understanding Atticus Dataset (CUAD) is utilized. Various pre-processing experiments and hyperparameter tuning have been carried out and we successfully managed to eclipse SOTA results presented for models in the CUAD dataset trained on RoBERTa-base. Our model, A-type-RoBERTa-base achieved an AUPR score of 46.6{\%} compared to 42.6{\%} on the original RoBERT-base. This model is utilized in our end to end contract understanding application which is able to take a contract and highlight the clauses a user is looking to find along with it{'}s descriptions to aid due diligence before signing. Alongside digital, i.e. searchable, contracts the system is capable of processing scanned, i.e. non-searchable, contracts using tesseract OCR. This application is aimed to not only make contract review a comprehensible process to non-legal professionals, but also to help lawyers and attorneys more efficiently review contracts.",
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Mohammed Rakib in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Rakib/roberta-base-on-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("Rakib/roberta-base-on-cuad")
```
</details>
|
{"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["legal-contract-review", "roberta", "cuad"], "datasets": ["cuad"], "pipeline_tag": "question-answering"}
|
Rakib/roberta-base-on-cuad
| null |
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"legal-contract-review",
"cuad",
"en",
"dataset:cuad",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #question-answering #legal-contract-review #cuad #en #dataset-cuad #license-mit #endpoints_compatible #has_space #region-us
|
# Model Card for roberta-base-on-cuad
# Model Details
## Model Description
- Developed by: Mohammed Rakib
- Shared by [Optional]: More information needed
- Model type: Question Answering
- Language(s) (NLP): en
- License: MIT
- Related Models:
- Parent Model: RoBERTa
- Resources for more information:
- GitHub Repo: defactolaw
- Associated Paper: An Open Source Contractual Language Understanding Application Using Machine Learning
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
# Training Details
Read: An Open Source Contractual Language Understanding Application Using Machine Learning
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data
See CUAD dataset card for more information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See CUAD dataset card for more information.
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
Used V100/P100 from Google Colab Pro
### Software
Python, Transformers
BibTeX:
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Mohammed Rakib in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
</details>
|
[
"# Model Card for roberta-base-on-cuad",
"# Model Details",
"## Model Description\n \n- Developed by: Mohammed Rakib\n- Shared by [Optional]: More information needed\n- Model type: Question Answering \n- Language(s) (NLP): en\n- License: MIT\n- Related Models:\n - Parent Model: RoBERTa \n- Resources for more information: \n - GitHub Repo: defactolaw\n - Associated Paper: An Open Source Contractual Language Understanding Application Using Machine Learning",
"# Uses",
"## Direct Use\n \nThis model can be used for the task of Question Answering on Legal Documents.",
"# Training Details\n\nRead: An Open Source Contractual Language Understanding Application Using Machine Learning \nfor detailed information on training procedure, dataset preprocessing and evaluation.",
"## Training Data\n \nSee CUAD dataset card for more information.",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\n \nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nSee CUAD dataset card for more information.",
"### Factors",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nUsed V100/P100 from Google Colab Pro",
"### Software\n\nPython, Transformers\n \nBibTeX:",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\n \nMore information needed",
"# Model Card Authors [optional]\n \nMohammed Rakib in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
[
"TAGS\n#transformers #pytorch #roberta #question-answering #legal-contract-review #cuad #en #dataset-cuad #license-mit #endpoints_compatible #has_space #region-us \n",
"# Model Card for roberta-base-on-cuad",
"# Model Details",
"## Model Description\n \n- Developed by: Mohammed Rakib\n- Shared by [Optional]: More information needed\n- Model type: Question Answering \n- Language(s) (NLP): en\n- License: MIT\n- Related Models:\n - Parent Model: RoBERTa \n- Resources for more information: \n - GitHub Repo: defactolaw\n - Associated Paper: An Open Source Contractual Language Understanding Application Using Machine Learning",
"# Uses",
"## Direct Use\n \nThis model can be used for the task of Question Answering on Legal Documents.",
"# Training Details\n\nRead: An Open Source Contractual Language Understanding Application Using Machine Learning \nfor detailed information on training procedure, dataset preprocessing and evaluation.",
"## Training Data\n \nSee CUAD dataset card for more information.",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\n \nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nSee CUAD dataset card for more information.",
"### Factors",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nUsed V100/P100 from Google Colab Pro",
"### Software\n\nPython, Transformers\n \nBibTeX:",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\n \nMore information needed",
"# Model Card Authors [optional]\n \nMohammed Rakib in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
question-answering
|
transformers
|
GreatModel does not solve any NLP problem ... for exercise purposes only.
|
{}
|
RaphBL/great-model
| null |
[
"transformers",
"pytorch",
"camembert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #camembert #question-answering #endpoints_compatible #region-us
|
GreatModel does not solve any NLP problem ... for exercise purposes only.
|
[] |
[
"TAGS\n#transformers #pytorch #camembert #question-answering #endpoints_compatible #region-us \n"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
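
As a rough sketch (not taken from the original training script), these settings map onto the `transformers` `TrainingArguments` as shown below; the `output_dir` is an assumption, and the listed Adam settings are already the Trainer's defaults:

```python
# Hedged sketch of the hyperparameters above; output_dir is assumed.
# The Trainer's default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```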
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8535 | 1.0 | 661 | 2.0684 |
| 1.5385 | 2.0 | 1322 | 2.0954 |
| 1.2312 | 3.0 | 1983 | 2.1323 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
Raphaelg9/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad\_v2 dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1323
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Rick Morty DialoGPT Model
|
{"tags": ["conversational"]}
|
Rashid11/DialoGPT-small-rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Morty DialoGPT Model
|
[
"# Rick Morty DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Morty DialoGPT Model"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
Rathod/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-thai-ASR
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6108
- Wer: 0.5636
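
A minimal transcription sketch (not part of the original card) is given below; it assumes the checkpoint ships a matching processor and that `speech_array` already holds 16 kHz mono Thai audio, mirroring the XLSR usage examples elsewhere in this collection.

```python
# Hedged usage sketch: speech_array is assumed to be a 16 kHz mono float array.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Rattana/wav2vec2-thai-ASR")
model = Wav2Vec2ForCTC.from_pretrained("Rattana/wav2vec2-thai-ASR")

inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```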
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.1123 | 2.65 | 400 | 3.3946 | 1.0002 |
| 1.5734 | 5.3 | 800 | 0.6881 | 0.7290 |
| 0.5934 | 7.94 | 1200 | 0.5789 | 0.6402 |
| 0.4059 | 10.59 | 1600 | 0.5496 | 0.5976 |
| 0.3136 | 13.24 | 2000 | 0.6109 | 0.5863 |
| 0.2546 | 15.89 | 2400 | 0.6113 | 0.5865 |
| 0.2184 | 18.54 | 2800 | 0.6108 | 0.5636 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-thai-ASR", "results": []}]}
|
Rattana/wav2vec2-thai-ASR
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-thai-ASR
=================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6108
* Wer: 0.5636
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-thai-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-thai-colab", "results": []}]}
|
Rattana/wav2vec2-thai-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-thai-colab
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-thai-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-thai-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
This model is fine-tuned for masked language modeling.
I have used the xlm-roberta-large model for continued pretraining on over half a million tokens of
Hindi fraud-call transcripts.
You can load this model with the from_pretrained() method from the transformers library.
Please note that it works well on general Hindi, but its results on native-language dialogues are
improved compared to general-purpose models.
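
A minimal fill-mask sketch is shown below; the masked Hindi sentence is an invented placeholder, not taken from the transcripts.

```python
# Hedged sketch: the Hindi prompt is an invented placeholder.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Raviraj/xlm-roberta-large-MLMfintune-hi-fraudcall")
for prediction in fill_mask("मैं अपना ओटीपी किसी के साथ <mask> नहीं करूँगा।"):
    print(prediction["token_str"], round(prediction["score"], 3))
```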
|
{}
|
Raviraj/xlm-roberta-large-MLMfintune-hi-fraudcall
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
This model is fine-tuned for masked language modeling.
I have used the xlm-roberta-large model for continued pretraining on over half a million tokens of
Hindi fraud-call transcripts.
You can load this model with the from_pretrained() method from the transformers library.
Please note that it works well on general Hindi, but its results on native-language dialogues are
improved compared to general-purpose models.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
DO NOT USE THIS
|
{}
|
Raychanan/chinese-roberta-wwm-ext-FineTuned-Binary
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
DO NOT USE THIS
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QAIDeptModel
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 105 | 2.6675 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "QAIDeptModel", "results": []}]}
|
Razan/QAIDeptModel
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
QAIDeptModel
============
This model is a fine-tuned version of aubmindlab/bert-base-arabertv2 on the None dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
zero-shot-classification
|
transformers
|
# bert-base-spanish-wwm-cased-xnli
**UPDATE, 15.10.2021: Check out our new zero-shot classifiers, much more lightweight and even outperforming this one: [zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small) and [zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium).**
## Model description
This model is a fine-tuned version of the [spanish BERT model](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) with the Spanish portion of the XNLI dataset. You can have a look at the [training script](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli/blob/main/zeroshot_training_script.py) for details of the training.
### How to use
You can use this model with Hugging Face's [zero-shot-classification pipeline](https://discuss.huggingface.co/t/new-pipeline-for-zero-shot-text-classification/681):
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/bert-base-spanish-wwm-cased-xnli")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['cultura', 'sociedad', 'economia', 'salud', 'deportes'],
'scores': [0.38897448778152466,
0.22997373342514038,
0.1658431738615036,
0.1205764189362526,
0.09463217109441757]}
"""
```
## Eval results
Accuracy for the test set:
| | XNLI-es |
|-----------------------------|---------|
|bert-base-spanish-wwm-cased-xnli | 79.9% |
|
{"language": "es", "license": "mit", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "El autor se perfila, a los 50 a\u00f1os de su muerte, como uno de los grandes de su siglo", "candidate_labels": "cultura, sociedad, economia, salud, deportes"}]}
|
Recognai/bert-base-spanish-wwm-cased-xnli
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #jax #safetensors #bert #text-classification #zero-shot-classification #nli #es #dataset-xnli #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
bert-base-spanish-wwm-cased-xnli
================================
UPDATE, 15.10.2021: Check out our new zero-shot classifiers, much more lightweight and even outperforming this one: zero-shot SELECTRA small and zero-shot SELECTRA medium.
Model description
-----------------
This model is a fine-tuned version of the spanish BERT model with the Spanish portion of the XNLI dataset. You can have a look at the training script for details of the training.
### How to use
You can use this model with Hugging Face's zero-shot-classification pipeline:
Eval results
------------
Accuracy for the test set:
|
[
"### How to use\n\n\nYou can use this model with Hugging Face's zero-shot-classification pipeline:\n\n\nEval results\n------------\n\n\nAccuracy for the test set:"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #text-classification #zero-shot-classification #nli #es #dataset-xnli #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model with Hugging Face's zero-shot-classification pipeline:\n\n\nEval results\n------------\n\n\nAccuracy for the test set:"
] |
fill-mask
|
transformers
|
# DistilBERT base multilingual model Spanish subset (cased)
This model is the Spanish extract of `distilbert-base-multilingual-cased` (https://huggingface.co/distilbert-base-multilingual-cased), a distilled version of the [BERT base multilingual model](bert-base-multilingual-cased). This model is cased: it does make a difference between english and English.
It uses the extraction method proposed by Geotrend described in https://github.com/Geotrend-research/smaller-transformers.
The resulting model has the same architecture as DistilmBERT: 6 layers, 768 dimension and 12 heads, with a total of **63M parameters** (compared to 134M parameters for DistilmBERT).
The goal of this model is to reduce even further the size of the `distilbert-base-multilingual` multilingual model by selecting only most frequent tokens for Spanish, reducing the size of the embedding layer. For more details visit the paper from the Geotrend team: Load What You Need: Smaller Versions of Multilingual BERT.
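
As a quick, hedged sanity check of the size claim above (not from the original card), the parameter count can be inspected after loading the checkpoint:

```python
# Hedged sketch: prints the total parameter count of this Spanish extract (the card reports ~63M).
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("Recognai/distilbert-base-es-multilingual-cased")
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```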
|
{"language": "es", "license": "apache-2.0", "datasets": ["wikipedia"], "widget": [{"text": "Mi nombre es Juan y vivo en [MASK]."}]}
|
Recognai/distilbert-base-es-multilingual-cased
| null |
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"es",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #safetensors #distilbert #fill-mask #es #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# DistilBERT base multilingual model Spanish subset (cased)
This model is the Spanish extract of 'distilbert-base-multilingual-cased' (URL a distilled version of the BERT base multilingual model. This model is cased: it does make a difference between english and English.
It uses the extraction method proposed by Geotrend described in URL
The resulting model has the same architecture as DistilmBERT: 6 layers, 768 dimension and 12 heads, with a total of 63M parameters (compared to 134M parameters for DistilmBERT).
The goal of this model is to reduce even further the size of the 'distilbert-base-multilingual' multilingual model by selecting only most frequent tokens for Spanish, reducing the size of the embedding layer. For more details visit the paper from the Geotrend team: Load What You Need: Smaller Versions of Multilingual BERT.
|
[
"# DistilBERT base multilingual model Spanish subset (cased)\n\nThis model is the Spanish extract of 'distilbert-base-multilingual-cased' (URL a distilled version of the BERT base multilingual model. This model is cased: it does make a difference between english and English.\n\nIt uses the extraction method proposed by Geotrend described in URL \n\nThe resulting model has the same architecture as DistilmBERT: 6 layers, 768 dimension and 12 heads, with a total of 63M parameters (compared to 134M parameters for DistilmBERT).\n\nThe goal of this model is to reduce even further the size of the 'distilbert-base-multilingual' multilingual model by selecting only most frequent tokens for Spanish, reducing the size of the embedding layer. For more details visit the paper from the Geotrend team: Load What You Need: Smaller Versions of Multilingual BERT."
] |
[
"TAGS\n#transformers #pytorch #safetensors #distilbert #fill-mask #es #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# DistilBERT base multilingual model Spanish subset (cased)\n\nThis model is the Spanish extract of 'distilbert-base-multilingual-cased' (URL a distilled version of the BERT base multilingual model. This model is cased: it does make a difference between english and English.\n\nIt uses the extraction method proposed by Geotrend described in URL \n\nThe resulting model has the same architecture as DistilmBERT: 6 layers, 768 dimension and 12 heads, with a total of 63M parameters (compared to 134M parameters for DistilmBERT).\n\nThe goal of this model is to reduce even further the size of the 'distilbert-base-multilingual' multilingual model by selecting only most frequent tokens for Spanish, reducing the size of the embedding layer. For more details visit the paper from the Geotrend team: Load What You Need: Smaller Versions of Multilingual BERT."
] |
null |
transformers
|
# SELECTRA: A Spanish ELECTRA
SELECTRA is a Spanish pre-trained language model based on [ELECTRA](https://github.com/google-research/electra).
We release a `small` and `medium` version with the following configuration:
| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
| --- | --- | --- | --- | --- | --- | --- |
| [SELECTRA small](https://huggingface.co/Recognai/selectra_small) | 12 | 256 | 22M | 50k | 512 | True |
| **SELECTRA medium** | **12** | **384** | **41M** | **50k** | **512** | **True** |
**SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results** (see Metrics section below).
## Usage
From the original [ELECTRA model card](https://huggingface.co/google/electra-small-discriminator): "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")
sentence_with_fake_token = "Estamos desayunando pan rosa con tomate y aceite de oliva."
inputs = tokenizer.encode(sentence_with_fake_token, return_tensors="pt")
logits = discriminator(inputs).logits.tolist()[0]
print("\t".join(tokenizer.tokenize(sentence_with_fake_token)))
print("\t".join(map(lambda x: str(x)[:4], logits[1:-1])))
"""Output:
Estamos desayun ##ando pan rosa con tomate y aceite de oliva .
-3.1 -3.6 -6.9 -3.0 0.19 -4.5 -3.3 -5.1 -5.7 -7.7 -4.4 -4.2
"""
```
However, you probably want to use this model to fine-tune it on a downstream task.
We provide models fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which can be used together with the zero-shot classification pipeline:
- [Zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small)
- [Zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium)
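
For example, a hedged sketch of that pipeline usage (the input sentence and candidate labels are invented placeholders; the hypothesis template follows the Spanish zero-shot examples earlier in this collection):

```python
# Hedged sketch: example text and candidate labels are invented placeholders.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Recognai/zeroshot_selectra_medium")
print(classifier(
    "El equipo ganó el partido en el último minuto.",
    candidate_labels=["deportes", "política", "cultura", "economía"],
    hypothesis_template="Este ejemplo es {}.",
))
```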
## Metrics
We fine-tune our models on 3 different down-stream tasks:
- [XNLI](https://huggingface.co/datasets/xnli)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the [evaluation table](https://github.com/PlanTL-SANIDAD/lm-spanish#evaluation-) of the [Spanish Language Model](https://github.com/PlanTL-SANIDAD/lm-spanish) repo.
| Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
| --- | --- | --- | --- | --- |
| SELECTRA small | 0.865 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | 22M |
| SELECTRA medium | 0.873 +- 0.003 | 0.896 +- 0.002 | 0.804 +- 0.002 | 41M |
| | | | | |
| [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.8691 | 0.8955 | 0.7876 | 178M |
| [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.8759 | 0.9000 | 0.8130 | 110M |
| [RoBERTa-b](https://huggingface.co/BSC-TeMU/roberta-base-bne) | 0.8851 | 0.9000 | 0.8016 | 125M |
| [RoBERTa-l](https://huggingface.co/BSC-TeMU/roberta-large-bne) | 0.8772 | 0.9060 | 0.7958 | 355M |
| [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.8835 | 0.8990 | 0.7890 | 125M |
| [ELECTRICIDAD](https://huggingface.co/mrm8488/electricidad-base-discriminator) | 0.7954 | 0.9025 | 0.7878 | 109M |
Some details of our fine-tuning runs:
- epochs: 5
- batch-size: 32
- learning rate: 1e-4
- warmup proportion: 0.1
- linear learning rate decay
- layerwise learning rate decay
For all the details, check out our [selectra repo](https://github.com/recognai/selectra).
## Training
We pre-trained our SELECTRA models on the Spanish portion of the [Oscar](https://huggingface.co/datasets/oscar) dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
- steps: 300k
- batch-size: 128
- learning rate: 5e-4
- warmup steps: 10k
- linear learning rate decay
- TPU cores: 8 (v2-8)
For all details, check out our [selectra repo](https://github.com/recognai/selectra).
**Note:** Due to a misconfiguration in the pre-training scripts the embeddings of the vocabulary containing an accent were not optimized. If you fine-tune this model on a down-stream task, you might consider using a tokenizer that does not strip the accents:
```python
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small", strip_accents=False)
```
## Motivation
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models and a lack of comparing those to their bigger siblings.
## Acknowledgment
This research was supported by the Google TPU Research Cloud (TRC) program.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Javier Lopez ([GitHub](https://github.com/javispp))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
|
{"language": ["es"], "license": "apache-2.0", "datasets": ["oscar"], "thumbnail": "url to a thumbnail used in social sharing"}
|
Recognai/selectra_medium
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"es",
"dataset:oscar",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #electra #pretraining #es #dataset-oscar #license-apache-2.0 #endpoints_compatible #region-us
|
SELECTRA: A Spanish ELECTRA
===========================
SELECTRA is a Spanish pre-trained language model based on ELECTRA.
We release a 'small' and 'medium' version with the following configuration:
SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results (see Metrics section below).
Usage
-----
From the original ELECTRA model card: "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
However, you probably want to use this model to fine-tune it on a downstream task.
We provide models fine-tuned on the XNLI dataset, which can be used together with the zero-shot classification pipeline:
* Zero-shot SELECTRA small
* Zero-shot SELECTRA medium
Metrics
-------
We fine-tune our models on 3 different down-stream tasks:
* XNLI
* PAWS-X
* CoNLL2002 - NER
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the evaluation table of the Spanish Language Model repo.
Some details of our fine-tuning runs:
* epochs: 5
* batch-size: 32
* learning rate: 1e-4
* warmup proportion: 0.1
* linear learning rate decay
* layerwise learning rate decay
For all the details, check out our selectra repo.
Training
--------
We pre-trained our SELECTRA models on the Spanish portion of the Oscar dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
* steps: 300k
* batch-size: 128
* learning rate: 5e-4
* warmup steps: 10k
* linear learning rate decay
* TPU cores: 8 (v2-8)
For all details, check out our selectra repo.
Note: Due to a misconfiguration in the pre-training scripts the embeddings of the vocabulary containing an accent were not optimized. If you fine-tune this model on a down-stream task, you might consider using a tokenizer that does not strip the accents:
Motivation
----------
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models and a lack of comparing those to their bigger siblings.
Acknowledgment
--------------
This research was supported by the Google TPU Research Cloud (TRC) program.
Authors
-------
* David Fidalgo (GitHub)
* Javier Lopez (GitHub)
* Daniel Vila (GitHub)
* Francisco Aranda (GitHub)
|
[] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #es #dataset-oscar #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# SELECTRA: A Spanish ELECTRA
SELECTRA is a Spanish pre-trained language model based on [ELECTRA](https://github.com/google-research/electra).
We release a `small` and `medium` version with the following configuration:
| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
| --- | --- | --- | --- | --- | --- | --- |
| **SELECTRA small** | **12** | **256** | **22M** | **50k** | **512** | **True** |
| [SELECTRA medium](https://huggingface.co/Recognai/selectra_medium) | 12 | 384 | 41M | 50k | 512 | True |
**SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results** (see Metrics section below).
## Usage
From the original [ELECTRA model card](https://huggingface.co/google/electra-small-discriminator): "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")
sentence_with_fake_token = "Estamos desayunando pan rosa con tomate y aceite de oliva."
inputs = tokenizer.encode(sentence_with_fake_token, return_tensors="pt")
logits = discriminator(inputs).logits.tolist()[0]
print("\t".join(tokenizer.tokenize(sentence_with_fake_token)))
print("\t".join(map(lambda x: str(x)[:4], logits[1:-1])))
"""Output:
Estamos desayun ##ando pan rosa con tomate y aceite de oliva .
-3.1 -3.6 -6.9 -3.0 0.19 -4.5 -3.3 -5.1 -5.7 -7.7 -4.4 -4.2
"""
```
However, you probably want to use this model to fine-tune it on a downstream task.
We provide models fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which can be used together with the zero-shot classification pipeline:
- [Zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small)
- [Zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium)
## Metrics
We fine-tune our models on 3 different down-stream tasks:
- [XNLI](https://huggingface.co/datasets/xnli)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the [evaluation table](https://github.com/PlanTL-SANIDAD/lm-spanish#evaluation-) of the [Spanish Language Model](https://github.com/PlanTL-SANIDAD/lm-spanish) repo.
| Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
| --- | --- | --- | --- | --- |
| SELECTRA small | 0.865 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | 22M |
| SELECTRA medium | 0.873 +- 0.003 | 0.896 +- 0.002 | 0.804 +- 0.002 | 41M |
| | | | | |
| [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.8691 | 0.8955 | 0.7876 | 178M |
| [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.8759 | 0.9000 | 0.8130 | 110M |
| [RoBERTa-b](https://huggingface.co/BSC-TeMU/roberta-base-bne) | 0.8851 | 0.9000 | 0.8016 | 125M |
| [RoBERTa-l](https://huggingface.co/BSC-TeMU/roberta-large-bne) | 0.8772 | 0.9060 | 0.7958 | 355M |
| [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.8835 | 0.8990 | 0.7890 | 125M |
| [ELECTRICIDAD](https://huggingface.co/mrm8488/electricidad-base-discriminator) | 0.7954 | 0.9025 | 0.7878 | 109M |
Some details of our fine-tuning runs:
- epochs: 5
- batch-size: 32
- learning rate: 1e-4
- warmup proportion: 0.1
- linear learning rate decay
- layerwise learning rate decay
For all the details, check out our [selectra repo](https://github.com/recognai/selectra).
## Training
We pre-trained our SELECTRA models on the Spanish portion of the [Oscar](https://huggingface.co/datasets/oscar) dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
- steps: 300k
- batch-size: 128
- learning rate: 5e-4
- warmup steps: 10k
- linear learning rate decay
- TPU cores: 8 (v2-8)
For all details, check out our [selectra repo](https://github.com/recognai/selectra).
**Note:** Due to a misconfiguration in the pre-training scripts, the embeddings of vocabulary tokens containing an accent were not optimized. If you fine-tune this model on a downstream task, you might consider using a tokenizer that does not strip the accents:
```python
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small", strip_accents=False)
```
## Motivation
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models, and a lack of systematic comparisons between such compact models and their bigger siblings.
## Acknowledgment
This research was supported by the Google TPU Research Cloud (TRC) program.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Javier Lopez ([GitHub](https://github.com/javispp))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
|
{"language": ["es"], "license": "apache-2.0", "datasets": ["oscar"], "thumbnail": "url to a thumbnail used in social sharing"}
|
Recognai/selectra_small
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"es",
"dataset:oscar",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #electra #pretraining #es #dataset-oscar #license-apache-2.0 #endpoints_compatible #region-us
|
SELECTRA: A Spanish ELECTRA
===========================
SELECTRA is a Spanish pre-trained language model based on ELECTRA.
We release a 'small' and 'medium' version with the following configuration:
SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results (see Metrics section below).
Usage
-----
From the original ELECTRA model card: "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
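Below is a minimal sketch of such a check with the 'small' checkpoint; the sentence and the planted fake token are arbitrary:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")

# "rosa" stands in for a replaced ("fake") token
sentence = "Estamos desayunando pan rosa con tomate y aceite de oliva."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits[0]

# Higher logits mean the discriminator considers the token more likely to be fake
for token, score in zip(tokenizer.tokenize(sentence), logits[1:-1]):
    print(f"{token}\t{score.item():.2f}")
```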
However, you will probably want to fine-tune this model on a downstream task.
We provide models fine-tuned on the XNLI dataset, which can be used together with the zero-shot classification pipeline:
* Zero-shot SELECTRA small
* Zero-shot SELECTRA medium
Metrics
-------
We fine-tune our models on 3 different down-stream tasks:
* XNLI
* PAWS-X
* CoNLL2002 - NER
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the evaluation table of the Spanish Language Model repo.
Some details of our fine-tuning runs:
* epochs: 5
* batch-size: 32
* learning rate: 1e-4
* warmup proportion: 0.1
* linear learning rate decay
* layerwise learning rate decay
For all the details, check out our selectra repo.
Training
--------
We pre-trained our SELECTRA models on the Spanish portion of the Oscar dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
* steps: 300k
* batch-size: 128
* learning rate: 5e-4
* warmup steps: 10k
* linear learning rate decay
* TPU cores: 8 (v2-8)
For all details, check out our selectra repo.
Note: Due to a misconfiguration in the pre-training scripts, the embeddings of vocabulary tokens containing an accent were not optimized. If you fine-tune this model on a downstream task, you might consider using a tokenizer that does not strip the accents:
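A minimal example, assuming the 'small' checkpoint:

```python
from transformers import ElectraTokenizerFast

# Keep accented characters instead of stripping them during normalization
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small", strip_accents=False)
```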
Motivation
----------
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models, and a lack of systematic comparisons between such compact models and their bigger siblings.
Acknowledgment
--------------
This research was supported by the Google TPU Research Cloud (TRC) program.
Authors
-------
* David Fidalgo (GitHub)
* Javier Lopez (GitHub)
* Daniel Vila (GitHub)
* Francisco Aranda (GitHub)
|
[] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #es #dataset-oscar #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
zero-shot-classification
|
transformers
|
# Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_medium")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'economia', 'salud', 'deportes'],
'scores': [0.6450043320655823,
0.16710571944713593,
0.08507631719112396,
0.0759836807847023,
0.026829993352293968]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Demo and tutorial
If you want to see this model in action, we have created a basic tutorial using [Rubrix](https://www.rubrix.ml/), a free and open-source tool to *explore, annotate, and monitor data for NLP*.
The tutorial shows you how to evaluate this classifier for news categorization in Spanish, and how it could be used to build a training set for training a supervised classifier (which might be useful if you want to obtain more precise results or improve the model over time).
You can [find the tutorial here](https://rubrix.readthedocs.io/en/master/tutorials/zeroshot_data_annotation.html).
See the video below showing the predictions within the annotation process (see that the predictions are almost correct for every example).
<video width="100%" controls><source src="https://github.com/recognai/rubrix-materials/raw/main/tutorials/videos/zeroshot_selectra_news_data_annotation.mp4" type="video/mp4"></video>
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| zs SELECTRA medium | 41M | **0.807** | **0.589** |
| [zs SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small) | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp))
|
{"language": "es", "license": "apache-2.0", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "El autor se perfila, a los 50 a\u00f1os de su muerte, como uno de los grandes de su siglo", "candidate_labels": "cultura, sociedad, economia, salud, deportes"}]}
|
Recognai/zeroshot_selectra_medium
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #safetensors #electra #text-classification #zero-shot-classification #nli #es #dataset-xnli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
============================================================
*Zero-shot SELECTRA* is a SELECTRA model fine-tuned on the Spanish portion of the XNLI dataset. You can use it with Hugging Face's Zero-shot pipeline to make zero-shot classifications.
In comparison to our previous zero-shot classifier based on BETO, zero-shot SELECTRA is much more lightweight. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) outperforms the BETO based zero-shot classifier.
Usage
-----
The 'hypothesis\_template' parameter is important and should be in Spanish. In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.
Demo and tutorial
-----------------
If you want to see this model in action, we have created a basic tutorial using Rubrix, a free and open-source tool to *explore, annotate, and monitor data for NLP*.
The tutorial shows you how to evaluate this classifier for news categorization in Spanish, and how it could be used to build a training set for training a supervised classifier (which might be useful if you want to obtain more precise results or improve the model over time).
You can find the tutorial here.
See the video below showing the predictions within the annotation process (see that the predictions are almost correct for every example).
Metrics
-------
\*evaluated with zero-shot learning (ZSL)
* XNLI: The stated accuracy refers to the test portion of the XNLI dataset, after finetuning the model on the training portion.
* MLSUM: For this accuracy we take the test set of the MLSUM dataset and classify the summaries of 5 selected labels. For details, check out our evaluation notebook
Training
--------
Check out our training notebook for all the details.
Authors
-------
* David Fidalgo (GitHub)
* Daniel Vila (GitHub)
* Francisco Aranda (GitHub)
* Javier Lopez (GitHub)
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #text-classification #zero-shot-classification #nli #es #dataset-xnli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
zero-shot-classification
|
transformers
|
# Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_medium")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'salud', 'economia', 'deportes'],
'scores': [0.3711881935596466,
0.25650349259376526,
0.17355826497077942,
0.1641489565372467,
0.03460107371211052]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| [zs SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium) | 41M | **0.807** | **0.589** |
| zs SELECTRA small | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp))
|
{"language": "es", "license": "apache-2.0", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "El autor se perfila, a los 50 a\u00f1os de su muerte, como uno de los grandes de su siglo", "candidate_labels": "cultura, sociedad, economia, salud, deportes"}]}
|
Recognai/zeroshot_selectra_small
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #safetensors #electra #text-classification #zero-shot-classification #nli #es #dataset-xnli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
============================================================
*Zero-shot SELECTRA* is a SELECTRA model fine-tuned on the Spanish portion of the XNLI dataset. You can use it with Hugging Face's Zero-shot pipeline to make zero-shot classifications.
In comparison to our previous zero-shot classifier based on BETO, zero-shot SELECTRA is much more lightweight. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) outperforms the BETO based zero-shot classifier.
Usage
-----
The 'hypothesis\_template' parameter is important and should be in Spanish. In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.
Metrics
-------
\*evaluated with zero-shot learning (ZSL)
* XNLI: The stated accuracy refers to the test portion of the XNLI dataset, after finetuning the model on the training portion.
* MLSUM: For this accuracy we take the test set of the MLSUM dataset and classify the summaries of 5 selected labels. For details, check out our evaluation notebook
Training
--------
Check out our training notebook for all the details.
Authors
-------
* David Fidalgo (GitHub)
* Daniel Vila (GitHub)
* Francisco Aranda (GitHub)
* Javier Lopez (GitHub)
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #text-classification #zero-shot-classification #nli #es #dataset-xnli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
token-classification
|
transformers
|
## Swedish BERT model for Named Entity Recognition
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases a Named Entity Recognition (NER) model for entity detection in Swedish. The model is based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and fine-tuned on data collected from various internet sources and forums.
The model has been trained on Swedish data and only supports inference of Swedish input texts. The model's inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current model is supported for Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
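A minimal usage sketch (the example sentence is arbitrary, and the exact label strings returned by the pipeline depend on the model's configuration):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("RecordedFuture/Swedish-NER")
model = AutoModelForTokenClassification.from_pretrained("RecordedFuture/Swedish-NER")

# Token-classification pipeline; each prediction carries a word, an entity label and a score
ner = pipeline("ner", model=model, tokenizer=tokenizer)
print(ner("Anna arbetar på Ericsson i Stockholm."))
```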
### Available tags
* Location
* Organization
* Person
* Religion
* Title
### Evaluation metrics
The model had the following metrics when evaluated on test data originating from the same domain as the training data.
#### F1-score
| Loc | Org | Per | Nat | Rel | Tit | Total |
|------|------|------|------|------|------|-------|
| 0.91 | 0.88 | 0.96 | 0.95 | 0.91 | 0.84 | 0.92 |
|
{"language": "sv", "license": "mit"}
|
RecordedFuture/Swedish-NER
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"sv"
] |
TAGS
#transformers #pytorch #bert #token-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Swedish BERT model for Named Entity Recognition
-----------------------------------------------
Recorded Future together with AI Sweden releases a Named Entity Recognition (NER) model for entity detection in Swedish. The model is based on KB/bert-base-swedish-cased and fine-tuned on data collected from various internet sources and forums.
The model has been trained on Swedish data and only supports inference of Swedish input texts. The model's inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current model is supported for Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Available tags
* Location
* Organization
* Person
* Religion
* Title
### Evaluation metrics
The model had the following metrics when evaluated on test data originating from the same domain as the training data.
#### F1-score
|
[
"### Available tags\n\n\n* Location\n* Organization\n* Person\n* Religion\n* Title",
"### Evaluation metrics\n\n\nThe model had the following metrics when evaluated on test data originating from the same domain as the training data.",
"#### F1-score"
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Available tags\n\n\n* Location\n* Organization\n* Person\n* Religion\n* Title",
"### Evaluation metrics\n\n\nThe model had the following metrics when evaluated on test data originating from the same domain as the training data.",
"#### F1-score"
] |
token-classification
|
transformers
|
## Swedish BERT models for sentiment analysis, Sentiment targets.
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for target/role assignment in Swedish. The two models are based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
This is a downstream model to be used in conjunction with the [Swedish violence sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Violence) or the [Swedish fear sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Fear). The models are trained to tag parts of sentences that have received a positive classification from the upstream sentiment classifier, i.e. the parts of a sentence that contain the targets the upstream model activated on.
The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported for Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
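As noted above, the recommended application of this tagger is downstream of a sentence-level sentiment classifier. The sketch below illustrates such a two-stage setup for the fear models; the 0.5 decision threshold and the reuse of a single tokenizer for both models are illustrative assumptions, not settings from the original setup.

```python
import torch
from transformers import (
    BertForSequenceClassification,
    BertForTokenClassification,
    BertTokenizerFast,
)

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
sentence_classifier = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
target_tagger = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")

sentence = "Jag är rädd för att gå ut ensam på natten."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # Index 0: "Negative", 1: "Weak sentiment", 2: "Strong Sentiment"
    probs = torch.softmax(sentence_classifier(**inputs).logits, dim=-1)[0]

# Illustrative rule: only run the target tagger when the sentence looks positive for the sentiment
if probs[1] + probs[2] > 0.5:
    with torch.no_grad():
        tag_ids = target_tagger(**inputs).logits.argmax(dim=-1)[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    print(list(zip(tokens, tag_ids.tolist())))
```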
### Fear targets
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361 | 0.7903 | 0.8876 |
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831 | 0.9155 | 0.8442 |
|
{"language": "sv", "license": "mit"}
|
RecordedFuture/Swedish-Sentiment-Fear-Targets
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"sv"
] |
TAGS
#transformers #pytorch #tf #jax #bert #token-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
## Swedish BERT models for sentiment analysis, Sentiment targets.
Recorded Future together with AI Sweden releases two language models for target/role assignment in Swedish. The two models are based on KB/bert-base-swedish-cased and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
This is a downstream model to be used in conjunction with the Swedish violence sentiment classifier or the Swedish fear sentiment classifier. The models are trained to tag parts of sentences that have received a positive classification from the upstream sentiment classifier, i.e. the parts of a sentence that contain the targets the upstream model activated on.
The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported for Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Fear targets
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361 | 0.7903 | 0.8876 |
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831 | 0.9155 | 0.8442 |
|
[
"## Swedish BERT models for sentiment analysis, Sentiment targets. \nRecorded Future together with AI Sweden releases two language models for target/role assignment in Swedish. The two models are based on the KB/bert-base-swedish-cased, the models as has been fine tuned to solve a Named Entety Recognition(NER) token classification task.\n\nThis is a downstream model to be used in conjunction with the Swedish violence sentiment classifier or Swedish violence sentiment classifier. The models are trained to tag parts of sentences that has recieved a positive classification from the upstream sentiment classifier. The model will tag parts of sentences that contains the targets that the upstream model has activated on. \n\nThe NER sentiment target models do work as standalone models but their recommended application is downstreamfrom a sentence classification model. \n\nThe models are only trained on Swedish data and only supports inference of Swedish input texts. The models inference metrics for all non-Swedish inputs are not defined, these inputs are considered as out of domain data.\n\nThe current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0, compatibility with older versions are not verified.",
"### Fear targets\n\nThe model can be imported from the transformers library by running\n\n from transformers import BertForSequenceClassification, BertTokenizerFast\n \n tokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear-Targets\")\n classifier_fear_targets= BertForTokenClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear-Targets\") \n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Verification metrics \n\nDuring training the Fear target model had the following verification metrics when using \"any overlap\" as the evaluation metric. \n\n\n| F-score | Precision | Recall |\n|:-------------------------:|:-------:|:---------:|:------:|\n| 0.8361 | 0.7903 | 0.8876 |",
"#### Swedish-Sentiment-Violence\nThe model be can imported from the transformers library by running\n\n from transformers import BertForSequenceClassification, BertTokenizerFast\n \n tokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence-Targets\")\n classifier_violence_targets = BertForTokenClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence-Targets\") \n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Verification metrics \nDuring training the Violence target model had the following verification metrics when using \"any overlap\" as the evaluation metric. \n\n| F-score | Precision | Recall |\n|:-------------------------:|:-------:|:---------:|:------:|\n| 0.7831| 0.9155| 0.8442 |"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## Swedish BERT models for sentiment analysis, Sentiment targets. \nRecorded Future together with AI Sweden releases two language models for target/role assignment in Swedish. The two models are based on the KB/bert-base-swedish-cased, the models as has been fine tuned to solve a Named Entety Recognition(NER) token classification task.\n\nThis is a downstream model to be used in conjunction with the Swedish violence sentiment classifier or Swedish violence sentiment classifier. The models are trained to tag parts of sentences that has recieved a positive classification from the upstream sentiment classifier. The model will tag parts of sentences that contains the targets that the upstream model has activated on. \n\nThe NER sentiment target models do work as standalone models but their recommended application is downstreamfrom a sentence classification model. \n\nThe models are only trained on Swedish data and only supports inference of Swedish input texts. The models inference metrics for all non-Swedish inputs are not defined, these inputs are considered as out of domain data.\n\nThe current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0, compatibility with older versions are not verified.",
"### Fear targets\n\nThe model can be imported from the transformers library by running\n\n from transformers import BertForSequenceClassification, BertTokenizerFast\n \n tokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear-Targets\")\n classifier_fear_targets= BertForTokenClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear-Targets\") \n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Verification metrics \n\nDuring training the Fear target model had the following verification metrics when using \"any overlap\" as the evaluation metric. \n\n\n| F-score | Precision | Recall |\n|:-------------------------:|:-------:|:---------:|:------:|\n| 0.8361 | 0.7903 | 0.8876 |",
"#### Swedish-Sentiment-Violence\nThe model be can imported from the transformers library by running\n\n from transformers import BertForSequenceClassification, BertTokenizerFast\n \n tokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence-Targets\")\n classifier_violence_targets = BertForTokenClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence-Targets\") \n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Verification metrics \nDuring training the Violence target model had the following verification metrics when using \"any overlap\" as the evaluation metric. \n\n| F-score | Precision | Recall |\n|:-------------------------:|:-------:|:---------:|:------:|\n| 0.7831| 0.9155| 0.8442 |"
] |
text-classification
|
transformers
|
## Swedish BERT models for sentiment analysis
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and have been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported for Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear= BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
When the model and tokenizer are initialized the model can be used for inference.
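For instance, reusing the `tokenizer` and `classifier_fear` objects from the snippet above, a single sentence can be scored as follows (the softmax is just one straightforward way to turn the three logits into scores; the example sentence is arbitrary):

```python
import torch

text = "Jag är rädd för att gå ut ensam på natten."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = classifier_fear(**inputs).logits

# Index 0: "Negative", 1: "Weak sentiment", 2: "Strong Sentiment"
print(torch.softmax(logits, dim=-1)[0].tolist())
```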
#### Sentiment definitions
#### The strong sentiment includes, but is not limited to
Texts that:
- Hold an expressive emphasis on fear and/ or anxiety
#### The weak sentiment includes, but is not limited to
Texts that:
- Express fear and/ or anxiety in a neutral way
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.45 | 0.8754 | 0.8618 | 0.8895 |
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
When the model and tokenizer are initialized the model can be used for inference.
### Sentiment definitions
#### The strong sentiment includes, but is not limited to
Texts that:
- Referencing highly violent acts
- Hold an aggressive tone
#### The weak sentiment includes, but is not limited to
Texts that:
- Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.35 | 0.7677 | 0.7456 | 0.791 |
|
{"language": "sv", "license": "mit"}
|
RecordedFuture/Swedish-Sentiment-Fear
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"sv"
] |
TAGS
#transformers #pytorch #tf #jax #bert #text-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Swedish BERT models for sentiment analysis
------------------------------------------
Recorded Future together with AI Sweden releases two language models for sentiment analysis in Swedish. The two models are based on the KB/bert-base-swedish-cased model and have been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported for Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
```
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear= BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
```
When the model and tokenizer are initialized the model can be used for inference.
#### Sentiment definitions
#### The strong sentiment includes, but is not limited to
Texts that:
* Hold an expressive emphasis on fear and/ or anxiety
#### The weak sentiment includes, but is not limited to
Texts that:
* Express fear and/ or anxiety in a neutral way
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
```
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
```
When the model and tokenizer are initialized the model can be used for inference.
### Sentiment definitions
#### The strong sentiment includes, but is not limited to
Texts that:
* Referencing highly violent acts
* Hold an aggressive tone
#### The weak sentiment includes, but is not limited to
Texts that:
* Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
|
[
"### Swedish-Sentiment-Fear\n\n\nThe model can be imported from the transformers library by running\n\n\n\n```\nfrom transformers import BertForSequenceClassification, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear\")\nclassifier_fear= BertForSequenceClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear\") \n\n```\n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Sentiment definitions",
"#### The strong sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Hold an expressive emphasis on fear and/ or anxiety",
"#### The weak sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Express fear and/ or anxiety in a neutral way",
"#### Verification metrics\n\n\nDuring training, the model had maximized validation metrics at the following classification breakpoint.",
"#### Swedish-Sentiment-Violence\n\n\nThe model be can imported from the transformers library by running\n\n\n\n```\nfrom transformers import BertForSequenceClassification, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence\")\nclassifier_violence = BertForSequenceClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence\") \n\n```\n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"### Sentiment definitions",
"#### The strong sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Referencing highly violent acts\n* Hold an aggressive tone",
"#### The weak sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Include general violent statements that do not fall under the strong sentiment",
"#### Verification metrics\n\n\nDuring training, the model had maximized validation metrics at the following classification breakpoint."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Swedish-Sentiment-Fear\n\n\nThe model can be imported from the transformers library by running\n\n\n\n```\nfrom transformers import BertForSequenceClassification, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear\")\nclassifier_fear= BertForSequenceClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear\") \n\n```\n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Sentiment definitions",
"#### The strong sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Hold an expressive emphasis on fear and/ or anxiety",
"#### The weak sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Express fear and/ or anxiety in a neutral way",
"#### Verification metrics\n\n\nDuring training, the model had maximized validation metrics at the following classification breakpoint.",
"#### Swedish-Sentiment-Violence\n\n\nThe model be can imported from the transformers library by running\n\n\n\n```\nfrom transformers import BertForSequenceClassification, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence\")\nclassifier_violence = BertForSequenceClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence\") \n\n```\n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"### Sentiment definitions",
"#### The strong sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Referencing highly violent acts\n* Hold an aggressive tone",
"#### The weak sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Include general violent statements that do not fall under the strong sentiment",
"#### Verification metrics\n\n\nDuring training, the model had maximized validation metrics at the following classification breakpoint."
] |
token-classification
|
transformers
|
## Swedish BERT models for sentiment analysis, Sentiment targets.
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for target/role assignment in Swedish. The two models are based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
This is a downstream model to be used in conjunction with the [Swedish violence sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Violence) or the [Swedish fear sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Fear). The models are trained to tag parts of sentences that have received a positive classification from the upstream sentiment classifier, i.e. the parts of a sentence that contain the targets the upstream model activated on.
The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported for Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Fear targets
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361 | 0.7903 | 0.8876 |
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831 | 0.9155 | 0.8442 |
|
{"language": "sv", "license": "mit"}
|
RecordedFuture/Swedish-Sentiment-Violence-Targets
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"sv"
] |
TAGS
#transformers #pytorch #tf #jax #bert #token-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
## Swedish BERT models for sentiment analysis, Sentiment targets.
Recorded Future together with AI Sweden releases two language models for target/role assignment in Swedish. The two models are based on KB/bert-base-swedish-cased and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
This is a downstream model to be used in conjunction with the Swedish violence sentiment classifier or the Swedish fear sentiment classifier. The models are trained to tag parts of sentences that have received a positive classification from the upstream sentiment classifier, i.e. the parts of a sentence that contain the targets the upstream model activated on.
The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported for Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Fear targets
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361 | 0.7903 | 0.8876 |
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831 | 0.9155 | 0.8442 |
|
[
"## Swedish BERT models for sentiment analysis, Sentiment targets. \nRecorded Future together with AI Sweden releases two language models for target/role assignment in Swedish. The two models are based on the KB/bert-base-swedish-cased, the models as has been fine tuned to solve a Named Entety Recognition(NER) token classification task.\n\nThis is a downstream model to be used in conjunction with the Swedish violence sentiment classifier or Swedish violence sentiment classifier. The models are trained to tag parts of sentences that has recieved a positive classification from the upstream sentiment classifier. The model will tag parts of sentences that contains the targets that the upstream model has activated on. \n\nThe NER sentiment target models do work as standalone models but their recommended application is downstreamfrom a sentence classification model. \n\nThe models are only trained on Swedish data and only supports inference of Swedish input texts. The models inference metrics for all non-Swedish inputs are not defined, these inputs are considered as out of domain data.\n\nThe current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0, compatibility with older versions are not verified.",
"### Fear targets\n\nThe model can be imported from the transformers library by running\n\n from transformers import BertForSequenceClassification, BertTokenizerFast\n \n tokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear-Targets\")\n classifier_fear_targets= BertForTokenClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear-Targets\") \n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Verification metrics \n\nDuring training the Fear target model had the following verification metrics when using \"any overlap\" as the evaluation metric. \n\n\n| F-score | Precision | Recall |\n|:-------------------------:|:-------:|:---------:|:------:|\n| 0.8361 | 0.7903 | 0.8876 |",
"#### Swedish-Sentiment-Violence\nThe model be can imported from the transformers library by running\n\n from transformers import BertForSequenceClassification, BertTokenizerFast\n \n tokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence-Targets\")\n classifier_violence_targets = BertForTokenClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence-Targets\") \n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Verification metrics \nDuring training the Violence target model had the following verification metrics when using \"any overlap\" as the evaluation metric. \n\n| F-score | Precision | Recall |\n|:-------------------------:|:-------:|:---------:|:------:|\n| 0.7831| 0.9155| 0.8442 |"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## Swedish BERT models for sentiment analysis, Sentiment targets. \nRecorded Future together with AI Sweden releases two language models for target/role assignment in Swedish. The two models are based on the KB/bert-base-swedish-cased, the models as has been fine tuned to solve a Named Entety Recognition(NER) token classification task.\n\nThis is a downstream model to be used in conjunction with the Swedish violence sentiment classifier or Swedish violence sentiment classifier. The models are trained to tag parts of sentences that has recieved a positive classification from the upstream sentiment classifier. The model will tag parts of sentences that contains the targets that the upstream model has activated on. \n\nThe NER sentiment target models do work as standalone models but their recommended application is downstreamfrom a sentence classification model. \n\nThe models are only trained on Swedish data and only supports inference of Swedish input texts. The models inference metrics for all non-Swedish inputs are not defined, these inputs are considered as out of domain data.\n\nThe current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0, compatibility with older versions are not verified.",
"### Fear targets\n\nThe model can be imported from the transformers library by running\n\n from transformers import BertForSequenceClassification, BertTokenizerFast\n \n tokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear-Targets\")\n classifier_fear_targets= BertForTokenClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear-Targets\") \n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Verification metrics \n\nDuring training the Fear target model had the following verification metrics when using \"any overlap\" as the evaluation metric. \n\n\n| F-score | Precision | Recall |\n|:-------------------------:|:-------:|:---------:|:------:|\n| 0.8361 | 0.7903 | 0.8876 |",
"#### Swedish-Sentiment-Violence\nThe model be can imported from the transformers library by running\n\n from transformers import BertForSequenceClassification, BertTokenizerFast\n \n tokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence-Targets\")\n classifier_violence_targets = BertForTokenClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence-Targets\") \n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Verification metrics \nDuring training the Violence target model had the following verification metrics when using \"any overlap\" as the evaluation metric. \n\n| F-score | Precision | Recall |\n|:-------------------------:|:-------:|:---------:|:------:|\n| 0.7831| 0.9155| 0.8442 |"
] |
text-classification
|
transformers
|
## Swedish BERT models for sentiment analysis
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and have been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and only support inference on Swedish input texts. Inference metrics for non-Swedish inputs are undefined; such inputs are considered out-of-domain data.
The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions has not been verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear= BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
When the model and tokenizer are initialized the model can be used for inference.
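A minimal inference sketch follows. The Swedish example sentence is made up, and the softmax readout is an assumption about how the three output scores are meant to be normalized:

```python
import torch

# Hypothetical Swedish input sentence (placeholder).
texts = ["Jag är rädd för att gå hem ensam på kvällen."]

inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = classifier_fear(**inputs).logits

# Three scores per text, at the indexes for "Negative",
# "Weak sentiment" and "Strong Sentiment" (softmax is an assumption).
print(torch.softmax(logits, dim=-1))
```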
#### Sentiment definitions
#### The strong sentiment includes but are not limited to
Texts that:
- Hold an expressive emphasis on fear and/ or anxiety
#### The weak sentiment includes but are not limited to
Texts that:
- Express fear and/ or anxiety in a neutral way
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.45 | 0.8754 | 0.8618 | 0.8895 |
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
When the model and tokenizer are initialized the model can be used for inference.
### Sentiment definitions
#### The strong sentiment includes but are not limited to
Texts that:
- Referencing highly violent acts
- Hold an aggressive tone
#### The weak sentiment includes but are not limited to
Texts that:
- Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.35 | 0.7677 | 0.7456 | 0.791 |
|
{"language": "sv", "license": "mit"}
|
RecordedFuture/Swedish-Sentiment-Violence
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"sv"
] |
TAGS
#transformers #pytorch #tf #jax #bert #text-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Swedish BERT models for sentiment analysis
------------------------------------------
Recorded Future together with AI Sweden releases two language models for sentiment analysis in Swedish. The two models are based on the KB/bert-base-swedish-cased model and have been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and only support inference on Swedish input texts. Inference metrics for non-Swedish inputs are undefined; such inputs are considered out-of-domain data.
The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions has not been verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
```
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear= BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
```
When the model and tokenizer are initialized the model can be used for inference.
#### Sentiment definitions
#### The strong sentiment includes but are not limited to
Texts that:
* Hold an expressive emphasis on fear and/ or anxiety
#### The weak sentiment includes but are not limited to
Texts that:
* Express fear and/ or anxiety in a neutral way
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
```
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
```
When the model and tokenizer are initialized the model can be used for inference.
### Sentiment definitions
#### The strong sentiment includes but are not limited to
Texts that:
* Referencing highly violent acts
* Hold an aggressive tone
#### The weak sentiment includes but are not limited to
Texts that:
* Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
|
[
"### Swedish-Sentiment-Fear\n\n\nThe model can be imported from the transformers library by running\n\n\n\n```\nfrom transformers import BertForSequenceClassification, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear\")\nclassifier_fear= BertForSequenceClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear\") \n\n```\n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Sentiment definitions",
"#### The strong sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Hold an expressive emphasis on fear and/ or anxiety",
"#### The weak sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Express fear and/ or anxiety in a neutral way",
"#### Verification metrics\n\n\nDuring training, the model had maximized validation metrics at the following classification breakpoint.",
"#### Swedish-Sentiment-Violence\n\n\nThe model be can imported from the transformers library by running\n\n\n\n```\nfrom transformers import BertForSequenceClassification, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence\")\nclassifier_violence = BertForSequenceClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence\") \n\n```\n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"### Sentiment definitions",
"#### The strong sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Referencing highly violent acts\n* Hold an aggressive tone",
"#### The weak sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Include general violent statements that do not fall under the strong sentiment",
"#### Verification metrics\n\n\nDuring training, the model had maximized validation metrics at the following classification breakpoint."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #sv #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Swedish-Sentiment-Fear\n\n\nThe model can be imported from the transformers library by running\n\n\n\n```\nfrom transformers import BertForSequenceClassification, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear\")\nclassifier_fear= BertForSequenceClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Fear\") \n\n```\n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"#### Sentiment definitions",
"#### The strong sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Hold an expressive emphasis on fear and/ or anxiety",
"#### The weak sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Express fear and/ or anxiety in a neutral way",
"#### Verification metrics\n\n\nDuring training, the model had maximized validation metrics at the following classification breakpoint.",
"#### Swedish-Sentiment-Violence\n\n\nThe model be can imported from the transformers library by running\n\n\n\n```\nfrom transformers import BertForSequenceClassification, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence\")\nclassifier_violence = BertForSequenceClassification.from_pretrained(\"RecordedFuture/Swedish-Sentiment-Violence\") \n\n```\n\nWhen the model and tokenizer are initialized the model can be used for inference.",
"### Sentiment definitions",
"#### The strong sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Referencing highly violent acts\n* Hold an aggressive tone",
"#### The weak sentiment includes but are not limited to\n\n\nTexts that:\n\n\n* Include general violent statements that do not fall under the strong sentiment",
"#### Verification metrics\n\n\nDuring training, the model had maximized validation metrics at the following classification breakpoint."
] |
text-generation
|
transformers
|
#Rick DialoGPT Model.
>Following the https://github.com/RuolinZheng08/twewy-discord-chatbot tutorial.
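As a usage illustration only, a minimal chat sketch following the standard DialoGPT pattern (the loop length and generation settings are assumptions, not part of the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Redolid/DialoGPT-small-Rick")
model = AutoModelForCausalLM.from_pretrained("Redolid/DialoGPT-small-Rick")

# Simple 3-turn chat loop; history handling follows the usual DialoGPT recipe.
chat_history_ids = None
for _ in range(3):
    user_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = user_ids if chat_history_ids is None else torch.cat([chat_history_ids, user_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```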
|
{"tags": ["conversational"]}
|
Redolid/DialoGPT-small-Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Rick DialoGPT Model.
>Following URL Tutorial.
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Steins Gate DialoGPT Model
|
{"tags": ["conversational"]}
|
Rei/DialoGPT-medium-kurisu
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Steins Gate DialoGPT Model
|
[
"# Steins Gate DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Steins Gate DialoGPT Model"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-original
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4436
- Rouge1: 28.8838
- Rouge2: 8.1114
- Rougel: 22.8318
- Rougelsum: 22.8318
- Gen Len: 18.8141
## Model description
More information needed
## Intended uses & limitations
More information needed
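As an illustrative sketch (not from the original card), the model can be used for abstractive summarization with the `summarization` pipeline; the article text below is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="RenZHU/t5-small-finetuned-xsum-original")

article = "The full text of a news article would go here."  # placeholder input
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```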
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6754 | 1.0 | 51012 | 2.4436 | 28.8838 | 8.1114 | 22.8318 | 22.8318 | 18.8141 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-xsum-original", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "xsum", "type": "xsum", "args": "default"}, "metrics": [{"type": "rouge", "value": 28.8838, "name": "Rouge1"}]}]}]}
|
RenZHU/t5-small-finetuned-xsum-original
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-xsum-original
================================
This model is a fine-tuned version of t5-small on the xsum dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4436
* Rouge1: 28.8838
* Rouge2: 8.1114
* Rougel: 22.8318
* Rougelsum: 22.8318
* Gen Len: 18.8141
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5310
- Rouge1: 27.9232
- Rouge2: 7.5324
- Rougel: 22.035
- Rougelsum: 22.0304
- Gen Len: 18.8116
## Model description
More information needed
## Intended uses & limitations
More information needed
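An illustrative generation sketch (not part of the original card); the `summarize:` prefix and the decoding parameters are assumptions based on common T5 usage:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("RenZHU/t5-small-finetuned-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("RenZHU/t5-small-finetuned-xsum")

# Placeholder article text.
text = "summarize: The full text of a news article would go here."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```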
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.7564 | 1.0 | 51012 | 2.5310 | 27.9232 | 7.5324 | 22.035 | 22.0304 | 18.8116 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]}
|
RenZHU/t5-small-finetuned-xsum
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-xsum
=======================
This model is a fine-tuned version of t5-small on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5310
* Rouge1: 27.9232
* Rouge2: 7.5324
* Rougel: 22.035
* Rougelsum: 22.0304
* Gen Len: 18.8116
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-srl-seqlabeling
This model is a fine-tuned version of a local `ruBert-base` checkpoint (`./ruBert-base/`) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1723
- Causator Precision: 0.8539
- Causator Recall: 0.8352
- Causator F1: 0.8444
- Causator Number: 91
- Expiriencer Precision: 0.9259
- Expiriencer Recall: 0.9740
- Expiriencer F1: 0.9494
- Expiriencer Number: 77
- Instrument Precision: 0.375
- Instrument Recall: 1.0
- Instrument F1: 0.5455
- Instrument Number: 3
- Other Precision: 0.0
- Other Recall: 0.0
- Other F1: 0.0
- Other Number: 1
- Predicate Precision: 0.9352
- Predicate Recall: 0.9902
- Predicate F1: 0.9619
- Predicate Number: 102
- Overall Precision: 0.8916
- Overall Recall: 0.9307
- Overall F1: 0.9107
- Overall Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
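As an illustration (not from the original card), the model can be queried through the token-classification pipeline; the Russian example sentence and the aggregation strategy are assumptions:

```python
from transformers import pipeline

srl_tagger = pipeline(
    "token-classification",
    model="Rexhaif/rubert-base-srl-seqlabeling",
    aggregation_strategy="simple",
)
# Placeholder Russian sentence; the model returns spans tagged with SRL roles.
print(srl_tagger("Мальчик испугался большой собаки."))
```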
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Causator Precision | Causator Recall | Causator F1 | Causator Number | Expiriencer Precision | Expiriencer Recall | Expiriencer F1 | Expiriencer Number | Instrument Precision | Instrument Recall | Instrument F1 | Instrument Number | Other Precision | Other Recall | Other F1 | Other Number | Predicate Precision | Predicate Recall | Predicate F1 | Predicate Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:---------------:|:------------:|:--------:|:------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2552 | 1.0 | 56 | 0.3471 | 0.8841 | 0.6703 | 0.7625 | 91 | 0.8421 | 0.8312 | 0.8366 | 77 | 0.0 | 0.0 | 0.0 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9259 | 0.9804 | 0.9524 | 102 | 0.8893 | 0.8212 | 0.8539 | 0.9203 |
| 0.2385 | 2.0 | 112 | 0.1608 | 0.9103 | 0.7802 | 0.8402 | 91 | 0.9375 | 0.9740 | 0.9554 | 77 | 0.2857 | 0.6667 | 0.4 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9519 | 0.9706 | 0.9612 | 102 | 0.9182 | 0.9015 | 0.9098 | 0.9554 |
| 0.0367 | 3.0 | 168 | 0.1311 | 0.8902 | 0.8022 | 0.8439 | 91 | 0.9375 | 0.9740 | 0.9554 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9709 | 0.9804 | 0.9756 | 102 | 0.9228 | 0.9161 | 0.9194 | 0.9673 |
| 0.0494 | 4.0 | 224 | 0.1507 | 0.7812 | 0.8242 | 0.8021 | 91 | 0.9241 | 0.9481 | 0.9359 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9524 | 0.9804 | 0.9662 | 102 | 0.8746 | 0.9161 | 0.8948 | 0.9637 |
| 0.0699 | 5.0 | 280 | 0.1830 | 0.8276 | 0.7912 | 0.8090 | 91 | 0.8941 | 0.9870 | 0.9383 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.875 | 0.9197 | 0.8968 | 0.9560 |
| 0.0352 | 6.0 | 336 | 0.1994 | 0.7857 | 0.8462 | 0.8148 | 91 | 0.9048 | 0.9870 | 0.9441 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9266 | 0.9902 | 0.9573 | 102 | 0.8595 | 0.9380 | 0.8970 | 0.9572 |
| 0.0186 | 7.0 | 392 | 0.1657 | 0.8652 | 0.8462 | 0.8556 | 91 | 0.9146 | 0.9740 | 0.9434 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8920 | 0.9343 | 0.9127 | 0.9673 |
| 0.0052 | 8.0 | 448 | 0.1716 | 0.8556 | 0.8462 | 0.8508 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8920 | 0.9343 | 0.9127 | 0.9673 |
| 0.0094 | 9.0 | 504 | 0.1715 | 0.8444 | 0.8352 | 0.8398 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8916 | 0.9307 | 0.9107 | 0.9667 |
| 0.0078 | 10.0 | 560 | 0.1723 | 0.8539 | 0.8352 | 0.8444 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8916 | 0.9307 | 0.9107 | 0.9667 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "rubert-base-srl-seqlabeling", "results": []}]}
|
Rexhaif/rubert-base-srl-seqlabeling
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #has_space #region-us
|
rubert-base-srl-seqlabeling
===========================
This model is a fine-tuned version of ./ruBert-base/ on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1723
* Causator Precision: 0.8539
* Causator Recall: 0.8352
* Causator F1: 0.8444
* Causator Number: 91
* Expiriencer Precision: 0.9259
* Expiriencer Recall: 0.9740
* Expiriencer F1: 0.9494
* Expiriencer Number: 77
* Instrument Precision: 0.375
* Instrument Recall: 1.0
* Instrument F1: 0.5455
* Instrument Number: 3
* Other Precision: 0.0
* Other Recall: 0.0
* Other F1: 0.0
* Other Number: 1
* Predicate Precision: 0.9352
* Predicate Recall: 0.9902
* Predicate F1: 0.9619
* Predicate Number: 102
* Overall Precision: 0.8916
* Overall Recall: 0.9307
* Overall F1: 0.9107
* Overall Accuracy: 0.9667
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.06
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-srl
This model is a fine-tuned version of a local `ruBert-base` checkpoint (`./ruBert-base/`) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2429
- F1: 0.9563
## Model description
More information needed
## Intended uses & limitations
More information needed
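An illustrative inference sketch (not from the original card); the input sentence is a placeholder and the label names depend on the model's config:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Rexhaif/rubert-base-srl")
print(clf("Мальчик испугался большой собаки."))  # placeholder Russian sentence
```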
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5816 | 1.0 | 57 | 0.3865 | 0.8371 |
| 0.3685 | 2.0 | 114 | 0.1707 | 0.9325 |
| 0.1057 | 3.0 | 171 | 0.0972 | 0.9563 |
| 0.0964 | 4.0 | 228 | 0.1429 | 0.9775 |
| 0.1789 | 5.0 | 285 | 0.2493 | 0.9457 |
| 0.0016 | 6.0 | 342 | 0.1900 | 0.6349 |
| 0.0013 | 7.0 | 399 | 0.2060 | 0.9563 |
| 0.0008 | 8.0 | 456 | 0.2321 | 0.9563 |
| 0.0006 | 9.0 | 513 | 0.2412 | 0.9563 |
| 0.0006 | 10.0 | 570 | 0.2429 | 0.9563 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "model-index": [{"name": "rubert-base-srl", "results": []}]}
|
Rexhaif/rubert-base-srl
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #safetensors #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
rubert-base-srl
===============
This model is a fine-tuned version of ./ruBert-base/ on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2429
* F1: 0.9563
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.06
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4382
- Accuracy: 0.8676
- F1: 0.9085
## Model description
More information needed
## Intended uses & limitations
More information needed
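As an illustration (not from the original card), MRPC-style paraphrase scoring on a made-up sentence pair; the label order is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Riad/finetuned-bert-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Riad/finetuned-bert-mrpc")

inputs = tokenizer(
    "The company was founded in 1998.",
    "It was established in 1998.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # assumed label order: [not_equivalent, equivalent]
```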
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5454 | 1.0 | 230 | 0.4396 | 0.8309 | 0.8871 |
| 0.3387 | 2.0 | 460 | 0.3783 | 0.8529 | 0.8976 |
| 0.1956 | 3.0 | 690 | 0.4382 | 0.8676 | 0.9085 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned-bert-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8676470588235294, "name": "Accuracy"}, {"type": "f1", "value": 0.9084745762711864, "name": "F1"}]}]}]}
|
Riad/finetuned-bert-mrpc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
finetuned-bert-mrpc
===================
This model is a fine-tuned version of bert-base-cased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4382
* Accuracy: 0.8676
* F1: 0.9085
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.10.0
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
[Github](https://github.com/rifkybujana/IndoBERT-QA)
This project is part of my research with my friend Muhammad Fajrin Buyang Daffa entitled "Teman Belajar : Asisten Digital Pelajar SMA Negeri 28 Jakarta dalam Membaca" for KOPSI (Kompetisi Penelitian Siswa Indonesia/Indonesian Student Research Competition).
## indoBERT Base-Uncased fine-tuned on Translated Squad v2.0
[IndoBERT](https://huggingface.co/indolem/indobert-base-uncased) trained by [IndoLEM](https://indolem.github.io/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesian_datasets/tree/master/question-answering/squad) for **Q&A** downstream task.
**Model Size** (after training): 420mb
## Details of indoBERT (from their documentation)
[IndoBERT](https://huggingface.co/indolem/indobert-base-uncased) is the Indonesian version of BERT model. We train the model using over 220M words, aggregated from three main sources:
- Indonesian Wikipedia (74M words)
- news articles from Kompas, Tempo (Tala et al., 2003), and Liputan6 (55M words in total)
- an Indonesian Web Corpus (Medved and Suchomel, 2017) (90M words).
We trained the model for 2.4M steps (180 epochs) with the final perplexity over the development set being 3.97 (similar to English BERT-base).
This IndoBERT was used to examine IndoLEM - an Indonesian benchmark that comprises seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse.[[1]](#1)
## Details of the downstream task (Q&A) - Dataset
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k |
| SQuAD2.0 | eval | 12.3k |
## Model Training
The model was trained on a Tesla T4 GPU and 12GB of RAM.
## Results:
| Metric | # Value |
| ------ | --------- |
| **EM** | **51.61** |
| **F1** | **69.09** |
## Simple Usage
```py
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="Rifky/Indobert-QA",
tokenizer="Rifky/Indobert-QA"
)
qa_pipeline({
'context': """Pangeran Harya Dipanegara (atau biasa dikenal dengan nama Pangeran Diponegoro, lahir di Ngayogyakarta Hadiningrat, 11 November 1785 – meninggal di Makassar, Hindia Belanda, 8 Januari 1855 pada umur 69 tahun) adalah salah seorang pahlawan nasional Republik Indonesia, yang memimpin Perang Diponegoro atau Perang Jawa selama periode tahun 1825 hingga 1830 melawan pemerintah Hindia Belanda. Sejarah mencatat, Perang Diponegoro atau Perang Jawa dikenal sebagai perang yang menelan korban terbanyak dalam sejarah Indonesia, yakni 8.000 korban serdadu Hindia Belanda, 7.000 pribumi, dan 200 ribu orang Jawa serta kerugian materi 25 juta Gulden.""",
'question': "kapan pangeran diponegoro lahir?"
})
```
*output:*
```py
{
'answer': '11 November 1785',
'end': 131,
'score': 0.9272009134292603,
'start': 115
}
```
### Reference
<a id="1">[1]</a>Fajri Koto and Afshin Rahimi and Jey Han Lau and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP. Proceedings of the 28th COLING.
|
{"language": "id", "license": "apache-2.0", "tags": ["indobert", "indolem"], "datasets": ["220M words (IndoWiki, IndoWC, News)", "Squad 2.0 (Indonesian translated)"], "widget": [{"text": "kapan pangeran diponegoro lahir?", "context": "Pangeran Harya Dipanegara (atau biasa dikenal dengan nama Pangeran Diponegoro, lahir di Ngayogyakarta Hadiningrat, 11 November 1785 \u2013 meninggal di Makassar, Hindia Belanda, 8 Januari 1855 pada umur 69 tahun) adalah salah seorang pahlawan nasional Republik Indonesia, yang memimpin Perang Diponegoro atau Perang Jawa selama periode tahun 1825 hingga 1830 melawan pemerintah Hindia Belanda. Sejarah mencatat, Perang Diponegoro atau Perang Jawa dikenal sebagai perang yang menelan korban terbanyak dalam sejarah Indonesia, yakni 8.000 korban serdadu Hindia Belanda, 7.000 pribumi, dan 200 ribu orang Jawa serta kerugian materi 25 juta Gulden."}]}
|
Rifky/Indobert-QA
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"indobert",
"indolem",
"id",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #safetensors #bert #question-answering #indobert #indolem #id #license-apache-2.0 #endpoints_compatible #region-us
|
Github
This project is part of my research with my friend Muhammad Fajrin Buyang Daffa entitled "Teman Belajar : Asisten Digital Pelajar SMA Negeri 28 Jakarta dalam Membaca" for KOPSI (Kompetisi Penelitian Siswa Indonesia/Indonesian Student Research Competition).
indoBERT Base-Uncased fine-tuned on Translated Squad v2.0
---------------------------------------------------------
IndoBERT trained by IndoLEM and fine-tuned on Translated SQuAD 2.0 for Q&A downstream task.
Model Size (after training): 420mb
Details of indoBERT (from their documentation)
----------------------------------------------
IndoBERT is the Indonesian version of BERT model. We train the model using over 220M words, aggregated from three main sources:
* Indonesian Wikipedia (74M words)
* news articles from Kompas, Tempo (Tala et al., 2003), and Liputan6 (55M words in total)
* an Indonesian Web Corpus (Medved and Suchomel, 2017) (90M words).
We trained the model for 2.4M steps (180 epochs) with the final perplexity over the development set being 3.97 (similar to English BERT-base).
This IndoBERT was used to examine IndoLEM - an Indonesian benchmark that comprises seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse.[[1]](#1)
Details of the downstream task (Q&A) - Dataset
----------------------------------------------
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
Dataset: SQuAD2.0, Split: train, # samples: 130k
Dataset: SQuAD2.0, Split: eval, # samples: 12.3k
Model Training
--------------
The model was trained on a Tesla T4 GPU and 12GB of RAM.
Results:
--------
Simple Usage
------------
*output:*
### Reference
[1]Fajri Koto and Afshin Rahimi and Jey Han Lau and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP. Proceedings of the 28th COLING.
|
[
"# samples: 130k\nDataset: SQuAD2.0, Split: eval, # samples: 12.3k\n\n\nModel Training\n--------------\n\n\nThe model was trained on a Tesla T4 GPU and 12GB of RAM.\n\n\nResults:\n--------\n\n\n\nSimple Usage\n------------\n\n\n*output:*",
"### Reference\n\n\n[1]Fajri Koto and Afshin Rahimi and Jey Han Lau and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP. Proceedings of the 28th COLING."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #question-answering #indobert #indolem #id #license-apache-2.0 #endpoints_compatible #region-us \n",
"# samples: 130k\nDataset: SQuAD2.0, Split: eval, # samples: 12.3k\n\n\nModel Training\n--------------\n\n\nThe model was trained on a Tesla T4 GPU and 12GB of RAM.\n\n\nResults:\n--------\n\n\n\nSimple Usage\n------------\n\n\n*output:*",
"### Reference\n\n\n[1]Fajri Koto and Afshin Rahimi and Jey Han Lau and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP. Proceedings of the 28th COLING."
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
RifsxD/DialoGPT-medium-raifu
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
object-detection
| null |
<div align="left">
## You Only Look Once for Panoptic Driving Perception
> [**You Only Look at Once for Panoptic driving Perception**](https://arxiv.org/abs/2108.11250)
>
> by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/) [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm)
>
> *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))*
---
### The Illustration of YOLOP

### Contributions
* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the `BDD100K `dataset.
* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.
### Results
#### Traffic Object Detection Result
| Model | Recall(%) | mAP50(%) | Speed(fps) |
| -------------- | --------- | -------- | ---------- |
| `Multinet` | 81.3 | 60.2 | 8.6 |
| `DLT-Net` | 89.4 | 68.4 | 9.3 |
| `Faster R-CNN` | 77.2 | 55.6 | 5.3 |
| `YOLOv5s` | 86.8 | 77.2 | 82 |
| `YOLOP(ours)` | 89.2 | 76.5 | 41 |
#### Drivable Area Segmentation Result
| Model | mIOU(%) | Speed(fps) |
| ------------- | ------- | ---------- |
| `Multinet` | 71.6 | 8.6 |
| `DLT-Net` | 71.3 | 9.3 |
| `PSPNet` | 89.6 | 11.1 |
| `YOLOP(ours)` | 91.5 | 41 |
#### Lane Detection Result:
| Model | mIOU(%) | IOU(%) |
| ------------- | ------- | ------ |
| `ENet` | 34.12 | 14.64 |
| `SCNN` | 35.79 | 15.84 |
| `ENet-SAD` | 36.56 | 16.02 |
| `YOLOP(ours)` | 70.50 | 26.20 |
#### Ablation Studies 1: End-to-end v.s. Step-by-step:
| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) |
| --------------- | --------- | ----- | ------- | ----------- | ------ |
| `ES-W` | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 |
| `ED-W` | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 |
| `ES-D-W` | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 |
| `ED-S-W` | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 |
| `End-to-end` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 |
#### Ablation Studies 2: Multi-task v.s. Single task:
| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) |
| --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- |
| `Det(only)` | 88.2 | 76.9 | - | - | - | 15.7 |
| `Da-Seg(only)` | - | - | 92.0 | - | - | 14.8 |
| `Ll-Seg(only)` | - | - | - | 79.6 | 27.9 | 14.8 |
| `Multitask` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 |
**Notes**:
- The works we have used for reference include `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf?utm_campaign=affiliate-ir-Optimise%20media%28%20South%20East%20Asia%29%20Pte.%20ltd._156_-99_national_R_all_ACQ_cpa_en&utm_content=&utm_source=%20388939),[code](https://github.com/MarvinTeichmann/MultiNet)),`DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)),`Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf),[code](https://github.com/ShaoqingRen/faster_rcnn)),`YOLOv5s`([code](https://github.com/ultralytics/yolov5)) ,`PSPNet`([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf),[code](https://github.com/hszhao/PSPNet)) ,`ENet`([paper](https://arxiv.org/pdf/1606.02147.pdf),[code](https://github.com/osmr/imgclsmob)) `SCNN`([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322),[code](https://github.com/XingangPan/SCNN)) `SAD-ENet`([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf),[code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful works.
- In table 4, E, D, S and W refer to Encoder, Detect head, two Segment heads and whole network. So the Algorithm (First, we only train Encoder and Detect head. Then we freeze the Encoder and Detect head as well as train two Segmentation heads. Finally, the entire network is trained jointly for all three tasks.) can be marked as ED-S-W, and the same for others.
---
### Visualization
#### Traffic Object Detection Result

#### Drivable Area Segmentation Result

#### Lane Detection Result

**Notes**:
- The visualization of lane detection result has been post processed by quadratic fitting.
---
### Project Structure
```python
├─inference
│ ├─images # inference images
│ ├─output # inference result
├─lib
│ ├─config/default # configuration of training and validation
│ ├─core
│ │ ├─activations.py # activation function
│ │ ├─evaluate.py # calculation of metric
│ │ ├─function.py # training and validation of model
│ │ ├─general.py #calculation of metric、nms、conversion of data-format、visualization
│ │ ├─loss.py # loss function
│ │ ├─postprocess.py # postprocess(refine da-seg and ll-seg, unrelated to paper)
│ ├─dataset
│ │ ├─AutoDriveDataset.py # Superclass dataset,general function
│ │ ├─bdd.py # Subclass dataset,specific function
│ │ ├─hust.py # Subclass dataset(Campus scene, unrelated to paper)
│ │ ├─convect.py
│ │ ├─DemoDataset.py # demo dataset(image, video and stream)
│ ├─models
│ │ ├─YOLOP.py # Setup and Configuration of model
│ │ ├─light.py # Model lightweight(unrelated to paper, zwt)
│ │ ├─commom.py # calculation module
│ ├─utils
│ │ ├─augmentations.py # data augumentation
│ │ ├─autoanchor.py # auto anchor(k-means)
│ │ ├─split_dataset.py # (Campus scene, unrelated to paper)
│ │ ├─utils.py # logging、device_select、time_measure、optimizer_select、model_save&initialize 、Distributed training
│ ├─run
│ │ ├─dataset/training time # Visualization, logging and model_save
├─tools
│ │ ├─demo.py # demo(folder、camera)
│ │ ├─test.py
│ │ ├─train.py
├─toolkits
│ │ ├─depoly # Deployment of model
├─weights # Pretraining model
```
---
### Requirement
This codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+:
```
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
```
See `requirements.txt` for additional dependencies and version requirements.
```setup
pip install -r requirements.txt
```
### Data preparation
#### Download
- Download the images from [images](https://bdd-data.berkeley.edu/).
- Download the annotations of detection from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing).
- Download the annotations of drivable area segmentation from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing).
- Download the annotations of lane line segmentation from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing).
We recommend the dataset directory structure to be the following:
```
# The id represent the correspondence relation
├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ │ ├─val
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val
```
Update your dataset path in `./lib/config/default.py`.
### Training
You can set the training configuration in the `./lib/config/default.py`. (Including: the loading of preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, batch_size).
If you want to try alternating optimization or to train the model for a single task, set the corresponding configuration in `./lib/config/default.py` to `True`. (In the following, all configurations are `False`, which means the multiple tasks are trained jointly end to end).
```python
# Alternating optimization
_C.TRAIN.SEG_ONLY = False # Only train two segmentation branchs
_C.TRAIN.DET_ONLY = False # Only train detection branch
_C.TRAIN.ENC_SEG_ONLY = False # Only train encoder and two segmentation branchs
_C.TRAIN.ENC_DET_ONLY = False # Only train encoder and detection branch
# Single task
_C.TRAIN.DRIVABLE_ONLY = False # Only train da_segmentation task
_C.TRAIN.LANE_ONLY = False # Only train ll_segmentation task
_C.TRAIN.DET_ONLY = False # Only train detection task
```
Start training:
```shell
python tools/train.py
```
### Evaluation
You can set the evaluation configuration in the `./lib/config/default.py`. (Including: batch_size and threshold value for nms).
Start evaluating:
```shell
python tools/test.py --weights weights/End-to-end.pth
```
### Demo Test
We provide two testing methods.
#### Folder
You can store images or a video in `--source`, and the inference results will then be saved to `--save-dir`
```shell
python tools/demo --source inference/images
```
#### Camera
If a camera is connected to your computer, you can set the `source` as the camera number (the default is 0).
```shell
python tools/demo --source 0
```
### Deployment
Our model can run inference in real time on a `Jetson TX2`, with a `Zed Camera` to capture images. We use the `TensorRT` tool for speed-up. We provide code for deployment and inference of the model in `./toolkits/deploy`.
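For quick experimentation, the upstream repository also exposes the model through PyTorch Hub; a minimal sketch follows (the hub entry-point name and the 640x640 input size are assumptions taken from the upstream repo):

```python
import torch

# Load YOLOP via PyTorch Hub (entry-point name assumed from the upstream repository).
model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)
model.eval()

img = torch.randn(1, 3, 640, 640)  # dummy input; replace with a preprocessed image tensor
det_out, da_seg_out, ll_seg_out = model(img)  # detection, drivable-area and lane-line outputs
```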
## Citation
If you find our paper and code useful for your research, please consider giving a star and citation:
```BibTeX
@misc{2108.11250,
Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang},
Title = {YOLOP: You Only Look Once for Panoptic Driving Perception},
Year = {2021},
Eprint = {arXiv:2108.11250},
}
```
|
{"tags": ["object-detection"]}
|
Riser/YOLOP
| null |
[
"object-detection",
"arxiv:2108.11250",
"arxiv:1612.07695",
"arxiv:1606.02147",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"2108.11250",
"1612.07695",
"1606.02147"
] |
[] |
TAGS
#object-detection #arxiv-2108.11250 #arxiv-1612.07695 #arxiv-1606.02147 #region-us
|
You Only Look Once for Panoptic Driving Perception
----------------------------------------------------
>
> You Only Look at Once for Panoptic driving Perception
>
>
> by Dong Wu, Manwen Liao, Weitian Zhang, Xinggang Wang *School of EIC, HUST*
>
>
> *arXiv technical report (arXiv 2108.11250)*
>
>
>
---
### The Illustration of YOLOP
!yolop
### Contributions
* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the 'BDD100K 'dataset.
* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.
### Results
#### Traffic Object Detection Result
#### Drivable Area Segmentation Result
Model: 'Multinet', mIOU(%): 71.6, Speed(fps): 8.6
Model: 'DLT-Net', mIOU(%): 71.3, Speed(fps): 9.3
Model: 'PSPNet', mIOU(%): 89.6, Speed(fps): 11.1
Model: 'YOLOP(ours)', mIOU(%): 91.5, Speed(fps): 41
#### Lane Detection Result:
Model: 'ENet', mIOU(%): 34.12, IOU(%): 14.64
Model: 'SCNN', mIOU(%): 35.79, IOU(%): 15.84
Model: 'ENet-SAD', mIOU(%): 36.56, IOU(%): 16.02
Model: 'YOLOP(ours)', mIOU(%): 70.50, IOU(%): 26.20
#### Ablation Studies 1: End-to-end v.s. Step-by-step:
#### Ablation Studies 2: Multi-task v.s. Single task:
Notes:
* The works we have used for reference include 'Multinet' (paper,code),'DLT-Net' (paper),'Faster R-CNN' (paper,code),'YOLOv5s'(code) ,'PSPNet'(paper,code) ,'ENet'(paper,code) 'SCNN'(paper,code) 'SAD-ENet'(paper,code). Thanks for their wonderful works.
* In table 4, E, D, S and W refer to Encoder, Detect head, two Segment heads and whole network. So the Algorithm (First, we only train Encoder and Detect head. Then we freeze the Encoder and Detect head as well as train two Segmentation heads. Finally, the entire network is trained jointly for all three tasks.) can be marked as ED-S-W, and the same for others.
---
### Visualization
#### Traffic Object Detection Result
!detect result
#### Drivable Area Segmentation Result

#### Lane Detection Result

Notes:
* The visualization of the lane detection results has been post-processed by quadratic fitting; a minimal fitting sketch is given below.
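A minimal sketch of this kind of quadratic post-processing, assuming the lane mask has already been reduced to per-lane pixel coordinates (function and variable names are illustrative only):

```python
import numpy as np

def smooth_lane(ys: np.ndarray, xs: np.ndarray, num_points: int = 50) -> np.ndarray:
    """Fit x = a*y^2 + b*y + c through lane-mask pixels and resample a smooth polyline."""
    a, b, c = np.polyfit(ys, xs, deg=2)                  # quadratic fit in image coordinates
    y_new = np.linspace(ys.min(), ys.max(), num_points)  # evenly spaced rows along the lane
    x_new = a * y_new ** 2 + b * y_new + c
    return np.stack([x_new, y_new], axis=1)              # (num_points, 2) points for drawing
```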
---
### Project Structure
---
### Requirement
This codebase has been developed with Python 3.7, PyTorch 1.7+ and torchvision 0.8+.

See 'URL' for additional dependencies and version requirements.
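A hedged installation sketch matching those constraints (the environment name and the requirements file name are assumptions):

```shell
conda create -n yolop python=3.7        # or any Python 3.7 environment
pip install "torch>=1.7" "torchvision>=0.8"
pip install -r requirements.txt          # assumed file name; see the linked requirements
```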
### Data preparation
#### Download
* Download the images from images.
* Download the annotations of detection from det\_annotations.
* Download the annotations of drivable area segmentation from da\_seg\_annotations.
* Download the annotations of lane line segmentation from ll\_seg\_annotations.
We recommend the dataset directory structure to be the following:
Update your dataset path in './lib/config/URL'.
### Training
You can set the training configuration in './lib/config/URL' (including the loading of a preliminary model, the loss, data augmentation, the optimizer, warm-up and cosine annealing, auto-anchor, training epochs and batch\_size).

If you want to try alternating optimization, or to train the model for a single task, set the corresponding configuration in './lib/config/URL' to 'True'. (By default all of these configurations are 'False', which means the multiple tasks are trained end to end.)
Start training:
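The training command itself is not reproduced in this flattened card; by analogy with the evaluation command shown earlier, it is presumably of the form (script name assumed):

```shell
python tools/train.py
```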
### Evaluation
You can set the evaluation configuration in './lib/config/URL' (including batch\_size and the threshold value for NMS).
Start evaluating:
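The corresponding command, as given earlier in this card:

```shell
python tools/test.py --weights weights/End-to-end.pth
```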
### Demo Test
We provide two testing methods.
#### Folder
You can store images or videos in the folder passed to '--source'; the inference results are then saved to '--save-dir'.
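For example, as given earlier in this card:

```shell
python tools/demo --source inference/images
```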
#### Camera
If a camera is connected to your computer, you can set '--source' to the camera number (the default is 0).
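For example, as given earlier in this card:

```shell
python tools/demo --source 0
```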
### Deployment
Our model runs inference in real time on a 'Jetson TX2', with a 'Zed Camera' capturing images. We use 'TensorRT' to speed up inference. Code for deploying and running the model is provided in './toolkits/deploy'.
If you find our paper and code useful for your research, please consider giving a star and a citation (the BibTeX entry is given above).
|
[
"### The Illustration of YOLOP\n\n\n!yolop",
"### Contributions\n\n\n* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the 'BDD100K 'dataset.\n* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.",
"### Results",
"#### Traffic Object Detection Result",
"#### Drivable Area Segmentation Result\n\n\nModel: 'Multinet', mIOU(%): 71.6, Speed(fps): 8.6\nModel: 'DLT-Net', mIOU(%): 71.3, Speed(fps): 9.3\nModel: 'PSPNet', mIOU(%): 89.6, Speed(fps): 11.1\nModel: 'YOLOP(ours)', mIOU(%): 91.5, Speed(fps): 41",
"#### Lane Detection Result:\n\n\nModel: 'ENet', mIOU(%): 34.12, IOU(%): 14.64\nModel: 'SCNN', mIOU(%): 35.79, IOU(%): 15.84\nModel: 'ENet-SAD', mIOU(%): 36.56, IOU(%): 16.02\nModel: 'YOLOP(ours)', mIOU(%): 70.50, IOU(%): 26.20",
"#### Ablation Studies 1: End-to-end v.s. Step-by-step:",
"#### Ablation Studies 2: Multi-task v.s. Single task:\n\n\n\nNotes:\n\n\n* The works we has use for reference including 'Multinet' (paper,code),'DLT-Net' (paper),'Faster R-CNN' (paper,code),'YOLOv5s'(code) ,'PSPNet'(paper,code) ,'ENet'(paper,code) 'SCNN'(paper,code) 'SAD-ENet'(paper,code). Thanks for their wonderful works.\n* In table 4, E, D, S and W refer to Encoder, Detect head, two Segment heads and whole network. So the Algorithm (First, we only train Encoder and Detect head. Then we freeze the Encoder and Detect head as well as train two Segmentation heads. Finally, the entire network is trained jointly for all three tasks.) can be marked as ED-S-W, and the same for others.\n\n\n\n\n---",
"### Visualization",
"#### Traffic Object Detection Result\n\n\n!detect result",
"#### Drivable Area Segmentation Result\n\n\n",
"#### Lane Detection Result\n\n\n\n\n\nNotes:\n\n\n* The visualization of lane detection result has been post processed by quadratic fitting.\n\n\n\n\n---",
"### Project Structure\n\n\n\n\n---",
"### Requirement\n\n\nThis codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+:\n\n\nSee 'URL' for additional dependencies and version requirements.",
"### Data preparation",
"#### Download\n\n\n* Download the images from images.\n* Download the annotations of detection from det\\_annotations.\n* Download the annotations of drivable area segmentation from da\\_seg\\_annotations.\n* Download the annotations of lane line segmentation from ll\\_seg\\_annotations.\n\n\nWe recommend the dataset directory structure to be the following:\n\n\nUpdate the your dataset path in the './lib/config/URL'.",
"### Training\n\n\nYou can set the training configuration in the './lib/config/URL'. (Including: the loading of preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, batch\\_size).\n\n\nIf you want try alternating optimization or train model for single task, please modify the corresponding configuration in './lib/config/URL' to 'True'. (As following, all configurations is 'False', which means training multiple tasks end to end).\n\n\nStart training:",
"### Evaluation\n\n\nYou can set the evaluation configuration in the './lib/config/URL'. (Including: batch\\_size and threshold value for nms).\n\n\nStart evaluating:",
"### Demo Test\n\n\nWe provide two testing method.",
"#### Folder\n\n\nYou can store the image or video in '--source', and then save the reasoning result to '--save-dir'",
"#### Camera\n\n\nIf there are any camera connected to your computer, you can set the 'source' as the camera number(The default is 0).",
"### Deployment\n\n\nOur model can reason in real-time on 'Jetson Tx2', with 'Zed Camera' to capture image. We use 'TensorRT' tool for speeding up. We provide code for deployment and reasoning of model in './toolkits/deploy'.\n\n\nIf you find our paper and code useful for your research, please consider giving a star and citation:"
] |
[
"TAGS\n#object-detection #arxiv-2108.11250 #arxiv-1612.07695 #arxiv-1606.02147 #region-us \n",
"### The Illustration of YOLOP\n\n\n!yolop",
"### Contributions\n\n\n* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the 'BDD100K 'dataset.\n* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.",
"### Results",
"#### Traffic Object Detection Result",
"#### Drivable Area Segmentation Result\n\n\nModel: 'Multinet', mIOU(%): 71.6, Speed(fps): 8.6\nModel: 'DLT-Net', mIOU(%): 71.3, Speed(fps): 9.3\nModel: 'PSPNet', mIOU(%): 89.6, Speed(fps): 11.1\nModel: 'YOLOP(ours)', mIOU(%): 91.5, Speed(fps): 41",
"#### Lane Detection Result:\n\n\nModel: 'ENet', mIOU(%): 34.12, IOU(%): 14.64\nModel: 'SCNN', mIOU(%): 35.79, IOU(%): 15.84\nModel: 'ENet-SAD', mIOU(%): 36.56, IOU(%): 16.02\nModel: 'YOLOP(ours)', mIOU(%): 70.50, IOU(%): 26.20",
"#### Ablation Studies 1: End-to-end v.s. Step-by-step:",
"#### Ablation Studies 2: Multi-task v.s. Single task:\n\n\n\nNotes:\n\n\n* The works we has use for reference including 'Multinet' (paper,code),'DLT-Net' (paper),'Faster R-CNN' (paper,code),'YOLOv5s'(code) ,'PSPNet'(paper,code) ,'ENet'(paper,code) 'SCNN'(paper,code) 'SAD-ENet'(paper,code). Thanks for their wonderful works.\n* In table 4, E, D, S and W refer to Encoder, Detect head, two Segment heads and whole network. So the Algorithm (First, we only train Encoder and Detect head. Then we freeze the Encoder and Detect head as well as train two Segmentation heads. Finally, the entire network is trained jointly for all three tasks.) can be marked as ED-S-W, and the same for others.\n\n\n\n\n---",
"### Visualization",
"#### Traffic Object Detection Result\n\n\n!detect result",
"#### Drivable Area Segmentation Result\n\n\n",
"#### Lane Detection Result\n\n\n\n\n\nNotes:\n\n\n* The visualization of lane detection result has been post processed by quadratic fitting.\n\n\n\n\n---",
"### Project Structure\n\n\n\n\n---",
"### Requirement\n\n\nThis codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+:\n\n\nSee 'URL' for additional dependencies and version requirements.",
"### Data preparation",
"#### Download\n\n\n* Download the images from images.\n* Download the annotations of detection from det\\_annotations.\n* Download the annotations of drivable area segmentation from da\\_seg\\_annotations.\n* Download the annotations of lane line segmentation from ll\\_seg\\_annotations.\n\n\nWe recommend the dataset directory structure to be the following:\n\n\nUpdate the your dataset path in the './lib/config/URL'.",
"### Training\n\n\nYou can set the training configuration in the './lib/config/URL'. (Including: the loading of preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, batch\\_size).\n\n\nIf you want try alternating optimization or train model for single task, please modify the corresponding configuration in './lib/config/URL' to 'True'. (As following, all configurations is 'False', which means training multiple tasks end to end).\n\n\nStart training:",
"### Evaluation\n\n\nYou can set the evaluation configuration in the './lib/config/URL'. (Including: batch\\_size and threshold value for nms).\n\n\nStart evaluating:",
"### Demo Test\n\n\nWe provide two testing method.",
"#### Folder\n\n\nYou can store the image or video in '--source', and then save the reasoning result to '--save-dir'",
"#### Camera\n\n\nIf there are any camera connected to your computer, you can set the 'source' as the camera number(The default is 0).",
"### Deployment\n\n\nOur model can reason in real-time on 'Jetson Tx2', with 'Zed Camera' to capture image. We use 'TensorRT' tool for speeding up. We provide code for deployment and reasoning of model in './toolkits/deploy'.\n\n\nIf you find our paper and code useful for your research, please consider giving a star and citation:"
] |
text-generation
|
transformers
|
# Rick Morty DialogGPT Model
|
{"tags": ["conversational"]}
|
RishabhRawatt/DialoGPT-small-Rickmorty
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Morty DialogGPT Model
|
[
"# Rick Morty DialogGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Morty DialogGPT Model"
] |
text-generation
|
transformers
|
# Kela DialoGPT Model
|
{"tags": ["conversational"]}
|
RishabhRawatt/DialoGPT-small-kela
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Kela DialoGPT Model
|
[
"# Kela DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Kela DialoGPT Model"
] |
text-generation
|
transformers
|
# Rick and Morty DialoGPT Model
|
{"tags": ["conversational"]}
|
Ritchie/DialoGPT-small-Rickandmorty
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick and Morty DialoGPT Model
|
[
"# Rick and Morty DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick and Morty DialoGPT Model"
] |