modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string
---|---|---|---|---|---|---|---|---|---
alireza7/ARMAN-MSR-persian-base-parsinlu-textual-entailment | alireza7 | 2021-09-29T19:16:04Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
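This card ships without a usage example; below is a minimal, hedged sketch using the standard `transformers` text2text-generation pipeline (the Persian input format expected by this ParsiNLU fine-tune is an assumption, not documented here):
```python
from transformers import pipeline

# Hypothetical usage sketch: the model is tagged text2text-generation,
# so the generic pipeline should load it; the input format is assumed.
pipe = pipeline(
    "text2text-generation",
    model="alireza7/ARMAN-MSR-persian-base-parsinlu-textual-entailment",
)
print(pipe("premise and hypothesis text in Persian"))
```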
|
alireza7/ARMAN-MSR-persian-base-parsinlu-sentiment-movie | alireza7 | 2021-09-29T19:15:47Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
|
alireza7/ARMAN-MSR-persian-base-parsinlu-qqp | alireza7 | 2021-09-29T19:15:19Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
|
alireza7/ARMAN-MSR-persian-base-parsinlu-multiple-choice | alireza7 | 2021-09-29T19:15:05Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
|
alireza7/ARMAN-MSR-persian-base-PN-summary | alireza7 | 2021-09-29T19:14:47Z | 61 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
|
huggingartists/kishlak | huggingartists | 2021-09-29T17:46:52Z | 4 | 0 | transformers | ["transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/kishlak", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/kishlak
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c0c7e74ec794ad44eb0957d6afdd383d.815x815x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Кишлак (Kishlak)</div>
<a href="https://genius.com/artists/kishlak">
<div style="text-align: center; font-size: 14px;">@kishlak</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Кишлак (Kishlak).
The dataset is available [here](https://huggingface.co/datasets/huggingartists/kishlak) and can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/kishlak")
```
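A quick look at the loaded data (a sketch; the `train` split and `text` column are assumptions based on typical huggingartists datasets):
```python
# Inspect the first lyric in the (assumed) train split.
print(dataset["train"][0]["text"])
```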
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2654f8ic/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Кишлак (Kishlak)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/12gu37uv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/12gu37uv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/kishlak')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/kishlak")
model = AutoModelWithLMHead.from_pretrained("huggingartists/kishlak")
```
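As a hedged continuation of the snippet above, generation can also be driven directly through `generate` (the sampling settings here are illustrative assumptions, not the project's defaults):
```python
# Encode a prompt, sample a continuation, and decode it back to text.
inputs = tokenizer("I am", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```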
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/flower_dommy | huggingtweets | 2021-09-29T17:45:38Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/flower_dommy/1632937534684/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1414421050415329283/SnA_5soV_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">stable lacker</div>
<div style="text-align: center; font-size: 14px;">@flower_dommy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from stable lacker.
| Data | stable lacker |
| --- | --- |
| Tweets downloaded | 1549 |
| Retweets | 270 |
| Short tweets | 210 |
| Tweets kept | 1069 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/301dw1ni/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @flower_dommy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kf0leede) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kf0leede/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/flower_dommy')
generator("My dream is", num_return_sequences=5)
```
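The pipeline forwards standard sampling arguments to `generate`, so the output style can be tuned; the values below are illustrative assumptions, not the project's defaults:
```python
# Sampling parameters are passed through the pipeline to model.generate().
generator(
    "My dream is",
    num_return_sequences=3,
    max_length=60,
    do_sample=True,
    temperature=0.9,
    top_k=50,
)
```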
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingartists/mayot | huggingartists | 2021-09-29T17:40:26Z | 3 | 0 | transformers | ["transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/mayot", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/mayot
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/1d4b4adcdf1f58e1899ee5557375ef7c.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MAYOT</div>
<a href="https://genius.com/artists/mayot">
<div style="text-align: center; font-size: 14px;">@mayot</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from MAYOT.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/mayot) and can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/mayot")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/lf4wcx85/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on MAYOT's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1uulibm2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1uulibm2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/mayot')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/mayot")
model = AutoModelWithLMHead.from_pretrained("huggingartists/mayot")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
dweb/deberta-base-CoLA | dweb | 2021-09-29T17:37:10Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "deberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-base-CoLA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-CoLA
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1655
- Accuracy: 0.8482
- F1: 0.8961
- Roc Auc: 0.8987
- Mcc: 0.6288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
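As a rough sketch, these settings map onto the `transformers` `TrainingArguments` API as follows (model and dataset wiring omitted; this is a reconstruction, not the original training script):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="deberta-base-CoLA",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=10,
)
```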
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Roc Auc | Mcc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-------:|:------:|
| 0.5266 | 1.0 | 535 | 0.4138 | 0.8159 | 0.8698 | 0.8627 | 0.5576 |
| 0.3523 | 2.0 | 1070 | 0.3852 | 0.8387 | 0.8880 | 0.9041 | 0.6070 |
| 0.2479 | 3.0 | 1605 | 0.3981 | 0.8482 | 0.8901 | 0.9120 | 0.6447 |
| 0.1712 | 4.0 | 2140 | 0.4732 | 0.8558 | 0.9008 | 0.9160 | 0.6486 |
| 0.1354 | 5.0 | 2675 | 0.7181 | 0.8463 | 0.8938 | 0.9024 | 0.6250 |
| 0.0876 | 6.0 | 3210 | 0.8453 | 0.8520 | 0.8992 | 0.9123 | 0.6385 |
| 0.0682 | 7.0 | 3745 | 1.0282 | 0.8444 | 0.8938 | 0.9061 | 0.6189 |
| 0.0431 | 8.0 | 4280 | 1.1114 | 0.8463 | 0.8960 | 0.9010 | 0.6239 |
| 0.0323 | 9.0 | 4815 | 1.1663 | 0.8501 | 0.8970 | 0.8967 | 0.6340 |
| 0.0163 | 10.0 | 5350 | 1.1655 | 0.8482 | 0.8961 | 0.8987 | 0.6288 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingartists/platina | huggingartists | 2021-09-29T17:06:31Z | 4 | 0 | transformers | ["transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/platina", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/platina
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b12dc90e6f405684ef6b74c9de92fdcd.853x853x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Платина (Platina)</div>
<a href="https://genius.com/artists/platina">
<div style="text-align: center; font-size: 14px;">@platina</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Платина (Platina).
The dataset is available [here](https://huggingface.co/datasets/huggingartists/platina) and can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/platina")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2ih365j7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Платина (Platina)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1quasiz0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1quasiz0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/platina')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/platina")
model = AutoModelWithLMHead.from_pretrained("huggingartists/platina")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
gokulkarthik/distilbert-base-uncased-finetuned-squad | gokulkarthik | 2021-09-29T15:13:52Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
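The card omits a usage section; a minimal sketch with the standard question-answering pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="gokulkarthik/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What does extractive QA return?",
    context="Extractive question answering returns the answer span copied directly from the context.",
)
print(result["answer"], result["score"])
```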
|
Yanzhu/bertweetfr_ner | Yanzhu | 2021-09-29T14:46:25Z | 4 | 0 | transformers | ["transformers", "pytorch", "camembert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
French NER model for tweets. Fine-tuned on the CAP2017 dataset.
```python
label_list = ['O',
              'B-person', 'I-person',
              'B-musicartist', 'I-musicartist',
              'B-org', 'I-org',
              'B-geoloc', 'I-geoloc',
              'B-product', 'I-product',
              'B-transportLine', 'I-transportLine',
              'B-media', 'I-media',
              'B-sportsteam', 'I-sportsteam',
              'B-event', 'I-event',
              'B-tvshow', 'I-tvshow',
              'B-movie', 'I-movie',
              'B-facility', 'I-facility',
              'B-other', 'I-other']
```
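A minimal usage sketch with the standard token-classification pipeline (the example tweet is illustrative; `aggregation_strategy` assumes a reasonably recent transformers version):
```python
from transformers import pipeline

# Group B-/I- subword predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="Yanzhu/bertweetfr_ner",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron est à Paris."))
```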
|
suwani/distilbert-base-uncased-finetuned-ner | suwani | 2021-09-29T08:22:37Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2787
- Precision: 0.6403
- Recall: 0.6929
- F1: 0.6655
- Accuracy: 0.9100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3360 | 0.5596 | 0.5992 | 0.5788 | 0.8956 |
| 0.4686 | 2.0 | 576 | 0.2901 | 0.6061 | 0.7231 | 0.6594 | 0.9063 |
| 0.4686 | 3.0 | 864 | 0.2787 | 0.6403 | 0.6929 | 0.6655 | 0.9100 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/cyrusshepard-fastfwdco-lilyraynyc | huggingtweets | 2021-09-29T08:19:04Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/cyrusshepard-fastfwdco-lilyraynyc/1632903540115/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/713653445262237696/mdyVSGoj_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1241620963768201216/sG68m_iE_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308419103510626304/gUgr1gMo_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">fastfwd & Cyrus & Lily Ray 😏</div>
<div style="text-align: center; font-size: 14px;">@cyrusshepard-fastfwdco-lilyraynyc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from fastfwd & Cyrus & Lily Ray 😏.
| Data | fastfwd | Cyrus | Lily Ray 😏 |
| --- | --- | --- | --- |
| Tweets downloaded | 945 | 3248 | 3250 |
| Retweets | 60 | 343 | 89 |
| Short tweets | 5 | 729 | 310 |
| Tweets kept | 880 | 2176 | 2851 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k89f9gx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cyrusshepard-fastfwdco-lilyraynyc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3eq4v17k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3eq4v17k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cyrusshepard-fastfwdco-lilyraynyc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/imjackrudd | huggingtweets | 2021-09-28T23:31:37Z | 5 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/imjackrudd/1632871893609/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1289653820071522304/cdikNvkG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jack Rudd 🇹🇹 🏳️⚧️</div>
<div style="text-align: center; font-size: 14px;">@imjackrudd</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jack Rudd 🇹🇹 🏳️⚧️.
| Data | Jack Rudd 🇹🇹 🏳️⚧️ |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 55 |
| Short tweets | 327 |
| Tweets kept | 2864 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g5589wt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imjackrudd's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eyywpszu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eyywpszu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/imjackrudd')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dndomme | huggingtweets | 2021-09-28T23:14:56Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/dndomme/1632870893354/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1428106877926260736/xiq2bdMI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pirate Queen Grey</div>
<div style="text-align: center; font-size: 14px;">@dndomme</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pirate Queen Grey.
| Data | Pirate Queen Grey |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 1329 |
| Short tweets | 288 |
| Tweets kept | 1601 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ucgtv6r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dndomme's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sej7nbm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sej7nbm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dndomme')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/laura_the_loser | huggingtweets | 2021-09-28T22:31:52Z | 5 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/laura_the_loser/1632868308444/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1405044989013364744/OowZLyUZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Laura UwU</div>
<div style="text-align: center; font-size: 14px;">@laura_the_loser</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Laura UwU.
| Data | Laura UwU |
| --- | --- |
| Tweets downloaded | 126 |
| Retweets | 22 |
| Short tweets | 34 |
| Tweets kept | 70 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kpebddab/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @laura_the_loser's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jsq6074) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jsq6074/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/laura_the_loser')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rhtnr/ssgtrh | rhtnr | 2021-09-28T21:16:58Z | 0 | 0 | null | ["region:us"] | null | 2022-03-02T23:29:05Z |
|
lewtun/bert-base-uncased-finetuned-imdb | lewtun | 2021-09-28T20:45:38Z | 7 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: bert-base-uncased-finetuned-imdb
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: imdb
type: imdb
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2244 | 1.0 | 958 | 2.0726 |
| 2.1537 | 2.0 | 1916 | 2.0381 |
| 2.1183 | 3.0 | 2874 | 2.0284 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
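The card omits a usage example; a minimal fill-mask sketch (the prompt is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="lewtun/bert-base-uncased-finetuned-imdb")
# BERT-style models use the [MASK] token.
for pred in unmasker("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```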
|
huggingartists/dzhizus | huggingartists | 2021-09-28T19:43:19Z | 7 | 0 | transformers | ["transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/dzhizus", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/dzhizus
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a96a6042b4c0a4c0bdae647768c5e42b.668x668x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Джизус (Dzhizus)</div>
<a href="https://genius.com/artists/dzhizus">
<div style="text-align: center; font-size: 14px;">@dzhizus</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Джизус (Dzhizus).
The dataset is available [here](https://huggingface.co/datasets/huggingartists/dzhizus) and can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/dzhizus")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/35paacn1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Джизус (Dzhizus)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1ug3yebo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1ug3yebo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/dzhizus')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/dzhizus")
model = AutoModelWithLMHead.from_pretrained("huggingartists/dzhizus")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
lewtun/MiniLM-L12-H384-uncased-finetuned-imdb | lewtun | 2021-09-28T18:59:38Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: MiniLM-L12-H384-uncased-finetuned-imdb
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: imdb
type: imdb
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased-finetuned-imdb
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2464 | 1.0 | 391 | 4.2951 |
| 4.2302 | 2.0 | 782 | 4.0023 |
| 4.0726 | 3.0 | 1173 | 3.9328 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/plinz | huggingtweets | 2021-09-28T12:42:39Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/plinz/1632832956311/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/936396593762357248/f66CtXot_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joscha Bach</div>
<div style="text-align: center; font-size: 14px;">@plinz</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joscha Bach.
| Data | Joscha Bach |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 298 |
| Short tweets | 131 |
| Tweets kept | 2819 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zr1xovwx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @plinz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bpt8w0c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bpt8w0c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/plinz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Kceilord/autonlp-tc-13522454 | Kceilord | 2021-09-28T10:46:23Z | 3 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:Kceilord/autonlp-data-tc", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Kceilord/autonlp-data-tc
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 13522454
## Validation Metrics
- Loss: 0.31450966000556946
- Accuracy: 0.8461538461538461
- Precision: 0.8181818181818182
- Recall: 0.782608695652174
- AUC: 0.9369259032455604
- F1: 0.8
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Kceilord/autonlp-tc-13522454
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
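To turn the raw logits from the snippet above into a label prediction, a short hedged continuation (assuming PyTorch tensors and that label names live in the model config):
```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```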
|
tunib/electra-ko-en-base | tunib | 2021-09-28T07:50:21Z | 4,099 | 10 | transformers | ["transformers", "pytorch", "electra", "pretraining", "arxiv:2003.10555", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
# TUNiB-Electra
We release several new versions of the [ELECTRA](https://arxiv.org/abs/2003.10555) model, which we name TUNiB-Electra. There are two motivations. First, all the existing pre-trained Korean encoder models are monolingual; that is, they have knowledge of Korean only. Our bilingual models are based on balanced corpora of Korean and English. Second, we want new off-the-shelf models trained on much more text. To this end, we collected a large amount of Korean text from various sources such as blog posts, comments, news, and web novels, amounting to 100 GB in total.
## How to use
You can use this model directly with the [transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoModel, AutoTokenizer
# Base Model (Korean-English bilingual model)
tokenizer = AutoTokenizer.from_pretrained('tunib/electra-ko-en-base')
model = AutoModel.from_pretrained('tunib/electra-ko-en-base')
```
### Tokenizer example
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('tunib/electra-ko-en-base')
>>> tokenizer.tokenize("tunib is a natural language processing tech startup.")
['tun', '##ib', 'is', 'a', 'natural', 'language', 'processing', 'tech', 'startup', '.']
>>> tokenizer.tokenize("튜닙은 자연어처리 테크 스타트업입니다.")
['튜', '##닙', '##은', '자연', '##어', '##처리', '테크', '스타트업', '##입니다', '.']
```
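`AutoModel` loads the bare ELECTRA encoder, so the outputs are contextual hidden states rather than task logits; a short illustrative sketch (not from the original card):
```python
import torch

inputs = tokenizer("튜닙은 자연어처리 테크 스타트업입니다.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
print(hidden.shape)
```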
## Results on Korean downstream tasks
| |**# Params** |**Avg.**| **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) |**Korean-Hate-Speech (Dev)**<br/>(F1)|
| :----------------:| :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :---------------------------: | :---------------------------: | :----------------: |
|***TUNiB-Electra-ko-base*** | 110M | **85.99** | 90.95 | 87.63 | 84.65 | **82.27** | 85.00 | 95.77 | 64.01 / 90.32 |71.40 |
|***TUNiB-Electra-ko-en-base*** | 133M |85.34 |90.59 | 87.25 | **84.90** | 80.43 | 83.81 | 94.85 | 83.09 / 92.06 |68.83 |
| [KoELECTRA-base-v3](https://github.com/monologg/KoELECTRA) | 110M | 85.92 |90.63 | **88.11** | 84.45 | 82.24 | **85.53** | 95.25 | **84.83 / 93.45** | 67.61 |
| [KcELECTRA-base](https://github.com/Beomi/KcELECTRA) | 124M| 84.75 |**91.71** | 86.90 | 74.80 | 81.65 | 82.65 | **95.78** | 70.60 / 90.11 | **74.49** |
| [KoBERT-base](https://github.com/SKTBrain/KoBERT) | 90M | 84.17 | 89.63 | 86.11 | 80.65 | 79.00 | 79.64 | 93.93 | 52.81 / 80.27 | 66.21 |
| [KcBERT-base](https://github.com/Beomi/KcBERT) | 110M | 81.37 | 89.62 | 84.34 | 66.95 | 74.85 | 75.57 | 93.93 | 60.25 / 84.39 | 68.77 |
| [XLM-Roberta-base](https://github.com/pytorch/fairseq/tree/master/examples/xlmr) | 280M | 85.74 |89.49 | 86.26 | 82.95 | 79.92 | 79.09 | 93.53 | 64.70 / 88.94 | 64.06 |
## Results on English downstream tasks
| |**# Params** | **Avg.** |**CoLA**<br/>(MCC) | **SST**<br/>(Acc) |MRPC<br/>(Acc)| **STS**<br/>(Spearman) | **QQP**<br/>(Acc) | **MNLI**<br/>(Acc) | **QNLI**<br/>(Acc) | **RTE**<br/>(Acc) |
| :----------------:| :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :---------------------------: | :---------------------------: | :---------------------------: |
|***TUNiB-Electra-ko-en-base*** | 133M | 85.2| **65.36** | 92.09 | **88.97** | **90.61** | **90.91** | 85.32 | 91.51 |**76.53**|
|[ELECTRA-base](https://github.com/google-research/electra) | 110M | **85.7** | 64.6 | **96.0** | 88.1| 90.2 | 89.5 | **88.5** | **93.1** | 75.2 |
|[BERT-base](https://github.com/google-research/bert) | 110M | 80.8| 52.1 | 93.5 | 84.8| 85.8 | 89.2 | 84.6 | 90.5 | 66.4 |
|
tunib/electra-ko-base | tunib | 2021-09-28T07:48:06Z | 2 | 6 | transformers | ["transformers", "pytorch", "electra", "pretraining", "arxiv:2003.10555", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
# TUNiB-Electra
We release several new versions of the [ELECTRA](https://arxiv.org/abs/2003.10555) model, which we name TUNiB-Electra. There are two motivations. First, all the existing pre-trained Korean encoder models are monolingual; that is, they have knowledge of Korean only. Our bilingual models are trained on balanced corpora of Korean and English. Second, we want new off-the-shelf models trained on much more text. To this end, we collected a large amount of Korean text from various sources such as blog posts, comments, news, and web novels, which add up to 100 GB in total.
## How to use
You can use this model directly with the [transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoModel, AutoTokenizer
# Base Model (Korean-only model)
tokenizer = AutoTokenizer.from_pretrained('tunib/electra-ko-base')
model = AutoModel.from_pretrained('tunib/electra-ko-base')
```
### Tokenizer example
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('tunib/electra-ko-base')
>>> tokenizer.tokenize("tunib is a natural language processing tech startup.")
['tun', '##ib', 'is', 'a', 'natural', 'language', 'processing', 'tech', 'startup', '.']
>>> tokenizer.tokenize("튜닙은 자연어처리 테크 스타트업입니다.")
['튜', '##닙', '##은', '자연', '##어', '##처리', '테크', '스타트업', '##입니다', '.']
```
## Results on Korean downstream tasks
| |**# Params** |**Avg.**| **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) |**Korean-Hate-Speech (Dev)**<br/>(F1)|
| :----------------:| :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :---------------------------: | :---------------------------: | :----------------: |
|***TUNiB-Electra-ko-base*** | 110M | **85.99** | 90.95 | 87.63 | 84.65 | **82.27** | 85.00 | 95.77 | 64.01 / 90.32 |71.40 |
|***TUNiB-Electra-ko-en-base*** | 133M |85.34 |90.59 | 87.25 | **84.90** | 80.43 | 83.81 | 94.85 | 83.09 / 92.06 |68.83 |
| [KoELECTRA-base-v3](https://github.com/monologg/KoELECTRA) | 110M | 85.92 |90.63 | **88.11** | 84.45 | 82.24 | **85.53** | 95.25 | **84.83 / 93.45** | 67.61 |
| [KcELECTRA-base](https://github.com/Beomi/KcELECTRA) | 124M| 84.75 |**91.71** | 86.90 | 74.80 | 81.65 | 82.65 | **95.78** | 70.60 / 90.11 | **74.49** |
| [KoBERT-base](https://github.com/SKTBrain/KoBERT) | 90M | 84.17 | 89.63 | 86.11 | 80.65 | 79.00 | 79.64 | 93.93 | 52.81 / 80.27 | 66.21 |
| [KcBERT-base](https://github.com/Beomi/KcBERT) | 110M | 81.37 | 89.62 | 84.34 | 66.95 | 74.85 | 75.57 | 93.93 | 60.25 / 84.39 | 68.77 |
| [XLM-Roberta-base](https://github.com/pytorch/fairseq/tree/master/examples/xlmr) | 280M | 85.74 |89.49 | 86.26 | 82.95 | 79.92 | 79.09 | 93.53 | 64.70 / 88.94 | 64.06 |
|
huggingtweets/fredricksonra
|
huggingtweets
| 2021-09-28T02:27:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/fredricksonra/1632796041349/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421105879408066565/hBHx-Rvl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rica af, she/her 🗽🏳️🌈</div>
<div style="text-align: center; font-size: 14px;">@fredricksonra</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rica af, she/her 🗽🏳️🌈.
| Data | Rica af, she/her 🗽🏳️🌈 |
| --- | --- |
| Tweets downloaded | 3208 |
| Retweets | 2893 |
| Short tweets | 47 |
| Tweets kept | 268 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k0pcnmp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fredricksonra's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/123sil9f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/123sil9f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fredricksonra')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/elizgerber-galaxykate-ianhorswill
|
huggingtweets
| 2021-09-27T22:54:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/elizgerber-galaxykate-ianhorswill/1632783257334/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1371914197555105794/OKpRjt66_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1790733507/me-cc_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2828021100/bfce2ad653f8d49d2ebf984b620df18b_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dr Kate Compton, Code Wizard & Ian Horswill & Liz Gerber</div>
<div style="text-align: center; font-size: 14px;">@elizgerber-galaxykate-ianhorswill</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dr Kate Compton, Code Wizard & Ian Horswill & Liz Gerber.
| Data | Dr Kate Compton, Code Wizard | Ian Horswill | Liz Gerber |
| --- | --- | --- | --- |
| Tweets downloaded | 3242 | 179 | 1622 |
| Retweets | 607 | 35 | 545 |
| Short tweets | 214 | 6 | 34 |
| Tweets kept | 2421 | 138 | 1043 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dyol8xs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elizgerber-galaxykate-ianhorswill's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37pdtbyk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37pdtbyk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elizgerber-galaxykate-ianhorswill')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vesteinn/fasttext_is_rmh
|
vesteinn
| 2021-09-27T22:09:07Z | 0 | 0 | null |
[
"is",
"license:agpl-3.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: agpl-3.0
language:
- is
---
# FastText model trained on Icelandic
This model is trained on the lemmas of the Icelandic Gigaword Corpus, version 20.05. It is trained using the gensim package, version 4.1.0, with parameters set to the defaults (100 dimensions, window size 5).
This model cannot be loaded directly since it uses gensim. Clone the repository and run the following to use it.
```python
import gensim
model = gensim.models.FastText.load("./rmh.w2v.model")
```
## Example output
```bash
In [1]: model.wv.most_similar("england")
Out[1]:
[('englands', 0.8778558969497681),
('southland', 0.8573296070098877),
('skotland', 0.846065878868103),
('englaland', 0.8320872187614441),
('hoogland', 0.8299505114555359),
('hoagland', 0.8277317881584167),
('totland', 0.8265103697776794),
('lackland', 0.8234561681747437),
('skarpengland', 0.8227219581604004),
('langland', 0.8222305774688721)]
In [2]: model.wv.most_similar("kanína")
Out[2]:
[('loðkanína', 0.9271067976951599),
('dvergkanína', 0.9106121063232422),
('angórakanína', 0.895512044429779),
('angórukanína', 0.8741581439971924),
('feldkanína', 0.8696010708808899),
('kanínubangsi', 0.8562541604042053),
('holdakanína', 0.8543838858604431),
('villikanína', 0.8525990843772888),
('silkikanína', 0.8515204191207886),
('kaníni', 0.8445548415184021)]
```
|
patrickvonplaten/debug_repo
|
patrickvonplaten
| 2021-09-27T15:58:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
This repository is used to debug models, functionalities in transformers, etc...
# 1. Generation
...
# 2. Flax Wav2Vec2 Pretraining
Go into the `flax_wav2vec2` folder.
1. **Check PT loss works correctly**
`./run_pt_fsq_comp.sh` shows that HF PyTorch and Fairseq PT yield equivalent loss. Make sure to use the correct library versions as defined in `branches_to_use.txt`.
2. **Check Flax loss works correctly**
`./run_flax_fsq_comp.sh` shows that HF PyTorch and HF Flax yield equivalent loss. Make sure to use the correct library versions as defined in `branches_to_use.txt`.
|
colorfulscoop/gpt2-small-ja
|
colorfulscoop
| 2021-09-27T11:50:17Z | 89 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ja
datasets: wikipedia
widget:
- text: 統計的機械学習でのニューラルネットワーク
license: cc
---
# GPT-2 small Japanese model
This repository contains a GPT2-small model trained on Japanese Wikipedia dataset.
## Training data
[Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of Aug 20, 2021, released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used for both the tokenizer and the GPT-2 model.
We split the dataset into three subsets: train, valid, and test. Both the tokenizer and the model were trained on the train set.
The train set contains around 540M tokens.
## Model description
The model architecture is the same as the GPT-2 small model (n_ctx: 1024, n_embd: 768, n_head: 12, n_layer: 12) except for the vocabulary size.
The vocabulary size is set to 32,000 instead of the original 50,257.
`transformers.GPT2LMHeadModel` is used for training.
## Tokenizer description
[SentencePiece](https://github.com/google/sentencepiece) is used as a tokenizer for this model.
We utilized 1,000,000 sentences from the train set.
The vocabulary size was 32,000.
The `add_dummy_prefix` option was set to `True` because Japanese words are not separated by whitespace.
After training, the tokenizer model was imported as `transformers.BertGenerationTokenizer`
because it supports SentencePiece models and does not add any special tokens by default,
which is especially useful for a text generation task.
## Training
The model was trained on the train set for 30 epochs with batch size 32. Each sample contained 1024 tokens.
We used the Adam optimizer. The learning rate was linearly increased from `0` to `1e-4` during the first 10,000 steps.
The gradient clip norm was set to `1.0`.
Test set perplexity of the trained model was 29.13.
Please refer to [GitHub](https://github.com/colorfulscoop/gpt-ja) for more training details.
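The reported perplexity can be reproduced, in outline, with the standard causal-LM loss. The snippet below is a minimal sketch in which the short sample string stands in for the actual held-out test set.
```python
import math
import torch
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("colorfulscoop/gpt2-small-ja")
model = transformers.AutoModelForCausalLM.from_pretrained("colorfulscoop/gpt2-small-ja")

text = "統計的機械学習でのニューラルネットワーク"  # stand-in for the held-out test set
input_ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean token-level cross-entropy
    loss = model(input_ids, labels=input_ids).loss
print(math.exp(loss.item()))  # perplexity = exp(mean negative log-likelihood)
```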
## Usage
First, install dependencies.
```sh
$ pip install transformers==4.10.0 torch==1.8.1 sentencepiece==0.1.96
```
Then use pipeline to generate sentences.
```sh
>>> import transformers
>>> pipeline = transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja")
>>> pipeline("統計的機械学習でのニューラルネットワーク", do_sample=True, top_p=0.95, top_k=50, num_return_sequences=3)
```
**Note:** The default model configuration `config.json` sets parameters for text generation with `do_sample=True`, `top_k=50`, `top_p=0.95`.
Please override these values when you need different generation settings.
## Versions
We recommend specifying `revision` when loading the model, for reproducibility.
| Revision | Date of Wikipedia dump |
| --- | --- |
| 20210820.1.0 | Aug 20, 2021 |
| 20210301.1.0 | March 1, 2021 |
You can specify `revision` as follows.
```py
# Example of pipeline
>>> transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja", revision="20210820.1.0")
# Example of AutoModel
>>> transformers.AutoModel.from_pretrained("colorfulscoop/gpt2-small-ja", revision="20210820.1.0")
```
## License
All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
**Disclaimer:** The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
**Author:** Colorful Scoop
|
vppvgit/BiblItBERT-1
|
vppvgit
| 2021-09-27T09:40:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: BiblItBERT-1
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiblItBERT-1
This model is a fine-tuned version of [vppvgit/BiblItBERT](https://huggingface.co/vppvgit/BiblItBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7775
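Since the card does not yet include a usage example, here is a minimal fill-mask sketch; the Italian sentence is our own illustrative example, not from the training setup.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vppvgit/BiblItBERT-1")
# The mask token is read from the tokenizer itself; the sentence is illustrative
sentence = f"Roma è la {fill_mask.tokenizer.mask_token} d'Italia."
print(fill_mask(sentence))
```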
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.5764 | 1.0 | 16528 | 1.5214 |
| 1.4572 | 2.0 | 33056 | 1.4201 |
| 1.3787 | 3.0 | 49584 | 1.3728 |
| 1.3451 | 4.0 | 66112 | 1.3245 |
| 1.3066 | 5.0 | 82640 | 1.2614 |
| 1.2447 | 6.0 | 99168 | 1.2333 |
| 1.2172 | 7.0 | 115696 | 1.2149 |
| 1.2079 | 8.0 | 132224 | 1.1853 |
| 1.2167 | 9.0 | 148752 | 1.1586 |
| 1.2056 | 10.0 | 165280 | 1.1503 |
| 1.1307 | 11.0 | 181808 | 1.1224 |
| 1.1689 | 12.0 | 198336 | 1.1074 |
| 1.1007 | 13.0 | 214864 | 1.0924 |
| 1.0901 | 14.0 | 231392 | 1.0659 |
| 1.0667 | 15.0 | 247920 | 1.0650 |
| 1.0434 | 16.0 | 264448 | 1.0362 |
| 1.0333 | 17.0 | 280976 | 1.0250 |
| 1.0342 | 18.0 | 297504 | 1.0198 |
| 1.0059 | 19.0 | 314032 | 0.9950 |
| 0.9719 | 20.0 | 330560 | 0.9836 |
| 0.9863 | 21.0 | 347088 | 0.9873 |
| 0.9781 | 22.0 | 363616 | 0.9724 |
| 0.9369 | 23.0 | 380144 | 0.9599 |
| 0.9578 | 24.0 | 396672 | 0.9557 |
| 0.9253 | 25.0 | 413200 | 0.9400 |
| 0.9441 | 26.0 | 429728 | 0.9222 |
| 0.9138 | 27.0 | 446256 | 0.9140 |
| 0.882 | 28.0 | 462784 | 0.9045 |
| 0.864 | 29.0 | 479312 | 0.8880 |
| 0.8632 | 30.0 | 495840 | 0.9023 |
| 0.8342 | 32.0 | 528896 | 0.8740 |
| 0.8037 | 34.0 | 561952 | 0.8647 |
| 0.8119 | 37.0 | 611536 | 0.8358 |
| 0.8011 | 38.0 | 628064 | 0.8252 |
| 0.786 | 39.0 | 644592 | 0.8228 |
| 0.7697 | 41.0 | 677648 | 0.8138 |
| 0.7485 | 42.0 | 694176 | 0.8104 |
| 0.7689 | 43.0 | 710704 | 0.8018 |
| 0.7401 | 45.0 | 743760 | 0.7957 |
| 0.7031 | 47.0 | 776816 | 0.7726 |
| 0.7578 | 48.0 | 793344 | 0.7864 |
| 0.7298 | 49.0 | 809872 | 0.7775 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
nateraw/audio-test
|
nateraw
| 2021-09-27T03:45:48Z | 0 | 0 |
generic
|
[
"generic",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- audio-to-audio
library_name: generic
---
|
malaysia-ai/xlnet-large-bahasa-cased
|
malaysia-ai
| 2021-09-26T12:57:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlnet",
"feature-extraction",
"ms",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: ms
---
# xlnet-large-bahasa-cased
Pretrained XLNET large language model for Malay.
## Pretraining Corpus
The `xlnet-large-bahasa-cased` model was pretrained on ~1.4 billion words. Below is the list of data we trained on:
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can be reproduced from [Malaya/pretrained-model/xlnet](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/xlnet).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. You can then load it directly like this:
```python
from transformers import XLNetModel, XLNetTokenizer
model = XLNetModel.from_pretrained('malay-huggingface/xlnet-large-bahasa-cased')
tokenizer = XLNetTokenizer.from_pretrained(
'malay-huggingface/xlnet-large-bahasa-cased',
do_lower_case = False,
)
```
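Continuing from the loading snippet above, a short sketch for extracting contextual features; the Malay sentence is our own example.
```python
import torch

# Continuing from the loading snippet above: run a forward pass for features
inputs = tokenizer('Saya suka membaca buku.', return_tensors='pt')
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, sequence_length, 1024) for the large model
```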
|
huggingtweets/aly__dixon-haleyosomething-svpino
|
huggingtweets
| 2021-09-26T12:49:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/aly__dixon-haleyosomething-svpino/1632660543535/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1416541994952937474/yi5cJxnq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1368667185879584770/pKNxJut-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1393327649318076417/cQWDVv-q_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">haley o'shaughnessy & Santiago & Aly Dixon</div>
<div style="text-align: center; font-size: 14px;">@aly__dixon-haleyosomething-svpino</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from haley o'shaughnessy & Santiago & Aly Dixon.
| Data | haley o'shaughnessy | Santiago | Aly Dixon |
| --- | --- | --- | --- |
| Tweets downloaded | 3241 | 3250 | 3003 |
| Retweets | 430 | 7 | 426 |
| Short tweets | 460 | 316 | 195 |
| Tweets kept | 2351 | 2927 | 2382 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mt8xsda/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aly__dixon-haleyosomething-svpino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31g4nsgq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31g4nsgq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aly__dixon-haleyosomething-svpino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Davlan/mbart50-large-yor-eng-mt
|
Davlan
| 2021-09-26T12:40:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-yor-eng-mt
## Model description
**mbart50-large-yor-eng-mt** is a **machine translation** model from the Yorùbá language to English, based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is an *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). The model was trained using Swahili (sw_KE) as the language code, since the pre-trained model does not initially support Yorùbá; thus, you need to use sw_KE as the language code when evaluating the model.
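The sw_KE workaround is easiest to see in code. Below is a minimal sketch using the standard mBART-50 generation API; the Yorùbá example sentence is our own, not from the card.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("Davlan/mbart50-large-yor-eng-mt")
# The Yorùbá source is tagged as Swahili (sw_KE) because the checkpoint reuses that language code
tokenizer = MBart50TokenizerFast.from_pretrained("Davlan/mbart50-large-yor-eng-mt", src_lang="sw_KE")

inputs = tokenizer("Báwo ni o ṣe wà?", return_tensors="pt")  # example sentence (ours)
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```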
#### Limitations and bias
This model is limited by its training dataset and may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on an NVIDIA V100 GPU.
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **15.88 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 15.57.
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/mbart50-large-eng-yor-mt
|
Davlan
| 2021-09-26T11:57:50Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-eng-yor-mt
## Model description
**mbart50-large-eng-yor-mt** is a **machine translation** model from English to the Yorùbá language, based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is an *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). The model was trained using Swahili (sw_KE) as the language code, since the pre-trained model does not initially support Yorùbá; thus, you need to use sw_KE as the language code when evaluating the model.
#### Limitations and bias
This model is limited by its training dataset and may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on an NVIDIA V100 GPU.
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **13.39 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 9.82.
### BibTeX entry and citation info
By David Adelani
```
```
|
huggingtweets/caucasianjames-haleyosomething-officialkat
|
huggingtweets
| 2021-09-26T02:14:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/caucasianjames-haleyosomething-officialkat/1632622460306/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1416541994952937474/yi5cJxnq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/933947605104685056/mumGVsyS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1420078509230223363/u7XR7esE_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">haley o'shaughnessy & James & Kat Dennings</div>
<div style="text-align: center; font-size: 14px;">@caucasianjames-haleyosomething-officialkat</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from haley o'shaughnessy & James & Kat Dennings.
| Data | haley o'shaughnessy | James | Kat Dennings |
| --- | --- | --- | --- |
| Tweets downloaded | 3242 | 3242 | 3228 |
| Retweets | 431 | 89 | 689 |
| Short tweets | 460 | 602 | 424 |
| Tweets kept | 2351 | 2551 | 2115 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ctao3i2l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @caucasianjames-haleyosomething-officialkat's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vge9p265) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vge9p265/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/caucasianjames-haleyosomething-officialkat')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/veganseltzer
|
huggingtweets
| 2021-09-25T22:38:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/veganseltzer/1632609483096/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1315429459663745024/S9mAz-Cs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Senior beer disposal agent</div>
<div style="text-align: center; font-size: 14px;">@veganseltzer</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Senior beer disposal agent.
| Data | Senior beer disposal agent |
| --- | --- |
| Tweets downloaded | 1248 |
| Retweets | 477 |
| Short tweets | 108 |
| Tweets kept | 663 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18bbz1me/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @veganseltzer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32xde3yh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32xde3yh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/veganseltzer')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emil2000/dialogpt-for-french-language
|
emil2000
| 2021-09-25T21:50:35Z | 62 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"fr",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fr
tags:
- fr
- gpt2
---
This model aims to be a French conversational agent. It is a fine-tuned version of DialoGPT for the French language. The dataset used gathers 36k conversations extracted from books, movies, interviews, and dialogues for learning French.
More details about the model can be found [here](https://github.com/emil2000dza/DialoGPT-fine-tuned-for-french-language).
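A minimal chat-style sketch, following the usual DialoGPT input convention of terminating each turn with the end-of-sequence token; the French prompt is our own example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("emil2000/dialogpt-for-french-language")
model = AutoModelForCausalLM.from_pretrained("emil2000/dialogpt-for-french-language")

# One user turn, terminated by the EOS token as in the DialoGPT convention
input_ids = tokenizer.encode("Bonjour, comment ça va ?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens (the model's reply)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```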
|
pere/norwegian-gptneo-blue
|
pere
| 2021-09-25T18:42:49Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
# Norwegian GPTNeo Blue.
The first Norwegian GPTNeo model. This one is trained only on an administrative corpus.
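A minimal generation sketch, assuming the standard `transformers` text-generation pipeline; the Norwegian prompt is our own example.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pere/norwegian-gptneo-blue")
# Norwegian prompt chosen for illustration
print(generator("Regjeringen har i dag", max_length=50, do_sample=True))
```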
|
Hate-speech-CNERG/deoffxlmr-mono-malyalam
|
Hate-speech-CNERG
| 2021-09-25T14:01:42Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ml",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: ml
license: apache-2.0
---
This model is used to detect **Offensive Content** in **Malayalam Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Malayalam (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and further pretrained using Masked Language Modelling on the target dataset before fine-tuning with Cross-Entropy Loss.
This model is the best of multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembled test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.97, ensemble - 0.97).
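A minimal classification sketch with the standard pipeline; the code-mixed input is our own example, and the label names follow whatever scheme this shared-task checkpoint was exported with.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Hate-speech-CNERG/deoffxlmr-mono-malyalam")
# Code-mixed Malayalam input (our own example)
print(classifier("ee padam nannayittund, nalla performance"))
```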
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~
|
Hate-speech-CNERG/deoffxlmr-mono-kannada
|
Hate-speech-CNERG
| 2021-09-25T14:01:14Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"kn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: kn
license: apache-2.0
---
This model is used to detect **Offensive Content** in **Kannada Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Kannada (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and further pretrained using Masked Language Modelling on the target dataset before fine-tuning with Cross-Entropy Loss.
This model is the best of multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembled test predictions achieved the second-highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.73, ensemble - 0.74).
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-spanish
|
Hate-speech-CNERG
| 2021-09-25T14:00:12Z | 136 | 8 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"es",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: es
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Spanish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Spanish language data. It is fine-tuned on the multilingual BERT model.
The model was trained with different learning rates, and the best validation score achieved is 0.740287, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
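A minimal sketch of running the classifier directly; the input sentence is our own example, and the class ordering should be checked against the model's `config.json`.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/dehatebert-mono-spanish")
model = AutoModelForSequenceClassification.from_pretrained("Hate-speech-CNERG/dehatebert-mono-spanish")

inputs = tokenizer("Te odio con toda mi alma.", return_tensors="pt")  # example sentence (ours)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # probabilities over the hate / non-hate classes
```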
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-polish
|
Hate-speech-CNERG
| 2021-09-25T13:58:40Z | 110 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"pl",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: pl
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Polish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Polish language data. It is fine-tuned on the multilingual BERT model.
The model was trained with different learning rates, and the best validation score achieved is 0.723254, for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-italian
|
Hate-speech-CNERG
| 2021-09-25T13:56:50Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"it",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: it
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Italian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Italian language data. It is fine-tuned on the multilingual BERT model.
The model was trained with different learning rates, and the best validation score achieved is 0.837288, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-german
|
Hate-speech-CNERG
| 2021-09-25T13:55:44Z | 164 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"de",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: de
license: apache-2.0
---
This model is used for detecting **hate speech** in the **German language**. The mono in the name refers to the monolingual setting, where the model is trained using only German language data. It is fine-tuned on the multilingual BERT model.
The model was trained with different learning rates, and the best validation score achieved is 0.649794, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-arabic
|
Hate-speech-CNERG
| 2021-09-25T13:54:53Z | 2,563 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"ar",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: ar
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Arabic language**. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic language data. It is fine-tuned on the multilingual BERT model.
The model was trained with different learning rates, and the best validation score achieved is 0.877609, for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
huggingtweets/sixjay__
|
huggingtweets
| 2021-09-25T11:43:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/sixjay__/1632570148333/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1434204311505055754/Ozub-Lmd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">joj</div>
<div style="text-align: center; font-size: 14px;">@sixjay__</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from joj.
| Data | joj |
| --- | --- |
| Tweets downloaded | 2494 |
| Retweets | 508 |
| Short tweets | 429 |
| Tweets kept | 1557 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wcyvex9s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sixjay__'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6yf1o7q5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6yf1o7q5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sixjay__')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pritoms/distilgpt2-finetuned-irll2
|
pritoms
| 2021-09-25T11:34:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: distilgpt2-finetuned-irll2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-irll2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 12 | 4.2919 |
| No log | 2.0 | 24 | 4.2158 |
| No log | 3.0 | 36 | 4.1925 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
lewtun/mt5-small-finetuned-mlsum
|
lewtun
| 2021-09-25T09:43:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:mlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-mlsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mlsum
type: mlsum
args: es
metrics:
- name: Rouge1
type: rouge
value: 1.1475
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-mlsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 1.1475
- Rouge2: 0.1284
- Rougel: 1.0634
- Rougelsum: 1.0778
- Gen Len: 3.7939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| nan | 1.0 | 808 | nan | 1.1475 | 0.1284 | 1.0634 | 1.0778 | 3.7939 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
superb/superb-test-org__test-submission-with-example-expert__d609b3c32044e50e3d5e9067bd97af1b42f04b0e
|
superb
| 2021-09-24T19:49:31Z | 0 | 0 | null |
[
"tensorboard",
"library:s3prl",
"benchmark:superb",
"type:model",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
datasets:
- superb
tags:
- library:s3prl
- benchmark:superb
- type:model
---
# Fine-tuned s3prl model
Upstream Model: superb-test-org/test-submission-with-example-expert
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
SaulLu/cotet5_small_fix
|
SaulLu
| 2021-09-24T17:56:36Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"codet5",
"dataset:code_search_net",
"arxiv:2109.00859",
"arxiv:1909.09436",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- codet5
datasets:
- code_search_net
inference: false
---
# CodeT5 (small-sized model)
Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models
for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5).
Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)).
## Model description
From the abstract:
"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code."
## Intended uses & limitations
This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:
* code summarization
* code generation
* code translation
* code refinement
* code defect detection
* code clone detection.
See the [model hub](https://huggingface.co/models?search=salesforce/codet5) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration
tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-small')
model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-small')
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids
# simply generate a single sequence
generated_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
# this prints "user: {user.name}"
```
## Training data
The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.
## Training procedure
### Preprocessing
This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.
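For instance, a quick look at the tokenizer (a small illustration, not from the original card):
```python
# Illustration: the code-specific BPE tokenizer applied to a Python snippet.
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-small')
print(tokenizer.tokenize("def greet(user): print('hello')"))
```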
## Evaluation results
For evaluation results on several downstream benchmarks, we refer to the paper.
### BibTeX entry and citation info
```bibtex
@misc{wang2021codet5,
title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation},
author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi},
year={2021},
eprint={2109.00859},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
abdouaziiz/soraberta
|
abdouaziiz
| 2021-09-24T11:31:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"language-model",
"wo",
"wolof",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: wo
tags:
- roberta
- language-model
- wo
- wolof
---
# Soraberta: Unsupervised Language Model Pre-training for Wolof
**Soraberta** is a RoBERTa base model pretrained on the Wolof language. RoBERTa was introduced in [this paper](https://arxiv.org/abs/1907.11692).
## Soraberta models
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `soraberta-base` | 6 | 12 | 514 | 83 M |
## Using Soraberta with Hugging Face's Transformers
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='abdouaziiz/soraberta')
>>> unmasker("juroom naari jullit man nanoo boole jend aw nag walla <mask>.")
[{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla gileem.',
'score': 0.9783930778503418,
'token': 4621,
'token_str': ' gileem'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla jend.',
'score': 0.009271537885069847,
'token': 2155,
'token_str': ' jend'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla aw.',
'score': 0.0027585660573095083,
'token': 704,
'token_str': ' aw'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla pel.',
'score': 0.001120452769100666,
'token': 1171,
'token_str': ' pel'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla juum.',
'score': 0.0005133090307936072,
'token': 5820,
'token_str': ' juum'}]
```
## Training data
The data sources are [Bible OT](http://biblewolof.com/) and [WOLOF-ONLINE](http://www.wolof-online.com/).
## Contact
Please contact [email protected] for any question, feedback or request.
|
hakurei/gpt-j-random-tinier
|
hakurei
| 2021-09-24T06:21:52Z | 16 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
This model has been initialized with random values and is intended for debugging purposes.
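A minimal smoke-test sketch (assuming the repository ships a matching tokenizer; any generated text is gibberish by design, since the weights are random):
```python
# Debugging sketch: a randomly-initialized tiny GPT-J is handy for catching
# shape and serialization bugs without downloading full-size weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hakurei/gpt-j-random-tinier")
model = AutoModelForCausalLM.from_pretrained("hakurei/gpt-j-random-tinier")
ids = tokenizer("Hello world", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, max_length=16)[0]))
```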
|
zgotter/bert-base-finetuned-ynat
|
zgotter
| 2021-09-24T02:00:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: bert-base-finetuned-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metrics:
- name: F1
type: f1
value: 0.8669116640755216
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3710
- F1: 0.8669
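A usage sketch (not part of the auto-generated card; the emitted label names depend on the checkpoint's `id2label` mapping):
```python
# Hedged usage sketch: KLUE-YNAT topic classification with this checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="zgotter/bert-base-finetuned-ynat")
print(classifier("유튜브 내달 2일까지 크리에이터 지원 공간 운영"))  # sample YNAT headline
```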
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4223 | 0.8549 |
| No log | 2.0 | 358 | 0.3710 | 0.8669 |
| 0.2576 | 3.0 | 537 | 0.3891 | 0.8631 |
| 0.2576 | 4.0 | 716 | 0.3968 | 0.8612 |
| 0.2576 | 5.0 | 895 | 0.4044 | 0.8617 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/60secondrevit
|
huggingtweets
| 2021-09-23T22:17:30Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/60secondrevit/1632435423713/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1439946759585812483/S_SxM-Cu_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ʲᵒʰⁿ ᵖⁱᵉʳˢᵒⁿ 🤡🎈</div>
<div style="text-align: center; font-size: 14px;">@60secondrevit</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ʲᵒʰⁿ ᵖⁱᵉʳˢᵒⁿ 🤡🎈.
| Data | ʲᵒʰⁿ ᵖⁱᵉʳˢᵒⁿ 🤡🎈 |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 1050 |
| Short tweets | 676 |
| Tweets kept | 1521 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jlkb3t2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @60secondrevit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/d6rqhltg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/d6rqhltg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/60secondrevit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
piotr-rybak/poleval2021-task4-herbert-large-encoder
|
piotr-rybak
| 2021-09-23T17:34:47Z | 103 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# piotr-rybak/poleval2021-task4-herbert-large-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('piotr-rybak/poleval2021-task4-herbert-large-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
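Since the model targets sentence similarity, a typical follow-up is to compare the embeddings, e.g. with cosine similarity (a small illustration, not from the original card; older sentence-transformers versions expose the same helper as `util.pytorch_cos_sim`):
```python
# Illustration: scoring the two example sentences against each other.
from sentence_transformers import util

print(util.cos_sim(embeddings[0], embeddings[1]))
```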
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=piotr-rybak/poleval2021-task4-herbert-large-encoder)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6098 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3049,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 1024, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
valhalla/distilt5-qg-hl-12-6
|
valhalla
| 2021-09-23T16:42:49Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-generation",
"distilt5",
"distilt5-qg",
"dataset:squad",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: <hl> 42 <hl> is the answer to life, the universe and everything. </s>
- text: Python is a programming language. It is developed by <hl> Guido Van Rossum
<hl>. </s>
- text: Although <hl> practicality <hl> beats purity </s>
license: mit
---
## DistilT5 for question-generation
This is a distilled version of the [t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) model, trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
The model is distilled using the **No Teacher Distillation** method proposed by Hugging Face, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-base-qg-hl` and fine-tune further on the same data (a sketch of this initialization follows the table below). The following table lists the distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
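A sketch of the layer-copying initialization described above (our reading of the distilbart-style recipe, not the authors' exact script; it assumes the `12-6` suffix means 12 encoder and 6 decoder layers, with decoder layers taken alternately from the teacher):
```python
# Hedged sketch: build a 12-encoder/6-decoder student from the 12-12 teacher.
from transformers import T5Config, T5ForConditionalGeneration

teacher = T5ForConditionalGeneration.from_pretrained("valhalla/t5-base-qg-hl")
student_config = T5Config.from_pretrained("valhalla/t5-base-qg-hl", num_decoder_layers=6)
student = T5ForConditionalGeneration(student_config)

student.shared.load_state_dict(teacher.shared.state_dict())  # shared embeddings
for i in range(12):  # the encoder is kept whole
    student.encoder.block[i].load_state_dict(teacher.encoder.block[i].state_dict())
for i, j in enumerate([0, 2, 4, 6, 8, 10]):  # alternating decoder layers
    student.decoder.block[i].load_state_dict(teacher.decoder.block[j].state_dict())
# ...then fine-tune the student on the same question-generation data.
```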
You can play with the model using the inference API; just highlight the answer spans with `<hl>` tokens. For example:
`<hl> 42 <hl> is the answer to life, the universe and everything.`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/distilt5-qg-hl-12-6")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
```
|
valhalla/distilt5-qa-qg-hl-6-4
|
valhalla
| 2021-09-23T16:42:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"question-generation",
"distilt5",
"distilt5-qg",
"dataset:squad",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: 'generate question: <hl> 42 <hl> is the answer to life, the universe and everything.
</s>'
- text: 'question: What is 42 context: 42 is the answer to life, the universe and
everything. </s>'
license: mit
---
## DistilT5 for question-generation
This is a distilled version of the [t5-small-qa-qg-hl](https://huggingface.co/valhalla/t5-small-qa-qg-hl) model, trained for question answering and answer-aware question generation tasks.
The model is distilled using the **No Teacher Distillation** method proposed by Hugging Face, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-small-qa-qg-hl` and fine-tune further on the same data. The following table lists the distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
You can play with the model using the inference API. Here's how you can use it:
For QG
`generate question: <hl> 42 <hl> is the answer to life, the universe and everything.`
For QA
`question: What is 42 context: 42 is the answer to life, the universe and everything.`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg", model="valhalla/distilt5-qa-qg-hl-6-4")
# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life?'}]
# for qa pass a dict with "question" and "context"
nlp({
"question": "What is 42 ?",
"context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
```
|
valhalla/distilt5-qa-qg-hl-12-6
|
valhalla
| 2021-09-23T16:42:44Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-generation",
"distilt5",
"distilt5-qg",
"dataset:squad",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: 'generate question: <hl> 42 <hl> is the answer to life, the universe and everything.
</s>'
- text: 'question: What is 42 context: 42 is the answer to life, the universe and
everything. </s>'
license: mit
---
## DistilT5 for question-generation
This is a distilled version of the [t5-base-qa-qg-hl](https://huggingface.co/valhalla/t5-base-qa-qg-hl) model, trained for question answering and answer-aware question generation tasks.
The model is distilled using the **No Teacher Distillation** method proposed by Hugging Face, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-base-qa-qg-hl` and fine-tune further on the same data. The following table lists the distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
You can play with the model using the inference API. Here's how you can use it:
For QG
`generate question: <hl> 42 <hl> is the answer to life, the universe and everything.`
For QA
`question: What is 42 context: 42 is the answer to life, the universe and everything.`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg", model="valhalla/distilt5-qa-qg-hl-12-6")
# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
# for qa pass a dict with "question" and "context"
nlp({
"question": "What is 42 ?",
"context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
```
|
gbarone77/polibert_sa
|
gbarone77
| 2021-09-23T16:42:31Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"sentiment",
"Italian",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: it
tags:
- sentiment
- Italian
license: mit
widget:
- text: Giuseppe Rossi è un ottimo politico
---
# 🤗 + polibert_SA - POLItic BERT based Sentiment Analysis
## Model description
This model performs sentiment analysis on Italian political tweets. It was trained starting from an instance of "bert-base-italian-uncased-xxl" and fine-tuned on an Italian dataset of tweets. You can try it out at https://www.unideeplearning.com/twitter_sa/ (in Italian!).
#### Hands-on
```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("unideeplearning/polibert_sa")
model = AutoModelForSequenceClassification.from_pretrained("unideeplearning/polibert_sa")
text = "Giuseppe Rossi è un pessimo politico"
input_ids = tokenizer.encode(text, add_special_tokens=True, return_tensors= 'pt')
logits = model(input_ids)[0]  # index 0 is the logits; works for both tuple and ModelOutput returns
logits = logits.squeeze(0)
prob = nn.functional.softmax(logits, dim=0)
# 0 Negative, 1 Neutral, 2 Positive
print(prob.argmax().tolist())
```
#### Hyperparameters
- Optimizer: **AdamW** with learning rate of **2e-5**, epsilon of **1e-8**
- Max epochs: **2**
- Batch size: **16**
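These settings correspond roughly to the following optimizer setup (a sketch; the original training script is not provided):
```python
# Hedged sketch of the optimizer settings listed above.
from torch.optim import AdamW  # older code imported AdamW from transformers

optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-8)
EPOCHS = 2
BATCH_SIZE = 16
```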
## Acknowledgments
Thanks for the support from:
[Hugging Face](https://huggingface.co/), https://www.unioneprofessionisti.com, and
https://www.unideeplearning.com/
|
toloka/t5-large-for-text-aggregation
|
toloka
| 2021-09-23T16:40:58Z | 16 | 7 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text aggregation",
"summarization",
"en",
"dataset:toloka/CrowdSpeech",
"arxiv:1910.10683",
"arxiv:2107.01091",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text aggregation
- summarization
license: apache-2.0
datasets:
- toloka/CrowdSpeech
metrics:
- wer
---
# T5 Large for Text Aggregation
## Model description
This is a T5 Large fine-tuned for crowdsourced text aggregation tasks. The model takes multiple performers' responses and yields a single aggregated response. This approach was introduced for the first time during [VLDB 2021 Crowd Science Challenge](https://crowdscience.ai/challenges/vldb21) and originally implemented at the second-place competitor's [GitHub](https://github.com/A1exRey/VLDB2021_workshop_t5). The [paper](http://ceur-ws.org/Vol-2932/short2.pdf) describing this model was presented at the [2nd Crowd Science Workshop](https://crowdscience.ai/conference_events/vldb21).
## How to use
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoConfig
mname = "toloka/t5-large-for-text-aggregation"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelForSeq2SeqLM.from_pretrained(mname)
input = "samplee text | sampl text | sample textt"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # sample text
```
## Training data
Pretrained weights were taken from the [original](https://huggingface.co/t5-large) T5 Large model by Google. For more details on the T5 architecture and training procedure see https://arxiv.org/abs/1910.10683
The model was fine-tuned on the `train-clean`, `dev-clean` and `dev-other` parts of the [CrowdSpeech](https://huggingface.co/datasets/toloka/CrowdSpeech) dataset introduced in [our paper](https://openreview.net/forum?id=3_hgF1NAXU7&referrer=%5BAuthor%20Console%5D(%2Fgroup%3Fid%3DNeurIPS.cc%2F2021%2FTrack%2FDatasets_and_Benchmarks%2FRound1%2FAuthors%23your-submissions).
## Training procedure
The model was fine-tuned for eight epochs directly following the HuggingFace summarization training [example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization).
## Eval results
Dataset | Split | WER
-----------|------------|----------
CrowdSpeech| test-clean | 4.99
CrowdSpeech| test-other | 10.61
### BibTeX entry and citation info
```bibtex
@inproceedings{Pletenev:21,
author = {Pletenev, Sergey},
title = {{Noisy Text Sequences Aggregation as a Summarization Subtask}},
year = {2021},
booktitle = {Proceedings of the 2nd Crowd Science Workshop: Trust, Ethics, and Excellence in Crowdsourced Data Management at Scale},
pages = {15--20},
address = {Copenhagen, Denmark},
issn = {1613-0073},
url = {http://ceur-ws.org/Vol-2932/short2.pdf},
language = {english},
}
```
```bibtex
@misc{pavlichenko2021vox,
title={Vox Populi, Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription},
author={Nikita Pavlichenko and Ivan Stelmakh and Dmitry Ustalov},
year={2021},
eprint={2107.01091},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
|
tanmoyio/wav2vec2-large-xlsr-bengali
|
tanmoyio
| 2021-09-23T16:39:27Z | 1,078 | 3 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: Bengali
datasets:
- OpenSLR
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: cc-by-sa-4.0
model-index:
- name: XLSR Wav2Vec2 Bengali by Tanmoy Sarkar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: ben
metrics:
- name: Test WER
type: wer
value: 88.58
---
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The dataset must be downloaded from [this website](https://www.openslr.org/53/) and preprocessed accordingly. For example, 1,250 test samples have been chosen here.
```python
import pandas as pd
test_dataset = pd.read_csv('utt_spk_text.tsv', sep='\t', header=None)[60000:61250]
test_dataset.columns = ["audio_path", "__", "label"]
test_dataset = test_dataset.drop("__", axis=1)
def add_file_path(text):
path = "data/" + text[:2] + "/" + text + '.flac'
return path
test_dataset['audio_path'] = test_dataset['audio_path'].map(lambda x: add_file_path(x))
```
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the dataset: read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = Dataset.from_pandas(test_dataset)  # pandas frame -> datasets.Dataset, which provides .map
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["label"][:2])
```
## Evaluation
The model can be evaluated as follows on the Bengali test data of OpenSLR.
```python
import re
import torch
import torchaudio
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'  # punctuation stripped before scoring (assumed; not given in the original card)
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the dataset: build the reference sentence and read the audio as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["label"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference and collect the predicted strings
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 88.58 %
## Training
The script used for training can be found at [Bengali ASR Fine-Tuning Wav2Vec2](https://colab.research.google.com/drive/1Bkc5C_cJV9BeS0FD0MuHyayl8hqcbdRZ?usp=sharing).
|
skt/kogpt2-base-v2
|
skt
| 2021-09-23T16:29:28Z | 23,482 | 45 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ko
tags:
- gpt2
license: cc-by-nc-sa-4.0
---
For more details: https://github.com/SKT-AI/KoGPT2
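A quick-start sketch (the linked repository is authoritative; we assume the hosted tokenizer files load via `PreTrainedTokenizerFast` as usual for this model):
```python
# Hedged quick-start for KoGPT2; see the repo above for exact usage.
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained("skt/kogpt2-base-v2")
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

ids = tokenizer.encode("근육이 커지기 위해서는", return_tensors="pt")
out = model.generate(ids, max_length=64, repetition_penalty=2.0, do_sample=False)
print(tokenizer.decode(out[0]))
```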
|
popcornell/FasNetTAC-paper
|
popcornell
| 2021-09-23T16:21:33Z | 13 | 3 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"FasNet-TAC",
"audio-to-audio",
"multichannel",
"beamforming",
"dataset:TACDataset",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- FasNet-TAC
- audio-to-audio
- multichannel
- beamforming
datasets:
- TACDataset
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `Samuele Cornell/FasNetTAC_TACDataset_separatenoisy`
Imported from [Zenodo](https://zenodo.org/record/4557489)
### Description:
This model was trained by popcornell using the TAC/TAC recipe in Asteroid. It was trained on the separate_noisy task of the TACDataset dataset.
### Training config:
```yaml
data:
dev_json: ./data/validation.json
sample_rate: 16000
segment: None
test_json: ./data/test.json
train_json: ./data/train.json
net:
chunk_size: 50
context_ms: 16
enc_dim: 64
feature_dim: 64
hidden_dim: 128
hop_size: 25
n_layers: 4
n_src: 2
window_ms: 4
optim:
lr: 0.001
weight_decay: 1e-06
training:
accumulate_batches: 1
batch_size: 8
early_stop: True
epochs: 200
gradient_clipping: 5
half_lr: True
num_workers: 8
patience: 30
save_top_k: 10
```
### Results:
```yaml
si_sdr: 10.871864315894744
si_sdr_imp: 11.322284052560262
```
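A separation sketch (not part of the original card; it assumes an Asteroid release that ships `FasNetTAC` with the standard `from_pretrained` API, and the input layout below is our assumption):
```python
# Hedged usage sketch for the multichannel separator; shapes are assumptions.
import torch
from asteroid.models.fasnet import FasNetTAC

model = FasNetTAC.from_pretrained("popcornell/FasNetTAC-paper")
mixture = torch.randn(1, 4, 16000)  # (batch, n_mics, samples): 1 s at 16 kHz
with torch.no_grad():
    est_sources = model(mixture)  # n_src=2 separated estimates
print(est_sources.shape)
```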
### License notice:
This work "FasNetTAC_TACDataset_separatenoisy" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov, used under CC BY 4.0; of End-to-end Microphone Permutation and Number Invariant Multi-channel Speech Separation by Yi Luo, Zhuo Chen, Nima Mesgarani, Takuya Yoshioka, used under CC BY 4.0. "FasNetTAC_TACDataset_separatenoisy" is licensed under Attribution-ShareAlike 3.0 Unported by popcornell.
|
persiannlp/parsbert-base-parsinlu-multiple-choice
|
persiannlp
| 2021-09-23T16:20:53Z | 66 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"multiple-choice",
"parsbert",
"persian",
"farsi",
"text-classification",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- parsbert
- persian
- farsi
pipeline_tag: text-classification
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a parsbert-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer
model_name = "persiannlp/parsbert-base-parsinlu-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config)
def run_model(question: str, candidates: List[str]):
    assert len(candidates) == 4, "you need four candidates"
    choices_inputs = []
    for c in candidates:
text_a = "" # empty context
text_b = question + " " + c
inputs = tokenizer(
text_a,
text_b,
add_special_tokens=True,
max_length=128,
padding="max_length",
truncation=True,
return_overflowing_tokens=True,
)
choices_inputs.append(inputs)
    input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs]).unsqueeze(0)  # (batch=1, num_choices, seq_len), as the multiple-choice head expects
output = model(input_ids=input_ids)
print(output)
return output
run_model(question="وسیع ترین کشور جهان کدام است؟", candicates=["آمریکا", "کانادا", "روسیه", "چین"])
run_model(question="طامع یعنی ؟", candicates=["آزمند", "خوش شانس", "محتاج", "مطمئن"])
run_model(
question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ",
candicates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"])
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/parsbert-base-parsinlu-entailment
|
persiannlp
| 2021-09-23T16:20:50Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"entailment",
"parsbert",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- parsbert
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
labels = ["entails", "contradicts", "neutral"]
model_name_or_path = "persiannlp/parsbert-base-parsinlu-entailment"
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
model_predict(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
model_predict(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
model_predict(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-translation_en_fa
|
persiannlp
| 2021-09-23T16:20:48Z | 705 | 3 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['برای الله، یعنی چرنده و سوزان دنیا، تحسین کنید']
['خودش را در سفید پوسته می کند و به صورت عشق برادرانه']
['او از تمام بلاگرها و سازمان هایی که حمایتشان را نشان می داد']
['در طول ماه آوریل و دسامبر در والی فیودورونا نزدیک بیکر']
['من می خواهم در مورد شبکه اجتماعی تحقیقات علوم کامپیوتری را دن']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-squad-reading-comprehension
|
persiannlp
| 2021-09-23T16:20:45Z | 81 | 3 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"reading-comprehension",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:squad",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- reading-comprehension
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- squad
metrics:
- f1
---
# Reading Comprehension (مدل برای پاسخ به درک مطلب)
This is an mT5-based model for reading comprehension.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-squad-reading-comprehension"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(paragraph, question, **generator_args):
input_ids = tokenizer.encode(question + "\n" + paragraph, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک شی را دارای تقارن مینامیم زمانی که ان شی را بتوان به دو یا چند قسمت تقسیم کرد که آنها قسمتی از یک طرح سازمان یافته باشند یعنی بر روی شکل تنها جابجایی و چرخش و بازتاب و تجانس انجام شود و در اصل شکل تغییری به وجود نیایید آنگاه ان را تقارن مینامیم مرکز تقارن:اگر در یک شکل نقطهای مانندA وجود داشته باشد که هر نقطهٔ روی شکل (محیط) نسبت به نقطه یAمتقارن یک نقطهٔ دیگر شکل (محیط) باشد، نقطهٔ Aمرکز تقارن است. یعنی هر نقطه روی شکل باید متقارنی داشته باشد شکلهای که منتظم هستند و زوج ضلع دارند دارای مرکز تقارند ولی شکلهای فرد ضلعی منتظم مرکز تقارن ندارند. متوازیالأضلاع و دایره یک مرکز تقارن دارند ممکن است یک شکل خط تقارن نداشته باشد ولی مرکز تقارن داشته باشد. (منبع:س. گ)",
"اشکالی که یک مرکز تقارن دارند"
)
run_model(
"شُتُر یا اُشتر را که در زبان پهلوی (ushtar)[نیازمند منبع] میگفتند حیوانی است نیرومند و تنومند با توش و توان بالا از خانواده شتران؛ شبه نشخوارکننده و با دست و گردنی دراز. بر پشت خود یک یا دو کوهان دارد که ساختارش از پیه و چربی است. در دین اسلام گوشت او حلال است. اما ذبح آن با دیگر جانوران حلال گوشت متفاوت است و آن را نحر (بریدن گلو) میکنند و اگر سر آن را مانند گوسفند پیش از نحر ببرند گوشت آن حلال نیست. شیرش نیز نوشیده میشود ولی بیشتر کاربرد بارکشی دارد. پشم و پوستش نیز برای ریسندگی و پارچهبافی و کفشدوزی کاربرد دارد. گونههای دیگری از شتران نیز در آمریکای جنوبی زندگی میکنند، به نامهای لاما، آلپاکا، گواناکو که دارای کوهان نیستند. شتر ویژگیهای خاصّی دارد که مهمترین آنها تحمّل شرایط سخت صحرا و دماهای گوناگون و بهویژه گرمای شدید تابستان و کمبود آب و علوفه است. ترکیب جسمانی شتر با دیگر جانوران اختلاف زیادی دارد، و این اختلاف انگیزه شده که شتر در درازا روزهای سال در بیابان زندگی کند و از بوتهها و درختچههای گوناگون صحرایی و کویری و حتی از بوتههای شور و خاردار تغذیه کند. عربها از زمانهای بسیار دور از شتر استفاده کرده و میکنند. آنها به این حیوان اهلی لقب کشتی صحرا (به عربی: سفینةالصحراء) دادهاند.",
"غذای شترچیست؟"
)
run_model(
"""حسین میرزایی میگوید مرحله اول پرداخت وام حمایتی کرونا به همگی خانوارهای یارانهبگیر متقاضی تکمیل شده است و حال چهار میلیون خانوار که به عنوان "اقشار خاص" و "آسیبپذیر" شناسایی شدند، میتوانند برای یک میلیون تومان وام دیگر درخواست بدهند. آقای میرزایی گفته خانوارهای "آسیبپذیر" که شرایط گرفتن وام یک میلیونی اضافی را دارند با پیامک از این امکان مطلع شدهاند. بنا به گزارشهای رسمی با شیوع کرونا در ایران یک میلیون نفر بیکار شدهاند و درآمد کارکنان مشاغل غیررسمی نیز ضربه قابل توجهی خورده است. ارزش ریال هم در هفتههای اخیر در برابر ارزهای خارجی سقوط کرده است. اقتصاد ایران پیش از شیوع کرونا نیز با مشکلات مزمن رکود، تورم، تحریم و فساد روبرو بود.""",
"وام یارانه به چه کسانی میدهند؟"
)
run_model(
"در ۲۲ ژوئن ۱۹۴۱ نیروهای محور در عملیات بارباروسا حمله سنگینی به اتحاد شوروی کرده و یکی از بزرگترین نبردهای زمینی تاریخ بشر را رقم زدند. همچنین جبهه شرقی باعث به دام افتادن نیروهای محور شد و بیش از همه ارتش آلمان نازی را درگیر جنگ فرسایشی کرد. در دسامبر ۱۹۴۱ ژاپن یک در عملیاتی ناگهانی با نام نبرد پرل هاربر به پایگاه دریایی ایالات متحده آمریکا حمله کرد. به دنبال این اتفاق آمریکا نیز بلافاصله علیه ژاپن اعلان جنگ کرد که با حمایت بریتانیا همراه شد. پس از آن متحدین (نیروهای محور در اروپا) نیز با اتحاد ژاپن علیه آمریکا اعلام جنگ کردند. دستآوردهای ژاپن در یورش به آمریکا باعث ایجاد این احساس در آسیا شد که آسیا از تسلط غرب خارج شدهاست از این رو بسیاری از ارتشهای شکست خورده با آنها همراهی کردند.",
"چرا امریکا وارد جنگ جهانی دوم شد؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-sentiment-analysis
|
persiannlp
| 2021-09-23T16:20:41Z | 54 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"sentiment",
"sentiment-analysis",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5 model for sentiment analysis.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-small-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-arc-comqa-obqa-multiple-choice
|
persiannlp
| 2021-09-23T16:20:31Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-snli-entailment
|
persiannlp
| 2021-09-23T16:20:24Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"entailment",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:snli",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- snli
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size="large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-snli-entailment"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(premise, hypothesis, **generator_args):
input_ids = tokenizer.encode(f"{premise}<sep>{hypothesis}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
run_model(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
run_model(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-sentiment-analysis
|
persiannlp
| 2021-09-23T16:20:21Z | 25 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"sentiment",
"sentiment-analysis",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5 model for sentiment analysis.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-large-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-qqp-query-paraphrasing
|
persiannlp
| 2021-09-23T16:20:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"query-paraphrasing",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:qqp",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخصیص سوالات هممعنی)
This is a model for detection of paraphrased queries.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-large-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-multiple-choice
|
persiannlp
| 2021-09-23T16:20:14Z | 63 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-arc-comqa-obqa-multiple-choice
|
persiannlp
| 2021-09-23T16:20:12Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-translation_en_fa
|
persiannlp
| 2021-09-23T16:20:09Z | 111 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['خدا را شکر که عامل خطرناک و محافظ دنیاست.']
['خود را سفید می کند و به شکل برادرانه ای در کارخانه ها و']
['او از تمامی همکاران و سازمان هایی که از او حمایت می کردند تشکر']
['برگزاری مسابقات بین آوریل تا دسامبر در هیپوگریم والی']
['من می خواهم تحصیل دکترای علوم کامپیوتری را در مورد شب']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-snli-entailment
|
persiannlp
| 2021-09-23T16:20:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"entailment",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:snli",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- snli
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is an mT5-based model for textual entailment.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size="base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-snli-entailment"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(premise, hypothesis, **generator_args):
input_ids = tokenizer.encode(f"{premise}<sep>{hypothesis}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
run_model(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
run_model(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-sentiment-analysis
|
persiannlp
| 2021-09-23T16:20:02Z | 94 | 4 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"sentiment",
"sentiment-analysis",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5-based model for sentiment analysis.
Here is an example of how you can run this model:
```python
import torch
import numpy as np
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

model_name = "persiannlp/mt5-base-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)


def model_predict(text_a, text_b):
    # Rough classification-style scoring over the first decoder step of this
    # seq2seq model. `labels` is a placeholder; replace it with the label
    # strings actually used by the task.
    labels = ["negative", "neutral", "positive"]
    label_ids = [tokenizer.encode(l, add_special_tokens=False)[0] for l in labels]
    features = tokenizer([(text_a, text_b)], padding="max_length", truncation=True, return_tensors="pt")
    decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
    logits = model(**features, decoder_input_ids=decoder_input_ids).logits[0, 0, label_ids]
    probs = torch.nn.functional.softmax(logits, dim=-1).tolist()
    idx = int(np.argmax(np.array(probs)))
    print(labels[idx], probs)

def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-multiple-choice
|
persiannlp
| 2021-09-23T16:19:55Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-arc-comqa-obqa-multiple-choice
|
persiannlp
| 2021-09-23T16:19:52Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pere/norwegian-t5-base
|
pere
| 2021-09-23T16:19:40Z | 10 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"seq2seq",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base model 🇳🇴
This T5-base model is trained from scratch on a 19GB Balanced Bokmål-Nynorsk Corpus.
Update: Due to disk space errors, the model had to be restarted on July 20. It is currently still running.
Parameters used in training:
```bash
python3 ./run_t5_mlm_flax_streaming.py \
--model_name_or_path="./norwegian-t5-base" \
--output_dir="./norwegian-t5-base" \
--config_name="./norwegian-t5-base" \
--tokenizer_name="./norwegian-t5-base" \
--dataset_name="pere/nb_nn_balanced_shuffled" \
--max_seq_length="512" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--learning_rate="0.005" \
--weight_decay="0.001" \
--warmup_steps="2000" \
--overwrite_output_dir \
--logging_steps="100" \
--save_steps="500" \
--eval_steps="500" \
--push_to_hub \
--preprocessing_num_workers 96 \
--adafactor
```
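Once a checkpoint is available on the Hub, it can be loaded like any other T5 model. A minimal sketch, assuming the Flax weights end up published under `pere/norwegian-t5-base` (hence `from_flax=True`):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Assumption: the trained checkpoint lives on the Hub under this id and
# only Flax weights are available, hence from_flax=True.
model_name = "pere/norwegian-t5-base"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name, from_flax=True)

# The model is pretrained with a span-corruption objective, so this only
# checks that the weights load; fine-tune before using it for a real task.
input_ids = tokenizer("Oslo er hovedstaden i <extra_id_0>.", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=5)[0]))
```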
|
pere/norwegian-t5-base-NCC
|
pere
| 2021-09-23T16:19:38Z | 4 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"seq2seq",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base Model Trained on the NCC 🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be fine-tuned on a specific task before it can be used for anything.
The model is currently training and is expected to finish by the end of August 2021.
The following settings were used in training:
```bash
./run_t5_mlm_flax.py \
--output_dir="./" \
--model_type="t5" \
--config_name="./" \
--tokenizer_name="./" \
--train_file /mnt/disks/flaxdisk/corpus/norwegian_colossal_corpus_train.json \
--validation_file /mnt/disks/flaxdisk/corpus/norwegian_colossal_corpus_validation.json \
--max_seq_length="128" \
--weight_decay="0.01" \
--per_device_train_batch_size="128" \
--per_device_eval_batch_size="128" \
--learning_rate="8e-3" \
--warmup_steps="2000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_epochs="3" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="100" \
--save_steps="2500" \
--eval_steps="2500" \
--preprocessing_num_workers 96 \
--adafactor \
--push_to_hub
```
|
pere/norwegian-t5-base-NCC-nb-nn
|
pere
| 2021-09-23T16:19:35Z | 60 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"seq2seq",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base Model Trained on the NCC 🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be fine-tuned on a specific task before it can be used for anything.
The following settings were used in training:
```bash
./run_t5_mlm_flax_streaming.py \
--output_dir="./" \
--model_type="t5" \
--config_name="./" \
--tokenizer_name="./" \
--dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--learning_rate="8e-3" \
--warmup_steps="0" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_epochs="5" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="500" \
--num_train_steps="1000000" \
--num_eval_samples="5000" \
--save_steps="5000" \
--eval_steps="5000" \
--preprocessing_num_workers 96 \
--adafactor \
--push_to_hub
```
|
pere/norwegian-t5-base-NCC-fast
|
pere
| 2021-09-23T16:19:32Z | 21 | 4 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"seq2seq",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base Model Trained on the NCC 🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be fine-tuned on a specific task before it can be used for anything.
The following settings were used in training:
```bash
./run_t5_mlm_flax_streaming.py \
--output_dir="./" \
--model_type="t5" \
--config_name="./" \
--tokenizer_name="./" \
--dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--learning_rate="8e-3" \
--warmup_steps="0" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_epochs="5" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="500" \
--num_train_steps="1000000" \
--num_eval_samples="5000" \
--save_steps="5000" \
--eval_steps="5000" \
--preprocessing_num_workers 96 \
--adafactor \
--push_to_hub
```
|
pere/norwegian-mt5
|
pere
| 2021-09-23T16:19:28Z | 5 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"seq2seq",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian mT5 Base model 🇳🇴
This mT5-base model is trained from the mT5 checkpoint on a 19GB Balanced Bokmål-Nynorsk Corpus.
Parameters used in training:
```bash
python3 ./run_t5_mlm_flax_streaming.py \
--model_name_or_path="./norwegian-t5-base" \
--output_dir="./norwegian-t5-base" \
--config_name="./norwegian-t5-base" \
--tokenizer_name="./norwegian-t5-base" \
--dataset_name="pere/nb_nn_balanced_shuffled" \
--max_seq_length="512" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--learning_rate="0.005" \
--weight_decay="0.001" \
--warmup_steps="2000" \
--overwrite_output_dir \
--logging_steps="100" \
--save_steps="500" \
--eval_steps="500" \
--push_to_hub \
--preprocessing_num_workers 96 \
--adafactor
```
|
pere/norwegian-gpt2
|
pere
| 2021-09-23T16:19:24Z | 196 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"norwegian",
"GPT2",
"casual language modeling",
"no",
"dataset:oscar",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- norwegian
- GPT2
- causal language modeling
datasets:
- oscar
---
# Norwegian GPT-2 - Oscar
## Description
This is a sample reference model, pretrained on Norwegian with a causal language modeling (CLM) objective and trained only on the OSCAR corpus for one day on a TPU v3-8.
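Since the repository ships PyTorch weights, a plain `text-generation` pipeline is enough to sample from it. A minimal sketch:
```python
from transformers import pipeline

# Sample a short Norwegian continuation from the pretrained checkpoint.
generator = pipeline("text-generation", model="pere/norwegian-gpt2")
print(generator("Det var en gang", max_new_tokens=30, do_sample=True)[0]["generated_text"])
```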
|
osanseviero/corenlp_spanish
|
osanseviero
| 2021-09-23T16:16:53Z | 0 | 0 | null |
[
"corenlp",
"sp",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- sp
license: gpl
---
# CoreNLP model for Spanish
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
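CoreNLP itself runs on the JVM, but it can also be driven from Python. A minimal sketch using the Stanza client, assuming CoreNLP and its Spanish model jar are installed locally and `CORENLP_HOME` points at the installation:
```python
from stanza.server import CoreNLPClient

# Assumption: a local CoreNLP install (with the Spanish model jar) and
# CORENLP_HOME set; the `stanza` package provides this client.
with CoreNLPClient(annotators=["tokenize", "ssplit", "pos"],
                   properties="spanish", timeout=30000, memory="4G") as client:
    ann = client.annotate("El café de Madrid es excelente.")
    for sentence in ann.sentence:
        for token in sentence.token:
            print(token.word, token.pos)
```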
|
osanseviero/corenlp_german
|
osanseviero
| 2021-09-23T16:16:51Z | 0 | 0 | null |
[
"corenlp",
"ge",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- ge
license: gpl
---
# CoreNLP model for German
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/corenlp_english-kbp
|
osanseviero
| 2021-09-23T16:16:46Z | 0 | 0 | null |
[
"corenlp",
"en",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- en
license: gpl
---
# CoreNLP model for English
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/corenlp_english-default
|
osanseviero
| 2021-09-23T16:16:41Z | 0 | 0 | null |
[
"corenlp",
"en",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- en
license: gpl
---
# CoreNLP model for English
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/corenlp_arabic
|
osanseviero
| 2021-09-23T16:16:37Z | 0 | 0 | null |
[
"corenlp",
"ar",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- corenlp
library_tag: corenlp
language:
- ar
license: gpl
---
# CoreNLP model for Arabic
CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/ConvTasNet_Libri1Mix_enhsingle_16k
|
osanseviero
| 2021-09-23T16:16:32Z | 0 | 0 | null |
[
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
library_tag: generic
---
## Clone from Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On the Libri1Mix min test set:
```yml
si_sdr: 14.743051006476085
si_sdr_imp: 11.293269700616385
sdr: 15.300522933671061
sdr_imp: 11.797860134458015
sir: Infinity
sir_imp: NaN
sar: 15.300522933671061
sar_imp: 11.797860134458015
stoi: 0.9310514162434267
stoi_imp: 0.13513159270288563
```
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
mpariente/DPRNNTasNet-ks2_WHAM_sepclean
|
mpariente
| 2021-09-23T16:12:22Z | 252 | 9 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"DPRNNTasNet",
"audio-to-audio",
"dataset:wham",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- DPRNNTasNet
- audio-to-audio
datasets:
- wham
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `mpariente/DPRNNTasNet-ks2_WHAM_sepclean`
Imported from [Zenodo](https://zenodo.org/record/3862942)
### Description:
This model was trained by Manuel Pariente
using the wham/DPRNN recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the WHAM! dataset.
### Training config:
```yaml
data:
mode: min
nondefault_nsrc: None
sample_rate: 8000
segment: 2.0
task: sep_clean
train_dir: data/wav8k/min/tr
valid_dir: data/wav8k/min/cv
filterbank:
kernel_size: 2
n_filters: 64
stride: 1
main_args:
exp_dir: exp/train_dprnn_new/
gpus: -1
help: None
masknet:
bidirectional: True
bn_chan: 128
chunk_size: 250
dropout: 0
hid_size: 128
hop_size: 125
in_chan: 64
mask_act: sigmoid
n_repeats: 6
n_src: 2
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1e-05
positional arguments:
training:
batch_size: 3
early_stop: True
epochs: 200
gradient_clipping: 5
half_lr: True
num_workers: 8
```
### Results:
```yaml
si_sdr: 19.316743490695334
si_sdr_imp: 19.317895273889842
sdr: 19.68085347190952
sdr_imp: 19.5298092932871
sir: 30.362213998701232
sir_imp: 30.21116982007881
sar: 20.15553251343315
sar_imp: -129.02091762351188
stoi: 0.97772664309074
stoi_imp: 0.23968091518217424
```
### License notice:
This work "DPRNNTasNet-ks2_WHAM_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"DPRNNTasNet-ks2_WHAM_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente.
|
mpariente/ConvTasNet_Libri3Mix_sepnoisy
|
mpariente
| 2021-09-23T16:12:18Z | 17 | 0 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:LibriMix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- LibriMix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model
Imported from this Zenodo [model page](https://zenodo.org/record/4020529).
## Description:
This model was trained by Takhir Mirzaev using the Librimix/ConvTasNet recipe in Asteroid.
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
## Training config:
```yaml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 4
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
## Results:
```yaml
si_sdr: 6.824750632456865
si_sdr_imp: 11.234803761803752
sdr: 7.715799858488098
sdr_imp: 11.778681386239114
sir: 16.442141130818637
sir_imp: 19.527535070051055
sar: 8.757864265661263
sar_imp: -0.15657258049670303
stoi: 0.7854554136619554
stoi_imp: 0.22267957718163015
```
## License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by
[Vassil Panayotov](https://github.com/vdp),
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente.
|
mpariente/ConvTasNet_Libri1Mix_enhsingle_8k
|
mpariente
| 2021-09-23T16:12:15Z | 19 | 1 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"dataset:LibriMix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- ConvTasNet
datasets:
- LibriMix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model
Imported from this Zenodo [model page](https://zenodo.org/record/3970768).
## Description:
This model was trained by Brij Mohan using the Librimix/ConvTasNet recipe in Asteroid.
It was trained on the `enh_single` task of the Libri1Mix dataset.
## Training config:
```yaml
data:
n_src: 1
sample_rate: 8000
segment: 3
task: enh_single
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
```
## Results:
```yaml
si_sdr: 14.783675142685572
si_sdr_imp: 11.464625198953202
sdr: 15.497505907983102
sdr_imp: 12.07230150154914
sar: 15.497505907983102
sar_imp: 12.07230150154914
stoi: 0.9270030254700518
stoi_imp: 0.1320547197597893
```
## License notice:
This work "ConvTasNet_Libri1Mix_enhsingle_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by
[Vassil Panayotov](https://github.com/vdp),
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
"ConvTasNet_Libri1Mix_enhsingle_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente.
|
JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k
|
JorisCos
| 2021-09-23T15:49:18Z | 28 | 1 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"DPRNNTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- DPRNNTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DPRNNTasNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 1
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 2
n_filters: 64
stride: 1
masknet:
bidirectional: true
bn_chan: 128
chunk_size: 250
dropout: 0
hid_size: 128
hop_size: 125
in_chan: 64
mask_act: sigmoid
n_repeats: 6
n_src: 1
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On the Libri1Mix min test set:
```yml
si_sdr: 14.7228101708889
si_sdr_imp: 11.2730288650292
sdr: 15.35661405197161
sdr_imp: 11.853951252758595
sir: Infinity
sir_imp: NaN
sar: 15.35661405197161
sar_imp: 11.853951252758595
stoi: 0.9300461826351578
stoi_imp: 0.13412635909461715
```
License notice:
This work "DPRNNTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPRNNTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/DCCRNet_Libri1Mix_enhsingle_16k
|
JorisCos
| 2021-09-23T15:49:13Z | 1,316 | 16 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"DCCRNet",
"audio-to-audio",
"speech-enhancement",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- DCCRNet
- audio-to-audio
- speech-enhancement
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DCCRNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_kernel_size: 400
stft_n_filters: 512
stft_stride: 100
masknet:
architecture: DCCRN-CL
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 12
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On the Libri1Mix min test set:
```yml
si_sdr: 13.329767398333798
si_sdr_imp: 9.879986092474098
sdr: 13.87279932997016
sdr_imp: 10.370136530757103
sir: Infinity
sir_imp: NaN
sar: 13.87279932997016
sar_imp: 10.370136530757103
stoi: 0.9140907015623948
stoi_imp: 0.11817087802185405
```
License notice:
This work "DCCRNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCCRNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k
|
JorisCos
| 2021-09-23T15:49:08Z | 43 | 1 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On the Libri3Mix min test set:
```yml
si_sdr: 5.926151147554517
si_sdr_imp: 10.282912158535625
sdr: 6.700975236867358
sdr_imp: 10.882972447337504
sir: 15.364110064569388
sir_imp: 18.574476587171688
sar: 7.918866830474568
sar_imp: -0.9638973409971135
stoi: 0.7713777027310713
stoi_imp: 0.2078696167973911
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
JorisCos/ConvTasNet_Libri3Mix_sepclean_16k
|
JorisCos
| 2021-09-23T15:49:03Z | 54 | 0 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:04Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yaml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On the Libri3Mix min test set:
```yaml
si_sdr: 8.932601610824145
si_sdr_imp: 12.299341066588594
sdr: 9.557260814240447
sdr_imp: 12.76957128385349
sir: 17.387646884037455
sir_imp: 20.599955591768484
sar: 10.686885056960504
sar_imp: -55.8894643263213
stoi: 0.8481258332025354
stoi_imp: 0.25528367853750356
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino.
|