pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at |
---|---|---|---|---|---|---|---|---|
text-generation
|
transformers
|
{}
|
akhooli/gpt2-ar-poetry
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# GPT2-Small-Arabic-Poetry
## Model description
A model fine-tuned on an Arabic poetry dataset, based on gpt2-small-arabic.
## Intended uses & limitations
#### How to use
An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing).
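If you just want to try the model without opening the notebook, here is a minimal sketch (assuming the standard `transformers` text-generation pipeline; the Arabic prompt and sampling settings are only illustrative):
```python
from transformers import pipeline

# Load the fine-tuned poetry model from the Hub
generator = pipeline("text-generation", model="akhooli/gpt2-small-arabic-poetry")

# Generate a short continuation for an Arabic poetry prompt
print(generator("يا ليل الصب", max_length=64, do_sample=True, top_p=0.95)[0]["generated_text"])
```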
#### Limitations and bias
Both the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance.
Use them as demonstrations or proofs of concept, not in production.
## Training data
This model was trained on the [Arabic Poetry dataset](https://www.kaggle.com/ahmedabelal/arabic-poetry), which spans 9 different eras and contains around 40k poems.
It was fine-tuned from the [gpt2-small-arabic](https://huggingface.co/akhooli/gpt2-small-arabic) transformer model.
## Training procedure
Training was done with the [Simple Transformers](https://github.com/ThilinaRajapakse/simpletransformers) library on Kaggle, using a free GPU.
## Eval results
Final perplexity reached was 76.3 (loss: 4.33).
### BibTeX entry and citation info
```bibtex
@misc{khooli2020gpt2smallarabicpoetry,
  author = {Abed Khooli},
  title  = {GPT2-Small-Arabic-Poetry},
  year   = {2020}
}
```
|
{"language": "ar", "tags": ["text-generation"], "datasets": ["Arabic poetry from several eras"]}
|
akhooli/gpt2-small-arabic-poetry
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT2-Small-Arabic
## Model description
A GPT-2 model trained on an Arabic Wikipedia dataset, based on gpt2-small (using Fastai2).
## Intended uses & limitations
#### How to use
An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing).
Both text and poetry (fine-tuned model) generation are included.
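For a quick start without the notebook, a minimal sketch of plain text generation (assuming the standard `transformers` causal-LM API; the prompt and sampling settings are only illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("akhooli/gpt2-small-arabic")
model = AutoModelForCausalLM.from_pretrained("akhooli/gpt2-small-arabic")

prompt = "القدس مدينة"  # illustrative Arabic prompt ("Jerusalem is a city")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_length=60, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```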
#### Limitations and bias
GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipedia quality, no diacritics) and training performance.
Use it as a demonstration or proof of concept, not in production.
## Training data
This pretrained model used the Arabic Wikipedia dump (around 900 MB).
## Training procedure
Training was done with the [Fastai2](https://github.com/fastai/fastai2/) library on Kaggle, using a free GPU.
## Eval results
Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307
### BibTeX entry and citation info
```bibtex
@misc{khooli2020gpt2smallarabic,
  author = {Abed Khooli},
  title  = {GPT2-Small-Arabic},
  year   = {2020}
}
```
|
{"language": "ar", "datasets": ["Arabic Wikipedia"], "metrics": ["none"]}
|
akhooli/gpt2-small-arabic
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
translation
|
transformers
|
### mbart-large-ar-en
This is mbart-large-cc25, fine-tuned on a subset of the OPUS corpus for ar_en.
Usage: see [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: the model has a limited training set and is not fully trained (do not use in production).
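For reference, a minimal translation sketch (assuming the standard mBART-cc25 interface with the `ar_AR`/`en_XX` language codes; the input sentence and generation settings are only illustrative):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "akhooli/mbart-large-cc25-ar-en"
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="ar_AR", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("صباح الخير", return_tensors="pt")  # "Good morning"
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],  # force English as the target language
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```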
Other models by me: [Abed Khooli](https://huggingface.co/akhooli)
|
{"language": ["ar", "en"], "license": "mit", "tags": ["translation"]}
|
akhooli/mbart-large-cc25-ar-en
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"ar",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
translation
|
transformers
|
### mbart-large-en-ar
This is mbart-large-cc25, fine-tuned on a subset of the UN corpus for en_ar.
Usage: see [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: the model has a limited training set and is not fully trained (do not use in production).
|
{"language": ["en", "ar"], "license": "mit", "tags": ["translation"]}
|
akhooli/mbart-large-cc25-en-ar
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"en",
"ar",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
## personachat-arabic (conversational AI)
This is personachat-arabic, fine-tuned from [akhooli/gpt2-small-arabic](https://huggingface.co/akhooli/gpt2-small-arabic) (itself a limited text-generation model)
on a subset of the persona-chat validation dataset that was machine-translated from English to Arabic.
Usage: see the last section of this [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: the training set is limited and machine-translated (do not use in production).
|
{"language": ["ar"], "license": "mit", "tags": ["conversational"]}
|
akhooli/personachat-arabic
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"conversational",
"ar",
"license:mit",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
### xlm-r-large-arabic-sent
Multilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews, obtained by fine-tuning XLM-RoBERTa-Large.
It also supports zero-shot classification of other languages (including mixed-language text, e.g. Arabic and English). The mixed category is not accurate and may be
confused with the other classes (it was derived from 3-out-of-5 ratings in the reviews).
Usage: see last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
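As a quick alternative to the notebook, a minimal sketch (assuming the standard `transformers` text-classification pipeline and the default `LABEL_0`/`LABEL_1`/`LABEL_2` label names; the review text is only an illustration):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="akhooli/xlm-r-large-arabic-sent")

# Label_0: mixed, Label_1: negative, Label_2: positive (per the description above)
print(classifier("الخدمة كانت ممتازة والتوصيل سريع"))  # "The service was excellent and delivery was fast"
```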
|
{"language": ["ar", "en", "multilingual"], "license": "mit"}
|
akhooli/xlm-r-large-arabic-sent
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ar",
"en",
"multilingual",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
### xlm-r-large-arabic-toxic (toxic/hate speech classifier)
Toxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments, obtained by fine-tuning XLM-RoBERTa-Large.
It also supports zero-shot classification of other languages (including mixed-language text, e.g. Arabic and English).
Usage and further info: see last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
|
{"language": ["ar", "en"], "license": "mit"}
|
akhooli/xlm-r-large-arabic-toxic
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ar",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 529614927
- CO2 Emissions (in grams): 5.999771405025692
## Validation Metrics
- Loss: 0.7582379579544067
- Accuracy: 0.7636103151862464
- Macro F1: 0.770630619486531
- Micro F1: 0.7636103151862464
- Weighted F1: 0.765233270165301
- Macro Precision: 0.7746285216467107
- Micro Precision: 0.7636103151862464
- Weighted Precision: 0.7683270753840836
- Macro Recall: 0.7680576576961138
- Micro Recall: 0.7636103151862464
- Weighted Recall: 0.7636103151862464
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/akilesh96/autonlp-mrcooper_text_classification-529614927
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["akilesh96/autonlp-data-mrcooper_text_classification"], "widget": [{"text": "Not Many People Know About The City 1200 Feet Below Detroit"}, {"text": "Bob accepts the challenge, and the next week they're standing in Saint Peters square. 'This isnt gonna work, he's never going to see me here when theres this much people. You stay here, I'll go talk to him and you'll see me on the balcony, the guards know me too.' Half an hour later, Bob and the pope appear side by side on the balcony. Bobs boss gets a heart attack, and Bob goes to visit him in the hospital."}, {"text": "I\u2019m sorry if you made it this far, but I\u2019m just genuinely idk, I feel like I shouldn\u2019t give up, it\u2019s just getting harder to come back from stuff like this."}], "co2_eq_emissions": 5.999771405025692}
|
akilesh96/autonlp-mrcooper_text_classification-529614927
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:akilesh96/autonlp-data-mrcooper_text_classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
akirasho/distilbert-base-uncased-finetuned-squad
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
akivo4ka/ruGPT3medium_psy
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
{}
|
akoksal/MTMB
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
akoshel/made-ai-dungeon-rugpt3-small
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
akoshel/made-ai-dungeon
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
hello
|
{}
|
akozlo/con_bal60k
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conserv_fulltext_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
unbalanced_texts gpt2
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "conserv_fulltext_model", "results": []}]}
|
akozlo/conserv_fulltext_1_18_22
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
{}
|
akr/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
akrathi007/akk213text
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
akrathi007/akk2text
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
akrathi007/k2t-testx
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-bert
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-bert
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-gpt2
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-gpt2
| null |
[
"transformers",
"pytorch",
"tf",
"gpt2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-mbart
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-mbart
| null |
[
"transformers",
"pytorch",
"tf",
"mbart",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-mpnet
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-mpnet
| null |
[
"transformers",
"pytorch",
"tf",
"mpnet",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-t5
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-t5
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-xlnet
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-xlnet
| null |
[
"transformers",
"pytorch",
"tf",
"xlnet",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{}
|
akshara23/Pegasus_for_Here
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
akshara23/Terra-Classification
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0475
- Matthews Correlation: 0.6290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 16 | 1.3863 | 0.0 |
| No log | 2.0 | 32 | 1.2695 | 0.4503 |
| No log | 3.0 | 48 | 1.1563 | 0.6110 |
| No log | 4.0 | 64 | 1.0757 | 0.6290 |
| No log | 5.0 | 80 | 1.0475 | 0.6290 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.6290322580645161}}]}]}
|
akshara23/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
akshara23/xyz
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
akshat2301/distilbert-base-cased-finetuned-ner
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0812
- Precision: 0.8975
- Recall: 0.9080
- F1: 0.9027
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.1326 | 0.7990 | 0.8043 | 0.8017 | 0.9338 |
| No log | 2.0 | 332 | 0.0925 | 0.8770 | 0.8946 | 0.8858 | 0.9618 |
| No log | 3.0 | 498 | 0.0812 | 0.8975 | 0.9080 | 0.9027 | 0.9703 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cloud-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud1-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0074
- Precision: 0.9714
- Recall: 0.9855
- F1: 0.9784
- Accuracy: 0.9972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.0160 | 0.9653 | 0.9420 | 0.9535 | 0.9945 |
| No log | 2.0 | 332 | 0.0089 | 0.9623 | 0.9855 | 0.9737 | 0.9965 |
| No log | 3.0 | 498 | 0.0074 | 0.9714 | 0.9855 | 0.9784 | 0.9972 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cloud1-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud1-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud2-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8866
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 162 | 0.7804 | 0.0 | 0.0 | 0.0 | 0.8447 |
| No log | 2.0 | 324 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.8465 |
| No log | 3.0 | 486 | 0.8866 | 0.0 | 0.0 | 0.0 | 0.8453 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cloud2-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud2-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-hypertuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5683
- Precision: 0.3398
- Recall: 0.6481
- F1: 0.4459
- Accuracy: 0.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 84 | 0.3566 | 0.2913 | 0.5556 | 0.3822 | 0.8585 |
| No log | 2.0 | 168 | 0.4698 | 0.3366 | 0.6296 | 0.4387 | 0.8730 |
| No log | 3.0 | 252 | 0.5683 | 0.3398 | 0.6481 | 0.4459 | 0.8762 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-hypertuned-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-hypertuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9988
- Precision: 0.3
- Recall: 0.6
- F1: 0.4
- Accuracy: 0.7870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 84 | 0.8399 | 0.2105 | 0.4 | 0.2759 | 0.75 |
| No log | 2.0 | 168 | 0.9664 | 0.3 | 0.6 | 0.4 | 0.7870 |
| No log | 3.0 | 252 | 0.9988 | 0.3 | 0.6 | 0.4 | 0.7870 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
akshaychaudhary/distilbert-base-uncased-finetunedHyperTuning-ner
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
akshayvr/DialoGPT-rickmorty
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
akshayvr/DialoGPT-rickndmorty
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
akshayvr/DialoGPT-rickymorty
| null |
[
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
akshayvr/DialoGPT-small-Rickandmorty
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
akuma/DialoGPT-small-Harry
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9250
- Recall: 0.9321
- F1: 0.9285
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2399 | 1.0 | 878 | 0.0702 | 0.9118 | 0.9208 | 0.9163 | 0.9805 |
| 0.0503 | 2.0 | 1756 | 0.0614 | 0.9176 | 0.9311 | 0.9243 | 0.9824 |
| 0.0304 | 3.0 | 2634 | 0.0611 | 0.9250 | 0.9321 | 0.9285 | 0.9834 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9833669595056158}}]}]}
|
al00014/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
alaabashayreh/distilbert-base-uncased-finetuned-cola
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
alaansn/Jarvis
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
# BART Pretrained
This is a sample dialogue-summarization model shared by team 알라꿍달라꿍 from the dialogue-summarization track of the 2021 Hunminjeongeum Korean Speech and Natural Language AI Competition.
It was trained with the BART pre-training stage of the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository.
The [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) data was used for training.
|
{"language": ["ko"], "widget": [{"text": "[BOS]\ubb50 \ud574?[SEP][MASK]\ud558\ub2e4\uac00 \uc774\uc81c [MASK]\ub824\uace0[EOS]"}], "inference": {"parameters": {"max_length": 64}}}
|
alaggung/bart-pretrained
| null |
[
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
summarization
|
transformers
|
# BART R3F
This is a sample dialogue-summarization model shared by team 알라꿍달라꿍 from the dialogue-summarization track of the 2021 Hunminjeongeum Korean Speech and Natural Language AI Competition.
It was trained on the dialogue-summarization task by applying R3F from the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository to the [bart-pretrained](https://huggingface.co/alaggung/bart-pretrained) model.
The [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) data was used for training.
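A minimal inference sketch (assuming the hosted tokenizer handles the `[BOS]`/`[SEP]`/`[EOS]` markers shown in the widget example; the dialogue and beam settings are only illustrative):
```python
from transformers import AutoTokenizer, BartForConditionalGeneration

model_name = "alaggung/bart-r3f"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Dialogue turns are joined with [SEP] and wrapped in [BOS]/[EOS], mirroring the widget example
turns = ["밥 ㄱ?", "고고고고 뭐 먹을까?", "어제 김치찌개 먹어서 한식말고 딴 거",
         "그럼 돈까스 어때?", "오 좋다 1시 학관 앞으로 오셈", "ㅇㅋ"]
text = "[BOS]" + "[SEP]".join(turns) + "[EOS]"

inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```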
|
{"language": ["ko"], "tags": ["summarization"], "widget": [{"text": "[BOS]\ubc25 \u3131?[SEP]\uace0\uace0\uace0\uace0 \ubb50 \uba39\uc744\uae4c?[SEP]\uc5b4\uc81c \uae40\uce58\ucc0c\uac1c \uba39\uc5b4\uc11c \ud55c\uc2dd\ub9d0\uace0 \ub534 \uac70[SEP]\uadf8\ub7fc \ub3c8\uae4c\uc2a4 \uc5b4\ub54c?[SEP]\uc624 \uc88b\ub2e4 1\uc2dc \ud559\uad00 \uc55e\uc73c\ub85c \uc624\uc148[SEP]\u3147\u314b[EOS]"}], "inference": {"parameters": {"max_length": 64, "top_k": 5}}}
|
alaggung/bart-r3f
| null |
[
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"summarization",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
summarization
|
transformers
|
# BART RL
This is a sample dialogue-summarization model shared by team 알라꿍달라꿍 from the dialogue-summarization track of the 2021 Hunminjeongeum Korean Speech and Natural Language AI Competition.
It was trained on the dialogue-summarization task by applying the RL technique from the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository to the [bart-r3f](https://huggingface.co/alaggung/bart-r3f) model.
The [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) data was used for training.
|
{"language": ["ko"], "tags": ["summarization"], "widget": [{"text": "[BOS]\ubc25 \u3131?[SEP]\uace0\uace0\uace0\uace0 \ubb50 \uba39\uc744\uae4c?[SEP]\uc5b4\uc81c \uae40\uce58\ucc0c\uac1c \uba39\uc5b4\uc11c \ud55c\uc2dd\ub9d0\uace0 \ub534 \uac70[SEP]\uadf8\ub7fc \ub3c8\uae4c\uc2a4 \uc5b4\ub54c?[SEP]\uc624 \uc88b\ub2e4 1\uc2dc \ud559\uad00 \uc55e\uc73c\ub85c \uc624\uc148[SEP]\u3147\u314b[EOS]"}], "inference": {"parameters": {"max_length": 64, "top_k": 5}}}
|
alaggung/bart-rl
| null |
[
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"summarization",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# mt5-large-finetuned-mnli-xtreme-xnli
## Model Description
This model takes a pretrained large [multilingual-t5](https://github.com/google-research/multilingual-t5) (also available from [models](https://huggingface.co/google/mt5-large)) and fine-tunes it on English MNLI and the [xtreme_xnli](https://www.tensorflow.org/datasets/catalog/xtreme_xnli) training set. It is intended to be used for zero-shot text classification, inspired by [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli).
## Intended Use
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on English MNLI and the [xtreme_xnli](https://www.tensorflow.org/datasets/catalog/xtreme_xnli) training set, a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- Arabic
- Bulgarian
- Chinese
- English
- French
- German
- Greek
- Hindi
- Russian
- Spanish
- Swahili
- Thai
- Turkish
- Urdu
- Vietnamese
As per recommendations in [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli), for English-only classification, you might want to check out:
- [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli)
- [a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
### Zero-shot example:
The model retains its text-to-text characteristic after fine-tuning. This means that our expected outputs will be text. During fine-tuning, the model learns to respond to the NLI task with a series of single token responses that map to entailment, neutral, or contradiction. The NLI task is indicated with a fixed prefix, "xnli:".
Below is an example, using PyTorch, of the model's use in a similar fashion to the `zero-shot-classification` pipeline. We use the logits from the LM output at the first token to represent confidence.
```python
from torch.nn.functional import softmax
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_name = "alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
ENTAILS_LABEL = "▁0"
NEUTRAL_LABEL = "▁1"
CONTRADICTS_LABEL = "▁2"
label_inds = tokenizer.convert_tokens_to_ids(
[ENTAILS_LABEL, NEUTRAL_LABEL, CONTRADICTS_LABEL])
def process_nli(premise: str, hypothesis: str):
    """ process to required xnli format with task prefix """
    return "".join(['xnli: premise: ', premise, ' hypothesis: ', hypothesis])
# construct sequence of premise, hypothesis pairs
pairs = [(sequence_to_classify, hypothesis_template.format(label)) for label in
candidate_labels]
# format for mt5 xnli task
seqs = [process_nli(premise=premise, hypothesis=hypothesis) for
premise, hypothesis in pairs]
print(seqs)
# ['xnli: premise: ¿A quién vas a votar en 2020? hypothesis: Este ejemplo es Europa.',
# 'xnli: premise: ¿A quién vas a votar en 2020? hypothesis: Este ejemplo es salud pública.',
# 'xnli: premise: ¿A quién vas a votar en 2020? hypothesis: Este ejemplo es política.']
inputs = tokenizer.batch_encode_plus(seqs, return_tensors="pt", padding=True)
out = model.generate(**inputs, output_scores=True, return_dict_in_generate=True,
num_beams=1)
# sanity check that our sequences are expected length (1 + start token + end token = 3)
for i, seq in enumerate(out.sequences):
    assert len(seq) == 3, f"generated sequence {i} not of expected length, 3." \
                          f" Actual length: {len(seq)}"
# get the scores for our only token of interest
# we'll now treat these like the output logits of a `*ForSequenceClassification` model
scores = out.scores[0]
# scores has a size of the model's vocab.
# However, for this task we have a fixed set of labels
# sanity check that these labels are always the top 3 scoring
for i, sequence_scores in enumerate(scores):
    top_scores = sequence_scores.argsort()[-3:]
    assert set(top_scores.tolist()) == set(label_inds), \
        f"top scoring tokens are not expected for this task." \
        f" Expected: {label_inds}. Got: {top_scores.tolist()}."
# cut down scores to our task labels
scores = scores[:, label_inds]
print(scores)
# tensor([[-2.5697, 1.0618, 0.2088],
# [-5.4492, -2.1805, -0.1473],
# [ 2.2973, 3.7595, -0.1769]])
# new indices of entailment and contradiction in scores
entailment_ind = 0
contradiction_ind = 2
# we can show, per item, the entailment vs contradiction probas
entail_vs_contra_scores = scores[:, [entailment_ind, contradiction_ind]]
entail_vs_contra_probas = softmax(entail_vs_contra_scores, dim=1)
print(entail_vs_contra_probas)
# tensor([[0.0585, 0.9415],
# [0.0050, 0.9950],
# [0.9223, 0.0777]])
# or we can show probas similar to `ZeroShotClassificationPipeline`
# this gives a zero-shot classification style output across labels
entail_scores = scores[:, entailment_ind]
entail_probas = softmax(entail_scores, dim=0)
print(entail_probas)
# tensor([7.6341e-03, 4.2873e-04, 9.9194e-01])
print(dict(zip(candidate_labels, entail_probas.tolist())))
# {'Europa': 0.007634134963154793,
# 'salud pública': 0.0004287279152777046,
# 'política': 0.9919371604919434}
```
Unfortunately, the `generate` function for the TF equivalent model doesn't exactly mirror the PyTorch version so the above code won't directly transfer.
The model is currently not compatible with the existing `zero-shot-classification` pipeline.
## Training
This model was pre-trained on a set of 101 languages in the mC4 corpus, as described in [the mt5 paper](https://arxiv.org/abs/2010.11934). It was then fine-tuned on the [mt5_xnli_translate_train](https://github.com/google-research/multilingual-t5/blob/78d102c830d76bd68f27596a97617e2db2bfc887/multilingual_t5/tasks.py#L190) task for 8k steps in a similar manner to that described in the [official repo](https://github.com/google-research/multilingual-t5#fine-tuning), with guidance from [Stephen Mayhew's notebook](https://github.com/mayhewsw/multilingual-t5/blob/master/notebooks/mt5-xnli.ipynb). The resulting model was then converted to the Hugging Face format.
## Eval results
Accuracy over XNLI test set:
| ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh | average |
|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 81.0 | 85.0 | 84.3 | 84.3 | 88.8 | 85.3 | 83.9 | 79.9 | 82.6 | 78.0 | 81.0 | 81.6 | 76.4 | 81.7 | 82.3 | 82.4 |
|
{"language": ["multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur"], "license": "apache-2.0", "tags": ["pytorch"], "datasets": ["multi_nli", "xnli"], "metrics": ["xnli"]}
|
alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"mt5",
"text2text-generation",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2010.11934",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
alanakbik/test-serialization
| null |
[
"pytorch",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
alangganggang/transformer_exercise_01
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Rick Sanchez DialoGPT Model
|
{"tags": ["conversational"]}
|
alankar/DialoGPT-small-rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
albererre/comments-playground
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
albertbn/gpt2-medium-finetuned-ads-fp16-blocksz512
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 1311135
## Validation Metrics
- Loss: 0.35616958141326904
- Accuracy: 0.8979447200566973
- Macro F1: 0.8545383956197669
- Micro F1: 0.8979447200566975
- Weighted F1: 0.8983951947775538
- Macro Precision: 0.8615833774439791
- Micro Precision: 0.8979447200566973
- Weighted Precision: 0.9013559365881655
- Macro Recall: 0.8516503001777104
- Micro Recall: 0.8979447200566973
- Weighted Recall: 0.8979447200566973
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "bn", "tags": "autonlp", "datasets": ["albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135
| null |
[
"transformers",
"pytorch",
"albert",
"text-classification",
"autonlp",
"bn",
"dataset:albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 1301123
## Validation Metrics
- Loss: 0.14097803831100464
- Accuracy: 0.9740097463451206
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "bn", "tags": "autonlp", "datasets": ["albertvillanova/autonlp-data-wikiann-entity_extraction-1e67664"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123
| null |
[
"transformers",
"pytorch",
"safetensors",
"albert",
"token-classification",
"autonlp",
"bn",
"dataset:albertvillanova/autonlp-data-wikiann-entity_extraction-1e67664",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
albertvillanova/test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
aldoaj/MorningGlory
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio` or `streamlit`
`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list.
|
{"title": "clip", "emoji": "\ud83d\udc41", "colorFrom": "indigo", "colorTo": "blue", "sdk": "streamlit", "app_file": "app.py", "pinned": true}
|
allen0s/clip
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 441411446
- CO2 Emissions (in grams): 0.4362732160754736
## Validation Metrics
- Loss: 0.7598486542701721
- Accuracy: 0.8222222222222222
- Macro F1: 0.2912091747693842
- Micro F1: 0.8222222222222222
- Weighted F1: 0.7707160863181806
- Macro Precision: 0.29631463146314635
- Micro Precision: 0.8222222222222222
- Weighted Precision: 0.7341339689524508
- Macro Recall: 0.30174603174603176
- Micro Recall: 0.8222222222222222
- Weighted Recall: 0.8222222222222222
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alecmullen/autonlp-group-classification-441411446
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["alecmullen/autonlp-data-group-classification"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 0.4362732160754736}
|
alecmullen/autonlp-group-classification-441411446
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:alecmullen/autonlp-data-group-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
aleksi/bert-base-finnish-cased-v1-finetuned-cola
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
alemihai1/distilbert-fake-news
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
alenusch/mt5base-ruparaphraser
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
alenusch/mt5large-ruparaphraser
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
alenusch/mt5small-ruparaphraser
| null |
[
"transformers",
"pytorch",
"jax",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
## Classifier to check whether two sequences are paraphrases
Trained on top of ruBert by DeepPavlov.
Use it this way:
```python
import torch
import torch.nn as nn
import os
import copy
import random
import numpy as np
import pandas as pd
from torch.utils.data import DataLoader, Dataset
from torch.cuda.amp import autocast, GradScaler
from tqdm import tqdm
from transformers import AutoTokenizer, AutoModel, AdamW, get_linear_schedule_with_warmup
from transformers.file_utils import (
cached_path,
hf_bucket_url,
is_remote_url,
)
archive_file = hf_bucket_url(
"alenusch/par_cls_bert",
filename="rubert-base-cased_lr_2e-05_val_loss_0.66143_ep_4.pt",
revision=None,
mirror=None,
)
resolved_archive_file = cached_path(
archive_file,
cache_dir=None,
force_download=False,
proxies=None,
resume_download=False,
local_files_only=False,
)
os.environ["TOKENIZERS_PARALLELISM"] = "false"
class SentencePairClassifier(nn.Module):
    def __init__(self, bert_model):
        super(SentencePairClassifier, self).__init__()
        self.bert_layer = AutoModel.from_pretrained(bert_model)
        self.cls_layer = nn.Linear(768, 1)
        self.dropout = nn.Dropout(p=0.1)

    @autocast()
    def forward(self, input_ids, attn_masks, token_type_ids):
        cont_reps, pooler_output = self.bert_layer(input_ids, attn_masks, token_type_ids, return_dict=False)
        logits = self.cls_layer(self.dropout(pooler_output))
        return logits


class CustomDataset(Dataset):
    def __init__(self, data, maxlen, bert_model):
        self.data = data
        self.tokenizer = AutoTokenizer.from_pretrained(bert_model)
        self.maxlen = maxlen
        self.targets = False

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        sent1 = str(self.data[index][0])
        sent2 = str(self.data[index][1])
        encoded_pair = self.tokenizer(sent1, sent2,
                                      padding='max_length',    # Pad to max_length
                                      truncation=True,         # Truncate to max_length
                                      max_length=self.maxlen,
                                      return_tensors='pt')     # Return torch.Tensor objects
        token_ids = encoded_pair['input_ids'].squeeze(0)            # tensor of token ids
        attn_masks = encoded_pair['attention_mask'].squeeze(0)      # binary tensor, "0" for padded values and "1" for the other values
        token_type_ids = encoded_pair['token_type_ids'].squeeze(0)  # binary tensor, "0" for 1st-sentence tokens and "1" for 2nd-sentence tokens
        return token_ids, attn_masks, token_type_ids


def get_probs_from_logits(logits):
    probs = torch.sigmoid(logits.unsqueeze(-1))
    return probs.detach().cpu().numpy()


def test_prediction(net, device, dataloader, with_labels=False):
    net.eval()
    probs_all = []
    with torch.no_grad():
        for seq, attn_masks, token_type_ids in tqdm(dataloader):
            seq, attn_masks, token_type_ids = seq.to(device), attn_masks.to(device), token_type_ids.to(device)
            logits = net(seq, attn_masks, token_type_ids)
            probs = get_probs_from_logits(logits.squeeze(-1)).squeeze(-1)
            probs_all += probs.tolist()
    return probs_all


device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cls_model = SentencePairClassifier(bert_model="alenusch/par_cls_bert")
if torch.cuda.device_count() > 1:
    cls_model = nn.DataParallel(cls_model)
cls_model.load_state_dict(torch.load(resolved_archive_file))
cls_model.to(device)

variants = [["sentence1", "sentence2"]]
test_set = CustomDataset(variants, maxlen=512, bert_model="alenusch/par_cls_bert")
test_loader = DataLoader(test_set, batch_size=16, num_workers=5)
res = test_prediction(net=cls_model, device=device, dataloader=test_loader, with_labels=False)
```
|
{}
|
alenusch/par_cls_bert
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
{}
|
alenusch/rugpt2-paraphraser
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
alenusch/rugpt3-paraphraser
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{"license": "afl-3.0"}
|
alex0224/Transformer1
| null |
[
"license:afl-3.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
alex6095/SanctiMoly-Bart
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
alex6095/SanctiMolyOH_Cpu
|
{}
|
alex6095/SanctiMolyOH_Cpu
| null |
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
{}
|
alex6095/SanctiMolyTopic
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
alexLopatin/alex-ai
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
alexaapo/greek_legal_bert_v1
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
alexaapo/greek_legal_bert_v2
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
alexander-karpov/bert-eatable-classification-en-ru
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
# DanBERT
## Model description
DanBERT is a Danish pre-trained model based on BERT-Base. The model has been trained on more than 2 million sentences and 40 million Danish words. The training was conducted as part of a thesis.
The model can be found at:
* [danbert-da](https://huggingface.co/alexanderfalk/danbert-small-cased)
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("alexanderfalk/danbert-small-cased")
model = AutoModel.from_pretrained("alexanderfalk/danbert-small-cased")
```
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020},
title={Anonymization of Danish, Real-Time Data, and Personalized Modelling},
author={Alexander Falk},
}
```
|
{"language": ["da", "en"], "license": "apache-2.0", "tags": ["named entity recognition", "token criticality"], "datasets": ["custom danish dataset"], "metrics": ["array of metric identifiers"], "inference": false}
|
alexanderfalk/danbert-small-cased
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"named entity recognition",
"token criticality",
"da",
"en",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
alexandrudaia1305/signal_of_change_next_10_years_Romania
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
# ArcheoBERTje-NER
A Dutch BERT model for Named Entity Recognition in the Archaeology domain
This is the [ArcheoBERTje](https://huggingface.co/alexbrandsen/ArcheoBERTje) model fine-tuned for NER, targeting the following entities (a usage sketch follows the list):
- Time periods
- Places
- Artefacts
- Contexts
- Materials
- Species
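The card itself ships no usage snippet; a hedged sketch with the `transformers` token-classification pipeline could look like the following. The Dutch sentence is invented, and the exact entity label names returned depend on the model's `id2label` configuration rather than on anything stated in this card.

```python
from transformers import pipeline

# Load the fine-tuned NER model; aggregation merges word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="alexbrandsen/ArcheoBERTje-NER",
    aggregation_strategy="simple",
)

# Illustrative Dutch sentence from the archaeology domain
print(ner("Tijdens de opgraving werd middeleeuws aardewerk naast een waterput gevonden."))
```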
|
{}
|
alexbrandsen/ArcheoBERTje-NER
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# ArcheoBERTje
A Dutch BERT model for the Archaeology domain
This model is based on the Dutch BERTje model by wietsedv (https://github.com/wietsedv/bertje).
We further fine-tuned BERTje on a corpus of roughly 60k Dutch excavation reports (~650 million tokens) from the DANS data archive (https://easy.dans.knaw.nl/ui/home).
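As a hedged illustration (not taken from the original card), the checkpoint can be queried through the fill-mask pipeline; the example sentence below is invented:

```python
from transformers import pipeline

# BERT-style checkpoints use the [MASK] token
fill = pipeline("fill-mask", model="alexbrandsen/ArcheoBERTje")
for pred in fill("Tijdens de opgraving werd een [MASK] gevonden."):
    print(pred["token_str"], round(pred["score"], 3))
```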
|
{}
|
alexbrandsen/ArcheoBERTje
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
{}
|
alexcg1/models
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
alexcg1/trekbot
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xlsr-polish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model = Wav2Vec2ForCTC.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Polish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pl", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model = Wav2Vec2ForCTC.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.846030 % WER
## Training
The Common Voice `train` and `validation` splits were used for training.
|
{"language": "pl", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2vec2 Large 53 Polish by Alex Leu", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pl", "type": "common_voice", "args": "pl"}, "metrics": [{"type": "wer", "value": 24.84603, "name": "Test WER"}]}]}]}
|
alexcleu/wav2vec2-large-xlsr-polish
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"pl",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
t5_boolq
|
{}
|
alexcruz0202/t5_boolq
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
alexdor/wizard-express
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
alexrfelicio/mbart-large-cc25-finetuned-en-to-cs
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
alexrfelicio/t5-small-finetuned-en-to-cs
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 136 | 1.7446 | 9.0564 | 17.8356 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
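The generated card contains no inference example; a minimal sketch (assuming the checkpoint keeps the standard T5 `translate English to German:` prompt of its t5-small base; the input sentence is illustrative only) might look like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "alexrfelicio/t5-small-finetuned-en-to-de"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 expects a task prefix before the source sentence
text = "translate English to German: The house is wonderful."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```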
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
alexrfelicio/t5-small-finetuned-hiper1-16-en-to-de
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
alexrfelicio/t5-small-finetuned-length300-en-to-de
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned128-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned128-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned128-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned16-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 2.1906 | 23.3821 | 12.956 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned16-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned16-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
alexrfelicio/t5-small-finetuned2-en-to-de
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned300-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.1454 | 14.2319 | 17.8329 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned300-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned300-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned32-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.4226 | 21.9554 | 17.8089 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned32-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned32-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned8-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 136 | 3.6717 | 3.9127 | 4.0207 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned8-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned8-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
alexrink/distilbert-base-uncased-finetuned-emotion
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# alexrink/t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.6399
- Validation Loss: 6.0028
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.2, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 11.4991 | 6.9902 | 0 |
| 6.5958 | 6.2502 | 1 |
| 6.1443 | 6.1638 | 2 |
| 5.9379 | 6.0765 | 3 |
| 5.7739 | 5.9393 | 4 |
| 5.7033 | 6.0061 | 5 |
| 5.7070 | 5.9305 | 6 |
| 5.7000 | 5.9698 | 7 |
| 5.6888 | 5.9223 | 8 |
| 5.6657 | 5.9773 | 9 |
| 5.6827 | 5.9734 | 10 |
| 5.6380 | 5.9428 | 11 |
| 5.6532 | 5.9799 | 12 |
| 5.6617 | 5.9974 | 13 |
| 5.6402 | 5.9563 | 14 |
| 5.6710 | 5.9926 | 15 |
| 5.6999 | 5.9764 | 16 |
| 5.6573 | 5.9557 | 17 |
| 5.6297 | 5.9678 | 18 |
| 5.6399 | 6.0028 | 19 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
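The generated card has no inference example; a hedged TensorFlow sketch (assuming the checkpoint keeps the standard T5 `summarize:` prefix; the input text is illustrative only) could look like this:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "alexrink/t5-small-finetuned-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 expects a task prefix before the article text
article = "summarize: The local council approved a new plan to renovate the harbour area."
inputs = tokenizer(article, return_tensors="tf", truncation=True)
summary_ids = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"], max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```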
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "alexrink/t5-small-finetuned-xsum", "results": []}]}
|
alexrink/t5-small-finetuned-xsum
| null |
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
Paper: https://arxiv.org/abs/2204.03951
Code: https://github.com/alexyalunin/RuBioRoBERTa
|
{}
|
alexyalunin/RuBioBERT
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:2204.03951",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
### Contact
[email protected]
https://t.me/pavel_blinoff
### Paper
https://arxiv.org/abs/2204.03951
### Code
https://github.com/alexyalunin/RuBioRoBERTa
### Citation
```
@misc{alex2022rubioroberta,
title={RuBioRoBERTa: a pre-trained biomedical language model for Russian language biomedical text mining},
author={Alexander Yalunin and Alexander Nesterov and Dmitriy Umerenkov},
year={2022},
eprint={2204.03951},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
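### Usage example
A hedged fill-mask sketch, not part of the original card; the masked sentence is taken from the widget examples in this card's metadata:

```python
from transformers import pipeline

# RoBERTa checkpoints use the <mask> token
fill = pipeline("fill-mask", model="alexyalunin/RuBioRoBERTa")
for pred in fill("Жалобы на боль внизу <mask> после приёма пищи."):
    print(pred["token_str"], round(pred["score"], 3))
```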
|
{"language": ["ru"], "multilinguality": ["monolingual"], "widget": [{"text": "\u0416\u0430\u043b\u043e\u0431\u044b \u043d\u0430 \u0431\u043e\u043b\u044c \u0432\u043d\u0438\u0437\u0443 <mask> \u043f\u043e\u0441\u043b\u0435 \u043f\u0440\u0438\u0451\u043c\u0430 \u043f\u0438\u0449\u0438.", "example_title": "pain_example"}, {"text": "\u041f\u0430\u0446\u0438\u0435\u043d\u0442\u043a\u0430 \u043d\u0430\u0431\u043b\u044e\u0434\u0430\u043b\u0430\u0441\u044c \u0443 <mask> \u043f\u043e \u043f\u043e\u0432\u043e\u0434\u0443 \u0433\u0440\u0438\u0431\u043a\u043e\u0432\u043e\u0433\u043e \u043f\u043e\u0440\u0430\u0436\u0435\u043d\u0438\u044f \u043a\u043e\u0436\u0438.", "example_title": "spec_example"}, {"text": "\u041f\u043e\u044f\u0432\u0438\u043b\u0441\u044f \u0437\u0443\u0434 \u0442\u0435\u043b\u0430, <mask> \u0432\u0435\u0441\u0430, \u043f\u043e\u0442\u043b\u0438\u0432\u043e\u0441\u0442\u044c, \u043f\u0440\u043e\u0432\u043e\u0434\u0438\u043b \u043a\u043e\u043d\u0442\u0440\u043e\u043b\u044c \u0441\u0430\u0445\u0430\u0440\u0430 \u043a\u0440\u043e\u0432\u0438.", "example_title": "weight_example"}]}
|
alexyalunin/RuBioRoBERTa
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ru",
"arxiv:2204.03951",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|