Dataset columns: pipeline_tag (string, 48 classes), library_name (string, 205 classes), text (string, 0–18.3M chars), metadata (string, 2–1.07B chars), id (string, 5–122 chars), last_modified (null), tags (list, 1–1.84k items), sha (null), created_at (string, 25 chars).
text-generation
transformers
# Yui DialoGPT Model
{"tags": ["conversational"]}
Lurka/DialoGPT-medium-kon
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Tyrion DialoGPT Model
{"tags": ["conversational"]}
Luxiere/DialoGPT-medium-tyrion
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT Reranker for MS-MARCO Document Ranking ## Model description A text reranker trained for the BM25 retriever on the MS MARCO document dataset. ## Intended uses & limitations It can be used with other retrievers, but it works best with the aligned BM25 retriever. We used the Anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87), following [these instructions](https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-doc.md). #### How to use See our [project repo page](https://github.com/luyug/Reranker). ## Eval results MRR@10: 0.423 on Dev. ### BibTeX entry and citation info ```bibtex @inproceedings{gao2021lce, title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline}, author={Luyu Gao and Zhuyun Dai and Jamie Callan}, year={2021}, booktitle={The 43rd European Conference On Information Retrieval (ECIR)}, } ```
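The card defers usage details to the project repo. As an illustrative, hedged sketch only (not from the card), scoring a query–document pair with the standard Hugging Face `transformers` API might look like the following; the query/document strings are made up, and the exact input pairing and score interpretation should be verified against the Reranker repo. The same pattern would apply to the HDCT-aligned checkpoint `Luyu/bert-base-mdoc-hdct` below.

```python
# Hedged sketch: scoring a (query, document) pair with the BM25-aligned reranker.
# The pairing convention and score interpretation are assumptions to verify
# against https://github.com/luyug/Reranker.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Luyu/bert-base-mdoc-bm25"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "what is the capital of france"
document = "Paris is the capital and most populous city of France."

# Encode the query/document pair; truncation keeps long documents within BERT's limit.
inputs = tokenizer(query, document, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Use the last logit as a relevance score (higher = more relevant, assumed).
print(logits[0, -1].item())
```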
{"language": ["en"], "license": "apache-2.0", "tags": ["text reranking"], "datasets": ["MS MARCO document ranking"]}
Luyu/bert-base-mdoc-bm25
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "text reranking", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT Reranker for MS-MARCO Document Ranking ## Model description A text reranker trained for the HDCT retriever on the MS MARCO document dataset. ## Intended uses & limitations It can be used with other retrievers such as BM25, but it works best with the aligned HDCT retriever. #### How to use See our [project repo page](https://github.com/luyug/Reranker). ## Eval results MRR@10: 0.434 on Dev. MRR@10: 0.382 on Eval. ### BibTeX entry and citation info ```bibtex @inproceedings{gao2021lce, title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline}, author={Luyu Gao and Zhuyun Dai and Jamie Callan}, year={2021}, booktitle={The 43rd European Conference On Information Retrieval (ECIR)}, } ```
{"language": ["en"], "license": "apache-2.0", "tags": ["text reranking"], "datasets": ["MS MARCO document ranking"]}
Luyu/bert-base-mdoc-hdct
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "text reranking", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
Luyu/co-condenser-marco-retriever
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Luyu/co-condenser-marco
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Luyu/co-condenser-wiki
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Luyu/condenser
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LuzDeGea/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LxvelyLala/Katie
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Lysa/subheading_generator_en
null
[ "transformers", "pytorch", "jax", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Lysa/subheading_generator_nl
null
[ "transformers", "pytorch", "mbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lysa/subheading_generator_nl.v.1.0
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
A sentiment inference model based on BERT.
{}
LzLzLz/Bert
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
<br /> <p align="center"> <h1 align="center">M-BERT Base 69</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Base%2069">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder, you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('M-BERT-Base-69') embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?']) print(embeddings.shape) # Yields: torch.Size([3, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [BERT-base-multilingual](https://huggingface.co/bert-base-multilingual-cased) tuned so that its embeddings for [69 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md) match the embedding space of the CLIP text encoder that accompanies the Res50x4 vision encoder. <br> A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 69 languages used during fine-tuning can be found in [SupportedLanguages.md](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md). Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/) and translating them into the corresponding language. All translation was done using the [AWS Translate service](https://aws.amazon.com/translate/); the quality of these translations has not yet been analyzed, but one can assume it varies across the 69 languages.
{}
M-CLIP/M-BERT-Base-69
null
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
<br /> <p align="center"> <h1 align="center">M-BERT Base ViT-B</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Base%20ViT-B">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder, you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('M-BERT-Base-ViT') embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?']) print(embeddings.shape) # Yields: torch.Size([3, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [BERT-base-multilingual](https://huggingface.co/bert-base-multilingual-cased) tuned so that its embeddings for [69 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md) match the embedding space of the CLIP text encoder that accompanies the ViT-B/32 vision encoder. <br> A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 69 languages used during fine-tuning can be found in [SupportedLanguages.md](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md). Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/) and translating them into the corresponding language. All translation was done using the [AWS Translate service](https://aws.amazon.com/translate/); the quality of these translations has not yet been analyzed, but one can assume it varies across the 69 languages.
{}
M-CLIP/M-BERT-Base-ViT-B
null
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
<br /> <p align="center"> <h1 align="center">M-BERT Distil 40</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder, you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('M-BERT-Distil-40') embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?']) print(embeddings.shape) # Yields: torch.Size([3, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [distilbert-base-multilingual](https://huggingface.co/distilbert-base-multilingual-cased) tuned so that its embeddings for [40 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Distil%2040/Fine-Tune-Languages.md) match the embedding space of the CLIP text encoder that accompanies the Res50x4 vision encoder. <br> A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 40 languages used during fine-tuning can be found in [SupportedLanguages.md](Fine-Tune-Languages.md). Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/) and translating them into the corresponding language. All translation was done using the [AWS Translate service](https://aws.amazon.com/translate/); the quality of these translations has not yet been analyzed, but one can assume it varies across the 40 languages. ## Evaluation [These results can be viewed at Github](https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040). <br> A non-rigorous qualitative evaluation shows that for French, German, Spanish, Russian, Swedish and Greek it seemingly yields respectable results for most instances. The exception is Greek, where it apparently fails to recognize happy persons. <br> When tested on Kannada, a language included during pre-training but not fine-tuning, it performed close to random.
{"language": ["sq", "am", "ar", "az", "bn", "bg", "ca", "zh", "nl", "en", "et", "fa", "fr", "ka", "de", "el", "hi", "hu", "is", "id", "it", "ja", "kk", "ko", "lv", "mk", "ms", "ps", "pl", "ro", "ru", "sl", "es", "sv", "tl", "th", "tr", "ur"]}
M-CLIP/M-BERT-Distil-40
null
[ "transformers", "pytorch", "distilbert", "feature-extraction", "sq", "am", "ar", "az", "bn", "bg", "ca", "zh", "nl", "en", "et", "fa", "fr", "ka", "de", "el", "hi", "hu", "is", "id", "it", "ja", "kk", "ko", "lv", "mk", "ms", "ps", "pl", "ro", "ru", "sl", "es", "sv", "tl", "th", "tr", "ur", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
<br /> <p align="center"> <h1 align="center">Swe-CLIP 2M</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%202M">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder, you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('Swe-CLIP-2M') embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta']) print(embeddings.shape) # Yields: torch.Size([2, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) tuned to match the embedding space of the CLIP text encoder that accompanies the Res50x4 vision encoder. <br> Training data pairs were generated by sampling 2 million sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/) and translating them into Swedish. All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS Translate service](https://aws.amazon.com/translate/).
{"language": "sv"}
M-CLIP/Swedish-2M
null
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "sv", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
<br /> <p align="center"> <h1 align="center">Swe-CLIP 500k</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%20500k">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder, you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('Swe-CLIP-500k') embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta']) print(embeddings.shape) # Yields: torch.Size([2, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) tuned to match the embedding space of the CLIP text encoder that accompanies the Res50x4 vision encoder. <br> Training data pairs were generated by sampling 500k sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/) and translating them into Swedish. All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS Translate service](https://aws.amazon.com/translate/).
{"language": "sv"}
M-CLIP/Swedish-500k
null
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "sv", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-mini model finetuned with M-FAC This model is finetuned on MNLI dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on MNLI validation set: ```bash matched_accuracy = 75.13 mismatched_accuracy = 75.93 ``` Mean and standard deviation for 5 runs on MNLI validation set: | | Matched Accuracy | Mismatched Accuracy | |:-----:|:----------------:|:-------------------:| | Adam | 73.30 ± 0.20 | 74.85 ± 0.09 | | M-FAC | 74.59 ± 0.41 | 75.95 ± 0.14 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 8276 \ --model_name_or_path prajjwal1/bert-mini \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
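The card above focuses on reproducing the fine-tuning run. For inference, the checkpoint can be loaded like any `transformers` sequence-classification model; a minimal, hedged sketch follows (the premise/hypothesis strings are made-up examples, and the printed label depends on the `id2label` mapping stored in the checkpoint config, which may only contain generic `LABEL_0/1/2` names).

```python
# Minimal inference sketch (not from the card) for the M-FAC fine-tuned
# BERT-mini MNLI checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "M-FAC/bert-mini-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

premise = "A man is playing a guitar on stage."
hypothesis = "A person is making music."

# MNLI is a sentence-pair task: encode premise and hypothesis together.
inputs = tokenizer(premise, hypothesis, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```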
{}
M-FAC/bert-mini-finetuned-mnli
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-mini model finetuned with M-FAC This model is finetuned on MRPC dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 512 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on MRPC validation set: ```bash f1 = 86.51 accuracy = 81.12 ``` Mean and standard deviation for 5 runs on MRPC validation set: | | F1 | Accuracy | |:----:|:-----------:|:----------:| | Adam | 84.57 ± 0.36| 76.57 ± 0.80| | M-FAC | 85.06 ± 1.63 | 78.87 ± 2.33 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 1234 \ --model_name_or_path prajjwal1/bert-mini \ --task_name mrpc \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-mini-finetuned-mrpc
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-mini model finetuned with M-FAC This model is finetuned on QNLI dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on QNLI validation set: ```bash accuracy = 83.90 ``` Mean and standard deviation for 5 runs on QNLI validation set: | | Accuracy | |:----:|:-----------:| | Adam | 83.85 ± 0.10 | | M-FAC | 83.70 ± 0.13 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 8276 \ --model_name_or_path prajjwal1/bert-mini \ --task_name qnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-mini-finetuned-qnli
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-mini model finetuned with M-FAC This model is finetuned on QQP dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on QQP validation set: ```bash f1 = 82.98 accuracy = 87.03 ``` Mean and standard deviation for 5 runs on QQP validation set: | | F1 | Accuracy | |:----:|:-----------:|:----------:| | Adam | 82.43 ± 0.10 | 86.45 ± 0.12 | | M-FAC | 82.67 ± 0.23 | 86.75 ± 0.20 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 10723 \ --model_name_or_path prajjwal1/bert-mini \ --task_name qqp \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-mini-finetuned-qqp
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
# BERT-mini model finetuned with M-FAC This model is finetuned on SQuAD version 2 dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on SQuAD version 2 validation set: ```bash exact_match = 58.38 f1 = 61.65 ``` Mean and standard deviation for 5 runs on SQuAD version 2 validation set: | | Exact Match | F1 | |:----:|:-----------:|:----:| | Adam | 54.80 ± 0.47 | 58.13 ± 0.31 | | M-FAC | 58.02 ± 0.39 | 61.35 ± 0.24 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_qa.py \ --seed 8276 \ --model_name_or_path prajjwal1/bert-mini \ --dataset_name squad_v2 \ --version_2_with_negative \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 1e-4 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
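As a complement to the reproduction script, a hedged inference sketch (not part of the original card): the checkpoint can be queried through the standard question-answering pipeline. The question and context below are made-up examples, and `handle_impossible_answer=True` lets the pipeline return an empty answer for SQuAD-v2-style unanswerable questions.

```python
# Minimal inference sketch (not from the card) for the M-FAC fine-tuned
# BERT-mini SQuAD v2 checkpoint. Question and context are made-up examples.
from transformers import pipeline

qa = pipeline("question-answering", model="M-FAC/bert-mini-finetuned-squadv2")

context = ("M-FAC is a matrix-free approximation of second-order information, "
           "used here as an optimizer for fine-tuning small BERT models.")
result = qa(
    question="What is M-FAC used as?",
    context=context,
    handle_impossible_answer=True,  # SQuAD v2 allows 'no answer' predictions
)
print(result["answer"], result["score"])
```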
{}
M-FAC/bert-mini-finetuned-squadv2
null
[ "transformers", "pytorch", "bert", "question-answering", "arxiv:2107.03356", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-mini model finetuned with M-FAC This model is finetuned on SST-2 dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on SST-2 validation set: ```bash accuracy = 84.74 ``` Mean and standard deviation for 5 runs on SST-2 validation set: | | Accuracy | |:----:|:-----------:| | Adam | 85.46 ± 0.58 | | M-FAC | 84.20 ± 0.58 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 1234 \ --model_name_or_path prajjwal1/bert-mini \ --task_name sst2 \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 3 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-mini-finetuned-sst2
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-mini model finetuned with M-FAC This model is finetuned on STS-B dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 512 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on STS-B validation set: ```bash pearson = 85.03 spearman = 85.06 ``` Mean and standard deviation for 5 runs on STS-B validation set: | | Pearson | Spearman | |:----:|:-----------:|:----------:| | Adam | 82.09 ± 0.54 | 82.64 ± 0.71 | | M-FAC | 84.66 ± 0.30 | 84.65 ± 0.30 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 7 \ --model_name_or_path prajjwal1/bert-mini \ --task_name stsb \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
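Unlike the classification checkpoints above, STS-B is a regression task, so this model's head has a single output interpreted as a similarity score. A hedged sketch (the sentence pair is a made-up example; the 0–5 score range is an assumption based on the STS-B task definition, not something stated in the card):

```python
# Minimal inference sketch (not from the card) for the M-FAC fine-tuned
# BERT-mini STS-B checkpoint. The regression head outputs a single value.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "M-FAC/bert-mini-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

sent_a = "A woman is slicing an onion."
sent_b = "Someone is cutting an onion."

inputs = tokenizer(sent_a, sent_b, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    similarity = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity: {similarity:.2f}")  # assumed to lie roughly in STS-B's 0-5 range
```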
{}
M-FAC/bert-mini-finetuned-stsb
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-tiny model finetuned with M-FAC This model is finetuned on MNLI dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on MNLI validation set: ```bash matched_accuracy = 69.55 mismatched_accuracy = 70.58 ``` Mean and standard deviation for 5 runs on MNLI validation set: | | Matched Accuracy | Mismatched Accuracy | |:----:|:-----------:|:----------:| | Adam | 65.36 ± 0.13 | 66.78 ± 0.15 | | M-FAC | 68.28 ± 3.29 | 68.98 ± 3.05 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 42 \ --model_name_or_path prajjwal1/bert-tiny \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-tiny-finetuned-mnli
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-tiny model finetuned with M-FAC This model is finetuned on MRPC dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 512 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on MRPC validation set: ```bash f1 = 83.12 accuracy = 73.52 ``` Mean and standard deviation for 5 runs on MRPC validation set: | | F1 | Accuracy | |:----:|:-----------:|:----------:| | Adam | 81.68 ± 0.33 | 69.90 ± 0.32 | | M-FAC | 82.77 ± 0.22 | 72.94 ± 0.37 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 42 \ --model_name_or_path prajjwal1/bert-tiny \ --task_name mrpc \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-tiny-finetuned-mrpc
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-tiny model finetuned with M-FAC This model is finetuned on QNLI dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on QNLI validation set: ```bash accuracy = 81.54 ``` Mean and standard deviation for 5 runs on QNLI validation set: | | Accuracy | |:----:|:-----------:| | Adam | 77.85 ± 0.15 | | M-FAC | 81.17 ± 0.43 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 8276 \ --model_name_or_path prajjwal1/bert-tiny \ --task_name qnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-tiny-finetuned-qnli
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-tiny model finetuned with M-FAC This model is finetuned on QQP dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on QQP validation set: ```bash f1 = 79.84 accuracy = 84.40 ``` Mean and standard deviation for 5 runs on QQP validation set: | | F1 | Accuracy | |:----:|:-----------:|:----------:| | Adam | 77.58 ± 0.08 | 81.09 ± 0.15 | | M-FAC | 79.71 ± 0.13 | 84.29 ± 0.08 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 1234 \ --model_name_or_path prajjwal1/bert-tiny \ --task_name qqp \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-tiny-finetuned-qqp
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
# BERT-tiny model finetuned with M-FAC This model is finetuned on SQuAD version 2 dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on SQuAD version 2 validation set: ```bash exact_match = 50.29 f1 = 52.43 ``` Mean and standard deviation for 5 runs on SQuAD version 2 validation set: | | Exact Match | F1 | |:----:|:-----------:|:----:| | Adam | 48.41 ± 0.57 | 49.99 ± 0.54 | | M-FAC | 49.80 ± 0.43 | 52.18 ± 0.20 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_qa.py \ --seed 42 \ --model_name_or_path prajjwal1/bert-tiny \ --dataset_name squad_v2 \ --version_2_with_negative \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 1e-4 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-tiny-finetuned-squadv2
null
[ "transformers", "pytorch", "bert", "question-answering", "arxiv:2107.03356", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-tiny model finetuned with M-FAC This model is finetuned on SST-2 dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 1024 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on SST-2 validation set: ```bash accuracy = 83.02 ``` Mean and standard deviation for 5 runs on SST-2 validation set: | | Accuracy | |:----:|:-----------:| | Adam | 80.11 ± 0.65 | | M-FAC | 81.86 ± 0.76 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 42 \ --model_name_or_path prajjwal1/bert-tiny \ --task_name sst2 \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 3 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-tiny-finetuned-sst2
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT-tiny model finetuned with M-FAC This model is finetuned on STS-B dataset with state-of-the-art second-order optimizer M-FAC. Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf). ## Finetuning setup For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC. Hyperparameters used by M-FAC optimizer: ```bash learning rate = 1e-4 number of gradients = 512 dampening = 1e-6 ``` ## Results We share the best model out of 5 runs with the following score on STS-B validation set: ```bash pearson = 80.66 spearman = 81.13 ``` Mean and standard deviation for 5 runs on STS-B validation set: | | Pearson | Spearman | |:----:|:-----------:|:----------:| | Adam | 64.39 ± 5.02 | 66.52 ± 5.67 | | M-FAC | 80.15 ± 0.52 | 80.62 ± 0.43 | Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script: ```bash CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --seed 7 \ --model_name_or_path prajjwal1/bert-tiny \ --task_name stsb \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --output_dir out_dir/ \ --optim MFAC \ --optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}' ``` We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE). Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). ## BibTeX entry and citation info ```bibtex @article{frantar2021m, title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information}, author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan}, journal={Advances in Neural Information Processing Systems}, volume={35}, year={2021} } ```
{}
M-FAC/bert-tiny-finetuned-stsb
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2107.03356", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
M47Labs/arabert_multiclass_news
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
M47Labs/binary_classification_arabic
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
M47Labs/english_news_classification_headlines
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
M47Labs/it_iptc
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
M47Labs/italian_news_classification_headlines
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Spanish News Classification Headlines

SNCH: this model was developed by [M47Labs](https://www.m47labs.com/es/); the goal is text classification. The base model used was [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased), fine-tuned on a 1,000-example dataset.

## Dataset Sample

Dataset size: 1000

Columns: idTask, task content 1, idTag, tag.

|idTask|task content 1|idTag|tag|
|------|------|------|------|
|3637d9ac-119c-4a8f-899c-339cf5b42ae0|Alcalá de Guadaíra celebra la IV Semana de la Diversidad Sexual con acciones de sensibilización|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|d56bab52-0029-45dd-ad90-5c17d4ed4c88|El Archipiélago Chinijo Graciplus se impone en el Trofeo Centro Comercial Rubicón|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes|
|dec70bc5-4932-4fa2-aeac-31a52377be02|Un total de 39 personas padecen ELA actualmente en la provincia|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|fb396ba9-fbf1-4495-84d9-5314eb731405|Eurocopa 2021 : Italia vence a Gales y pasa a octavos con su candidatura reforzada|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes|
|bc5a36ca-4e0a-422e-9167-766b41008c01|Resolución de 10 de junio de 2021, del Ayuntamiento de Tarazona de La Mancha (Albacete), referente a la convocatoria para proveer una plaza.|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|a87f8703-ce34-47a5-9c1b-e992c7fe60f6|El primer ministro sueco pierde una moción de censura|209ae89e-55b4-41fd-aac0-5400feab479e|politica|
|d80bdaad-0ad5-43a0-850e-c473fd612526|El dólar se dispara tras la reunión de la Fed|11925830-148e-4890-a2bc-da9dc059dc17|economia|

## Labels:

* ciencia_tecnologia
* clickbait
* cultura
* deportes
* economia
* educacion
* medio_ambiente
* opinion
* politica
* sociedad

## Example of Use

### Pipeline

```{python}
from transformers import AutoTokenizer, BertForSequenceClassification, TextClassificationPipeline

review_text = 'los vehiculos que esten esperando pasajaeros deberan estar apagados para reducir emisiones'
path = "M47Labs/spanish_news_classification_headlines"

# Load the fine-tuned headline classifier and wrap it in a pipeline
tokenizer = AutoTokenizer.from_pretrained(path)
model = BertForSequenceClassification.from_pretrained(path)
nlp = TextClassificationPipeline(task="text-classification", model=model, tokenizer=tokenizer)

print(nlp(review_text))
```

```[{'label': 'medio_ambiente', 'score': 0.5648820996284485}]```

### Pytorch

```{python}
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'M47Labs/spanish_news_classification_headlines'
MAX_LEN = 32

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

texto = "las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno"

# Tokenize and pad the headline
encoded_review = tokenizer.encode_plus(
    texto,
    max_length=MAX_LEN,
    add_special_tokens=True,
    #return_token_type_ids=False,
    pad_to_max_length=True,
    return_attention_mask=True,
    return_tensors='pt',
)

input_ids = encoded_review['input_ids']
attention_mask = encoded_review['attention_mask']

# Run the model and pick the class with the highest logit
output = model(input_ids, attention_mask)
_, prediction = torch.max(output['logits'], dim=1)

print(f'Review text: {texto}')
print(f'Sentiment : {model.config.id2label[prediction.detach().cpu().numpy()[0]]}')
```

```Review text: las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno```

```Sentiment : medio_ambiente```

A more in-depth example on how to use the model can be found in this colab notebook: https://colab.research.google.com/drive/1XsKea6oMyEckye2FePW_XN7Rf8v41Cw_?usp=sharing

## Finetune Hyperparameters

* MAX_LEN = 32
* TRAIN_BATCH_SIZE = 8
* VALID_BATCH_SIZE = 4
* EPOCHS = 5
* LEARNING_RATE = 1e-05

## Train Results

|n_example|epoch|loss|acc|
|------|------|------|------|
|100|0|2.286327266693115|12.5|
|100|1|2.018876111507416|40.0|
|100|2|1.8016730904579163|43.75|
|100|3|1.6121837735176086|46.25|
|100|4|1.41565443277359|68.75|

|n_example|epoch|loss|acc|
|------|------|------|------|
|500|0|2.0770938420295715|24.5|
|500|1|1.6953029704093934|50.25|
|500|2|1.258900796175003|64.25|
|500|3|0.8342628020048142|78.25|
|500|4|0.5135736921429634|90.25|

|n_example|epoch|loss|acc|
|------|------|------|------|
|1000|0|1.916002897115854|36.1997226074896|
|1000|1|1.2941598492664295|62.2746185852982|
|1000|2|0.8201534710415117|76.97642163661581|
|1000|3|0.524806430051615|86.9625520110957|
|1000|4|0.30662027455784463|92.64909847434119|

## Validation Results

|n_examples|100|
|------|------|
|Accuracy Score|0.35|
|Precision (Macro)|0.35|
|Recall (Macro)|0.16|

|n_examples|500|
|------|------|
|Accuracy Score|0.62|
|Precision (Macro)|0.60|
|Recall (Macro)|0.47|

|n_examples|1000|
|------|------|
|Accuracy Score|0.68|
|Precision (Macro)|0.68|
|Recall (Macro)|0.64|

![alt text](https://media-exp1.licdn.com/dms/image/C4D0BAQHpfgjEyhtE1g/company-logo_200_200/0/1625210573748?e=1638403200&v=beta&t=toQNpiOlyim5Ja4f7Ejv8yKoCWifMsLWjkC7XnyXICI "Logo M47")
{"widget": [{"text": "El d\u00f3lar se dispara tras la reuni\u00f3n de la Fed"}]}
M47Labs/spanish_news_classification_headlines
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Rick and Morty DialoGPT Model
{"tags": ["conversational"]}
MAUtastic/DialoGPT-medium-RickandMortyBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Rick Sanchez DialoGPT Model
{"tags": ["conversational"]}
MCUxDaredevil/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MELGA/M
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MGeelen96/DeBERTa_legal_SBD
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MHJ/t5-small-finetuned-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 7121569 ## Validation Metrics - Loss: 0.2151782214641571 - Accuracy: 0.9271 - Precision: 0.9469285415796072 - Recall: 0.9051328140603155 - AUC: 0.9804569416956057 - F1: 0.925559072807107 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/MICADEE/autonlp-imdb-sentiment-analysis2-7121569 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("MICADEE/autonlp-imdb-sentiment-analysis2-7121569", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("MICADEE/autonlp-imdb-sentiment-analysis2-7121569", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["MICADEE/autonlp-data-imdb-sentiment-analysis2"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
MICADEE/autonlp-imdb-sentiment-analysis2-7121569
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:MICADEE/autonlp-data-imdb-sentiment-analysis2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8540 - Matthews Correlation: 0.5495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5219 | 1.0 | 535 | 0.5314 | 0.4095 | | 0.346 | 2.0 | 1070 | 0.5141 | 0.5054 | | 0.2294 | 3.0 | 1605 | 0.6351 | 0.5200 | | 0.1646 | 4.0 | 2140 | 0.7575 | 0.5459 | | 0.1235 | 5.0 | 2675 | 0.8540 | 0.5495 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5494735380761103, "name": "Matthews Correlation"}]}]}]}
MINYOUNG/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MKK/mbart-large-50-many-to-one-mmt
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MKK/mt5-sinhalese-english
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MKK/opus-mt-zh-en
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MKK/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MKK/wav2vec2-large-xlsr-53-dhivehi-v2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# multilingual-cpv-sector-classifier

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on [the Tenders Electronic Daily public procurement data](https://simap.ted.europa.eu/en).

It achieves the following results on the evaluation set:
- F1 Score: 0.686

## Model description

The model takes procurement descriptions written in any of [104 languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) and classifies them into 45 sector classes represented by [CPV (Common Procurement Vocabulary)](https://simap.ted.europa.eu/en_GB/web/simap/cpv) code descriptions, as listed below.

| Common Procurement Vocabulary |
|:-----------------------------|
| Administration, defence and social security services. 👮‍♀️ |
| Agricultural machinery. 🚜 |
| Agricultural, farming, fishing, forestry and related products. 🌾 |
| Agricultural, forestry, horticultural, aquacultural and apicultural services. 👨🏿‍🌾 |
| Architectural, construction, engineering and inspection services. 👷‍♂️ |
| Business services: law, marketing, consulting, recruitment, printing and security. 👩‍💼 |
| Chemical products. 🧪 |
| Clothing, footwear, luggage articles and accessories. 👖 |
| Collected and purified water. 🌊 |
| Construction structures and materials; auxiliary products to construction (except electric apparatus). 🧱 |
| Construction work. 🏗️ |
| Education and training services. 👩🏿‍🏫 |
| Electrical machinery, apparatus, equipment and consumables; Lighting. ⚡ |
| Financial and insurance services. 👨‍💼 |
| Food, beverages, tobacco and related products. 🍽️ |
| Furniture (incl. office furniture), furnishings, domestic appliances (excl. lighting) and cleaning products. 🗄️ |
| Health and social work services. 👨🏽‍⚕️ |
| Hotel, restaurant and retail trade services. 🏨 |
| IT services: consulting, software development, Internet and support. 🖥️ |
| Industrial machinery. 🏭 |
| Installation services (except software). 🛠️ |
| Laboratory, optical and precision equipments (excl. glasses). 🔬 |
| Leather and textile fabrics, plastic and rubber materials. 🧵 |
| Machinery for mining, quarrying, construction equipment. ⛏️ |
| Medical equipments, pharmaceuticals and personal care products. 💉 |
| Mining, basic metals and related products. ⚙️ |
| Musical instruments, sport goods, games, toys, handicraft, art materials and accessories. 🎸 |
| Office and computing machinery, equipment and supplies except furniture and software packages. 🖨️ |
| Other community, social and personal services. 🧑🏽‍🤝‍🧑🏽 |
| Petroleum products, fuel, electricity and other sources of energy. 🔋 |
| Postal and telecommunications services. 📶 |
| Printed matter and related products. 📰 |
| Public utilities. ⛲ |
| Radio, television, communication, telecommunication and related equipment. 📡 |
| Real estate services. 🏠 |
| Recreational, cultural and sporting services. 🚴 |
| Repair and maintenance services. 🔧 |
| Research and development services and related consultancy services. 👩‍🔬 |
| Security, fire-fighting, police and defence equipment. 🧯 |
| Services related to the oil and gas industry. ⛽ |
| Sewage-, refuse-, cleaning-, and environmental services. 🧹 |
| Software package and information systems. 🔣 |
| Supporting and auxiliary transport services; travel agencies services. 🚃 |
| Transport equipment and auxiliary products to transportation. 🚌 |
| Transport services (excl. waste transport). 💺 |

## Intended uses & limitations

- Input descriptions should be written in any of [the 104 languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) that mBERT supports.
- The model has only been evaluated in 22 languages, so there is no information about its performance in the other languages.
- The domain is also restricted to awarded procurement notice descriptions in the European Union. Evaluating on whole document texts might change the performance.

## Training and evaluation data

- The whole dataset consists of 744,360 rows, shuffled and split into train and validation sets in an 80%/20% manner.
- Each description represents a unique contract notice description awarded between 2011 and 2018.
- Both the training and validation data contain contract notice descriptions written in 22 European languages. (Maltese and Irish were excluded due to scarcity compared to the whole data.)

## Training procedure

The training was performed on Google Cloud v3-8 TPUs. Thanks to [Google](https://sites.research.google/trc/about/) for giving access to Cloud TPUs.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- num_epochs: 3
- gradient_accumulation_steps: 8
- batch_size_per_device: 4
- total_train_batch_size: 32

### Training results

| Epoch | Step | F1 Score |
|:-----:|:------:|:------:|
| 1 | 18,609 | 0.630 |
| 2 | 37,218 | 0.674 |
| 3 | 55,827 | 0.686 |

| Language | F1 Score | Test Size |
|:-----:|:-----:|:-----:|
| PL | 0.759 | 13950 |
| RO | 0.736 | 3522 |
| SK | 0.719 | 1122 |
| LT | 0.687 | 2424 |
| HU | 0.681 | 1879 |
| BG | 0.675 | 2459 |
| CS | 0.668 | 2694 |
| LV | 0.664 | 836 |
| DE | 0.645 | 35354 |
| FI | 0.644 | 1898 |
| ES | 0.643 | 7483 |
| PT | 0.631 | 874 |
| EN | 0.631 | 16615 |
| HR | 0.626 | 865 |
| IT | 0.626 | 8035 |
| NL | 0.624 | 5640 |
| EL | 0.623 | 1724 |
| SL | 0.615 | 482 |
| SV | 0.607 | 3326 |
| DA | 0.603 | 1925 |
| FR | 0.601 | 33113 |
| ET | 0.572 | 458 |
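A minimal inference sketch (not part of the original card) is shown below; the input text is adapted from the hosted widget example, and the exact label string returned depends on the id2label mapping stored in the model config:

```python
from transformers import pipeline

# Load the sector classifier as a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="MKaan/multilingual-cpv-sector-classifier",
)

text = (
    "Oppegård municipality intends to enter into a framework agreement with one supplier "
    "for the procurement of fresh bread and bakery products."
)
# Expected to return one of the 45 CPV sector descriptions, e.g. the food-related class.
print(classifier(text))
```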
{"license": "apache-2.0", "tags": ["eu", "public procurement", "cpv", "sector", "multilingual", "transformers", "text-classification"], "widget": [{"text": "Oppeg\u00e5rd municipality, hereafter called the contracting authority, intends to enter into a framework agreement with one supplier for the procurement of fresh bread and bakery products for Oppeg\u00e5rd municipality. The contract is estimated to NOK 1 400 000 per annum excluding VAT The total for the entire period including options is NOK 5 600 000 excluding VAT"}]}
MKaan/multilingual-cpv-sector-classifier
null
[ "transformers", "pytorch", "bert", "text-classification", "eu", "public procurement", "cpv", "sector", "multilingual", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
ML-ass/english_decoder
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
ML-ass/g2e_dec_tok
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
ML-ass/g2e_enc_tok
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
ML-ass/g2e_encoder_decoder
null
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
ML-ass/german_encoder
null
[ "transformers", "pytorch", "bert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MM98/bert-base-parsbert-uncased-finetuned-summary
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
MM98/ft-bz
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
MM98/mt5-small-finetuned-pnsum
null
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-pnsum2 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 4.3733 - Rouge2: 1.0221 - Rougel: 4.1265 - Rougelsum: 4.1372 - Gen Len: 6.2843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 2500 | nan | 4.3733 | 1.0221 | 4.1265 | 4.1372 | 6.2843 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-small-finetuned-pnsum2", "results": []}]}
MM98/mt5-small-finetuned-pnsum2
null
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MM98/mt5-small-finetuned-summary-2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MM98/mt5-small-finetuned-summary
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MM98/t5-small-finetuned-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac-v2 This model is a fine-tuned version of [mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es](https://huggingface.co/mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es) on the sqac dataset. It achieves the following results on the evaluation set: - {'exact_match': 65.02145922746782, 'f1': 81.6651482773275} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9417 | 1.0 | 1277 | 0.7903 | | 0.5002 | 2.0 | 2554 | 0.8459 | | 0.2895 | 3.0 | 3831 | 0.9482 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["sqac"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac-v2", "results": []}]}
MMG/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "es", "dataset:sqac", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad This model is a fine-tuned version of [MMG/bert-base-spanish-wwm-cased-finetuned-sqac](https://huggingface.co/MMG/bert-base-spanish-wwm-cased-finetuned-sqac) on the squad_es dataset. It achieves the following results on the evaluation set: - Loss: 1.5325 - {'exact_match': 60.30274361400189, 'f1': 77.01962587890856} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["squad_es"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad", "results": []}]}
MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "es", "dataset:squad_es", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es This model is a fine-tuned version of [MMG/bert-base-spanish-wwm-cased-finetuned-sqac](https://huggingface.co/MMG/bert-base-spanish-wwm-cased-finetuned-sqac) on the squad_es dataset. It achieves the following results on the evaluation set: - Loss: 1.2584 - {'exact': 63.358070500927646, 'f1': 70.22498384623977} ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["squad_es"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es", "results": []}]}
MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "es", "dataset:squad_es", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-cased-finetuned-sqac This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the sqac dataset. It achieves the following results on the evaluation set: {'exact_match': 62.017167, 'f1': 79.452767} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1335 | 1.0 | 1230 | 0.9346 | | 0.6794 | 2.0 | 2460 | 0.8634 | | 0.3992 | 3.0 | 3690 | 0.9662 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
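As an illustrative usage sketch (not part of the auto-generated card; the context sentence below is invented for the example), the checkpoint can be queried through the question-answering pipeline:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="MMG/bert-base-spanish-wwm-cased-finetuned-sqac")

result = qa(
    question="¿Dónde están las oficinas de MMG?",
    context="MMG es una empresa de inteligencia artificial cuyas oficinas están en Las Rozas, Madrid.",
)
print(result)  # expected to return a span such as 'Las Rozas, Madrid' with a confidence score
```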
{"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["sqac"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-sqac", "results": []}]}
MMG/bert-base-spanish-wwm-cased-finetuned-sqac
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "es", "dataset:sqac", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac This model is a fine-tuned version of [ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es](https://huggingface.co/ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es) on the sqac dataset. It achieves the following results on the evaluation set: - Loss: 0.9263 - {'exact_match': 65.55793991416309, 'f1': 82.72322701572416} ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["sqac"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac", "results": []}]}
MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "es", "dataset:sqac", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-cased-finetuned-squad2-es This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the squad_es dataset. It achieves the following results on the evaluation set: - Loss: 1.2841 {'exact': 62.53162421993591, 'f1': 69.33421368741254} ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["squad_es"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-squad2-es", "results": []}]}
MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "es", "dataset:squad_es", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# mlm-spanish-roberta-base

This model has a RoBERTa base architecture and was trained from scratch with 3.6 GB of raw text over 10 epochs. 4 Tesla V-100 GPUs were used for the training.

To test the quality of the resulting model we evaluate it over the [GLUES](https://github.com/dccuchile/GLUES) benchmark for Spanish NLU. The results are the following:

| Task | Score (metric) |
|:-----------------------:|:---------------------:|
| XNLI | 71.99 (accuracy) |
| Paraphrasing | 74.85 (accuracy) |
| NER | 85.34 (F1) |
| POS | 97.49 (accuracy) |
| Dependency Parsing | 85.14/81.08 (UAS/LAS) |
| Document Classification | 93.00 (accuracy) |
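A minimal sketch of querying the masked-language-model head (not part of the original card; the prompt reuses the hosted widget example):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="MMG/mlm-spanish-roberta-base")

# <mask> is the RoBERTa-style mask token used by this checkpoint's widget.
for prediction in fill_mask("MMG se dedica a la <mask> artificial."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```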
{"language": ["es"], "widget": [{"text": "MMG se dedica a la <mask> artificial."}]}
MMG/mlm-spanish-roberta-base
null
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
# xlm-roberta-large-ner-spanish

This model is an XLM-RoBERTa-large model fine-tuned for Named Entity Recognition (NER) on the Spanish portion of the CoNLL-2002 dataset. Evaluating it on the test subset of this dataset, we get an F1-score of 89.17, making it one of the best NER models for Spanish available at the moment.
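A minimal inference sketch (not part of the original card; the input reuses the hosted widget example, and the exact entity label strings depend on the model config):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="MMG/xlm-roberta-large-ner-spanish",
    aggregation_strategy="simple",
)

# Entity labels are expected to follow the CoNLL-2002 tag set (e.g. ORG, LOC, PER, MISC).
print(ner("Las oficinas de MMG están en Las Rozas."))
```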
{"language": ["es"], "datasets": ["CoNLL-2002"], "widget": [{"text": "Las oficinas de MMG est\u00e1n en Las Rozas."}]}
MMG/xlm-roberta-large-ner-spanish
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "token-classification", "es", "dataset:CoNLL-2002", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
KeLiu/Title-Gen
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MOHAMADM80/1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
# Description A pre-trained model for volumetric (3D) segmentation of the spleen from CT image. # Model Overview This model is trained using the runner-up [1] awarded pipeline of the "Medical Segmentation Decathlon Challenge 2018" using the UNet architecture [2] with 32 training images and 9 validation images. ## Data The training dataset is Task09_Spleen.tar from http://medicaldecathlon.com/. ## Training configuration The training was performed with at least 12GB-memory GPUs. Actual Model Input: 96 x 96 x 96 ## Input and output formats Input: 1 channel CT image Output: 2 channels: Label 1: spleen; Label 0: everything else ## Scores This model achieves the following Dice score on the validation data (our own split from the training dataset): Mean Dice = 0.96 ## commands example Execute inference: ``` python -m monai.bundle run evaluator --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf ``` Verify the metadata format: ``` python -m monai.bundle verify_metadata --meta_file configs/metadata.json --filepath eval/schema.json ``` Verify the data shape of network: ``` python -m monai.bundle verify_net_in_out network_def --meta_file configs/metadata.json --config_file configs/inference.json ``` Export checkpoint to TorchScript file: ``` python -m monai.bundle export network_def --filepath models/model.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json ``` # Disclaimer This is an example, not to be used for diagnostic purposes. # References [1] Xia, Yingda, et al. "3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training." arXiv preprint arXiv:1811.12506 (2018). https://arxiv.org/abs/1811.12506. [2] Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. https://doi.org/10.1007/978-3-030-12029-0_40
{"tags": ["monai"]}
MONAI/example_spleen_segmentation
null
[ "monai", "arxiv:1811.12506", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MROE/DialoGPT-small-rick
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MS100/wav2vec
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Vision DialoGPT Model
{"tags": ["conversational"]}
MS366/DialoGPT-small-vision
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
#### Languages:

- Source language: English
- Target language: isiZulu

#### Model Details:

- Model: transformer
- Architecture: MarianMT
- Pre-processing: normalization + SentencePiece

#### Pre-trained Model:

- https://huggingface.co/Helsinki-NLP/opus-mt-en-xh

#### Corpus:

- Umsuka English-isiZulu Parallel Corpus (https://zenodo.org/record/5035171#.Yh5NIOhBy3A)

#### Benchmark:

| Benchmark | Train | Test  |
|-----------|-------|-------|
| Umsuka    | 17.61 | 13.73 |

#### GitHub:

- https://github.com/umair-nasir14/Geographical-Distance-Is-The-New-Hyperparameter

#### Citation:

```
@article{umair2022geographical,
  title={Geographical Distance Is The New Hyperparameter: A Case Study Of Finding The Optimal Pre-trained Language For English-isiZulu Machine Translation},
  author={Umair Nasir, Muhammad and Amos Mchechesi, Innocent},
  journal={arXiv e-prints},
  pages={arXiv--2205},
  year={2022}
}
```
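A minimal translation sketch (not part of the original card), assuming the checkpoint keeps the standard MarianMT layout of its opus-mt-en-xh parent:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "MUNasir/umsuka-en-zu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an English sentence into isiZulu; the input sentence is illustrative.
batch = tokenizer(["The children are playing outside."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```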
{}
MUNasir/umsuka-en-zu
null
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MYX4567/bert-base-cased-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2177 | 1.0 | 5533 | 1.1565 | | 0.9472 | 2.0 | 11066 | 1.1174 | | 0.7634 | 3.0 | 16599 | 1.1520 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model_index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "squad", "type": "squad", "args": "plain_text"}}]}]}
MYX4567/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6428 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.76 | 1.0 | 2334 | 3.6658 | | 3.6325 | 2.0 | 4668 | 3.6454 | | 3.6068 | 3.0 | 7002 | 3.6428 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": [], "model_index": [{"name": "distilgpt2-finetuned-wikitext2", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
MYX4567/distilgpt2-finetuned-wikitext2
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MYX4567/distilroberta-base-finetuned-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
MYX4567/dummy-model
null
[ "transformers", "pytorch", "camembert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.3227 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.7523 | 1.0 | 2249 | 6.6652 | | 6.4134 | 2.0 | 4498 | 6.3987 | | 6.2507 | 3.0 | 6747 | 6.3227 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": [], "model_index": [{"name": "gpt2-wikitext2", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
MYX4567/gpt2-wikitext2
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
MYX4567/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
MaalK/DialoGPT-small-Petyr
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
The bgc-accession model is a Named Entity Recognition (NER) model that identifies and annotates the accession numbers of biosynthetic gene clusters in texts.

The model is a fine-tuned BioBERT model and the training dataset is available at https://gitlab.com/maaly7/emerald_bgcs_annotations

Testing examples:

1. The genome sequences of Leptolyngbya sp. PCC 7375 (ALVN00000000) and G. sunshinyii YC6258 (NZ_CP007142.1) were obtained previously.36,59
2. K311 was sequenced (NCBI accession number: JN852959) and analyzed with FramePlot and 18 genes were predicted to be involved in echinomycin biosynthesis (Figure 2).
3. The mar cluster was sequenced and annotated and the complete sequence was deposited into Genbank (accession KF711829).
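A minimal sketch of running the tagger on one of the testing examples (not part of the original card; the entity label names depend on how the training data was annotated):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Maaly/bgc-accession",
    aggregation_strategy="simple",  # merge word pieces into whole spans
)

sentence = ("The mar cluster was sequenced and annotated and the complete "
            "sequence was deposited into Genbank (accession KF711829).")
print(tagger(sentence))  # the accession number span 'KF711829' should be tagged
```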
{}
Maaly/bgc-accession
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
The body-site model is a Named Entity Recognition (NER) model that identifies and annotates the body site of microbiome samples in texts.

The model is a fine-tuned BioBERT model and the training dataset is available at https://gitlab.com/maaly7/emerald_metagenomics_annotations

Testing examples:

1. Scalp hair was collected from behind the right ear, near the right retroauricular crease, and pubic hair was collected from their right pubis, near the right inguinal crease.
2. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
3. TSO modulate the IEC and LPMC transcriptome To gain further insights into the mechanisms of TSO treatment, we performed genome wide expression analysis on intestinal epithelial cells (IEC) and lamina propria mononuclear cells (LPMC) isolated from caecum samples by RNA sequencing (RNAseq).
4. Two catheters were bilaterally placed in the CA1 region of the hippocampus with the coordinates of 4.5 mm anterior to bregma, 1.6 mm ventral to the dura, and two directions of ± 4.0 mm from the interaural line (Park et al. 2013; Yang et al. 2013).
{}
Maaly/body-site
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
The host model is a Named Entity Recognition (NER) model that identifies and annotates the host (living organism) of microbiome samples in texts.

The model is a fine-tuned BioBERT model and the training dataset is available at https://gitlab.com/maaly7/emerald_metagenomics_annotations

Testing examples:

1. Turkestan cockroach nymphs (Finke, 2013) were fed to the treefrogs at a quantity of 10% of treefrog biomass twice a week.
2. Samples were collected from clinically healthy giant pandas (five females and four males) at the China Conservation and Research Center for Giant Pandas (Ya'an, China).
3. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
{}
Maaly/host
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
MadhanKumar/DialoGPT-small-HarryPotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter Bot Model
{"tags": ["conversational"]}
MadhanKumar/HarryPotter-Bot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Madhour/gpt2-eli5
null
[ "transformers", "pytorch", "gpt2", "text-generation", "ELI5", "en", "dataset:eli5", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 515314387 - CO2 Emissions (in grams): 70.95647633212745 ## Validation Metrics - Loss: 0.08077705651521683 - Accuracy: 0.9760103738923709 - Macro F1: 0.9728412857204902 - Micro F1: 0.9760103738923709 - Weighted F1: 0.9759907151741426 - Macro Precision: 0.9736622407675567 - Micro Precision: 0.9760103738923709 - Weighted Precision: 0.97673611876005 - Macro Recall: 0.9728978421381711 - Micro Recall: 0.9760103738923709 - Weighted Recall: 0.9760103738923709 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["MadhurJindalWorkMail/autonlp-data-Gibb-Detect"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 70.95647633212745}
MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:MadhurJindalWorkMail/autonlp-data-Gibb-Detect", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Mads/fun01
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Mads/wav2vec2-large-xls-r-300m-tr-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# WIP
{}
Mads/wav2vec2-xlsr-large-53-kor-financial-engineering
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Mads/xlsr-demo-2
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Mads/xlsr-demo-eng
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Mads/xlsr-demo
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00