| column | type | stats |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | listlengths | 0 to 201 |
| languages | listlengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | listlengths | 0 to 722 |
| processed_texts | listlengths | 1 to 723 |
feature-extraction
transformers
This model is converted from the original BPR [repo](https://github.com/studio-ousia/bpr) and fitted into Pyserini: > Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882.
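A minimal usage sketch (not part of the original card): since the checkpoint is tagged with the DPR architecture, it is assumed here to load with the standard transformers DPR question-encoder classes; the binary hashing that BPR applies on top of the dense vector is not shown and is handled by Pyserini.

```python
import torch
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# Assumption: the converted checkpoint is compatible with the DPR question-encoder classes.
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("castorini/bpr-nq-question-encoder")
model = DPRQuestionEncoder.from_pretrained("castorini/bpr-nq-question-encoder")

inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
with torch.no_grad():
    # Dense query vector; BPR binarizes this representation downstream.
    query_embedding = model(**inputs).pooler_output
```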
{}
castorini/bpr-nq-question-encoder
null
[ "transformers", "pytorch", "dpr", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #dpr #feature-extraction #endpoints_compatible #region-us
This model is converted from the original BPR repo and fitted into Pyserini: > Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882.
[]
[ "TAGS\n#transformers #pytorch #dpr #feature-extraction #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini: ``` @misc{izacard2020distilling, title={Distilling Knowledge from Reader to Retriever for Question Answering}, author={Gautier Izacard and Edouard Grave}, year={2020}, eprint={2012.04584}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
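For orientation, a hedged sketch of encoding a question with plain transformers; the [CLS] pooling below is an assumption, and the original FiD/DKRR code may pool differently, so follow the Pyserini docs for reproducible results.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("castorini/dkrr-dpr-nq-retriever")
model = AutoModel.from_pretrained("castorini/dkrr-dpr-nq-retriever")

inputs = tokenizer("who wrote hamlet?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Assumption: take the [CLS] token embedding as the query vector;
# the original repo defines the exact pooling used for retrieval.
query_embedding = outputs.last_hidden_state[:, 0, :]
```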
{}
castorini/dkrr-dpr-nq-retriever
null
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2012.04584", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.04584" ]
[]
TAGS #transformers #pytorch #bert #feature-extraction #arxiv-2012.04584 #endpoints_compatible #has_space #region-us
This model is converted from the original DKRR repo and ported into Pyserini:
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2012.04584 #endpoints_compatible #has_space #region-us \n" ]
null
transformers
This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini: ``` @misc{izacard2020distilling, title={Distilling Knowledge from Reader to Retriever for Question Answering}, author={Gautier Izacard and Edouard Grave}, year={2020}, eprint={2012.04584}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{}
castorini/dkrr-dpr-tqa-retriever
null
[ "transformers", "pytorch", "bert", "arxiv:2012.04584", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.04584" ]
[]
TAGS #transformers #pytorch #bert #arxiv-2012.04584 #endpoints_compatible #has_space #region-us
This model is converted from the original DKRR repo and ported into Pyserini:
[]
[ "TAGS\n#transformers #pytorch #bert #arxiv-2012.04584 #endpoints_compatible #has_space #region-us \n" ]
text2text-generation
transformers
For more information, check [doc2query.ai](http://doc2query.ai)
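As a quick illustration (a sketch, not taken from doc2query.ai): the model reads a passage and generates queries it is likely to answer, which are typically appended to the document before indexing. The sampling settings below are illustrative.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/doc2query-t5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/doc2query-t5-base-msmarco")

doc = "The Manhattan Project was a research effort during World War II that produced the first nuclear weapons."
inputs = tokenizer(doc, return_tensors="pt", truncation=True)

# Sample a few candidate queries for document expansion.
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_k=10, num_return_sequences=3)
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```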
{}
castorini/doc2query-t5-base-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
For more information, check URL
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text2text-generation
transformers
For more information, check [doc2query.ai](http://doc2query.ai)
{}
castorini/doc2query-t5-large-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
For more information, check URL
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
feature-extraction
transformers
This model is a T5-3B reranker pre-finetuned on the MS MARCO passage dataset for 10K steps (or 1 epoch) on the pairwise task and then finetuned on MedMARCO (from [Sledge-Z paper](https://www.aclweb.org/anthology/2020.emnlp-main.341.pdf)) for 1K steps on the pairwise task. For more details on how to use it, check [pygaggle.ai](pygaggle.ai)! Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/abs/2101.05667)
{}
castorini/duot5-3b-med-msmarco
null
[ "transformers", "pytorch", "t5", "feature-extraction", "arxiv:2101.05667", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2101.05667" ]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #arxiv-2101.05667 #endpoints_compatible #text-generation-inference #region-us
This model is a T5-3B reranker pre-finetuned on the MS MARCO passage dataset for 10K steps (or 1 epoch) on the pairwise task and then finetuned on MedMARCO (from Sledge-Z paper) for 1K steps on the pairwise task. For more details on how to use it, check URL! Paper describing the model: The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models
[]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #arxiv-2101.05667 #endpoints_compatible #text-generation-inference #region-us \n" ]
feature-extraction
transformers
This model is a T5-3B reranker, initialized with our pointwise ranker, [castorini/monot5-3b-msmarco](https://huggingface.co/castorini/monot5-3b-msmarco), and finetuned on the MS MARCO passage dataset for 50K steps (or 5 epochs) on the pairwise reranking task. For more details on how to use it, check [pygaggle.ai](pygaggle.ai)! Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/abs/2101.05667)
{}
castorini/duot5-3b-msmarco
null
[ "transformers", "pytorch", "t5", "feature-extraction", "arxiv:2101.05667", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2101.05667" ]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #arxiv-2101.05667 #endpoints_compatible #text-generation-inference #region-us
This model is a T5-3B reranker, initialized with our pointwise ranker, castorini/monot5-3b-msmarco, and finetuned on the MS MARCO passage dataset for 50K steps (or 5 epochs) on the pairwise reranking task. For more details on how to use it, check URL! Paper describing the model: The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models
[]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #arxiv-2101.05667 #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This model is a T5-base pairwise reranker fine-tuned on the MS MARCO passage dataset for 50k steps (or 5 epochs). For more details on how to use it, check [pygaggle.ai](pygaggle.ai). Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/pdf/2101.05667.pdf)
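A hedged pairwise-scoring sketch with plain transformers (pygaggle provides the supported interface). The input template follows the Expando-Mono-Duo paper, where the model decodes "true" when Document0 is judged more relevant than Document1; treat the exact prompt wording as an assumption.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/duot5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/duot5-base-msmarco")

query = "effects of caffeine"
doc0 = "Caffeine is a central nervous system stimulant found in coffee and tea."
doc1 = "Tea ceremonies are an important part of Japanese culture."

# Assumed pairwise prompt: "Query: ... Document0: ... Document1: ... Relevant:"
text = f"Query: {query} Document0: {doc0} Document1: {doc1} Relevant:"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]

true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
# Probability that doc0 should be ranked above doc1.
p_doc0_over_doc1 = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(p_doc0_over_doc1)
```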
{}
castorini/duot5-base-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "arxiv:2101.05667", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2101.05667" ]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #arxiv-2101.05667 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
This model is a T5-base pairwise reranker fine-tuned on the MS MARCO passage dataset for 50k steps (or 5 epochs). For more details on how to use it, check URL. Paper describing the model: The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #arxiv-2101.05667 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text-classification
transformers
# Model Description This checkpoint is a direct conversion of [BERT_Large_trained_on_MSMARCO.zip](https://drive.google.com/open?id=1crlASTMlsihALlkabAQP6JTYIZwC1Wm8) from the original [repo](https://github.com/nyu-dl/dl4marco-bert/). The corresponding model class is BertForSequenceClassification, and it is intended for MS MARCO passage ranking. Please refer to the original repo for details of its training settings (hyperparameters, hardware, and data).
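Since the card names BertForSequenceClassification, a minimal scoring sketch follows; the assumption that logit index 1 is the "relevant" class matches the usual monoBERT convention and should be verified against the original repo.

```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("castorini/monobert-large-msmarco-finetune-only")
model = BertForSequenceClassification.from_pretrained("castorini/monobert-large-msmarco-finetune-only")

query = "what is passage ranking"
passage = "Passage ranking orders candidate passages by their estimated relevance to a query."

inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
# Assumption: class index 1 corresponds to the "relevant" label.
relevance = torch.softmax(logits, dim=-1)[0, 1].item()
print(relevance)
```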
{}
castorini/monobert-large-msmarco-finetune-only
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
# Model Description This checkpoint is a direct conversion of BERT_Large_trained_on_MSMARCO.zip from the original repo. The corresponding model class is BertForSequenceClassification, and it is intended for MS MARCO passage ranking. Please refer to the original repo for details of its training settings (hyperparameters, hardware, and data).
[ "# Model Description\nThis checkpoint is a direct conversion of BERT_Large_trained_on_MSMARCO.zip from the original repo.\nThe corresponding model class is BertForSequenceClassification, and its purpose is for MS MARCO passage ranking.\nPlease find the original repo for more detail of its training settings regarding hyperparameter/device/data." ]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Description\nThis checkpoint is a direct conversion of BERT_Large_trained_on_MSMARCO.zip from the original repo.\nThe corresponding model class is BertForSequenceClassification, and its purpose is for MS MARCO passage ranking.\nPlease find the original repo for more detail of its training settings regarding hyperparameter/device/data." ]
feature-extraction
transformers
This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 10K steps (or 1 epoch) and then fine-tuned again on MedMARCO (from [Sledge-Z paper](https://www.aclweb.org/anthology/2020.emnlp-main.341.pdf)) for 1K steps. For more details on how to use it, check [pygaggle.ai](pygaggle.ai)! Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-3b-med-msmarco
null
[ "transformers", "pytorch", "t5", "feature-extraction", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #endpoints_compatible #has_space #text-generation-inference #region-us
This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 10K steps (or 1 epoch) and then fine-tuned again on MedMARCO (from Sledge-Z paper) for 1K steps. For more details on how to use it, check URL! Paper describing the model: Document Ranking with a Pretrained Sequence-to-Sequence Model
[]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
feature-extraction
transformers
This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For more details on how to use it, check [pygaggle.ai](pygaggle.ai) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-3b-msmarco
null
[ "transformers", "pytorch", "t5", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For more details on how to use it, check URL Paper describing the model: Document Ranking with a Pretrained Sequence-to-Sequence Model
[]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n" ]
feature-extraction
transformers
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch) and then fine-tuned again on MedMARCO (from the [Sledge-Z paper](https://www.aclweb.org/anthology/2020.emnlp-main.341.pdf)) for 1k steps. For more details on how to use it, check [pygaggle.ai](pygaggle.ai). Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-base-med-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #endpoints_compatible #has_space #text-generation-inference #region-us
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch) and then fine-tuned again on MedMARCO (from the Sledge-Z paper) for 1k steps. For more details on how to use it, check URL. Paper describing the model: Document Ranking with a Pretrained Sequence-to-Sequence Model
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch). This model usually has a better zero-shot performance than `monot5-base-msmarco`, i.e., it performs better on datasets different from MS MARCO. For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
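For orientation, a hedged pointwise-scoring sketch with plain transformers (the linked pygaggle examples are the supported path). The "Query: ... Document: ... Relevant:" template and the true/false target tokens follow the monoT5 paper.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/monot5-base-msmarco-10k")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-base-msmarco-10k")

query = "how do rerankers work"
passage = "A reranker rescores the candidate list produced by a first-stage retriever."

inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt", truncation=True)
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]

true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
# P("true") over {true, false} is used as the relevance score.
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(score)
```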
{}
castorini/monot5-base-msmarco-10k
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch). This model usually has a better zero-shot performance than 'monot5-base-msmarco', i.e., it performs better on datasets different from MS MARCO. For more details on how to use it, check the following links: - A simple reranking example - Rerank MS MARCO passages - Rerank Robust04 documents Paper describing the model: Document Ranking with a Pretrained Sequence-to-Sequence Model
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For better zero-shot performance (i.e., inference on other datasets), we recommend using `castorini/monot5-base-msmarco-10k`. For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-base-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For better zero-shot performance (i.e., inference on other datasets), we recommend using 'castorini/monot5-base-msmarco-10k'. For more details on how to use it, check the following links: - A simple reranking example - Rerank MS MARCO passages - Rerank Robust04 documents Paper describing the model: Document Ranking with a Pretrained Sequence-to-Sequence Model
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch). This model usually has a better zero-shot performance than `monot5-large-msmarco`, i.e., it performs better on datasets different from MS MARCO. For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-large-msmarco-10k
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch). This model usually has a better zero-shot performance than 'monot5-large-msmarco', i.e., it performs better on datasets different from MS MARCO. For more details on how to use it, check the following links: - A simple reranking example - Rerank MS MARCO passages - Rerank Robust04 documents Paper describing the model: Document Ranking with a Pretrained Sequence-to-Sequence Model
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
feature-extraction
transformers
This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-large-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #endpoints_compatible #has_space #text-generation-inference #region-us
This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For more details on how to use it, check the following links: - A simple reranking example - Rerank MS MARCO passages - Rerank Robust04 documents Paper describing the model: Document Ranking with a Pretrained Sequence-to-Sequence Model
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This model is trained for conversational question rewriting. Usage: the source text format is ${HISTORY} ||| ${CURRENT_QUESTION}. Example from [CANARD](https://sites.google.com/view/qanta/projects/canard): Frank Zappa ||| Disbandment ||| What group disbanded ||| Zappa and the Mothers of Invention ||| When did they disband? Target text: When did Zappa and the Mothers of Invention disband? You can find our guide to reproducing the training in this [repo](https://github.com/castorini/chatty-goose/blob/c7d0cd8c45354b09b5fb930ab0b5af8be2e5772b/docs/t5_finetuning.md).
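A short rewriting sketch using the source format above; generation settings are illustrative.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/t5-base-canard")
model = T5ForConditionalGeneration.from_pretrained("castorini/t5-base-canard")

# History turns and the current question, joined with " ||| " as described above.
source = (
    "Frank Zappa ||| Disbandment ||| What group disbanded ||| "
    "Zappa and the Mothers of Invention ||| When did they disband?"
)
inputs = tokenizer(source, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected (per the example above): "When did Zappa and the Mothers of Invention disband?"
```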
{}
castorini/t5-base-canard
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This model is trained for conversational question rewriting. Usage: the source text format is ${HISTORY} ||| ${CURRENT_QUESTION}. Example from CANARD: Frank Zappa ||| Disbandment ||| What group disbanded ||| Zappa and the Mothers of Invention ||| When did they disband? Target text: When did Zappa and the Mothers of Invention disband? You can find our guide to reproducing the training in this repo.
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
transformers
This model reproduces the TCT-ColBERT dense retrieval model described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [Distilling Dense Representations for Ranking using Tightly-Coupled Teachers.](https://arxiv.org/abs/2010.11386) arXiv:2010.11386, October 2020. For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert.md).
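As a rough illustration (the linked Pyserini guide is the supported path), a query can be encoded with plain transformers and mean pooling; TCT-ColBERT applies additional query-side preprocessing, so this simplified pooling is an assumption and will not exactly reproduce the paper's numbers.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("castorini/tct_colbert-msmarco")
model = AutoModel.from_pretrained("castorini/tct_colbert-msmarco")

inputs = tokenizer("what is dense retrieval?", return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state
# Simplified: mean-pool token embeddings into a single query vector.
# The Pyserini encoder adds query markers and pools a specific token range instead.
query_embedding = token_embeddings.mean(dim=1)
```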
{}
castorini/tct_colbert-msmarco
null
[ "transformers", "pytorch", "arxiv:2010.11386", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.11386" ]
[]
TAGS #transformers #pytorch #arxiv-2010.11386 #endpoints_compatible #has_space #region-us
This model reproduces the TCT-ColBERT dense retrieval model described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. Distilling Dense Representations for Ranking using Tightly-Coupled Teachers. arXiv:2010.11386, October 2020. For more details on how to use it, check our experiments in Pyserini.
[]
[ "TAGS\n#transformers #pytorch #arxiv-2010.11386 #endpoints_compatible #has_space #region-us \n" ]
feature-extraction
transformers
This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_. You can find our reproduction report in Pyserini [here](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert-v2.md).
{}
castorini/tct_colbert-v2-hn-msmarco
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #has_space #region-us
This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval. _RepL4NLP 2021_. You can find our reproduction report in Pyserini here.
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #has_space #region-us \n" ]
feature-extraction
transformers
This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_. Specifically, this checkpoint is fine-tuned for MS MARCO-V2 passage ranking, and we use it as our "trained" model for the TREC DL 2021 submissions. The initial checkpoint is [tct_colbert-v2-hnp-msmarco](https://huggingface.co/castorini/tct_colbert-v2-hnp-msmarco), trained on [MS MARCO](https://github.com/microsoft/MSMARCO-Passage-Ranking). For fine-tuning, we construct our training data for MS MARCO-V2 passage ranking using this [script](https://github.com/castorini/pyserini/blob/master/scripts/msmarco_v2/generate_train_triplet.py).
{}
castorini/tct_colbert-v2-hnp-msmarco-r2
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval. _RepL4NLP 2021_. Specifically, this checkpoint is fine-tuned for MS MARCO-V2 passage ranking, and we use it as our "trained" model for the TREC DL 2021 submissions. The initial checkpoint is tct_colbert-v2-hnp-msmarco, trained on MS MARCO. For fine-tuning, we construct our training data for MS MARCO-V2 passage ranking using this script.
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_. You can find our reproduction report in Pyserini [here](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert-v2.md).
{}
castorini/tct_colbert-v2-hnp-msmarco
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #has_space #region-us
This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval. _RepL4NLP 2021_. You can find our reproduction report in Pyserini here.
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #has_space #region-us \n" ]
feature-extraction
transformers
This model reproduces Contextualized Query Embeddings for Conversational Search, described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [Contextualized Query Embeddings for Conversational Search.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_EMNLP2021.pdf) EMNLP, Nov 2021. This model is fine-tuned only on the query encoder, with a frozen passage encoder. The starting point is [tct_colbert-msmarco](https://huggingface.co/castorini/tct_colbert-msmarco/tree/main). Detailed usage of the model will be out soon on [Chatty Goose](https://github.com/castorini/chatty-goose). You can also check fine-tuning and inference using TensorFlow in our [CQE repo](https://github.com/castorini/CQE).
{}
castorini/tct_colbert-v2-msmarco-cqe
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
This model reproduces Contextualized Query Embeddings for Conversational Search, described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. Contextualized Query Embeddings for Conversational Search. EMNLP, Nov 2021. This model is fine-tuned only on the query encoder, with a frozen passage encoder. The starting point is tct_colbert-msmarco. Detailed usage of the model will be out soon on Chatty Goose. You can also check fine-tuning and inference using TensorFlow in our CQE repo.
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_. You can find our reproduction report in Pyserini [here](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert-v2.md).
{}
castorini/tct_colbert-v2-msmarco
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #has_space #region-us
This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval. _RepL4NLP 2021_. You can find our reproduction report in Pyserini here.
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #has_space #region-us \n" ]
null
null
An NER model to detect company and person names from news articles.
{}
cb-insights-team/news_ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
An NER model to detect company and person names from news articles.
[]
[ "TAGS\n#region-us \n" ]
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) * [Training global tokens](#training-global-tokens) This model is adapted from [LEGAL-BERT](https://huggingface.co/nlpaueb/legal-bert-base-uncased) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences, faster and more efficiently than Longformer or BigBird (from Transformers), and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Encoder-decoder is supported, but I didn't test it extensively.\ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-base-uncased-4096") ``` ## Parameters You can change various parameters like: * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask the first token since it is redundant with the first global token) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase the sparsity factor and remove dropout in the attention score matrix. ```python from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0, sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest-norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre-merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mechanism per head * Each head will use different tokens strided by sparsity_factor * Not recommended if sparsity_factor > num_heads * `sparsity_type="block_stride"`, use a striding mechanism per head * Each head will use blocks of tokens strided by sparsity_factor * Not recommended if sparsity_factor > num_heads ## Tasks Fill mask example: ```python from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-base-uncased-4096") SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."] pipeline = FillMaskPipeline(model, tokenizer) output = pipeline(SENTENCES, top_k=1) output = [o[0]["sequence"] for o in output] > ['Paris is the capital of France.', 'The goal of life is happiness.'] ``` Classification example: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token ) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-base-uncased-4096") SENTENCE = "This is a test for sequence classification. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... 
# Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` ## Training global tokens To train global tokens and the classification head only: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token num_global_tokens=16 ) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-base-uncased-4096") for name, param in model.named_parameters(): if "global_embeddings" not in name: param.requires_grad = False else: param.requires_grad = True ``` **LEGAL-BERT** ``` @inproceedings{chalkidis-etal-2020-legal, title = "{LEGAL}-{BERT}: The Muppets straight out of Law School", author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Aletras, Nikolaos and Androutsopoulos, Ion", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", doi = "10.18653/v1/2020.findings-emnlp.261", pages = "2898--2904" } ```
{"language": "en", "tags": ["long context", "legal"], "pipeline_tag": "fill-mask"}
ccdv/lsg-legal-base-uncased-4096
null
[ "transformers", "pytorch", "bert", "pretraining", "long context", "legal", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2210.15497" ]
[ "en" ]
TAGS #transformers #pytorch #bert #pretraining #long context #legal #fill-mask #custom_code #en #arxiv-2210.15497 #region-us
# LSG model Transformers >= 4.36.1\ This model relies on a custom modeling file, you need to add trust_remote_code=True\ See \#13467 LSG ArXiv paper. \ Github/conversion script is available at this link. * Usage * Parameters * Sparse selection type * Tasks * Training global tokens This model is adapted from LEGAL-BERT without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Support encoder-decoder but I didnt test it extensively.\ Implemented in PyTorch. !attn ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see URL file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If 'sparse_block_size=0' or 'sparsity_type="none"', only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. * 'sparsity_type="bos_pooling"' (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * 'sparsity_type="norm"', select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="pooling"', use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="lsh"', use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * 'sparsity_type="stride"', use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * 'sparsity_type="block_stride"', use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Fill mask example: Classification example: ## Training global tokens To train global tokens and the classification head only: LEGAL-BERT
[ "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n* Training global tokens\n\nThis model is adapted from LEGAL-BERT without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...).\n\nSupport encoder-decoder but I didnt test it extensively.\\\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nFill mask example:\n\n\n\nClassification example:", "## Training global tokens\nTo train global tokens and the classification head only:\n\n\nLEGAL-BERT" ]
[ "TAGS\n#transformers #pytorch #bert #pretraining #long context #legal #fill-mask #custom_code #en #arxiv-2210.15497 #region-us \n", "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n* Training global tokens\n\nThis model is adapted from LEGAL-BERT without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...).\n\nSupport encoder-decoder but I didnt test it extensively.\\\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nFill mask example:\n\n\n\nClassification example:", "## Training global tokens\nTo train global tokens and the classification head only:\n\n\nLEGAL-BERT" ]
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) * [Training global tokens](#training-global-tokens) This model is a small version of the [LEGAL-BERT](https://huggingface.co/nlpaueb/legal-bert-small-uncased) model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences, faster and more efficiently than Longformer or BigBird (from Transformers), and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Encoder-decoder is supported, but I didn't test it extensively.\ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096") ``` ## Parameters You can change various parameters like: * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask the first token since it is redundant with the first global token) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase the sparsity factor and remove dropout in the attention score matrix. ```python from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0, sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest-norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre-merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mechanism per head * Each head will use different tokens strided by sparsity_factor * Not recommended if sparsity_factor > num_heads * `sparsity_type="block_stride"`, use a striding mechanism per head * Each head will use blocks of tokens strided by sparsity_factor * Not recommended if sparsity_factor > num_heads ## Tasks Fill mask example: ```python from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096") SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."] pipeline = FillMaskPipeline(model, tokenizer) output = pipeline(SENTENCES, top_k=1) output = [o[0]["sequence"] for o in output] > ['Paris is the capital of France.', 'The goal of life is happiness.'] ``` Classification example: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token ) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096") SENTENCE = "This is a test for sequence classification. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... 
# Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` ## Training global tokens To train global tokens and the classification head only: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token num_global_tokens=16 ) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096") for name, param in model.named_parameters(): if "global_embeddings" not in name: param.requires_grad = False else: param.requires_grad = True ``` **LEGAL-BERT** ``` @inproceedings{chalkidis-etal-2020-legal, title = "{LEGAL}-{BERT}: The Muppets straight out of Law School", author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Aletras, Nikolaos and Androutsopoulos, Ion", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", doi = "10.18653/v1/2020.findings-emnlp.261", pages = "2898--2904" } ```
{"language": "en", "tags": ["long context", "legal"], "pipeline_tag": "fill-mask"}
ccdv/lsg-legal-small-uncased-4096
null
[ "transformers", "pytorch", "bert", "pretraining", "long context", "legal", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2210.15497" ]
[ "en" ]
TAGS #transformers #pytorch #bert #pretraining #long context #legal #fill-mask #custom_code #en #arxiv-2210.15497 #region-us
# LSG model Transformers >= 4.36.1\ This model relies on a custom modeling file, you need to add trust_remote_code=True\ See \#13467 LSG ArXiv paper. \ Github/conversion script is available at this link. * Usage * Parameters * Sparse selection type * Tasks * Training global tokens This model is a small version of the LEGAL-BERT model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Support encoder-decoder but I didnt test it extensively.\ Implemented in PyTorch. !attn ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see URL file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If 'sparse_block_size=0' or 'sparsity_type="none"', only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. * 'sparsity_type="bos_pooling"' (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * 'sparsity_type="norm"', select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="pooling"', use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="lsh"', use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * 'sparsity_type="stride"', use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * 'sparsity_type="block_stride"', use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Fill mask example: Classification example: ## Training global tokens To train global tokens and the classification head only: LEGAL-BERT
[ "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n* Training global tokens\n\nThis model is a small version of the LEGAL-BERT model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. \n\n\nThis model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).\n\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \n\n\nSupport encoder-decoder but I didnt test it extensively.\\\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nFill mask example:\n\n\n\nClassification example:", "## Training global tokens\nTo train global tokens and the classification head only:\n\n\n\nLEGAL-BERT" ]
[ "TAGS\n#transformers #pytorch #bert #pretraining #long context #legal #fill-mask #custom_code #en #arxiv-2210.15497 #region-us \n", "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n* Training global tokens\n\nThis model is a small version of the LEGAL-BERT model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. \n\n\nThis model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).\n\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \n\n\nSupport encoder-decoder but I didnt test it extensively.\\\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nFill mask example:\n\n\n\nClassification example:", "## Training global tokens\nTo train global tokens and the classification head only:\n\n\n\nLEGAL-BERT" ]
fill-mask
transformers
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).

* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)

This model is adapted from [BART-base](https://huggingface.co/facebook/bart-base) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.

This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).

The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...).

Implemented in PyTorch.

![attn](attn.png)

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")
```

## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file

Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("ccdv/lsg-bart-base-4096",
    trust_remote_code=True,
    num_global_tokens=16,
    block_size=64,
    sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
    sparsity_factor=4,
    sparsity_type="none",
    mask_first_token=True
)
```

## Sparse selection type

There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
  * weighted average pooling using the BOS token
  * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
  * Additional parameters:
    * None
* `sparsity_type="norm"`, select highest norm tokens
  * Works best for a small sparsity_factor (2 to 4)
  * Additional parameters:
    * None
* `sparsity_type="pooling"`, use average pooling to merge tokens
  * Works best for a small sparsity_factor (2 to 4)
  * Additional parameters:
    * None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
  * Works best for a large sparsity_factor (4+)
  * LSH relies on random projections, thus inference may differ slightly with different seeds
  * Additional parameters:
    * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
  * Each head will use different tokens strided by sparsity_factor
  * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
  * Each head will use block of tokens strided by sparsity_factor
  * Not recommended if sparsity_factor > num_heads

## Tasks
Seq2Seq example for summarization:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096",
    trust_remote_code=True,
    pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
    SENTENCE,
    return_tensors="pt",
    padding="max_length", # Optional but recommended
    truncation=True # Optional but recommended
)
output = model(**token_ids)
```

Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-bart-base-4096",
    trust_remote_code=True,
    pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
    SENTENCE,
    return_tensors="pt",
    #pad_to_multiple_of=... # Optional
    truncation=True
)
output = model(**token_ids)

> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```

**BART**
```
@article{DBLP:journals/corr/abs-1910-13461,
  author    = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer},
  title     = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension},
  journal   = {CoRR},
  volume    = {abs/1910.13461},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint    = {1910.13461},
  timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
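A minimal generation sketch as a complement to the forward-pass examples above; the decoding settings (num_beams, max_length) are illustrative assumptions and the raw checkpoint is not fine-tuned for summarization:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch only: the base checkpoint is not fine-tuned, so the generated text is illustrative.
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")

text = "This is a test sequence to test the model. " * 300
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Assumed decoding parameters; adjust num_beams/max_length for a real task.
summary_ids = model.generate(**inputs, num_beams=4, max_length=128, early_stopping=True)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```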
{"language": ["en"], "tags": ["summarization", "bart", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-bart-base-4096
null
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "long context", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "arxiv:1910.13461", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2210.15497", "1910.13461" ]
[ "en" ]
TAGS #transformers #pytorch #bart #text2text-generation #summarization #long context #fill-mask #custom_code #en #arxiv-2210.15497 #arxiv-1910.13461 #autotrain_compatible #region-us
# LSG model Transformers >= 4.36.1\ This model relies on a custom modeling file, you need to add trust_remote_code=True\ See \#13467 LSG ArXiv paper. \ Github/conversion script is available at this link. * Usage * Parameters * Sparse selection type * Tasks This model is adapted from BART-base for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Implemented in PyTorch. !attn ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see URL file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If 'sparse_block_size=0' or 'sparsity_type="none"', only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. * 'sparsity_type="bos_pooling"' (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * 'sparsity_type="norm"', select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="pooling"', use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="lsh"', use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * 'sparsity_type="stride"', use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * 'sparsity_type="block_stride"', use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Seq2Seq example for summarization: Classification example: BART
[ "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n\nThis model is adapted from BART-base for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \n\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nSeq2Seq example for summarization:\n\n\n\nClassification example:\n\n\nBART" ]
[ "TAGS\n#transformers #pytorch #bart #text2text-generation #summarization #long context #fill-mask #custom_code #en #arxiv-2210.15497 #arxiv-1910.13461 #autotrain_compatible #region-us \n", "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n\nThis model is adapted from BART-base for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \n\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nSeq2Seq example for summarization:\n\n\n\nClassification example:\n\n\nBART" ]
fill-mask
transformers
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).

* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)

This model is adapted from [BART-large](https://huggingface.co/facebook/bart-large) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.

This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).

The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \
Implemented in PyTorch.

![attn](attn.png)

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ccdv/lsg-bart-large-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-large-4096")
```

## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file

Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("ccdv/lsg-bart-large-4096",
    trust_remote_code=True,
    num_global_tokens=16,
    block_size=64,
    sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
    sparsity_factor=4,
    sparsity_type="none",
    mask_first_token=True
)
```

## Sparse selection type

There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
  * weighted average pooling using the BOS token
  * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
  * Additional parameters:
    * None
* `sparsity_type="norm"`, select highest norm tokens
  * Works best for a small sparsity_factor (2 to 4)
  * Additional parameters:
    * None
* `sparsity_type="pooling"`, use average pooling to merge tokens
  * Works best for a small sparsity_factor (2 to 4)
  * Additional parameters:
    * None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
  * Works best for a large sparsity_factor (4+)
  * LSH relies on random projections, thus inference may differ slightly with different seeds
  * Additional parameters:
    * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
  * Each head will use different tokens strided by sparsity_factor
  * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
  * Each head will use block of tokens strided by sparsity_factor
  * Not recommended if sparsity_factor > num_heads

## Tasks
Seq2Seq example for summarization:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-large-4096",
    trust_remote_code=True,
    pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-large-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
    SENTENCE,
    return_tensors="pt",
    #pad_to_multiple_of=... # Optional
    truncation=True
)
output = model(**token_ids)
```

Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-bart-large-4096",
    trust_remote_code=True,
    pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-large-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
    SENTENCE,
    return_tensors="pt",
    padding="max_length", # Optional but recommended
    truncation=True # Optional but recommended
)
output = model(**token_ids)

> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```

**BART**
```
@article{DBLP:journals/corr/abs-1910-13461,
  author    = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer},
  title     = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension},
  journal   = {CoRR},
  volume    = {abs/1910.13461},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint    = {1910.13461},
  timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
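A short sketch of the truncation plus block-size padding recommended above; the value 256 is an assumed multiple of the default block_size=128 and can be any such multiple:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-large-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-large-4096")

# Truncate and pad so the sequence length is a multiple of the block size.
token_ids = tokenizer(
    "This is a test sequence to test the model. " * 300,
    return_tensors="pt",
    truncation=True,
    padding=True,
    pad_to_multiple_of=256,  # assumed value; any multiple of block_size=128 works
)
assert token_ids["input_ids"].shape[-1] % 256 == 0  # length rounded up to the chosen multiple
output = model(**token_ids)
```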
{"language": ["en"], "tags": ["summarization", "bart", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-bart-large-4096
null
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "long context", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "arxiv:1910.13461", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2210.15497", "1910.13461" ]
[ "en" ]
TAGS #transformers #pytorch #bart #text2text-generation #summarization #long context #fill-mask #custom_code #en #arxiv-2210.15497 #arxiv-1910.13461 #autotrain_compatible #region-us
# LSG model Transformers >= 4.36.1\ This model relies on a custom modeling file, you need to add trust_remote_code=True\ See \#13467 LSG ArXiv paper. \ Github/conversion script is available at this link. * Usage * Parameters * Sparse selection type * Tasks This model is adapted from BART-large for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \ Implemented in PyTorch. !attn ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see URL file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If 'sparse_block_size=0' or 'sparsity_type="none"', only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. * 'sparsity_type="bos_pooling"' (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * 'sparsity_type="norm"', select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="pooling"', use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="lsh"', use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * 'sparsity_type="stride"', use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * 'sparsity_type="block_stride"', use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Seq2Seq example for summarization: Classification example: BART
[ "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n\nThis model is adapted from BART-large for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).\n\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \\\n\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nSeq2Seq example for summarization:\n\n\n\nClassification example:\n\n\nBART" ]
[ "TAGS\n#transformers #pytorch #bart #text2text-generation #summarization #long context #fill-mask #custom_code #en #arxiv-2210.15497 #arxiv-1910.13461 #autotrain_compatible #region-us \n", "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n\nThis model is adapted from BART-large for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).\n\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \\\n\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nSeq2Seq example for summarization:\n\n\n\nClassification example:\n\n\nBART" ]
fill-mask
transformers
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).

* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)

This model is adapted from [BARThez](https://huggingface.co/moussaKam/barthez) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.

This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).

The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \
Implemented in PyTorch.

![attn](attn.png)

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ccdv/lsg-barthez-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-barthez-4096")
```

## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file

Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("ccdv/lsg-barthez-4096",
    trust_remote_code=True,
    num_global_tokens=16,
    block_size=64,
    sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
    sparsity_factor=4,
    sparsity_type="none",
    mask_first_token=True
)
```

## Sparse selection type

There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
  * weighted average pooling using the BOS token
  * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
  * Additional parameters:
    * None
* `sparsity_type="norm"`, select highest norm tokens
  * Works best for a small sparsity_factor (2 to 4)
  * Additional parameters:
    * None
* `sparsity_type="pooling"`, use average pooling to merge tokens
  * Works best for a small sparsity_factor (2 to 4)
  * Additional parameters:
    * None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
  * Works best for a large sparsity_factor (4+)
  * LSH relies on random projections, thus inference may differ slightly with different seeds
  * Additional parameters:
    * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
  * Each head will use different tokens strided by sparsity_factor
  * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
  * Each head will use block of tokens strided by sparsity_factor
  * Not recommended if sparsity_factor > num_heads

## Tasks
Seq2Seq example for summarization:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-barthez-4096",
    trust_remote_code=True,
    pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-barthez-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
    SENTENCE,
    return_tensors="pt",
    padding="max_length", # Optional but recommended
    truncation=True # Optional but recommended
)
output = model(**token_ids)
```

Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-barthez-4096",
    trust_remote_code=True,
    pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-barthez-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
    SENTENCE,
    return_tensors="pt",
    #pad_to_multiple_of=... # Optional
    truncation=True
)
output = model(**token_ids)

> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```

## Conversion script

To convert a BERT, RoBERTa or BART checkpoint to LSG, see this [repo](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).

**BARThez**
```
@article{eddine2020barthez,
  title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
  author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
  journal={arXiv preprint arXiv:2010.12321},
  year={2020}
}
```
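A minimal French generation sketch, assuming the standard generate() API; the raw checkpoint is not fine-tuned for summarization, so the decoding settings and output are purely illustrative:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-barthez-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-barthez-4096")

ARTICLE = "Ceci est une phrase de test pour le modèle. " * 300
inputs = tokenizer(ARTICLE, return_tensors="pt", truncation=True)

# Assumed decoding parameters; fine-tune the model before expecting real summaries.
ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.batch_decode(ids, skip_special_tokens=True)[0])
```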
{"language": ["fr"], "tags": ["summarization", "bart", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-barthez-4096
null
[ "transformers", "pytorch", "mbart", "text2text-generation", "summarization", "bart", "long context", "fill-mask", "custom_code", "fr", "arxiv:2210.15497", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2210.15497" ]
[ "fr" ]
TAGS #transformers #pytorch #mbart #text2text-generation #summarization #bart #long context #fill-mask #custom_code #fr #arxiv-2210.15497 #autotrain_compatible #region-us
# LSG model Transformers >= 4.36.1\ This model relies on a custom modeling file, you need to add trust_remote_code=True\ See \#13467 LSG ArXiv paper. \ Github/conversion script is available at this link. * Usage * Parameters * Sparse selection type * Tasks This model is adapted from BARThez for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \ Implemented in PyTorch. !attn ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see URL file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If 'sparse_block_size=0' or 'sparsity_type="none"', only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. * 'sparsity_type="bos_pooling"' (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * 'sparsity_type="norm"', select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="pooling"', use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="lsh"', use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * 'sparsity_type="stride"', use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * 'sparsity_type="block_stride"', use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Seq2Seq example for summarization: Classification example: ## Conversion script To convert a BERT, RoBERTa or BART checkpoint to LSG, see this repo. BARThez
[ "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n\nThis model is adapted from BARThez for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).\n\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \\\n\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nSeq2Seq example for summarization:\n\n\n\nClassification example:", "## Conversion script\n\nTo convert a BERT, RoBERTa or BART checkpoint to LSG, see this repo.\n\n\nBARThez" ]
[ "TAGS\n#transformers #pytorch #mbart #text2text-generation #summarization #bart #long context #fill-mask #custom_code #fr #arxiv-2210.15497 #autotrain_compatible #region-us \n", "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n\nThis model is adapted from BARThez for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).\n\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \\\n\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nSeq2Seq example for summarization:\n\n\n\nClassification example:", "## Conversion script\n\nTo convert a BERT, RoBERTa or BART checkpoint to LSG, see this repo.\n\n\nBARThez" ]
fill-mask
transformers
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).

* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)

This model is adapted from [CamemBERT-base](https://huggingface.co/camembert-base) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.

This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).

The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...).

Supports encoder-decoder, but I didn't test it extensively.\
Implemented in PyTorch.

![attn](attn.png)

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")
```

## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file

Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("ccdv/lsg-camembert-base-4096",
    trust_remote_code=True,
    num_global_tokens=16,
    block_size=64,
    sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
    sparsity_factor=4,
    sparsity_type="none",
    mask_first_token=True
)
```

## Sparse selection type

There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
    * weighted average pooling using the BOS token 
    * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
    * Additional parameters:
        * None
* `sparsity_type="norm"`, select highest norm tokens
    * Works best for a small sparsity_factor (2 to 4)
    * Additional parameters:
        * None
* `sparsity_type="pooling"`, use average pooling to merge tokens
    * Works best for a small sparsity_factor (2 to 4)
    * Additional parameters:
        * None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
    * Works best for a large sparsity_factor (4+)
    * LSH relies on random projections, thus inference may differ slightly with different seeds
    * Additional parameters:
        * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads

## Tasks
Fill mask example:

```python:
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")

SENTENCES = "Paris est la <mask> de la France."
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES)

> 'Paris est la capitale de la France.'
```

Classification example:

```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-camembert-base-4096", 
    trust_remote_code=True, 
    pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")

SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
    SENTENCE, 
    return_tensors="pt", 
    #pad_to_multiple_of=... # Optional
    truncation=True
    )
output = model(**token_ids)

> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```

## Training global tokens
To train global tokens and the classification head only:

```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-camembert-base-4096", 
    trust_remote_code=True, 
    pool_with_global=True, # pool with a global token instead of first token
    num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")

for name, param in model.named_parameters():
    if "global_embeddings" not in name:
        param.requires_grad = False
    else:
        param.requires_grad = True
```

**CamemBERT**
```
@inproceedings{Martin_2020,
  doi = {10.18653/v1/2020.acl-main.645},
  url = {https://doi.org/10.18653%2Fv1%2F2020.acl-main.645},
  year = 2020,
  publisher = {Association for Computational Linguistics},
  author = {Louis Martin and Benjamin Muller and Pedro Javier Ortiz Su{\'{a}}rez and Yoann Dupont and Laurent Romary and {\'{E}}ric de la Clergerie and Djam{\'{e}} Seddah and Beno{\^{\i}}t Sagot},
  title = {{CamemBERT}: a Tasty French Language Model},
  booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}
}
```
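As a small, hypothetical illustration of the padding recommendation above (truncate, then pad to a multiple of the block size), the sketch below runs the masked LM on a long French input; the repeated sentence is only a placeholder for a real document.

```python:
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")

# Placeholder long input; any French text up to 4096 tokens is handled the same way.
long_text = "Paris est la <mask> de la France. " * 200

inputs = tokenizer(
    long_text,
    return_tensors="pt",
    truncation=True,         # recommended above
    padding=True,
    pad_to_multiple_of=128,  # multiple of the default block_size
)
output = model(**inputs)
print(output.logits.shape)   # (1, padded_length, vocab_size)
```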
{"language": "fr", "tags": ["camembert", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-camembert-base-4096
null
[ "transformers", "pytorch", "camembert", "fill-mask", "long context", "custom_code", "fr", "arxiv:2210.15497", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2210.15497" ]
[ "fr" ]
TAGS #transformers #pytorch #camembert #fill-mask #long context #custom_code #fr #arxiv-2210.15497 #autotrain_compatible #region-us
# LSG model Transformers >= 4.36.1\ This model relies on a custom modeling file, you need to add trust_remote_code=True\ See \#13467 LSG ArXiv paper. \ Github/conversion script is available at this link. * Usage * Parameters * Sparse selection type * Tasks * Training global tokens This model is adapted from CamemBERT-base without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Support encoder-decoder but I didnt test it extensively.\ Implemented in PyTorch. !attn ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see URL file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If 'sparse_block_size=0' or 'sparsity_type="none"', only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. * 'sparsity_type="bos_pooling"' (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * 'sparsity_type="norm"', select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="pooling"', use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="lsh"', use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * 'sparsity_type="stride"', use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * 'sparsity_type="block_stride"', use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Fill mask example: Classification example: ## Training global tokens To train global tokens and the classification head only: CamemBERT
[ "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n* Training global tokens\n\nThis model is adapted from CamemBERT-base without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.\n\nThis model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...).\n\nSupport encoder-decoder but I didnt test it extensively.\\\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nFill mask example:\n\n\n\nClassification example:", "## Training global tokens\nTo train global tokens and the classification head only:\n\n\nCamemBERT" ]
[ "TAGS\n#transformers #pytorch #camembert #fill-mask #long context #custom_code #fr #arxiv-2210.15497 #autotrain_compatible #region-us \n", "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n* Training global tokens\n\nThis model is adapted from CamemBERT-base without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.\n\nThis model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...).\n\nSupport encoder-decoder but I didnt test it extensively.\\\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nFill mask example:\n\n\n\nClassification example:", "## Training global tokens\nTo train global tokens and the classification head only:\n\n\nCamemBERT" ]
fill-mask
transformers
# LSG model 
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).

* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)

This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).

The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \

The model is trained starting from a RoBERTa-base checkpoint on 16Gb of data (Wikipedia, Bookcorpus etc...) using the same number of parameters/layers and the same tokenizer.

Support encoder-decoder and causal masking but I didn't test it extensively.\
Implemented in PyTorch.

![attn](attn.png)

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.

```python:
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096")
```

## Parameters
You can change various parameters like : 
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file

Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.

```python:
from transformers import AutoModel

model = AutoModel.from_pretrained("ccdv/lsg-base-4096", 
    trust_remote_code=True, 
    num_global_tokens=16,
    block_size=64,
    sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
    sparsity_factor=4,
    sparsity_type="none",
    mask_first_token=True
)
```

## Sparse selection type

There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
    * weighted average pooling using the BOS token 
    * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
    * Additional parameters:
        * None
* `sparsity_type="norm"`, select highest norm tokens
    * Works best for a small sparsity_factor (2 to 4)
    * Additional parameters:
        * None
* `sparsity_type="pooling"`, use average pooling to merge tokens
    * Works best for a small sparsity_factor (2 to 4)
    * Additional parameters:
        * None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
    * Works best for a large sparsity_factor (4+)
    * LSH relies on random projections, thus inference may differ slightly with different seeds
    * Additional parameters:
        * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads

## Tasks
Fill mask example:

```python:
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096")

SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)

output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.', 'The goal of life is happiness.']
```

Classification example:

```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-base-4096", 
    trust_remote_code=True, 
    pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096")

SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
    SENTENCE, 
    return_tensors="pt", 
    #pad_to_multiple_of=... # Optional
    truncation=True
    )
output = model(**token_ids)

> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```

## Training global tokens
To train global tokens and the classification head only:

```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-base-4096", 
    trust_remote_code=True, 
    pool_with_global=True, # pool with a global token instead of first token
    num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096")

for name, param in model.named_parameters():
    if "global_embeddings" not in name:
        param.requires_grad = False
    else:
        param.requires_grad = True
```
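A quick sanity check for the freezing loop above is to count trainable parameters afterwards. This sketch is not from the original card; it also assumes the classification head parameters carry the usual RoBERTa-style `classifier` prefix, in case you want to keep the head trainable as well.

```python:
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-base-4096",
    trust_remote_code=True,
    pool_with_global=True,
    num_global_tokens=16
)

# Keep only the global token embeddings (and, assumed name, the classification head) trainable.
for name, param in model.named_parameters():
    param.requires_grad = ("global_embeddings" in name) or ("classifier" in name)

total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,} / {total:,}")
```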
{"language": "en", "tags": ["long context"]}
ccdv/lsg-base-4096
null
[ "transformers", "pytorch", "roberta", "fill-mask", "long context", "custom_code", "en", "arxiv:2210.15497", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2210.15497" ]
[ "en" ]
TAGS #transformers #pytorch #roberta #fill-mask #long context #custom_code #en #arxiv-2210.15497 #autotrain_compatible #region-us
# LSG model Transformers >= 4.36.1\ This model relies on a custom modeling file, you need to add trust_remote_code=True\ See \#13467 LSG ArXiv paper. \ Github/conversion script is available at this link. * Usage * Parameters * Sparse selection type * Tasks * Training global tokens This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \ The model is trained starting from a RoBERTa-base checkpoint on 16Gb of data (Wikipedia, Bookcorpus etc...) using the same number of parameters/layers and the same tokenizer. Support encoder-decoder and causal masking but I didnt test it extensively.\ Implemented in PyTorch. !attn ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see URL file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If 'sparse_block_size=0' or 'sparsity_type="none"', only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. * 'sparsity_type="bos_pooling"' (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * 'sparsity_type="norm"', select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="pooling"', use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="lsh"', use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * 'sparsity_type="stride"', use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * 'sparsity_type="block_stride"', use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Fill mask example: Classification example: ## Training global tokens To train global tokens and the classification head only:
[ "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n* Training global tokens\n\nThis model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).\n\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \\\n\n\nThe model is trained starting from a RoBERTa-base checkpoint on 16Gb of data (Wikipedia, Bookcorpus etc...) using the same number of parameters/layers and the same tokenizer.\n\n\nSupport encoder-decoder and causal masking but I didnt test it extensively.\\\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nFill mask example:\n\n\n\nClassification example:", "## Training global tokens\nTo train global tokens and the classification head only:" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #long context #custom_code #en #arxiv-2210.15497 #autotrain_compatible #region-us \n", "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n* Training global tokens\n\nThis model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).\n\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \\\n\n\nThe model is trained starting from a RoBERTa-base checkpoint on 16Gb of data (Wikipedia, Bookcorpus etc...) using the same number of parameters/layers and the same tokenizer.\n\n\nSupport encoder-decoder and causal masking but I didnt test it extensively.\\\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* mask_first_token (mask first token since it is redundant with the first global token)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nFill mask example:\n\n\n\nClassification example:", "## Training global tokens\nTo train global tokens and the classification head only:" ]
fill-mask
transformers
# LSG model 
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).

* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)

This model is adapted from [Pegasus-large](https://huggingface.co/google/pegasus-large) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.

This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).

The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \

Implemented in PyTorch.

![attn](attn.png)

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.

```python:
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ccdv/lsg-pegasus-large-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")
```

## Parameters
You can change various parameters like : 
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* see config.json file

Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.

```python:
from transformers import AutoModel

model = AutoModel.from_pretrained("ccdv/lsg-pegasus-large-4096", 
    trust_remote_code=True, 
    num_global_tokens=16,
    block_size=64,
    sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
    sparsity_factor=4,
    sparsity_type="none",
    mask_first_token=True
)
```

## Sparse selection type

There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
    * weighted average pooling using the BOS token 
    * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
    * Additional parameters:
        * None
* `sparsity_type="norm"`, select highest norm tokens
    * Works best for a small sparsity_factor (2 to 4)
    * Additional parameters:
        * None
* `sparsity_type="pooling"`, use average pooling to merge tokens
    * Works best for a small sparsity_factor (2 to 4)
    * Additional parameters:
        * None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
    * Works best for a large sparsity_factor (4+)
    * LSH relies on random projections, thus inference may differ slightly with different seeds
    * Additional parameters:
        * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads

## Tasks
Seq2Seq example for summarization:

```python:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-pegasus-large-4096", 
    trust_remote_code=True, 
    pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
    SENTENCE, 
    return_tensors="pt", 
    #pad_to_multiple_of=... # Optional
    truncation=True
    )
output = model(**token_ids)
```

Classification example:

```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-pegasus-large-4096", 
    trust_remote_code=True, 
    pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
    SENTENCE, 
    return_tensors="pt", 
    padding="max_length", # Optional but recommended
    truncation=True # Optional but recommended
    )
output = model(**token_ids)

> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```

**Pegasus**
```
@misc{zhang2019pegasus,
    title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
    author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
    year={2019},
    eprint={1912.08777},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
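The Seq2Seq example above only runs a forward pass; to actually produce a summary you would call generate(). A minimal sketch, where the beam size and length limits are arbitrary illustrative choices rather than values from this card:

```python:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-pegasus-large-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")

SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(SENTENCE, return_tensors="pt", truncation=True)

# Beam search decoding; hyperparameters are illustrative only.
summary_ids = model.generate(
    **token_ids,
    num_beams=4,
    max_length=256,
    early_stopping=True,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```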
{"language": ["en"], "tags": ["summarization", "pegasus", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-pegasus-large-4096
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "summarization", "long context", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "arxiv:1912.08777", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2210.15497", "1912.08777" ]
[ "en" ]
TAGS #transformers #pytorch #pegasus #text2text-generation #summarization #long context #fill-mask #custom_code #en #arxiv-2210.15497 #arxiv-1912.08777 #autotrain_compatible #region-us
# LSG model Transformers >= 4.36.1\ This model relies on a custom modeling file, you need to add trust_remote_code=True\ See \#13467 LSG ArXiv paper. \ Github/conversion script is available at this link. * Usage * Parameters * Sparse selection type * Tasks This model is adapted from Pegasus-large for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \ Implemented in PyTorch. !attn ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * see URL file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If 'sparse_block_size=0' or 'sparsity_type="none"', only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. * 'sparsity_type="bos_pooling"' (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * 'sparsity_type="norm"', select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="pooling"', use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * 'sparsity_type="lsh"', use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * 'sparsity_type="stride"', use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * 'sparsity_type="block_stride"', use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Seq2Seq example for summarization: Classification example: Pegasus
[ "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n\nThis model is adapted from Pegasus-large for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \\\n\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nSeq2Seq example for summarization:\n\n\n\nClassification example:\n\n\n\nPegasus" ]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #long context #fill-mask #custom_code #en #arxiv-2210.15497 #arxiv-1912.08777 #autotrain_compatible #region-us \n", "# LSG model \nTransformers >= 4.36.1\\\nThis model relies on a custom modeling file, you need to add trust_remote_code=True\\\nSee \\#13467\n\nLSG ArXiv paper. \\\nGithub/conversion script is available at this link.\n\n* Usage\n* Parameters\n* Sparse selection type\n* Tasks\n\nThis model is adapted from Pegasus-large for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.\n\n\nThis model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).\n\nThe model requires sequences whose length is a multiple of the block size. The model is \"adaptive\" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \\\n\nImplemented in PyTorch.\n\n!attn", "## Usage\nThe model relies on a custom modeling file, you need to add trust_remote_code=True to use it.", "## Parameters\nYou can change various parameters like : \n* the number of global tokens (num_global_tokens=1)\n* local block size (block_size=128)\n* sparse block size (sparse_block_size=128)\n* sparsity factor (sparsity_factor=2)\n* see URL file\n\nDefault parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.", "## Sparse selection type\n\nThere are 6 different sparse selection patterns. The best type is task dependent. \\\nIf 'sparse_block_size=0' or 'sparsity_type=\"none\"', only local attention is considered. \\\nNote that for sequences with length < 2*block_size, the type has no effect.\n* 'sparsity_type=\"bos_pooling\"' (new)\n * weighted average pooling using the BOS token \n * Works best in general, especially with a rather large sparsity_factor (8, 16, 32)\n * Additional parameters:\n * None\n* 'sparsity_type=\"norm\"', select highest norm tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"pooling\"', use average pooling to merge tokens\n * Works best for a small sparsity_factor (2 to 4)\n * Additional parameters:\n * None\n* 'sparsity_type=\"lsh\"', use the LSH algorithm to cluster similar tokens\n * Works best for a large sparsity_factor (4+)\n * LSH relies on random projections, thus inference may differ slightly with different seeds\n * Additional parameters:\n * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids\n* 'sparsity_type=\"stride\"', use a striding mecanism per head\n * Each head will use different tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads\n* 'sparsity_type=\"block_stride\"', use a striding mecanism per head\n * Each head will use block of tokens strided by sparsify_factor\n * Not recommended if sparsify_factor > num_heads", "## Tasks\nSeq2Seq example for summarization:\n\n\n\nClassification example:\n\n\n\nPegasus" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-100k-VoxPopuli-Català

**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL:**

https://huggingface.co/softcatala/wav2vec2-large-100k-voxpopuli-catala

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on the Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.

**Attention:** The split train/dev/test used does not fully map with the CommonVoice 6.1 dataset. A custom split was used combining both the CommonVoice and ParlamentParla datasets and can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test dataset will produce a biased WER as 1144 audio files of that dataset were used in training/evaluation of this model.

WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) which was not seen by the model during training/evaluation.

You can find training and evaluation scripts in the GitHub repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala).

When using this model, make sure that your speech input is sampled at 16kHz.

## Results

Word error rate was evaluated on the following datasets unseen by the model:

| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) | 5.98% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.14% |
| Audiobook “La llegenda de Sant Jordi” | 12.02% |

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
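To reproduce a WER figure yourself, a sketch along these lines should work; it reuses the preprocessing above and scores the output with the jiwer package. Keep in mind the warning above: a Common Voice slice gives a biased number, so use the linked custom split for meaningful results.

```python
import torch
import torchaudio
from datasets import load_dataset
from jiwer import wer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Small Common Voice slice for illustration only (biased, see the note above).
test_dataset = load_dataset("common_voice", "ca", split="test[:1%]")

processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predictions = processor.batch_decode(torch.argmax(logits, dim=-1))

print("WER:", wer(test_dataset["sentence"], predictions))
```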
{"language": "ca", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "speech-to-text"], "datasets": ["common_voice", "parlament_parla"], "metrics": ["wer"]}
ccoreilly/wav2vec2-large-100k-voxpopuli-catala
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "speech-to-text", "ca", "dataset:common_voice", "dataset:parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ca" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #speech-to-text #ca #dataset-common_voice #dataset-parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us
Wav2Vec2-Large-100k-VoxPopuli-Català ==================================== ️NOTICE️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL: URL Fine-tuned facebook/wav2vec2-large-100k-voxpopuli on Catalan language using the Common Voice and ParlamentParla datasets. Attention: The split train/dev/test used does not fully map with the CommonVoice 6.1 dataset. A custom split was used combining both the CommonVoice and ParlamentParla dataset and can be found here. Evaluating on the CV test dataset will produce a biased WER as 1144 audio files of that dataset were used in training/evaluation of this model. WER was calculated using this URL which was not seen by the model during training/evaluation. You can find training and evaluation scripts in the github repository ccoreilly/wav2vec2-catala When using this model, make sure that your speech input is sampled at 16kHz. Results ------- Word error rate was evaluated on the following datasets unseen by the model: Usage ----- The model can be used directly (without a language model) as follows:
[]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #speech-to-text #ca #dataset-common_voice #dataset-parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-Català

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.

**Attention:** The split train/dev/test used does not fully map with the CommonVoice 6.1 dataset. A custom split was used combining both the CommonVoice and ParlamentParla datasets and can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test dataset will produce a biased WER as 1144 audio files of that dataset were used in training/evaluation of this model.

WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) which was not seen by the model during training/evaluation.

You can find training and evaluation scripts in the GitHub repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala).

When using this model, make sure that your speech input is sampled at 16kHz.

## Results

Word error rate was evaluated on the following datasets unseen by the model:

| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) | 6.92% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.99% |
| Audiobook “La llegenda de Sant Jordi” | 13.23% |

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
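If you want to transcribe a single recording of your own instead of a Common Voice sample, a minimal sketch follows; the file path is a placeholder, and the audio is resampled to the 16 kHz rate required above.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")

# Placeholder path to a local mono recording.
speech, sampling_rate = torchaudio.load("audio.wav")
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```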
{"language": "ca", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "parlament_parla"], "metrics": ["wer"]}
ccoreilly/wav2vec2-large-xlsr-catala
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ca", "dataset:common_voice", "dataset:parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ca" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ca #dataset-common_voice #dataset-parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us
Wav2Vec2-Large-XLSR-Català ========================== Fine-tuned facebook/wav2vec2-large-xlsr-53 on Catalan language using the Common Voice and ParlamentParla datasets. Attention: The split train/dev/test used does not fully map with the CommonVoice 6.1 dataset. A custom split was used combining both the CommonVoice and ParlamentParla dataset and can be found here. Evaluating on the CV test dataset will produce a biased WER as 1144 audio files of that dataset were used in training/evaluation of this model. WER was calculated using this URL which was not seen by the model during training/evaluation. You can find training and evaluation scripts in the github repository ccoreilly/wav2vec2-catala When using this model, make sure that your speech input is sampled at 16kHz. Results ------- Word error rate was evaluated on the following datasets unseen by the model: Usage ----- The model can be used directly (without a language model) as follows:
[]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ca #dataset-common_voice #dataset-parlament_parla #license-apache-2.0 #model-index #endpoints_compatible #region-us \n" ]
text-generation
transformers
# GIMPLEARN knows modeltest2

To generate a conversation, use an input such as `Human: What should I do?\nAI:`
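A minimal sketch of that prompt format with the transformers text-generation pipeline; the sampling settings below are arbitrary illustrative choices, not values recommended by this repository.

```python
from transformers import pipeline

# GPT-Neo checkpoint from this repository.
generator = pipeline("text-generation", model="cd-dvd/testmodel2")

prompt = "Human: What should I do?\nAI:"
output = generator(prompt, max_new_tokens=50, do_sample=True, top_p=0.9)
print(output[0]["generated_text"])
```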
{"tags": ["Text Generation"]}
cd-dvd/testmodel2
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "Text Generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt_neo #text-generation #Text Generation #autotrain_compatible #endpoints_compatible #region-us
# GIMPLEARN knows modeltest2 # To generate conversation use input such as Human: What should I do?\nAI:
[ "# GIMPLEARN knows modeltest2", "# To generate conversation use input such as Human: What should I do?\\nAI:" ]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #Text Generation #autotrain_compatible #endpoints_compatible #region-us \n", "# GIMPLEARN knows modeltest2", "# To generate conversation use input such as Human: What should I do?\\nAI:" ]
text-generation
transformers
## A DialoGPT model trained on French OpenSubtitles with a custom tokenizer

Trained with this notebook:
https://colab.research.google.com/drive/1pfCV3bngAmISNZVfDvBMyEhQKuYw37Rl#scrollTo=AyImj9qZYLRi&uniqifier=3

Config from microsoft/DialoGPT-medium.

Dataset generated from the 2018 OpenSubtitles corpus downloaded from OPUS, following these guidelines
https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles with this notebook
https://colab.research.google.com/drive/1uyh3vJ9nEjqOHI68VD73qxt4olJzODxi#scrollTo=deaacv4XfLMk

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("cedpsam/chatbot_fr")
model = AutoModelWithLMHead.from_pretrained("cedpsam/chatbot_fr")

for step in range(6):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        top_p=0.92, top_k=50
    )

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
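For a quicker one-off test, the conversational pipeline wraps the same generate loop; this sketch assumes a transformers version that still ships the `conversational` pipeline and the `Conversation` class.

```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="cedpsam/chatbot_fr")

conversation = Conversation("bonjour.")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```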
{"language": "fr", "tags": ["conversational"], "widget": [{"text": "bonjour."}, {"text": "mais encore"}, {"text": "est ce que l'argent achete le bonheur?"}]}
cedpsam/chatbot_fr
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "conversational", "fr", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #jax #safetensors #gpt2 #text-generation #conversational #fr #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
## a dialoggpt model trained on french opensubtitles with custom tokenizer trained with this notebook URL config from microsoft/DialoGPT-medium dataset generated from 2018 opensubtitle downloaded from opus folowing these guidelines URL with this notebook URL ### How to use Now we are ready to try out how the model works as a chatting partner! '''python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("cedpsam/chatbot_fr") model = AutoModelWithLMHead.from_pretrained("cedpsam/chatbot_fr") for step in range(6): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = URL(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = URL([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generated a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate( bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id, top_p=0.92, top_k = 50 ) # pretty print last ouput tokens from bot print("DialoGPT: {}".format(URL(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
[ "## a dialoggpt model trained on french opensubtitles with custom tokenizer\ntrained with this notebook\nURL\n\nconfig from microsoft/DialoGPT-medium\ndataset generated from 2018 opensubtitle downloaded from opus folowing these guidelines\nURL with this notebook\nURL", "### How to use\n\nNow we are ready to try out how the model works as a chatting partner!\n\n'''python\nimport torch\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\n\ntokenizer = AutoTokenizer.from_pretrained(\"cedpsam/chatbot_fr\")\n\nmodel = AutoModelWithLMHead.from_pretrained(\"cedpsam/chatbot_fr\")\n\nfor step in range(6):\n # encode the new user input, add the eos_token and return a tensor in Pytorch\n new_user_input_ids = URL(input(\">> User:\") + tokenizer.eos_token, return_tensors='pt')\n # print(new_user_input_ids)\n\n # append the new user input tokens to the chat history\n bot_input_ids = URL([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids\n\n # generated a response while limiting the total chat history to 1000 tokens, \n chat_history_ids = model.generate(\n bot_input_ids, max_length=1000,\n pad_token_id=tokenizer.eos_token_id,\n top_p=0.92, top_k = 50\n )\n \n # pretty print last ouput tokens from bot\n print(\"DialoGPT: {}\".format(URL(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #conversational #fr #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## a dialoggpt model trained on french opensubtitles with custom tokenizer\ntrained with this notebook\nURL\n\nconfig from microsoft/DialoGPT-medium\ndataset generated from 2018 opensubtitle downloaded from opus folowing these guidelines\nURL with this notebook\nURL", "### How to use\n\nNow we are ready to try out how the model works as a chatting partner!\n\n'''python\nimport torch\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\n\ntokenizer = AutoTokenizer.from_pretrained(\"cedpsam/chatbot_fr\")\n\nmodel = AutoModelWithLMHead.from_pretrained(\"cedpsam/chatbot_fr\")\n\nfor step in range(6):\n # encode the new user input, add the eos_token and return a tensor in Pytorch\n new_user_input_ids = URL(input(\">> User:\") + tokenizer.eos_token, return_tensors='pt')\n # print(new_user_input_ids)\n\n # append the new user input tokens to the chat history\n bot_input_ids = URL([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids\n\n # generated a response while limiting the total chat history to 1000 tokens, \n chat_history_ids = model.generate(\n bot_input_ids, max_length=1000,\n pad_token_id=tokenizer.eos_token_id,\n top_p=0.92, top_k = 50\n )\n \n # pretty print last ouput tokens from bot\n print(\"DialoGPT: {}\".format(URL(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))" ]
text-classification
transformers
Topic classification model covering all sub-topics under the "Environment" topic on Zhihu (某乎), filtered down to 69 classes.

Top-1 accuracy 60.7, top-3 accuracy 81.6. It can be used as a preprocessing step for Chinese environmental text mining.

Labels:

"生态环境","水污染", "野生动物保护", "太阳能", "环保经济", "污水处理", "绿色建筑", "水处理", "噪音污染", "温室效应", "净水设备", "净水器", "自来水", "生活", "环境评估", "空气污染", "环境评价", "工业污染", "雾霾", "植树", "环保行业", "水处理工程", "沙漠治理", "巴黎协定", "核能", "噪音", "环评工程师", "二氧化碳", "低碳", "自然环境", "沙尘暴", "环境工程", "秸秆焚烧", "PM 2.5", "太空垃圾", "穹顶之下(纪录片)", "垃圾", "环境科学", "净水", "污水排放", "室内空气污染", "环境污染", "全球变暖", "邻居噪音", "土壤污染", "生物多样性", "碳交易", "污染治理", "雾霾治理", "碳金融", "建筑节能", "风能及风力发电", "温室气体", "环境保护", "碳排放", "垃圾处理器", "气候变化", "化学污染", "地球一小时", "环保组织", "物种多样性", "节能减排", "核污染", "环保督查", "垃圾处理", "垃圾分类", "重金属污染", "环境伦理学", "垃圾焚烧"
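A minimal inference sketch, assuming the celtics1863/env-bert-topic checkpoint works with the standard transformers text-classification pipeline; the two example sentences are taken from the card's widget configuration.

```python
from transformers import pipeline

# Chinese environmental-topic classifier (69 classes).
classifier = pipeline("text-classification", model="celtics1863/env-bert-topic")

# Example inputs from this card's widget configuration.
print(classifier("美国退出《巴黎协定》"))
print(classifier("污水处理厂中的功耗需要减少"))
```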
{"language": "zh", "tags": ["pretrain", "pytorch", "environment", "classification", "topic classification"], "widget": [{"text": "\u7f8e\u56fd\u9000\u51fa\u300a\u5df4\u9ece\u534f\u5b9a\u300b"}, {"text": "\u6c61\u6c34\u5904\u7406\u5382\u4e2d\u7684\u529f\u8017\u9700\u8981\u51cf\u5c11"}]}
celtics1863/env-bert-topic
null
[ "transformers", "pytorch", "bert", "text-classification", "pretrain", "environment", "classification", "topic classification", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #bert #text-classification #pretrain #environment #classification #topic classification #zh #autotrain_compatible #endpoints_compatible #region-us
话题分类模型,使用某乎"环境"话题下所有子话题,过滤后得69类。 top1 acc 60.7, top3 acc 81.6, 可以用于中文环境文本挖掘的预处理步骤。 标签: "生态环境","水污染", "野生动物保护", "太阳能", "环保经济", "污水处理", "绿色建筑", "水处理", "噪音污染", "温室效应", "净水设备", "净水器", "自来水", "生活", "环境评估", "空气污染", "环境评价", "工业污染", "雾霾", "植树", "环保行业", "水处理工程", "沙漠治理", "巴黎协定", "核能", "噪音", "环评工程师", "二氧化碳", "低碳", "自然环境", "沙尘暴", "环境工程", "秸秆焚烧", "PM 2.5", "太空垃圾", "穹顶之下(纪录片)", "垃圾", "环境科学", "净水", "污水排放", "室内空气污染", "环境污染", "全球变暖", "邻居噪音", "土壤污染", "生物多样性", "碳交易", "污染治理", "雾霾治理", "碳金融", "建筑节能", "风能及风力发电", "温室气体", "环境保护", "碳排放", "垃圾处理器", "气候变化", "化学污染", "地球一小时", "环保组织", "物种多样性", "节能减排", "核污染", "环保督查", "垃圾处理", "垃圾分类", "重金属污染", "环境伦理学", "垃圾焚烧"
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #pretrain #environment #classification #topic classification #zh #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
null
tags: - array - of - tags license: "any valid license identifier"
{}
cemigo/cemigo-test-model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
tags: - array - of - tags license: "any valid license identifier"
[]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
centon21/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "conversational", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #conversational #endpoints_compatible #region-us
#Harry Potter DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #conversational #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Harry Potter Fanfiction Generator

This is a pre-trained GPT-2 generative text model that allows you to generate your own Harry Potter fanfiction, trained on the top 100 rated fanfiction stories. We intend for this to be used for individual fun and experimentation and not as a commercial product.
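A minimal generation sketch, assuming the checkpoint loads with the standard transformers text-generation pipeline; the prompt and sampling settings below are illustrative, not recommendations from the authors.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ceostroff/harry-potter-gpt2-fanfiction")

prompt = "Harry stared at the Marauder's Map and"
story = generator(prompt, max_length=80, do_sample=True, top_p=0.95, temperature=0.9)
print(story[0]["generated_text"])
```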
{"language": ["en"], "license": "mit", "tags": ["harry-potter"]}
ceostroff/harry-potter-gpt2-fanfiction
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "harry-potter", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #gpt2 #text-generation #harry-potter #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Harry Potter Fanfiction Generator This is a pre-trained GPT-2 generative text model that allows you to generate your own Harry Potter fanfiction, trained off of the top 100 rated fanficition stories. We intend for this to be used for individual fun and experimentation and not as a commercial product.
[ "# Harry Potter Fanfiction Generator\n\nThis is a pre-trained GPT-2 generative text model that allows you to generate your own Harry Potter fanfiction, trained off of the top 100 rated fanficition stories. We intend for this to be used for individual fun and experimentation and not as a commercial product." ]
[ "TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #harry-potter #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Harry Potter Fanfiction Generator\n\nThis is a pre-trained GPT-2 generative text model that allows you to generate your own Harry Potter fanfiction, trained off of the top 100 rated fanficition stories. We intend for this to be used for individual fun and experimentation and not as a commercial product." ]
feature-extraction
transformers
# TinyBERT_L-4_H-312_v2 English Sentence Encoder This is distilled from the `bert-base-nli-stsb-mean-tokens` pre-trained model from [Sentence-Transformers](https://sbert.net/). The embedding vector is obtained by mean/average pooling of the last layer's hidden states. Update 20210325: Added the attention matrices imitation objective as in the TinyBERT paper, and the distill target has been changed from `distilbert-base-nli-stsb-mean-tokens` to `bert-base-nli-stsb-mean-tokens` (they have almost the same STSb performance). ## Model Comparison We compute cosine similarity scores of the embeddings of the sentence pair to get the spearman correlation on the STS benchmark (bigger is better): | | Dev | Test | | ------------------------------------ | ----- | ----- | | bert-base-nli-stsb-mean-tokens | .8704 | .8505 | | distilbert-base-nli-stsb-mean-tokens | .8667 | .8516 | | TinyBERT_L-4_H-312_v2-distill-AllNLI | .8587 | .8283 | | TinyBERT_L-4_H (20210325) | .8551 | .8341 |
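A minimal sketch of the mean-pooling step described above, assuming the checkpoint loads with plain AutoModel/AutoTokenizer; the masked averaging below is the generic recipe for this kind of encoder, not code taken from the author's repository.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "ceshine/TinyBERT_L-4_H-312_v2-distill-AllNLI"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["A man is playing a guitar.", "Someone is playing an instrument."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq_len, hidden)

# Mean pooling over real tokens only: mask out padding before averaging.
mask = batch["attention_mask"].unsqueeze(-1).float()  # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two sentence embeddings.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(float(similarity))
```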
{}
ceshine/TinyBERT_L-4_H-312_v2-distill-AllNLI
null
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us
TinyBERT\_L-4\_H-312\_v2 English Sentence Encoder
=================================================

This is distilled from the 'bert-base-nli-stsb-mean-tokens' pre-trained model from Sentence-Transformers. The embedding vector is obtained by mean/average pooling of the last layer's hidden states.

Update 20210325: Added the attention matrices imitation objective as in the TinyBERT paper, and the distill target has been changed from 'distilbert-base-nli-stsb-mean-tokens' to 'bert-base-nli-stsb-mean-tokens' (they have almost the same STSb performance).

Model Comparison
----------------

We compute cosine similarity scores of the embeddings of the sentence pair to get the Spearman correlation on the STS benchmark (bigger is better):

* bert-base-nli-stsb-mean-tokens: Dev .8704, Test .8505
* distilbert-base-nli-stsb-mean-tokens: Dev .8667, Test .8516
* TinyBERT\_L-4\_H-312\_v2-distill-AllNLI: Dev .8587, Test .8283
* TinyBERT\_L-4\_H (20210325): Dev .8551, Test .8341
[]
[ "TAGS\n#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
# T5-base Paraphrasing model fine-tuned on PAWS, MSRP, and Opinosis

More details in the [ceshine/finetuning-t5 GitHub repo](https://github.com/ceshine/finetuning-t5/tree/master/paraphrase)
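A minimal generation sketch, assuming a `paraphrase: ...` input prefix; the linked repository defines the exact prompt format and decoding settings used during fine-tuning, so treat the values below as placeholders.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "ceshine/t5-paraphrase-paws-msrp-opinosis"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Assumed prompt format; check the fine-tuning repo for the exact prefix.
text = "paraphrase: The quick brown fox jumps over the lazy dog."
input_ids = tokenizer(text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=48, num_beams=4, num_return_sequences=2)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```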
{"language": "en", "license": "apache-2.0", "tags": ["t5", "paraphrasing", "paraphrase"]}
ceshine/t5-paraphrase-paws-msrp-opinosis
null
[ "transformers", "pytorch", "jax", "safetensors", "t5", "text2text-generation", "paraphrasing", "paraphrase", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #safetensors #t5 #text2text-generation #paraphrasing #paraphrase #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# T5-base Paraphrasing model fine-tuned on PAWS, MSRP, and Opinosis

More details in the ceshine/finetuning-t5 GitHub repo
[ "# T5-base Parapharasing model fine-tuned on PAWS, MSRP, and Opinosis\n\nMore details in ceshine/finetuning-t5 Github repo" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #t5 #text2text-generation #paraphrasing #paraphrase #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# T5-base Parapharasing model fine-tuned on PAWS, MSRP, and Opinosis\n\nMore details in ceshine/finetuning-t5 Github repo" ]
text2text-generation
transformers
# T5-base Paraphrasing model fine-tuned on PAWS and Quora

More details in the [ceshine/finetuning-t5 GitHub repo](https://github.com/ceshine/finetuning-t5/tree/master/paraphrase)
{"language": "en", "license": "apache-2.0", "tags": ["t5", "paraphrasing", "paraphrase"]}
ceshine/t5-paraphrase-quora-paws
null
[ "transformers", "pytorch", "jax", "safetensors", "t5", "text2text-generation", "paraphrasing", "paraphrase", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #safetensors #t5 #text2text-generation #paraphrasing #paraphrase #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# T5-base Paraphrasing model fine-tuned on PAWS and Quora

More details in the ceshine/finetuning-t5 GitHub repo
[ "# T5-base Parapharasing model fine-tuned on PAWS and Quora\n\nMore details in ceshine/finetuning-t5 Github repo" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #t5 #text2text-generation #paraphrasing #paraphrase #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# T5-base Parapharasing model fine-tuned on PAWS and Quora\n\nMore details in ceshine/finetuning-t5 Github repo" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Base-760-Turkish # TBA Pretrained Turkish model [ceyda/wav2vec2-base-760](https://huggingface.co/ceyda/wav2vec2-base-760). Fine-tuned on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ceyda/wav2vec2-base-960-turkish") model = Wav2Vec2ForCTC.from_pretrained("ceyda/wav2vec2-base-960-turkish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("ceyda/wav2vec2-base-960-turkish") model = Wav2Vec2ForCTC.from_pretrained("ceyda/wav2vec2-base-960-turkish") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays #Attention mask is not used because the base-model was not trained with it. reference: https://github.com/huggingface/transformers/blob/403d530eec105c0e229fc2b754afdf77a4439def/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L305 def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids,skip_special_tokens=True) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Results**: - WER: 22.602390 - CER: 6.054137 ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://github.com/cceyda/wav2vec2)
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2-Base Turkish by Ceyda Cinarel", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 22.6, "name": "Test WER"}]}]}]}
ceyda/wav2vec2-base-760-turkish
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tr", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "tr" ]
TAGS #transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Base-760-Turkish # TBA Pretrained Turkish model ceyda/wav2vec2-base-760. Fine-tuned on Turkish using the Common Voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. Test Results: - WER: 22.602390 - CER: 6.054137 ## Training The Common Voice 'train', 'validation' datasets were used for training. The script used for training can be found here
[ "# Wav2Vec2-Base-760-Turkish", "# TBA\nPretrained Turkish model ceyda/wav2vec2-base-760. Fine-tuned on Turkish using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Results: \n- WER: 22.602390\n- CER: 6.054137", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
[ "TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Base-760-Turkish", "# TBA\nPretrained Turkish model ceyda/wav2vec2-base-760. Fine-tuned on Turkish using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Results: \n- WER: 22.602390\n- CER: 6.054137", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
feature-extraction
transformers
Pretrained on ~720h of Turkish speech data.

TBA
{}
ceyda/wav2vec2-base-760
null
[ "transformers", "pytorch", "wav2vec2", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #wav2vec2 #feature-extraction #endpoints_compatible #region-us
Pretrained on 720h~ of Turkish speech data TBA
[]
[ "TAGS\n#transformers #pytorch #wav2vec2 #feature-extraction #endpoints_compatible #region-us \n" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ceyda/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("ceyda/wav2vec2-large-xlsr-53-turkish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("ceyda/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("ceyda/wav2vec2-large-xlsr-53-turkish") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\]\[\’»«]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 27.59 % ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://github.com/cceyda/wav2vec2)
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Turkish by Ceyda Cinarel", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 27.59, "name": "Test WER"}]}]}]}
ceyda/wav2vec2-large-xlsr-53-turkish
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tr", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "tr" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. Test Result: 27.59 % ## Training The Common Voice 'train', 'validation' datasets were used for training. The script used for training can be found here
[ "# Wav2Vec2-Large-XLSR-53-Turkish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 27.59 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Turkish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\nTest Result: 27.59 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # punct_restore_fr This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on a raw, French opensubtitles dataset. It achieves the following results on the evaluation set: - Loss: 0.0301 - Precision: 0.9601 - Recall: 0.9527 - F1: 0.9564 - Accuracy: 0.9915 ## Model description Classifies tokens based on beginning of French sentences (B-SENT) and everything else (O). ## Intended uses & limitations This model aims to help punctuation restoration on French YouTube auto-generated subtitles. In doing so, one can measure more in a corpus such as words per sentence, grammar structures per sentence, etc. ## Training and evaluation data 1 million Open Subtitles (French) sentences. 80%/10%/10% training/validation/test split. The sentences: - were lower-cased - had end punctuation (.?!) removed - were of length between 7 and 70 words - had beginning word of sentence tagged with B-SENT. - All other words marked with O. Token/tag pairs batched together in groups of 64. This helps show variety of positions for B-SENT and O tags. This also keeps training examples from just being one sentence. Otherwise, this leads to having the first word and only the first word in a sequence being labeled B-SENT. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.1 - Pytorch 1.9.0+cu102 - Datasets 1.8.0 - Tokenizers 0.10.3
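The card does not include an inference snippet; below is a hedged sketch using the standard token-classification pipeline, assuming the exported labels follow the B-SENT/O scheme described above (with simple aggregation, predicted sentence-start words come back grouped under SENT). The input mirrors the card's lower-cased, end-punctuation-free training data.

```python
from transformers import pipeline

# CamemBERT-based tagger: B-SENT marks the first word of a sentence, O everything else.
tagger = pipeline(
    "token-classification",
    model="cfinley/punct_restore_fr",
    aggregation_strategy="simple",
)

# Lower-cased French text without end punctuation, as in training.
text = "bonjour tout le monde aujourd'hui on parle de ponctuation c'est important"
for span in tagger(text):
    print(span["word"], span["entity_group"], round(span["score"], 3))
```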
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "punct_restore_fr", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.991500810518732}}]}]}
cfinley/punct_restore_fr
null
[ "transformers", "pytorch", "camembert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #camembert #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
# punct_restore_fr This model is a fine-tuned version of camembert-base on a raw, French opensubtitles dataset. It achieves the following results on the evaluation set: - Loss: 0.0301 - Precision: 0.9601 - Recall: 0.9527 - F1: 0.9564 - Accuracy: 0.9915 ## Model description Classifies tokens based on beginning of French sentences (B-SENT) and everything else (O). ## Intended uses & limitations This model aims to help punctuation restoration on French YouTube auto-generated subtitles. In doing so, one can measure more in a corpus such as words per sentence, grammar structures per sentence, etc. ## Training and evaluation data 1 million Open Subtitles (French) sentences. 80%/10%/10% training/validation/test split. The sentences: - were lower-cased - had end punctuation (.?!) removed - were of length between 7 and 70 words - had beginning word of sentence tagged with B-SENT. - All other words marked with O. Token/tag pairs batched together in groups of 64. This helps show variety of positions for B-SENT and O tags. This also keeps training examples from just being one sentence. Otherwise, this leads to having the first word and only the first word in a sequence being labeled B-SENT. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.1 - Pytorch 1.9.0+cu102 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# punct_restore_fr\n\nThis model is a fine-tuned version of camembert-base on a raw, French opensubtitles dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0301\n- Precision: 0.9601\n- Recall: 0.9527\n- F1: 0.9564\n- Accuracy: 0.9915", "## Model description\n\nClassifies tokens based on beginning of French sentences (B-SENT) and everything else (O).", "## Intended uses & limitations\n\nThis model aims to help punctuation restoration on French YouTube auto-generated subtitles. In doing so, one can measure more in a corpus such as words per sentence, grammar structures per sentence, etc.", "## Training and evaluation data\n\n1 million Open Subtitles (French) sentences. 80%/10%/10% training/validation/test split.\n\nThe sentences:\n\n- were lower-cased\n- had end punctuation (.?!) removed\n- were of length between 7 and 70 words\n- had beginning word of sentence tagged with B-SENT.\n - All other words marked with O.\n\nToken/tag pairs batched together in groups of 64. This helps show variety of positions for B-SENT and O tags. This also keeps training examples from just being one sentence. Otherwise, this leads to having the first word and only the first word in a sequence being labeled B-SENT.", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.8.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #camembert #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# punct_restore_fr\n\nThis model is a fine-tuned version of camembert-base on a raw, French opensubtitles dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0301\n- Precision: 0.9601\n- Recall: 0.9527\n- F1: 0.9564\n- Accuracy: 0.9915", "## Model description\n\nClassifies tokens based on beginning of French sentences (B-SENT) and everything else (O).", "## Intended uses & limitations\n\nThis model aims to help punctuation restoration on French YouTube auto-generated subtitles. In doing so, one can measure more in a corpus such as words per sentence, grammar structures per sentence, etc.", "## Training and evaluation data\n\n1 million Open Subtitles (French) sentences. 80%/10%/10% training/validation/test split.\n\nThe sentences:\n\n- were lower-cased\n- had end punctuation (.?!) removed\n- were of length between 7 and 70 words\n- had beginning word of sentence tagged with B-SENT.\n - All other words marked with O.\n\nToken/tag pairs batched together in groups of 64. This helps show variety of positions for B-SENT and O tags. This also keeps training examples from just being one sentence. Otherwise, this leads to having the first word and only the first word in a sequence being labeled B-SENT.", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.8.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0629 - Precision: 0.9282 - Recall: 0.9356 - F1: 0.9319 - Accuracy: 0.9838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2406 | 1.0 | 878 | 0.0721 | 0.9072 | 0.9172 | 0.9122 | 0.9801 | | 0.0529 | 2.0 | 1756 | 0.0637 | 0.9166 | 0.9318 | 0.9241 | 0.9826 | | 0.0315 | 3.0 | 2634 | 0.0629 | 0.9282 | 0.9356 | 0.9319 | 0.9838 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
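The usage sections above are empty ("More information needed"); as an assumption rather than documented usage, the checkpoint should work with the standard transformers NER pipeline:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cfisicaro/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```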
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9281908990011098, "name": "Precision"}, {"type": "recall", "value": 0.9355632621098557, "name": "Recall"}, {"type": "f1", "value": 0.9318624993035824, "name": "F1"}, {"type": "accuracy", "value": 0.9837641190207635, "name": "Accuracy"}]}]}]}
cfisicaro/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0629 * Precision: 0.9282 * Recall: 0.9356 * F1: 0.9319 * Accuracy: 0.9838 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # custom_german This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.6832 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 8.7718 | 5.0 | 5 | 8.5148 | 1.0 | | 3.7125 | 10.0 | 10 | 5.4304 | 1.0 | | 2.7679 | 15.0 | 15 | 5.0388 | 1.0 | | 2.0516 | 20.0 | 20 | 4.4628 | 1.0 | | 1.6702 | 25.0 | 25 | 4.5341 | 1.0 | | 1.515 | 30.0 | 30 | 4.6832 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "custom_german", "results": []}]}
chaitanya97/custom_german
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
custom\_german ============== This model is a fine-tuned version of flozi00/wav2vec-xlsr-german on the None dataset. It achieves the following results on the evaluation set: * Loss: 4.6832 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 5 * num\_epochs: 30 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu102 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german_pretrained This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9812 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 12.5229 | 5.0 | 5 | 12.9520 | 1.0 | | 4.3782 | 10.0 | 10 | 5.5689 | 1.0 | | 2.56 | 15.0 | 15 | 4.8410 | 1.0 | | 2.2895 | 20.0 | 20 | 4.0380 | 1.0 | | 1.872 | 25.0 | 25 | 3.9558 | 1.0 | | 1.6992 | 30.0 | 30 | 3.9812 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "german_pretrained", "results": []}]}
chaitanya97/german_pretrained
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
german\_pretrained ================== This model is a fine-tuned version of flozi00/wav2vec-xlsr-german on the None dataset. It achieves the following results on the evaluation set: * Loss: 3.9812 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 5 * num\_epochs: 30 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu102 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german_trained This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9367 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 12.0352 | 5.0 | 5 | 12.6165 | 1.0 | | 4.0249 | 10.0 | 10 | 6.6453 | 1.0 | | 2.6661 | 15.0 | 15 | 5.7873 | 1.0 | | 2.4123 | 20.0 | 20 | 4.3250 | 1.0 | | 1.9481 | 25.0 | 25 | 3.9899 | 1.0 | | 1.7533 | 30.0 | 30 | 3.9367 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "german_trained", "results": []}]}
chaitanya97/german_trained
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
german\_trained =============== This model is a fine-tuned version of flozi00/wav2vec-xlsr-german on the None dataset. It achieves the following results on the evaluation set: * Loss: 3.9367 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 5 * num\_epochs: 30 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu102 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-3 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-3", "results": []}]}
chaitanya97/wav2vec2-large-xls-r-3
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
# wav2vec2-large-xls-r-3 This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
[ "# wav2vec2-large-xls-r-3\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "# wav2vec2-large-xls-r-3\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 7.2810 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 23.4144 | 0.8 | 4 | 29.5895 | 1.0 | | 19.1336 | 1.6 | 8 | 18.3354 | 1.0 | | 12.1562 | 2.4 | 12 | 11.2065 | 1.0 | | 8.1523 | 3.2 | 16 | 8.8674 | 1.0 | | 6.807 | 4.0 | 20 | 7.8106 | 1.0 | | 6.1583 | 4.8 | 24 | 7.2810 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hindi-colab", "results": []}]}
chaitanya97/wav2vec2-large-xls-r-300m-hindi-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-hindi-colab ===================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 7.2810 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 5 * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 33.1265 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 21.4247 | 4.0 | 4 | 33.1265 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-turkish-colab", "results": []}]}
chaitanya97/wav2vec2-large-xls-r-300m-turkish-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-turkish-colab ======================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 33.1265 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 5 * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Rick DialoGPT model
{"tags": ["conversational"]}
chaitrabhat/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick DialoGPT model
[ "# Rick DialoGPT model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick DialoGPT model" ]
text-generation
transformers
# Sokka DialoGPT Model
{"tags": ["conversational"]}
chamodkarunasena/DialoGPT-medium-sokka
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Sokka DialoGPT Model
[ "# Sokka DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Sokka DialoGPT Model" ]
text-generation
transformers
# DialoGPT Medium JAB
{"tags": ["conversational"]}
chan030609/DialoGPT-medium-JAB
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# DialoGPT Medium JAB
[ "# DialoGPT Medium JAB" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# DialoGPT Medium JAB" ]
text-generation
transformers
# DialoGPT Small JAB
{"tags": ["conversational"]}
chan030609/DialoGPT-small-JAB
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# DialoGPT Small JAB
[ "# DialoGPT Small JAB" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# DialoGPT Small JAB" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0609 - Precision: 0.9244 - Recall: 0.9374 - F1: 0.9308 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2412 | 1.0 | 878 | 0.0732 | 0.9116 | 0.9216 | 0.9166 | 0.9802 | | 0.0567 | 2.0 | 1756 | 0.0601 | 0.9164 | 0.9331 | 0.9247 | 0.9826 | | 0.0301 | 3.0 | 2634 | 0.0609 | 0.9244 | 0.9374 | 0.9308 | 0.9836 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9244263018534863, "name": "Precision"}, {"type": "recall", "value": 0.9373531714956931, "name": "Recall"}, {"type": "f1", "value": 0.930844859190135, "name": "F1"}, {"type": "accuracy", "value": 0.9836211415953103, "name": "Accuracy"}]}]}]}
chanaa/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0609 * Precision: 0.9244 * Recall: 0.9374 * F1: 0.9308 * Accuracy: 0.9836 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-baseline-final This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6942 - Rouge1: 28.581 - Rouge2: 16.3417 - Rougel: 24.1277 - Rougelsum: 25.9797 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 495 | 1.7514 | 27.911 | 15.7038 | 23.6466 | 25.2111 | 20.0 | | 2.0585 | 2.0 | 990 | 1.6655 | 28.7581 | 16.4875 | 24.2669 | 26.1676 | 20.0 | | 1.4173 | 3.0 | 1485 | 1.6942 | 28.581 | 16.3417 | 24.1277 | 25.9797 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-baseline-final", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-baseline-final
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-baseline-final ============================================== This model is a fine-tuned version of facebook/bart-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.6942 * Rouge1: 28.581 * Rouge2: 16.3417 * Rougel: 24.1277 * Rougelsum: 25.9797 * Gen Len: 20.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-batch8-LR1 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 495 | 1.6826 | 27.5191 | 15.0672 | 23.3065 | 24.7163 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-batch8-LR1", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-batch8-LR1
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-batch8-LR1 ========================================== This model is a fine-tuned version of facebook/bart-base on the None dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-batch8-LR2E6 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 495 | 1.7971 | 26.6141 | 13.9957 | 22.3012 | 23.7509 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-batch8-LR2E6", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-batch8-LR2E6
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-batch8-LR2E6 ============================================ This model is a fine-tuned version of facebook/bart-base on the None dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-06 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-batch8-LR4 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 495 | 1.6037 | 28.1247 | 15.9399 | 23.8676 | 25.3739 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-batch8-LR4", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-batch8-LR4
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-batch8-LR4 ========================================== This model is a fine-tuned version of facebook/bart-base on the None dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-batch8-epochs10 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5763 - Rouge1: 28.693 - Rouge2: 16.666 - Rougel: 24.2361 - Rougelsum: 26.0289 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 495 | 1.6043 | 27.8611 | 15.8713 | 23.8365 | 25.378 | 20.0 | | 1.9054 | 2.0 | 990 | 1.5613 | 28.2715 | 16.3724 | 24.3212 | 25.8499 | 20.0 | | 1.651 | 3.0 | 1485 | 1.5394 | 28.6282 | 16.2976 | 24.2336 | 25.9434 | 20.0 | | 1.4955 | 4.0 | 1980 | 1.5438 | 28.9266 | 16.7257 | 24.61 | 26.443 | 20.0 | | 1.4034 | 5.0 | 2475 | 1.5449 | 28.2296 | 16.1292 | 23.9698 | 25.651 | 20.0 | | 1.3077 | 6.0 | 2970 | 1.5642 | 28.4486 | 16.3833 | 24.1629 | 26.0013 | 20.0 | | 1.2505 | 7.0 | 3465 | 1.5566 | 28.5469 | 16.5374 | 24.2966 | 25.962 | 20.0 | | 1.2027 | 8.0 | 3960 | 1.5730 | 28.7278 | 16.6442 | 24.2531 | 26.1171 | 20.0 | | 1.1571 | 9.0 | 4455 | 1.5690 | 28.7736 | 16.7491 | 24.3066 | 26.1439 | 20.0 | | 1.1237 | 10.0 | 4950 | 1.5763 | 28.693 | 16.666 | 24.2361 | 26.0289 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-batch8-epochs10", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-batch8-epochs10
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-batch8-epochs10 =============================================== This model is a fine-tuned version of facebook/bart-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.5763 * Rouge1: 28.693 * Rouge2: 16.666 * Rougel: 24.2361 * Rougelsum: 26.0289 * Gen Len: 20.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-batch8-epochs3 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5635 - Rouge1: 28.2335 - Rouge2: 16.0201 - Rougel: 24.0315 - Rougelsum: 25.647 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 495 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 | | 1.5345 | 2.0 | 990 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 | | 1.531 | 3.0 | 1485 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-batch8-epochs3", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-batch8-epochs3
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-batch8-epochs3 ============================================== This model is a fine-tuned version of facebook/bart-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.5635 * Rouge1: 28.2335 * Rouge2: 16.0201 * Rougel: 24.0315 * Rougelsum: 25.647 * Gen Len: 20.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-batch8 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:| | No log | 1.0 | 495 | 1.6409 | 27.9647 | 15.4352 | 23.611 | 25.107 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-batch8", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-batch8
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-batch8 ====================================== This model is a fine-tuned version of facebook/bart-base on the None dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-fact-corrector-I This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 432 | 1.5483 | 28.9811 | 16.5711 | 24.7826 | 26.4132 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-fact-corrector-I", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-fact-corrector-I
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-fact-corrector-I ================================================ This model is a fine-tuned version of facebook/bart-base on the None dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-fact-corrector-II This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 305 | 1.5749 | 27.9313 | 15.1004 | 23.3282 | 25.2336 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-base-finetuned-kaggglenews-fact-corrector-II", "results": []}]}
chandank/bart-base-finetuned-kaggglenews-fact-corrector-II
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews-fact-corrector-II ================================================= This model is a fine-tuned version of facebook/bart-base on the None dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu102 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6240 - Rouge1: 28.3618 - Rouge2: 15.9828 - Rougel: 24.078 - Rougelsum: 25.565 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:| | 1.9433 | 1.0 | 989 | 1.6240 | 28.3618 | 15.9828 | 24.078 | 25.565 | 20.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.14.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-base-finetuned-kaggglenews", "results": []}]}
chandank/bart-base-finetuned-kaggglenews
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kaggglenews =============================== This model is a fine-tuned version of facebook/bart-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.6240 * Rouge1: 28.3618 * Rouge2: 15.9828 * Rougel: 24.078 * Rougelsum: 25.565 * Gen Len: 20.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu102 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kagglenews-entityfiltering This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5703 - Rouge1: 28.2719 - Rouge2: 15.6883 - Rougel: 24.0674 - Rougelsum: 25.616 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9187 | 1.0 | 863 | 1.5703 | 28.2719 | 15.6883 | 24.0674 | 25.616 | 20.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.14.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-base-finetuned-kagglenews-entityfiltering", "results": []}]}
chandank/bart-base-finetuned-kagglenews-entityfiltering
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-kagglenews-entityfiltering ============================================== This model is a fine-tuned version of facebook/bart-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.5703 * Rouge1: 28.2719 * Rouge2: 15.6883 * Rougel: 24.0674 * Rougelsum: 25.616 * Gen Len: 20.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu102 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-xsum This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5925 - Rouge1: 27.887 - Rouge2: 16.1414 - Rougel: 24.0525 - Rougelsum: 25.4029 - Gen Len: 19.9841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:| | 1.9826 | 1.0 | 879 | 1.5925 | 27.887 | 16.1414 | 24.0525 | 25.4029 | 19.9841 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
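The ROUGE figures above can be recomputed for new predictions with a metric call along these lines; using the standalone `evaluate` library is an assumption (a card of this vintage may instead have used `datasets.load_metric`), and the example strings are invented.

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the government announced a new policy on friday"]
references = ["on friday the government announced a new policy"]
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum — the same metrics reported above
```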
{"tags": ["generated_from_trainer"], "datasets": [], "metrics": ["rouge"], "model_index": [{"name": "bart-base-finetuned-xsum", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "metric": {"name": "Rouge1", "type": "rouge", "value": 27.887}}]}]}
chandank/bart-base-finetuned-xsum
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-xsum ======================== This model is a fine-tuned version of facebook/bart-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.5925 * Rouge1: 27.887 * Rouge2: 16.1414 * Rougel: 24.0525 * Rougelsum: 25.4029 * Gen Len: 19.9841 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.9.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0607 - Precision: 0.9276 - Recall: 0.9366 - F1: 0.9321 - Accuracy: 0.9841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.246 | 1.0 | 878 | 0.0696 | 0.9152 | 0.9215 | 0.9183 | 0.9812 | | 0.0518 | 2.0 | 1756 | 0.0606 | 0.9196 | 0.9342 | 0.9269 | 0.9831 | | 0.0309 | 3.0 | 2634 | 0.0607 | 0.9276 | 0.9366 | 0.9321 | 0.9841 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
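A minimal inference sketch for this checkpoint follows; the example sentence is invented, and `aggregation_strategy="simple"` is an assumption used to merge word pieces into whole entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="charlecheng/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # assumption: merge sub-word tokens into entity spans
)
print(ner("Angela Merkel visited the United Nations in New York."))
```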
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9276454293628809, "name": "Precision"}, {"type": "recall", "value": 0.9365700861393892, "name": "Recall"}, {"type": "f1", "value": 0.9320863950122468, "name": "F1"}, {"type": "accuracy", "value": 0.9840500738716699, "name": "Accuracy"}]}]}]}
charlecheng/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0607 * Precision: 0.9276 * Recall: 0.9366 * F1: 0.9321 * Accuracy: 0.9841 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.10.0 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
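The precision/recall/F1 numbers in cards like this one are entity-level scores over conll2003-style tag sequences; assuming the usual `seqeval` scorer was used, they can be recomputed as in this sketch (the tag sequences here are invented).

```python
from seqeval.metrics import classification_report, f1_score

# Invented example sequences; a real evaluation would compare the model's
# predicted tags against the conll2003 validation tags.
y_true = [["B-PER", "I-PER", "O", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O", "B-LOC"]]
print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```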
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # contest_train This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4420 - Bleu: 67.6003 - Gen Len: 35.605 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
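A hedged inference sketch for a Russian-to-English fine-tune of opus-mt-ru-en follows; it assumes this repository actually contains a compatible seq2seq checkpoint, and the example sentence is invented.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "elezhergina/MedMTEVAL_baseline"  # assumption: this repo holds the fine-tuned weights
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# "The patient complains of a headache."
batch = tokenizer(["Пациент жалуется на головную боль."], return_tensors="pt", padding=True)
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```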
{"language": ["ru", "en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model-index": [{"name": "contest_train", "results": []}]}
elezhergina/MedMTEVAL_baseline
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru", "en" ]
TAGS #transformers #pytorch #endpoints_compatible #region-us
# contest_train This model is a fine-tuned version of Helsinki-NLP/opus-mt-ru-en on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4420 - Bleu: 67.6003 - Gen Len: 35.605 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
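The BLEU score above is presumably corpus-level; assuming a sacreBLEU-style computation, it would look like the following (the hypothesis and reference strings are invented).

```python
import sacrebleu

hypotheses = ["the patient complains of a severe headache"]
references = [["the patient is complaining of a severe headache"]]  # one reference stream
score = sacrebleu.corpus_bleu(hypotheses, references)
print(round(score.score, 2))  # corpus BLEU, comparable in kind to the figure reported above
```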
[ "# contest_train\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-ru-en on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4420\n- Bleu: 67.6003\n- Gen Len: 35.605", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n", "# contest_train\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-ru-en on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4420\n- Bleu: 67.6003\n- Gen Len: 35.605", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
token-classification
spacy
<a href="https://github.com/centre-for-humanities-computing/Dacy"><img src="https://centre-for-humanities-computing.github.io/DaCy/_static/icon.png" width="175" height="175" align="right" /></a> # DaCy large DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analysing Danish pipelines. DaCy's largest pipeline has achieved State-of-the-Art performance on parts-of-speech tagging and dependency parsing for Danish on the Danish Dependency treebank as well as competitive performance on named entity recognition, named entity disambiguation and coreference resolution. To read more check out the [DaCy repository](https://github.com/centre-for-humanities-computing/DaCy) for material on how to use DaCy and reproduce the results. DaCy also contains guides on usage of the package as well as behavioural test for biases and robustness of Danish NLP pipelines. | Feature | Description | | --- | --- | | **Name** | `da_dacy_large_trf` | | **Version** | `0.2.0` | | **spaCy** | `>=3.5.2,<3.6.0` | | **Default Pipeline** | `transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner`, `coref`, `span_resolver`, `span_cleaner`, `entity_linker` | | **Components** | `transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner`, `coref`, `span_resolver`, `span_cleaner`, `entity_linker` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [UD Danish DDT v2.11](https://github.com/UniversalDependencies/UD_Danish-DDT) (Johannsen, Anders; Martínez Alonso, Héctor; Plank, Barbara)<br />[DaNE](https://huggingface.co/datasets/dane) (Rasmus Hvingelby, Amalie B. Pauli, Maria Barrett, Christina Rosted, Lasse M. Lidegaard, Anders Søgaard)<br />[DaCoref](https://huggingface.co/datasets/alexandrainst/dacoref) (Buch-Kromann, Matthias)<br />[DaNED](https://danlp-alexandra.readthedocs.io/en/stable/docs/datasets.html#daned) (Barrett, M. 
J., Lam, H., Wu, M., Lacroix, O., Plank, B., & Søgaard, A.)<br />[chcaa/dfm-encoder-large-v1](https://huggingface.co/chcaa/dfm-encoder-large-v1) (The Danish Foundation Models team) | | **License** | `Apache-2.0` | | **Author** | [Kenneth Enevoldsen](https://chcaa.io/#/) | ### Label Scheme <details> <summary>View label scheme (211 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` | | **`morphologizer`** | `AdpType=Prep\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PROPN`, `Definite=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `NumType=Ord\|POS=ADJ`, `POS=CCONJ`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Sup\|POS=ADV`, `Degree=Pos\|POS=ADV`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Ind`, `POS=ADP`, `POS=ADV\|PartType=Inf`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=ADP\|PartType=Inf`, `Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `NumType=Card\|POS=NUM`, `Degree=Pos\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=PART\|PartType=Inf`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADJ`, `POS=PRON\|PartType=Inf`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Com\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=PRON\|PronType=Ind`, `POS=INTJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|POS=PROPN`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, 
`Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Def\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=PRON\|PronType=Dem`, `Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=NUM`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Definite=Def\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PRON`, `Definite=Ind\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=ADV`, `POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Com\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Degree=Sup\|POS=ADJ`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Imp\|POS=VERB`, `Case=Nom\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `POS=X`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Com\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=VERB\|VerbForm=Ger`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|POS=PRON\|PronType=Int,Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Abbr=Yes\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, 
`Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rcp`, `POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `POS=DET\|PronType=Dem`, `Gender=Com\|Number=Sing\|POS=NUM`, `Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Degree=Abs\|POS=ADJ`, `POS=VERB\|Tense=Pres`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NUM`, `Degree=Abs\|POS=ADV`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|POS=NOUN`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NUM`, `Definite=Def\|Number=Plur\|POS=NOUN`, `Case=Gen\|POS=NOUN`, `POS=AUX\|Tense=Pres\|VerbForm=Part` | | **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `advmod:lmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:lmod`, `obl:tmod`, `punct`, `xcomp` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.92 | | `TOKEN_P` | 99.70 | | `TOKEN_R` | 99.77 | | `TOKEN_F` | 99.74 | | `SENTS_P` | 100.00 | | `SENTS_R` | 100.00 | | `SENTS_F` | 100.00 | | `TAG_ACC` | 99.14 | | `POS_ACC` | 99.08 | | `MORPH_ACC` | 98.80 | | `MORPH_MICRO_P` | 99.45 | | `MORPH_MICRO_R` | 99.32 | | `MORPH_MICRO_F` | 99.39 | | `DEP_UAS` | 92.81 | | `DEP_LAS` | 90.80 | | `ENTS_P` | 88.58 | | `ENTS_R` | 86.20 | | `ENTS_F` | 87.38 | | `LEMMA_ACC` | 95.89 | | `COREF_LEA_F1` | 46.72 | | `COREF_LEA_PRECISION` | 45.91 | | `COREF_LEA_RECALL` | 47.56 | | `NEL_SCORE` | 34.29 | | `NEL_MICRO_P` | 84.00 | | `NEL_MICRO_R` | 21.54 | | `NEL_MICRO_F` | 34.29 | | `NEL_MACRO_P` | 86.71 | | `NEL_MACRO_R` | 24.70 | | `NEL_MACRO_F` | 37.28 | ### Training This model was trained using [spaCy](https://spacy.io) and logged to [Weights & Biases](https://wandb.ai/kenevoldsen/dacy-v0.2.0). You can find all the training logs [here](https://wandb.ai/kenevoldsen/dacy-v0.2.0).
{"language": ["da"], "license": "apache-2.0", "library_name": "spacy", "tags": ["spacy", "dacy", "danish", "token-classification", "pos tagging", "morphological analysis", "lemmatization", "dependency parsing", "named entity recognition", "coreference resolution", "named entity linking", "named entity disambiguation"], "datasets": ["universal_dependencies", "dane", "alexandrainst/dacoref"], "metrics": ["accuracy"], "model-index": [{"name": "da_dacy_large_trf-0.2.0", "results": [{"task": {"type": "token-classification", "name": "NER"}, "dataset": {"name": "DaNE", "type": "dane", "split": "test"}, "metrics": [{"type": "precision", "value": 0.8858195212, "name": "NER Precision"}, {"type": "recall", "value": 0.8620071685, "name": "NER Recall"}, {"type": "f_score", "value": 0.8737511353, "name": "NER F Score"}]}, {"task": {"type": "token-classification", "name": "TAG"}, "dataset": {"name": "UD Danish DDT", "type": "universal_dependencies", "config": "da_ddt", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9913668347, "name": "TAG (XPOS) Accuracy"}, {"type": "accuracy", "value": 0.9908174469, "name": "POS (UPOS) Accuracy"}, {"type": "accuracy", "value": 0.9880227568, "name": "Morph (UFeats) Accuracy"}, {"type": "accuracy", "value": 0.9589423796, "name": "Lemma Accuracy"}, {"type": "f_score", "value": 0.9280885781, "name": "Unlabeled Attachment Score (UAS)"}, {"type": "f_score", "value": 0.9079997669, "name": "Labeled Attachment Score (LAS)"}, {"type": "f_score", "value": 1.0, "name": "Sentences F-Score"}]}, {"task": {"type": "coreference-resolution", "name": "coreference-resolution"}, "dataset": {"name": "DaCoref", "type": "alexandrainst/dacoref", "split": "custom"}, "metrics": [{"type": "f_score", "value": 0.4672143289, "name": "LEA"}]}, {"task": {"type": "coreference-resolution", "name": "coreference-resolution"}, "dataset": {"name": "DaNED", "type": "named-entity-linking", "split": "custom"}, "metrics": [{"type": "precision", "value": 0.84, "name": "Named entity Linking Precision"}, {"type": "recall", "value": 0.2153846154, "name": "Named entity Linking Recall"}, {"type": "f_score", "value": 0.3428571429, "name": "Named entity Linking F Score"}]}]}]}
chcaa/da_dacy_large_trf
null
[ "spacy", "dacy", "danish", "token-classification", "pos tagging", "morphological analysis", "lemmatization", "dependency parsing", "named entity recognition", "coreference resolution", "named entity linking", "named entity disambiguation", "da", "dataset:universal_dependencies", "dataset:dane", "dataset:alexandrainst/dacoref", "license:apache-2.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "da" ]
TAGS #spacy #dacy #danish #token-classification #pos tagging #morphological analysis #lemmatization #dependency parsing #named entity recognition #coreference resolution #named entity linking #named entity disambiguation #da #dataset-universal_dependencies #dataset-dane #dataset-alexandrainst/dacoref #license-apache-2.0 #model-index #region-us
DaCy large ========== DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analysing Danish pipelines. DaCy's largest pipeline has achieved State-of-the-Art performance on parts-of-speech tagging and dependency parsing for Danish on the Danish Dependency treebank as well as competitive performance on named entity recognition, named entity disambiguation and coreference resolution. To read more, check out the DaCy repository for material on how to use DaCy and reproduce the results. DaCy also contains guides on usage of the package as well as behavioural tests for biases and robustness of Danish NLP pipelines. ### Label Scheme View label scheme (211 labels for 4 components) ### Accuracy ### Training This model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here.
[ "### Label Scheme\n\n\n\nView label scheme (211 labels for 4 components)", "### Accuracy", "### Training\n\n\nThis model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here." ]
[ "TAGS\n#spacy #dacy #danish #token-classification #pos tagging #morphological analysis #lemmatization #dependency parsing #named entity recognition #coreference resolution #named entity linking #named entity disambiguation #da #dataset-universal_dependencies #dataset-dane #dataset-alexandrainst/dacoref #license-apache-2.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (211 labels for 4 components)", "### Accuracy", "### Training\n\n\nThis model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here." ]
token-classification
spacy
<a href="https://github.com/centre-for-humanities-computing/Dacy"><img src="https://centre-for-humanities-computing.github.io/DaCy/_static/icon.png" width="175" height="175" align="right" /></a> # DaCy medium DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analysing Danish pipelines. DaCy's largest pipeline has achieved State-of-the-Art performance on parts-of-speech tagging and dependency parsing for Danish on the Danish Dependency treebank as well as competitive performance on named entity recognition, named entity disambiguation and coreference resolution. To read more check out the [DaCy repository](https://github.com/centre-for-humanities-computing/DaCy) for material on how to use DaCy and reproduce the results. DaCy also contains guides on usage of the package as well as behavioural test for biases and robustness of Danish NLP pipelines. | Feature | Description | | --- | --- | | **Name** | `da_dacy_medium_trf` | | **Version** | `0.2.0` | | **spaCy** | `>=3.5.2,<3.6.0` | | **Default Pipeline** | `transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner`, `coref`, `span_resolver`, `span_cleaner`, `entity_linker` | | **Components** | `transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner`, `coref`, `span_resolver`, `span_cleaner`, `entity_linker` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [UD Danish DDT v2.11](https://github.com/UniversalDependencies/UD_Danish-DDT) (Johannsen, Anders; Martínez Alonso, Héctor; Plank, Barbara)<br />[DaNE](https://huggingface.co/datasets/dane) (Rasmus Hvingelby, Amalie B. Pauli, Maria Barrett, Christina Rosted, Lasse M. Lidegaard, Anders Søgaard)<br />[DaCoref](https://huggingface.co/datasets/alexandrainst/dacoref) (Buch-Kromann, Matthias)<br />[DaNED](https://danlp-alexandra.readthedocs.io/en/stable/docs/datasets.html#daned) (Barrett, M. 
J., Lam, H., Wu, M., Lacroix, O., Plank, B., & Søgaard, A.)<br />[vesteinn/DanskBERT](https://huggingface.co/vesteinn/DanskBERT) (Vésteinn Snæbjarnarson) | | **License** | `Apache-2.0` | | **Author** | [Kenneth Enevoldsen](https://chcaa.io/#/) | ### Label Scheme <details> <summary>View label scheme (211 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` | | **`morphologizer`** | `AdpType=Prep\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PROPN`, `Definite=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `NumType=Ord\|POS=ADJ`, `POS=CCONJ`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Sup\|POS=ADV`, `Degree=Pos\|POS=ADV`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Ind`, `POS=ADP`, `POS=ADV\|PartType=Inf`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=ADP\|PartType=Inf`, `Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `NumType=Card\|POS=NUM`, `Degree=Pos\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=PART\|PartType=Inf`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADJ`, `POS=PRON\|PartType=Inf`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Com\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=PRON\|PronType=Ind`, `POS=INTJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|POS=PROPN`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, 
`Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Def\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=PRON\|PronType=Dem`, `Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=NUM`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Definite=Def\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PRON`, `Definite=Ind\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=ADV`, `POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Com\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Degree=Sup\|POS=ADJ`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Imp\|POS=VERB`, `Case=Nom\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `POS=X`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Com\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=VERB\|VerbForm=Ger`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|POS=PRON\|PronType=Int,Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Abbr=Yes\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, 
`Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rcp`, `POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `POS=DET\|PronType=Dem`, `Gender=Com\|Number=Sing\|POS=NUM`, `Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Degree=Abs\|POS=ADJ`, `POS=VERB\|Tense=Pres`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NUM`, `Degree=Abs\|POS=ADV`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|POS=NOUN`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NUM`, `Definite=Def\|Number=Plur\|POS=NOUN`, `Case=Gen\|POS=NOUN`, `POS=AUX\|Tense=Pres\|VerbForm=Part` | | **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `advmod:lmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:lmod`, `obl:tmod`, `punct`, `xcomp` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.92 | | `TOKEN_P` | 99.70 | | `TOKEN_R` | 99.77 | | `TOKEN_F` | 99.74 | | `SENTS_P` | 98.42 | | `SENTS_R` | 99.29 | | `SENTS_F` | 98.85 | | `TAG_ACC` | 98.47 | | `POS_ACC` | 98.57 | | `MORPH_ACC` | 98.14 | | `MORPH_MICRO_P` | 99.10 | | `MORPH_MICRO_R` | 98.77 | | `MORPH_MICRO_F` | 98.93 | | `DEP_UAS` | 90.84 | | `DEP_LAS` | 88.33 | | `ENTS_P` | 87.08 | | `ENTS_R` | 84.59 | | `ENTS_F` | 85.82 | | `LEMMA_ACC` | 94.20 | | `COREF_LEA_F1` | 41.18 | | `COREF_LEA_PRECISION` | 48.89 | | `COREF_LEA_RECALL` | 35.58 | | `NEL_SCORE` | 80.12 | | `NEL_MICRO_P` | 99.23 | | `NEL_MICRO_R` | 67.19 | | `NEL_MICRO_F` | 80.12 | | `NEL_MACRO_P` | 99.39 | | `NEL_MACRO_R` | 65.99 | | `NEL_MACRO_F` | 78.15 | ### Training This model was trained using [spaCy](https://spacy.io) and logged to [Weights & Biases](https://wandb.ai/kenevoldsen/dacy-v0.2.0). You can find all the training logs [here](https://wandb.ai/kenevoldsen/dacy-v0.2.0).
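Beyond tagging and NER, the pipeline also ships coreference and entity-linking components. The sketch below shows one plausible way to read their output, assuming the coref component follows spaCy's convention of writing clusters into `doc.spans` and the entity linker sets `ent.kb_id_`; both the example sentence and the span-group key prefix are assumptions.

```python
import dacy

nlp = dacy.load("da_dacy_medium_trf")  # may require the versioned name

# "Mette Frederiksen gave a speech in Aarhus. She thanked the city."
doc = nlp("Mette Frederiksen holdt en tale i Aarhus. Hun takkede byen.")

# Entity linking: each entity may carry a knowledge-base id (assumption: standard spaCy API).
for ent in doc.ents:
    print(ent.text, ent.label_, ent.kb_id_)

# Coreference clusters, one span group per cluster (assumption: spaCy experimental coref keys).
for key, spans in doc.spans.items():
    if key.startswith("coref"):
        print(key, [span.text for span in spans])
```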
{"language": ["da"], "license": "apache-2.0", "library_name": "spacy", "tags": ["spacy", "dacy", "danish", "token-classification", "pos tagging", "morphological analysis", "lemmatization", "dependency parsing", "named entity recognition", "coreference resolution", "named entity linking", "named entity disambiguation"], "datasets": ["universal_dependencies", "dane", "alexandrainst/dacoref"], "metrics": ["accuracy"], "model-index": [{"name": "da_dacy_medium_trf-0.2.0", "results": [{"task": {"type": "token-classification", "name": "NER"}, "dataset": {"name": "DaNE", "type": "dane", "split": "test"}, "metrics": [{"type": "precision", "value": 0.8708487085, "name": "NER Precision"}, {"type": "recall", "value": 0.8458781362, "name": "NER Recall"}, {"type": "f_score", "value": 0.8581818182, "name": "NER F Score"}]}, {"task": {"type": "token-classification", "name": "TAG"}, "dataset": {"name": "UD Danish DDT", "type": "universal_dependencies", "config": "da_ddt", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9847290149, "name": "TAG (XPOS) Accuracy"}, {"type": "accuracy", "value": 0.985677928, "name": "POS (UPOS) Accuracy"}, {"type": "accuracy", "value": 0.9814371257, "name": "Morph (UFeats) Accuracy"}, {"type": "accuracy", "value": 0.9419805438, "name": "Lemma Accuracy"}, {"type": "f_score", "value": 0.9083920564, "name": "Unlabeled Attachment Score (UAS)"}, {"type": "f_score", "value": 0.883349834, "name": "Labeled Attachment Score (LAS)"}, {"type": "f_score", "value": 0.9885462555, "name": "Sentences F-Score"}]}, {"task": {"type": "coreference-resolution", "name": "coreference-resolution"}, "dataset": {"name": "DaCoref", "type": "alexandrainst/dacoref", "split": "custom"}, "metrics": [{"type": "f_score", "value": 0.4118366346, "name": "LEA"}]}, {"task": {"type": "coreference-resolution", "name": "coreference-resolution"}, "dataset": {"name": "DaNED", "type": "named-entity-linking", "split": "custom"}, "metrics": [{"type": "precision", "value": 0.9923076923, "name": "Named entity Linking Precision"}, {"type": "recall", "value": 0.671875, "name": "Named entity Linking Recall"}, {"type": "f_score", "value": 0.801242236, "name": "Named entity Linking F Score"}]}]}]}
chcaa/da_dacy_medium_trf
null
[ "spacy", "dacy", "danish", "token-classification", "pos tagging", "morphological analysis", "lemmatization", "dependency parsing", "named entity recognition", "coreference resolution", "named entity linking", "named entity disambiguation", "da", "dataset:universal_dependencies", "dataset:dane", "dataset:alexandrainst/dacoref", "license:apache-2.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "da" ]
TAGS #spacy #dacy #danish #token-classification #pos tagging #morphological analysis #lemmatization #dependency parsing #named entity recognition #coreference resolution #named entity linking #named entity disambiguation #da #dataset-universal_dependencies #dataset-dane #dataset-alexandrainst/dacoref #license-apache-2.0 #model-index #region-us
DaCy medium =========== DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analysing Danish pipelines. DaCy's largest pipeline has achieved State-of-the-Art performance on parts-of-speech tagging and dependency parsing for Danish on the Danish Dependency treebank as well as competitive performance on named entity recognition, named entity disambiguation and coreference resolution. To read more, check out the DaCy repository for material on how to use DaCy and reproduce the results. DaCy also contains guides on usage of the package as well as behavioural tests for biases and robustness of Danish NLP pipelines. ### Label Scheme View label scheme (211 labels for 4 components) ### Accuracy ### Training This model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here.
[ "### Label Scheme\n\n\n\nView label scheme (211 labels for 4 components)", "### Accuracy", "### Training\n\n\nThis model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here." ]
[ "TAGS\n#spacy #dacy #danish #token-classification #pos tagging #morphological analysis #lemmatization #dependency parsing #named entity recognition #coreference resolution #named entity linking #named entity disambiguation #da #dataset-universal_dependencies #dataset-dane #dataset-alexandrainst/dacoref #license-apache-2.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (211 labels for 4 components)", "### Accuracy", "### Training\n\n\nThis model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here." ]
token-classification
spacy
<a href="https://github.com/centre-for-humanities-computing/Dacy"><img src="https://centre-for-humanities-computing.github.io/DaCy/_static/icon.png" width="175" height="175" align="right" /></a> # DaCy small DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analysing Danish pipelines. DaCy's largest pipeline has achieved State-of-the-Art performance on parts-of-speech tagging and dependency parsing for Danish on the Danish Dependency treebank as well as competitive performance on named entity recognition, named entity disambiguation and coreference resolution. To read more check out the [DaCy repository](https://github.com/centre-for-humanities-computing/DaCy) for material on how to use DaCy and reproduce the results. DaCy also contains guides on usage of the package as well as behavioural test for biases and robustness of Danish NLP pipelines. | Feature | Description | | --- | --- | | **Name** | `da_dacy_small_trf` | | **Version** | `0.2.0` | | **spaCy** | `>=3.5.2,<3.6.0` | | **Default Pipeline** | `transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner`, `coref`, `span_resolver`, `span_cleaner`, `entity_linker` | | **Components** | `transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner`, `coref`, `span_resolver`, `span_cleaner`, `entity_linker` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [UD Danish DDT v2.11](https://github.com/UniversalDependencies/UD_Danish-DDT) (Johannsen, Anders; Martínez Alonso, Héctor; Plank, Barbara)<br />[DaNE](https://huggingface.co/datasets/dane) (Rasmus Hvingelby, Amalie B. Pauli, Maria Barrett, Christina Rosted, Lasse M. Lidegaard, Anders Søgaard)<br />[DaCoref](https://huggingface.co/datasets/alexandrainst/dacoref) (Buch-Kromann, Matthias)<br />[DaNED](https://danlp-alexandra.readthedocs.io/en/stable/docs/datasets.html#daned) (Barrett, M. 
J., Lam, H., Wu, M., Lacroix, O., Plank, B., & Søgaard, A.)<br />[jonfd/electra-small-nordic](https://huggingface.co/jonfd/electra-small-nordic) (Jón Friðrik Daðason) | | **License** | `Apache-2.0` | | **Author** | [Kenneth Enevoldsen](https://chcaa.io/#/) | ### Label Scheme <details> <summary>View label scheme (211 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` | | **`morphologizer`** | `AdpType=Prep\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PROPN`, `Definite=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `NumType=Ord\|POS=ADJ`, `POS=CCONJ`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Sup\|POS=ADV`, `Degree=Pos\|POS=ADV`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Ind`, `POS=ADP`, `POS=ADV\|PartType=Inf`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=ADP\|PartType=Inf`, `Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `NumType=Card\|POS=NUM`, `Degree=Pos\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=PART\|PartType=Inf`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADJ`, `POS=PRON\|PartType=Inf`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Com\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=PRON\|PronType=Ind`, `POS=INTJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|POS=PROPN`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, 
`Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Def\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=PRON\|PronType=Dem`, `Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=NUM`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Definite=Def\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PRON`, `Definite=Ind\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=ADV`, `POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Com\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Degree=Sup\|POS=ADJ`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Imp\|POS=VERB`, `Case=Nom\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `POS=X`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Com\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=VERB\|VerbForm=Ger`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|POS=PRON\|PronType=Int,Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Abbr=Yes\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, 
`Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rcp`, `POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `POS=DET\|PronType=Dem`, `Gender=Com\|Number=Sing\|POS=NUM`, `Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Degree=Abs\|POS=ADJ`, `POS=VERB\|Tense=Pres`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NUM`, `Degree=Abs\|POS=ADV`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|POS=NOUN`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NUM`, `Definite=Def\|Number=Plur\|POS=NOUN`, `Case=Gen\|POS=NOUN`, `POS=AUX\|Tense=Pres\|VerbForm=Part` | | **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `advmod:lmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:lmod`, `obl:tmod`, `punct`, `xcomp` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.92 | | `TOKEN_P` | 99.70 | | `TOKEN_R` | 99.77 | | `TOKEN_F` | 99.74 | | `SENTS_P` | 92.96 | | `SENTS_R` | 95.75 | | `SENTS_F` | 94.33 | | `TAG_ACC` | 98.47 | | `POS_ACC` | 98.42 | | `MORPH_ACC` | 97.73 | | `MORPH_MICRO_P` | 98.94 | | `MORPH_MICRO_R` | 98.33 | | `MORPH_MICRO_F` | 98.64 | | `DEP_UAS` | 89.79 | | `DEP_LAS` | 87.02 | | `ENTS_P` | 83.06 | | `ENTS_R` | 81.72 | | `ENTS_F` | 82.38 | | `LEMMA_ACC` | 94.67 | | `COREF_LEA_F1` | 42.18 | | `COREF_LEA_PRECISION` | 44.79 | | `COREF_LEA_RECALL` | 39.86 | | `NEL_SCORE` | 35.20 | | `NEL_MICRO_P` | 84.62 | | `NEL_MICRO_R` | 22.22 | | `NEL_MICRO_F` | 35.20 | | `NEL_MACRO_P` | 87.68 | | `NEL_MACRO_R` | 24.76 | | `NEL_MACRO_F` | 37.52 | ### Training This model was trained using [spaCy](https://spacy.io) and logged to [Weights & Biases](https://wandb.ai/kenevoldsen/dacy-v0.2.0). You can find all the training logs [here](https://wandb.ai/kenevoldsen/dacy-v0.2.0).
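Since this is a packaged spaCy pipeline, it should be loadable through the `dacy` helper package (or with `spacy.load` once the package is installed). A minimal sketch, assuming `dacy.load` accepts the pipeline name used on this card; the example sentence is invented:

```
# Minimal sketch: load the pipeline via the dacy helper package (assumed) and
# inspect tagging, morphology, dependencies and named entities.
import dacy

nlp = dacy.load("da_dacy_small_trf-0.2.0")  # pipeline name assumed from this card

doc = nlp("Mette Frederiksen besøgte Aarhus Universitet i går.")

# Part-of-speech tags, dependency relations and morphological features
for token in doc:
    print(token.text, token.pos_, token.dep_, token.morph)

# Named entities from the ner component
for ent in doc.ents:
    print(ent.text, ent.label_)
```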
{"language": ["da"], "license": "apache-2.0", "library_name": "spacy", "tags": ["spacy", "dacy", "danish", "token-classification", "pos tagging", "morphological analysis", "lemmatization", "dependency parsing", "named entity recognition", "coreference resolution", "named entity linking", "named entity disambiguation"], "datasets": ["universal_dependencies", "dane", "alexandrainst/dacoref"], "metrics": ["accuracy"], "model-index": [{"name": "da_dacy_small_trf-0.2.0", "results": [{"task": {"type": "token-classification", "name": "NER"}, "dataset": {"name": "DaNE", "type": "dane", "split": "test"}, "metrics": [{"type": "precision", "value": 0.8306010929, "name": "NER Precision"}, {"type": "recall", "value": 0.8172043011, "name": "NER Recall"}, {"type": "f_score", "value": 0.8238482385, "name": "NER F Score"}]}, {"task": {"type": "token-classification", "name": "TAG"}, "dataset": {"name": "UD Danish DDT", "type": "universal_dependencies", "config": "da_ddt", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9846798742, "name": "TAG (XPOS) Accuracy"}, {"type": "accuracy", "value": 0.9842315369, "name": "POS (UPOS) Accuracy"}, {"type": "accuracy", "value": 0.9772942762, "name": "Morph (UFeats) Accuracy"}, {"type": "accuracy", "value": 0.9466699925, "name": "Lemma Accuracy"}, {"type": "f_score", "value": 0.8978522787, "name": "Unlabeled Attachment Score (UAS)"}, {"type": "f_score", "value": 0.8701623698, "name": "Labeled Attachment Score (LAS)"}, {"type": "f_score", "value": 0.9433304272, "name": "Sentences F-Score"}]}, {"task": {"type": "coreference-resolution", "name": "coreference-resolution"}, "dataset": {"name": "DaCoref", "type": "alexandrainst/dacoref", "split": "custom"}, "metrics": [{"type": "f_score", "value": 0.4218334451, "name": "LEA"}]}, {"task": {"type": "coreference-resolution", "name": "coreference-resolution"}, "dataset": {"name": "DaNED", "type": "named-entity-linking", "split": "custom"}, "metrics": [{"type": "precision", "value": 0.8461538462, "name": "Named entity Linking Precision"}, {"type": "recall", "value": 0.2222222222, "name": "Named entity Linking Recall"}, {"type": "f_score", "value": 0.352, "name": "Named entity Linking F Score"}]}]}]}
chcaa/da_dacy_small_trf
null
[ "spacy", "dacy", "danish", "token-classification", "pos tagging", "morphological analysis", "lemmatization", "dependency parsing", "named entity recognition", "coreference resolution", "named entity linking", "named entity disambiguation", "da", "dataset:universal_dependencies", "dataset:dane", "dataset:alexandrainst/dacoref", "license:apache-2.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "da" ]
TAGS #spacy #dacy #danish #token-classification #pos tagging #morphological analysis #lemmatization #dependency parsing #named entity recognition #coreference resolution #named entity linking #named entity disambiguation #da #dataset-universal_dependencies #dataset-dane #dataset-alexandrainst/dacoref #license-apache-2.0 #model-index #region-us
<a href="URL src="URL width="175" height="175" align="right" /> DaCy small ========== DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analysing Danish pipelines. DaCy's largest pipeline has achieved State-of-the-Art performance on parts-of-speech tagging and dependency parsing for Danish on the Danish Dependency treebank as well as competitive performance on named entity recognition, named entity disambiguation and coreference resolution. To read more check out the DaCy repository for material on how to use DaCy and reproduce the results. DaCy also contains guides on usage of the package as well as behavioural test for biases and robustness of Danish NLP pipelines. ### Label Scheme View label scheme (211 labels for 4 components) ### Accuracy ### Training This model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here.
[ "### Label Scheme\n\n\n\nView label scheme (211 labels for 4 components)", "### Accuracy", "### Training\n\n\nThis model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here." ]
[ "TAGS\n#spacy #dacy #danish #token-classification #pos tagging #morphological analysis #lemmatization #dependency parsing #named entity recognition #coreference resolution #named entity linking #named entity disambiguation #da #dataset-universal_dependencies #dataset-dane #dataset-alexandrainst/dacoref #license-apache-2.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (211 labels for 4 components)", "### Accuracy", "### Training\n\n\nThis model was trained using spaCy and logged to Weights & Biases. You can find all the training logs here." ]
text-generation
transformers
# Chizuru Ichinose~ DialoGPT Model
{"tags": ["conversational"]}
chellver24/DialoGPT-medium-chizuru_ichinose
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Chizuru Ichinose~ DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-chinese-cnhdwriter This model is a fine-tuned version of [fnlp/bart-large-chinese](https://huggingface.co/fnlp/bart-large-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3859 - Rouge1: 16.8496 - Rouge2: 2.5548 - Rougel: 16.8123 - Rougelsum: 16.8056 - Gen Len: 18.9357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 1.2119 | 1.0 | 62716 | 1.1876 | 15.3858 | 2.1251 | 15.3709 | 15.3705 | 18.7269 | | 1.0847 | 2.0 | 125432 | 1.3353 | 13.7743 | 1.9047 | 13.7664 | 13.7421 | 18.6183 | | 0.6995 | 3.0 | 188148 | 1.2209 | 16.6797 | 2.3979 | 16.6258 | 16.6368 | 18.8953 | | 0.4819 | 4.0 | 250864 | 1.3859 | 16.8496 | 2.5548 | 16.8123 | 16.8056 | 18.9357 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "fnlp/bart-large-chinese", "model-index": [{"name": "bart-large-chinese-cnhdwriter", "results": []}]}
chinhon/bart-large-chinese-cnhdwriter
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:fnlp/bart-large-chinese", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-fnlp/bart-large-chinese #autotrain_compatible #endpoints_compatible #has_space #region-us
bart-large-chinese-cnhdwriter ============================= This model is a fine-tuned version of fnlp/bart-large-chinese on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.3859 * Rouge1: 16.8496 * Rouge2: 2.5548 * Rougel: 16.8123 * Rougelsum: 16.8056 * Gen Len: 18.9357 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-fnlp/bart-large-chinese #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-summarizer_03 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0999 - Rouge1: 51.6222 - Rouge2: 33.428 - Rougel: 40.2093 - Rougelsum: 47.7154 - Gen Len: 102.7962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 0.9348 | 1.0 | 17166 | 0.9969 | 51.0763 | 32.9497 | 39.6851 | 47.0744 | 99.664 | | 0.7335 | 2.0 | 34332 | 1.0019 | 51.8002 | 33.8081 | 40.5887 | 47.9445 | 99.7884 | | 0.471 | 3.0 | 51498 | 1.0999 | 51.6222 | 33.428 | 40.2093 | 47.7154 | 102.7962 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-large-cnn", "model-index": [{"name": "bart-large-cnn-summarizer_03", "results": []}]}
chinhon/bart-large-cnn-summarizer_03
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
bart-large-cnn-summarizer\_03 ============================= This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.0999 * Rouge1: 51.6222 * Rouge2: 33.428 * Rougel: 40.2093 * Rougelsum: 47.7154 * Gen Len: 102.7962 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.9.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-commentaries_hdwriter This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1619 - Rouge1: 26.1101 - Rouge2: 9.928 - Rougel: 22.9007 - Rougelsum: 23.117 - Gen Len: 15.9536 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6237 | 1.0 | 5072 | 2.5309 | 26.4063 | 9.1795 | 22.6699 | 22.9125 | 17.3103 | | 1.8808 | 2.0 | 10144 | 2.5049 | 25.3706 | 8.7568 | 21.8594 | 22.1233 | 15.8579 | | 1.3084 | 3.0 | 15216 | 2.6680 | 26.6284 | 9.9914 | 23.1477 | 23.3625 | 16.8832 | | 0.9247 | 4.0 | 20288 | 2.8923 | 26.3827 | 9.8217 | 22.9524 | 23.1651 | 15.4529 | | 0.692 | 5.0 | 25360 | 3.1619 | 26.1101 | 9.928 | 22.9007 | 23.117 | 15.9536 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "bart-large-commentaries_hdwriter", "results": []}]}
chinhon/bart-large-commentaries_hdwriter
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
bart-large-commentaries\_hdwriter ================================= This model is a fine-tuned version of facebook/bart-large on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 3.1619 * Rouge1: 26.1101 * Rouge2: 9.928 * Rougel: 22.9007 * Rougelsum: 23.117 * Gen Len: 15.9536 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-sgnews This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3558 | 1.0 | 23769 | 3.2316 | | 3.2558 | 2.0 | 47538 | 3.1683 | | 3.2321 | 3.0 | 71307 | 3.1516 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilgpt2", "model-index": [{"name": "distilgpt2-sgnews", "results": []}]}
chinhon/distilgpt2-sgnews
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
distilgpt2-sgnews ================= This model is a fine-tuned version of distilgpt2 on the None dataset. It achieves the following results on the evaluation set: * Loss: 3.1516 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 25965855 - CO2 Emissions (in grams): 114.71292762345828 ## Validation Metrics - Loss: 1.3862273693084717 - Rouge1: 52.4988 - Rouge2: 31.6973 - RougeL: 47.1727 - RougeLsum: 47.1576 - Gen Len: 17.6194 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/chinhon/autonlp-sg_headline_generator-25965855 ```
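The same call can be made from Python; this mirrors the cURL command above verbatim (the API key is a placeholder):

```
# Python equivalent of the cURL example above, using the requests library.
import requests

API_URL = "https://api-inference.huggingface.co/chinhon/autonlp-sg_headline_generator-25965855"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())
```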
{"language": "en", "tags": "autonlp", "datasets": ["chinhon/autonlp-data-sg_headline_generator"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 114.71292762345828}
chinhon/headline_writer
null
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "autonlp", "en", "dataset:chinhon/autonlp-data-sg_headline_generator", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #bart #text2text-generation #autonlp #en #dataset-chinhon/autonlp-data-sg_headline_generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 25965855 - CO2 Emissions (in grams): 114.71292762345828 ## Validation Metrics - Loss: 1.3862273693084717 - Rouge1: 52.4988 - Rouge2: 31.6973 - RougeL: 47.1727 - RougeLsum: 47.1576 - Gen Len: 17.6194 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 25965855\n- CO2 Emissions (in grams): 114.71292762345828", "## Validation Metrics\n\n- Loss: 1.3862273693084717\n- Rouge1: 52.4988\n- Rouge2: 31.6973\n- RougeL: 47.1727\n- RougeLsum: 47.1576\n- Gen Len: 17.6194", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #autonlp #en #dataset-chinhon/autonlp-data-sg_headline_generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 25965855\n- CO2 Emissions (in grams): 114.71292762345828", "## Validation Metrics\n\n- Loss: 1.3862273693084717\n- Rouge1: 52.4988\n- Rouge2: 31.6973\n- RougeL: 47.1727\n- RougeLsum: 47.1576\n- Gen Len: 17.6194", "## Usage\n\nYou can use cURL to access this model:" ]
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 25965856 - CO2 Emissions (in grams): 396.629376395644 ## Validation Metrics - Loss: 1.4130597114562988 - Rouge1: 51.7922 - Rouge2: 30.8259 - RougeL: 46.4585 - RougeLsum: 46.4807 - Gen Len: 15.8411 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/chinhon/autonlp-sg_headline_generator-25965856 ```
{"language": "en", "tags": "autonlp", "datasets": ["chinhon/autonlp-data-sg_headline_generator"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 396.629376395644}
chinhon/headline_writer2
null
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "autonlp", "en", "dataset:chinhon/autonlp-data-sg_headline_generator", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #bart #text2text-generation #autonlp #en #dataset-chinhon/autonlp-data-sg_headline_generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 25965856 - CO2 Emissions (in grams): 396.629376395644 ## Validation Metrics - Loss: 1.4130597114562988 - Rouge1: 51.7922 - Rouge2: 30.8259 - RougeL: 46.4585 - RougeLsum: 46.4807 - Gen Len: 15.8411 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 25965856\n- CO2 Emissions (in grams): 396.629376395644", "## Validation Metrics\n\n- Loss: 1.4130597114562988\n- Rouge1: 51.7922\n- Rouge2: 30.8259\n- RougeL: 46.4585\n- RougeLsum: 46.4807\n- Gen Len: 15.8411", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #autonlp #en #dataset-chinhon/autonlp-data-sg_headline_generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 25965856\n- CO2 Emissions (in grams): 396.629376395644", "## Validation Metrics\n\n- Loss: 1.4130597114562988\n- Rouge1: 51.7922\n- Rouge2: 30.8259\n- RougeL: 46.4585\n- RougeLsum: 46.4807\n- Gen Len: 15.8411", "## Usage\n\nYou can use cURL to access this model:" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-large-commentaries_hd This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5453 - Rouge1: 26.3475 - Rouge2: 9.5095 - Rougel: 22.6367 - Rougelsum: 22.8127 - Gen Len: 14.4789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.5718 | 1.0 | 4710 | 2.5277 | 25.1384 | 8.6528 | 21.3443 | 21.5289 | 15.3268 | | 2.4034 | 2.0 | 9420 | 2.4973 | 25.9298 | 9.2238 | 22.3192 | 22.4817 | 14.2243 | | 2.2093 | 3.0 | 14130 | 2.5013 | 26.6036 | 9.7482 | 22.8409 | 23.0077 | 14.2263 | | 2.0518 | 4.0 | 18840 | 2.5272 | 26.4723 | 9.6599 | 22.7439 | 22.9201 | 14.38 | | 1.9906 | 5.0 | 23550 | 2.5453 | 26.3475 | 9.5095 | 22.6367 | 22.8127 | 14.4789 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-large", "model-index": [{"name": "pegasus-large-commentaries_hd", "results": []}]}
chinhon/pegasus-large-commentaries_hd
null
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-large", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-large #autotrain_compatible #endpoints_compatible #has_space #region-us
pegasus-large-commentaries\_hd ============================== This model is a fine-tuned version of google/pegasus-large on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.5453 * Rouge1: 26.3475 * Rouge2: 9.5095 * Rougel: 22.6367 * Rougelsum: 22.8127 * Gen Len: 14.4789 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-large #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-multi_news-commentaries_hdwriter This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7259 - Rouge1: 21.3899 - Rouge2: 6.2409 - Rougel: 16.6172 - Rougelsum: 17.808 - Gen Len: 34.7016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.847 | 1.0 | 4710 | 2.7513 | 20.5559 | 5.9762 | 16.1223 | 17.2872 | 35.81 | | 2.6399 | 2.0 | 9420 | 2.6890 | 21.2052 | 6.0104 | 16.5753 | 17.6517 | 34.5242 | | 2.3811 | 3.0 | 14130 | 2.6904 | 21.2358 | 6.1416 | 16.6053 | 17.7067 | 34.6157 | | 2.2388 | 4.0 | 18840 | 2.7112 | 21.3806 | 6.1895 | 16.6909 | 17.7504 | 34.5227 | | 2.1589 | 5.0 | 23550 | 2.7259 | 21.3899 | 6.2409 | 16.6172 | 17.808 | 34.7016 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-multi_news", "model-index": [{"name": "pegasus-multi_news-commentaries_hdwriter", "results": []}]}
chinhon/pegasus-multi_news-commentaries_hdwriter
null
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-multi_news", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-multi_news #autotrain_compatible #endpoints_compatible #has_space #region-us
pegasus-multi\_news-commentaries\_hdwriter ========================================== This model is a fine-tuned version of google/pegasus-multi\_news on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.7259 * Rouge1: 21.3899 * Rouge2: 6.2409 * Rougel: 16.6172 * Rougelsum: 17.808 * Gen Len: 34.7016 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-multi_news #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-multi_news-headline This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4421 - Rouge1: 41.616 - Rouge2: 22.922 - Rougel: 35.2189 - Rougelsum: 35.3561 - Gen Len: 33.9532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.6637 | 1.0 | 31200 | 1.4877 | 41.0996 | 22.579 | 34.9311 | 35.0611 | 34.3431 | | 1.4395 | 2.0 | 62400 | 1.4388 | 41.6075 | 22.8274 | 35.2051 | 35.3526 | 33.7965 | | 1.3137 | 3.0 | 93600 | 1.4421 | 41.616 | 22.922 | 35.2189 | 35.3561 | 33.9532 | ### Framework versions - Transformers 4.12.2 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-multi_news", "model-index": [{"name": "pegasus-multi_news-headline", "results": []}]}
chinhon/pegasus-multi_news-headline
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-multi_news", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-multi_news #autotrain_compatible #endpoints_compatible #has_space #region-us
pegasus-multi\_news-headline ============================ This model is a fine-tuned version of google/pegasus-multi\_news on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.4421 * Rouge1: 41.616 * Rouge2: 22.922 * Rougel: 35.2189 * Rougelsum: 35.3561 * Gen Len: 33.9532 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.2 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-multi_news #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-multi_news-malay_headlines_02 This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9295 - Rouge1: 39.9859 - Rouge2: 20.1943 - Rougel: 36.1927 - Rougelsum: 36.2105 - Gen Len: 35.6062 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.0943 | 1.0 | 53582 | 1.9295 | 39.9859 | 20.1943 | 36.1927 | 36.2105 | 35.6062 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-multi_news", "model-index": [{"name": "pegasus-multi_news-malay_headlines_02", "results": []}]}
chinhon/pegasus-multi_news-malay_headlines_02
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-multi_news", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-multi_news #autotrain_compatible #endpoints_compatible #has_space #region-us
pegasus-multi\_news-malay\_headlines\_02 ======================================== This model is a fine-tuned version of google/pegasus-multi\_news on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.9295 * Rouge1: 39.9859 * Rouge2: 20.1943 * Rougel: 36.1927 * Rougelsum: 36.2105 * Gen Len: 35.6062 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.10.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-multi_news #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-multi_news-summarizer_01 This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2794 - Rouge1: 52.1693 - Rouge2: 34.8989 - Rougel: 41.2385 - Rougelsum: 48.4365 - Gen Len: 98.6433 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.3936 | 1.0 | 16113 | 1.2972 | 51.5747 | 34.2062 | 40.7279 | 47.7783 | 95.0004 | | 1.3664 | 2.0 | 32226 | 1.2817 | 52.1077 | 34.8189 | 41.1614 | 48.3894 | 100.3265 | | 1.3002 | 3.0 | 48339 | 1.2794 | 52.1693 | 34.8989 | 41.2385 | 48.4365 | 98.6433 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-multi_news", "model-index": [{"name": "pegasus-multi_news-summarizer_01", "results": []}]}
chinhon/pegasus-multi_news-summarizer_01
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-multi_news", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-multi_news #autotrain_compatible #endpoints_compatible #region-us
pegasus-multi\_news-summarizer\_01 ================================== This model is a fine-tuned version of google/pegasus-multi\_news on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.2794 * Rouge1: 52.1693 * Rouge2: 34.8989 * Rougel: 41.2385 * Rougelsum: 48.4365 * Gen Len: 98.6433 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.9.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-multi_news #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-newsroom-commentaries_hdwriter This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5316 - Rouge1: 21.4079 - Rouge2: 6.2399 - Rougel: 16.6644 - Rougelsum: 17.8501 - Gen Len: 34.4111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6327 | 1.0 | 4710 | 2.5474 | 20.9392 | 6.1702 | 16.3859 | 17.5963 | 35.6626 | | 2.4322 | 2.0 | 9420 | 2.5198 | 21.4026 | 6.1811 | 16.5874 | 17.8207 | 34.5976 | | 2.2703 | 3.0 | 14130 | 2.5316 | 21.4079 | 6.2399 | 16.6644 | 17.8501 | 34.4111 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "pegasus-newsroom-commentaries_hdwriter", "results": []}]}
chinhon/pegasus-newsroom-commentaries_hdwriter
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
pegasus-newsroom-commentaries\_hdwriter ======================================= This model is a fine-tuned version of google/pegasus-newsroom on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.5316 * Rouge1: 21.4079 * Rouge2: 6.2399 * Rougel: 16.6644 * Rougelsum: 17.8501 * Gen Len: 34.4111 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-newsroom-headline_writer This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3988 - Rouge1: 41.8748 - Rouge2: 23.1947 - Rougel: 35.6263 - Rougelsum: 35.7355 - Gen Len: 34.1266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.5784 | 1.0 | 31200 | 1.4287 | 41.4257 | 22.9355 | 35.3299 | 35.4648 | 34.4677 | | 1.3501 | 2.0 | 62400 | 1.3955 | 41.9119 | 23.1912 | 35.6698 | 35.7479 | 33.8672 | | 1.2417 | 3.0 | 93600 | 1.3988 | 41.8748 | 23.1947 | 35.6263 | 35.7355 | 34.1266 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
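Because the "Intended uses & limitations" section is empty, the following is only a hedged sketch of how a fine-tuned PEGASUS checkpoint such as this one is commonly queried for headline generation. The input article is invented and the beam-search settings are illustrative assumptions, not values documented in the card; only the checkpoint name `chinhon/pegasus-newsroom-headline_writer` comes from this repository.

```python
# Minimal inference sketch; generation settings are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chinhon/pegasus-newsroom-headline_writer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = (
    "Heavy rain flooded several expressways overnight, forcing commuters to "
    "seek alternative routes while crews worked to clear blocked drains."
)  # placeholder text, not taken from the card

inputs = tokenizer(article, truncation=True, return_tensors="pt")
with torch.no_grad():
    headline_ids = model.generate(
        **inputs,
        num_beams=4,        # assumption
        max_length=48,      # assumption; the card's mean generation length is ~34 tokens
        early_stopping=True,
    )
print(tokenizer.decode(headline_ids[0], skip_special_tokens=True))
```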
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-newsroom", "model-index": [{"name": "pegasus-newsroom-headline_writer", "results": []}]}
chinhon/pegasus-newsroom-headline_writer
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-newsroom", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-newsroom #autotrain_compatible #endpoints_compatible #has_space #region-us
pegasus-newsroom-headline\_writer ================================= This model is a fine-tuned version of google/pegasus-newsroom on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.3988 * Rouge1: 41.8748 * Rouge2: 23.1947 * Rougel: 35.6263 * Rougelsum: 35.7355 * Gen Len: 34.1266 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-newsroom #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-newsroom-malay_headlines This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6603 - Rouge1: 42.6667 - Rouge2: 22.8739 - Rougel: 38.6684 - Rougelsum: 38.6928 - Gen Len: 34.7995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9713 | 1.0 | 15310 | 1.8121 | 41.1469 | 21.5262 | 37.3081 | 37.3377 | 35.0939 | | 1.7917 | 2.0 | 30620 | 1.6913 | 42.4027 | 22.6089 | 38.4471 | 38.4699 | 34.8149 | | 1.7271 | 3.0 | 45930 | 1.6603 | 42.6667 | 22.8739 | 38.6684 | 38.6928 | 34.7995 | ### Framework versions - Transformers 4.12.2 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-newsroom", "model-index": [{"name": "pegasus-newsroom-malay_headlines", "results": []}]}
chinhon/pegasus-newsroom-malay_headlines
null
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-newsroom", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-newsroom #autotrain_compatible #endpoints_compatible #has_space #region-us
pegasus-newsroom-malay\_headlines ================================= This model is a fine-tuned version of google/pegasus-newsroom on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6603 * Rouge1: 42.6667 * Rouge2: 22.8739 * Rougel: 38.6684 * Rougelsum: 38.6928 * Gen Len: 34.7995 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.2 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-newsroom #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-newsroom-summarizer_02 This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2204 - Rouge1: 52.4459 - Rouge2: 35.2568 - Rougel: 41.6213 - Rougelsum: 48.7859 - Gen Len: 98.0627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.3231 | 1.0 | 16113 | 1.2305 | 52.1565 | 34.8681 | 41.3189 | 48.4258 | 95.9049 | | 1.3001 | 2.0 | 32226 | 1.2186 | 52.4921 | 35.2661 | 41.6264 | 48.8168 | 98.9241 | | 1.2372 | 3.0 | 48339 | 1.2204 | 52.4459 | 35.2568 | 41.6213 | 48.7859 | 98.0627 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-newsroom", "model-index": [{"name": "pegasus-newsroom-summarizer_02", "results": []}]}
chinhon/pegasus-newsroom-summarizer_02
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-newsroom", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-newsroom #autotrain_compatible #endpoints_compatible #has_space #region-us
pegasus-newsroom-summarizer\_02 =============================== This model is a fine-tuned version of google/pegasus-newsroom on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.2204 * Rouge1: 52.4459 * Rouge2: 35.2568 * Rougel: 41.6213 * Rougelsum: 48.7859 * Gen Len: 98.0627 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.9.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-newsroom #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text-generation
transformers
Chizuru Ichinose DialoGPT Model.
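No usage notes are provided, so here is a hedged sketch of the standard DialoGPT chat loop, in which each turn is appended with the EOS token and the running history is fed back into `generate`. The user messages and the sampling settings are illustrative assumptions.

```python
# Standard DialoGPT-style chat loop; sampling settings and messages are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "chip/DialoGPT-small-chizuru"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history = None
for user_input in ["Hi Chizuru, how was the shoot today?", "Any plans for the weekend?"]:
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    input_ids = new_ids if chat_history is None else torch.cat([chat_history, new_ids], dim=-1)
    chat_history = model.generate(
        input_ids,
        max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,   # assumption
        top_p=0.92,       # assumption
    )
    reply = tokenizer.decode(chat_history[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```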
{"tags": ["conversational"]}
chip/DialoGPT-small-chizuru
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Chizuru Ichinose DialoGPT Model.
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
### DistilBERT model fine-tuned on the task of classifying product descriptions to one of 45 broad [NICE classifications](https://www.wipo.int/classifications/nice/en/)
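As a concrete illustration of the description above, the snippet below queries the classifier through the `text-classification` pipeline. The product description is invented, and the label strings the checkpoint returns (generic `LABEL_*` ids versus NICE class numbers) depend on its config, so both are assumptions; `top_k` requires a reasonably recent `transformers` release.

```python
# Sketch: query the classifier for the most likely NICE classes.
# The product description and the shape of the returned labels are assumptions.
from transformers import pipeline

classifier = pipeline("text-classification", model="chisadi/nice-distilbert-v2")

description = "Stainless steel cooking pots and frying pans for household use"
for pred in classifier(description, top_k=3):   # top 3 of the 45 broad classes
    print(f"{pred['label']}: {pred['score']:.3f}")
```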
{}
chisadi/nice-distilbert-v2
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
### DistilBERT model fine-tuned on the task of classifying product descriptions to one of 45 broad NICE classifications
[ "### DistilBERT model fine-tuned on the task of classifying product descriptions to one of 45 broad NICE classifications" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### DistilBERT model fine-tuned on the task of classifying product descriptions to one of 45 broad NICE classifications" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune-paraphrase-model This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.1 | 200 | 3.0116 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "model-index": [{"name": "finetune-paraphrase-model", "results": []}]}
chitra/finetune-paraphrase-model
null
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
finetune-paraphrase-model ========================= This model is a fine-tuned version of coderpotter/adversarial-paraphrasing-detector on an unknown dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 0.1 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-adversarial-paraphrase-model This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.5680 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0848 | 1.0 | 2000 | 5.4633 | | 0.0495 | 2.0 | 4000 | 6.0352 | | 0.0121 | 3.0 | 6000 | 7.5680 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
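The card does not document the expected input format. Assuming the underlying adversarial-paraphrasing detector scores a sentence pair — a common setup for paraphrase detection, but an assumption rather than something stated here — a minimal inference sketch looks like this; the example sentences are invented and the meaning of the output classes is not documented in the card.

```python
# Sketch under the assumption that the detector takes a sentence pair;
# label order/meaning (paraphrase vs. not) is not documented in this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "chitra/finetuned-adversarial-paraphrase-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sent_a = "The company reported strong quarterly earnings."
sent_b = "Quarterly profits at the firm came in well above expectations."

inputs = tokenizer(sent_a, sent_b, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class order depends on the checkpoint's config
```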
{"tags": ["generated_from_trainer"], "model-index": [{"name": "finetuned-adversarial-paraphrase-model", "results": []}]}
chitra/finetuned-adversarial-paraphrase-model
null
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
finetuned-adversarial-paraphrase-model ====================================== This model is a fine-tuned version of coderpotter/adversarial-paraphrasing-detector on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 7.5680 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
### Welcome to RoBERTArg! 🤖 **Model description** This model was trained on ~25k heterogeneous manually annotated sentences (📚 [Stab et al. 2018](https://www.aclweb.org/anthology/D18-1402/)) of controversial topics to classify text into one of two labels: 🏷 **NON-ARGUMENT** (0) and **ARGUMENT** (1). 🗃 **Dataset** The dataset (📚 Stab et al. 2018) consists of **ARGUMENTS** (\~11k) that either support or oppose a topic if it includes a relevant reason for supporting or opposing the topic, or as a **NON-ARGUMENT** (\~14k) if it does not include reasons. The authors focus on controversial topics, i.e., topics that include "an obvious polarity to the possible outcomes" and compile a final set of eight controversial topics: _abortion, school uniforms, death penalty, marijuana legalization, nuclear energy, cloning, gun control, and minimum wage_. | TOPIC | ARGUMENT | NON-ARGUMENT | |----|----|----| | abortion | 2213 | 2,427 | | school uniforms | 325 | 1,734 | | death penalty | 325 | 2,083 | | marijuana legalization | 325 | 1,262 | | nuclear energy | 325 | 2,118 | | cloning | 325 | 1,494 | | gun control | 325 | 1,889 | | minimum wage | 325 | 1,346 | 🏃🏼‍♂️**Model training** **RoBERTArg** was fine-tuned on a RoBERTA (base) pre-trained model from HuggingFace using the HuggingFace trainer with the following hyperparameters: ``` training_args = TrainingArguments( num_train_epochs=2, learning_rate=2.3102e-06, seed=8, per_device_train_batch_size=64, per_device_eval_batch_size=64, ) ``` 📊 **Evaluation** The model was evaluated on an evaluation set (20%): | Model | Acc | F1 | R arg | R non | P arg | P non | |----|----|----|----|----|----|----| | RoBERTArg | 0.8193 | 0.8021 | 0.8463 | 0.7986 | 0.7623 | 0.8719 | Showing the **confusion matrix** using again the evaluation set: | | ARGUMENT | NON-ARGUMENT | |----|----|----| | ARGUMENT | 2213 | 558 | | NON-ARGUMENT | 325 | 1790 | ⚠️ **Intended Uses & Potential Limitations** The model can only be a starting point to dive into the exciting field of argument mining. But be aware. An argument is a complex structure, with multiple dependencies. Therefore, the model may perform less well on different topics and text types not included in the training set. Enjoy and stay tuned! 🚀 🐦 Twitter: [@chklamm](http://twitter.com/chklamm)
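A small inference sketch is shown below for completeness; it reuses the example sentence from this repository's inference widget. The index-to-name mapping follows the 0 = NON-ARGUMENT / 1 = ARGUMENT convention stated above, though whether the checkpoint's config exposes those exact label strings is an assumption.

```python
# Inference sketch for RoBERTArg; the index->name mapping follows the card's
# convention (0 = NON-ARGUMENT, 1 = ARGUMENT).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "chkla/roberta-argument"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = (
    "It has been determined that the amount of greenhouse gases have decreased "
    "by almost half because of the prevalence in the utilization of nuclear power."
)  # example sentence from this repository's inference widget

inputs = tokenizer(sentence, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

labels = ["NON-ARGUMENT", "ARGUMENT"]   # 0 / 1 as described above
for name, p in zip(labels, probs.tolist()):
    print(f"{name}: {p:.3f}")
```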
{"language": "en", "widget": [{"text": "It has been determined that the amount of greenhouse gases have decreased by almost half because of the prevalence in the utilization of nuclear power."}]}
chkla/roberta-argument
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "roberta", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #roberta #text-classification #en #autotrain_compatible #endpoints_compatible #has_space #region-us
### Welcome to RoBERTArg! Model description This model was trained on ~25k heterogeneous manually annotated sentences ( Stab et al. 2018) of controversial topics to classify text into one of two labels: NON-ARGUMENT (0) and ARGUMENT (1). Dataset The dataset ( Stab et al. 2018) consists of ARGUMENTS (~11k) that either support or oppose a topic if it includes a relevant reason for supporting or opposing the topic, or as a NON-ARGUMENT (~14k) if it does not include reasons. The authors focus on controversial topics, i.e., topics that include "an obvious polarity to the possible outcomes" and compile a final set of eight controversial topics: *abortion, school uniforms, death penalty, marijuana legalization, nuclear energy, cloning, gun control, and minimum wage*. TOPIC: abortion, ARGUMENT: 2213, NON-ARGUMENT: 2,427 TOPIC: school uniforms, ARGUMENT: 325, NON-ARGUMENT: 1,734 TOPIC: death penalty, ARGUMENT: 325, NON-ARGUMENT: 2,083 TOPIC: marijuana legalization, ARGUMENT: 325, NON-ARGUMENT: 1,262 TOPIC: nuclear energy, ARGUMENT: 325, NON-ARGUMENT: 2,118 TOPIC: cloning, ARGUMENT: 325, NON-ARGUMENT: 1,494 TOPIC: gun control, ARGUMENT: 325, NON-ARGUMENT: 1,889 TOPIC: minimum wage, ARGUMENT: 325, NON-ARGUMENT: 1,346 ‍️Model training RoBERTArg was fine-tuned on a RoBERTA (base) pre-trained model from HuggingFace using the HuggingFace trainer with the following hyperparameters: Evaluation The model was evaluated on an evaluation set (20%): Showing the confusion matrix using again the evaluation set: ARGUMENT: ARGUMENT, NON-ARGUMENT: 2213 ARGUMENT: NON-ARGUMENT, NON-ARGUMENT: 325 ️ Intended Uses & Potential Limitations The model can only be a starting point to dive into the exciting field of argument mining. But be aware. An argument is a complex structure, with multiple dependencies. Therefore, the model may perform less well on different topics and text types not included in the training set. Enjoy and stay tuned! Twitter: @chklamm
[ "### Welcome to RoBERTArg!\n\n\nModel description\n\n\nThis model was trained on ~25k heterogeneous manually annotated sentences ( Stab et al. 2018) of controversial topics to classify text into one of two labels: NON-ARGUMENT (0) and ARGUMENT (1).\n\n\nDataset\n\n\nThe dataset ( Stab et al. 2018) consists of ARGUMENTS (~11k) that either support or oppose a topic if it includes a relevant reason for supporting or opposing the topic, or as a NON-ARGUMENT (~14k) if it does not include reasons. The authors focus on controversial topics, i.e., topics that include \"an obvious polarity to the possible outcomes\" and compile a final set of eight controversial topics: *abortion, school uniforms, death penalty, marijuana legalization, nuclear energy, cloning, gun control, and minimum wage*.\n\n\nTOPIC: abortion, ARGUMENT: 2213, NON-ARGUMENT: 2,427\nTOPIC: school uniforms, ARGUMENT: 325, NON-ARGUMENT: 1,734\nTOPIC: death penalty, ARGUMENT: 325, NON-ARGUMENT: 2,083\nTOPIC: marijuana legalization, ARGUMENT: 325, NON-ARGUMENT: 1,262\nTOPIC: nuclear energy, ARGUMENT: 325, NON-ARGUMENT: 2,118\nTOPIC: cloning, ARGUMENT: 325, NON-ARGUMENT: 1,494\nTOPIC: gun control, ARGUMENT: 325, NON-ARGUMENT: 1,889\nTOPIC: minimum wage, ARGUMENT: 325, NON-ARGUMENT: 1,346\n\n\n‍️Model training\n\n\nRoBERTArg was fine-tuned on a RoBERTA (base) pre-trained model from HuggingFace using the HuggingFace trainer with the following hyperparameters:\n\n\nEvaluation\n\n\nThe model was evaluated on an evaluation set (20%):\n\n\n\nShowing the confusion matrix using again the evaluation set:\n\n\nARGUMENT: ARGUMENT, NON-ARGUMENT: 2213\nARGUMENT: NON-ARGUMENT, NON-ARGUMENT: 325\n\n\n️ Intended Uses & Potential Limitations\n\n\nThe model can only be a starting point to dive into the exciting field of argument mining. But be aware. An argument is a complex structure, with multiple dependencies. Therefore, the model may perform less well on different topics and text types not included in the training set.\n\n\nEnjoy and stay tuned!\n\n\nTwitter: @chklamm" ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Welcome to RoBERTArg!\n\n\nModel description\n\n\nThis model was trained on ~25k heterogeneous manually annotated sentences ( Stab et al. 2018) of controversial topics to classify text into one of two labels: NON-ARGUMENT (0) and ARGUMENT (1).\n\n\nDataset\n\n\nThe dataset ( Stab et al. 2018) consists of ARGUMENTS (~11k) that either support or oppose a topic if it includes a relevant reason for supporting or opposing the topic, or as a NON-ARGUMENT (~14k) if it does not include reasons. The authors focus on controversial topics, i.e., topics that include \"an obvious polarity to the possible outcomes\" and compile a final set of eight controversial topics: *abortion, school uniforms, death penalty, marijuana legalization, nuclear energy, cloning, gun control, and minimum wage*.\n\n\nTOPIC: abortion, ARGUMENT: 2213, NON-ARGUMENT: 2,427\nTOPIC: school uniforms, ARGUMENT: 325, NON-ARGUMENT: 1,734\nTOPIC: death penalty, ARGUMENT: 325, NON-ARGUMENT: 2,083\nTOPIC: marijuana legalization, ARGUMENT: 325, NON-ARGUMENT: 1,262\nTOPIC: nuclear energy, ARGUMENT: 325, NON-ARGUMENT: 2,118\nTOPIC: cloning, ARGUMENT: 325, NON-ARGUMENT: 1,494\nTOPIC: gun control, ARGUMENT: 325, NON-ARGUMENT: 1,889\nTOPIC: minimum wage, ARGUMENT: 325, NON-ARGUMENT: 1,346\n\n\n‍️Model training\n\n\nRoBERTArg was fine-tuned on a RoBERTA (base) pre-trained model from HuggingFace using the HuggingFace trainer with the following hyperparameters:\n\n\nEvaluation\n\n\nThe model was evaluated on an evaluation set (20%):\n\n\n\nShowing the confusion matrix using again the evaluation set:\n\n\nARGUMENT: ARGUMENT, NON-ARGUMENT: 2213\nARGUMENT: NON-ARGUMENT, NON-ARGUMENT: 325\n\n\n️ Intended Uses & Potential Limitations\n\n\nThe model can only be a starting point to dive into the exciting field of argument mining. But be aware. An argument is a complex structure, with multiple dependencies. Therefore, the model may perform less well on different topics and text types not included in the training set.\n\n\nEnjoy and stay tuned!\n\n\nTwitter: @chklamm" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the OPENSLR_SLR66 - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.3119 - Wer: 0.2613 ### Evaluation metrics | Metric | Split | Decode with LM | Value | |:------:|:------:|:--------------:|:---------:| | WER | Train | No | 5.36 | | CER | Train | No | 1.11 | | WER | Test | No | 26.14 | | CER | Test | No | 4.93 | | WER | Train | Yes | 5.04 | | CER | Train | Yes | 1.07 | | WER | Test | Yes | 20.69 | | CER | Test | Yes | 3.986 | ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 150.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 2.9038 | 4.8 | 500 | 3.0125 | 1.0 | | 1.3777 | 9.61 | 1000 | 0.8681 | 0.8753 | | 1.1436 | 14.42 | 1500 | 0.6256 | 0.7961 | | 1.0997 | 19.23 | 2000 | 0.5244 | 0.6875 | | 1.0363 | 24.04 | 2500 | 0.4585 | 0.6276 | | 0.7996 | 28.84 | 3000 | 0.4072 | 0.5295 | | 0.825 | 33.65 | 3500 | 0.3590 | 0.5222 | | 0.8018 | 38.46 | 4000 | 0.3678 | 0.4671 | | 0.7545 | 43.27 | 4500 | 0.3474 | 0.3962 | | 0.7375 | 48.08 | 5000 | 0.3224 | 0.3869 | | 0.6198 | 52.88 | 5500 | 0.3233 | 0.3630 | | 0.6608 | 57.69 | 6000 | 0.3029 | 0.3308 | | 0.645 | 62.5 | 6500 | 0.3195 | 0.3722 | | 0.5249 | 67.31 | 7000 | 0.3004 | 0.3202 | | 0.4875 | 72.11 | 7500 | 0.2826 | 0.2992 | | 0.5171 | 76.92 | 8000 | 0.2962 | 0.2976 | | 0.4974 | 81.73 | 8500 | 0.2990 | 0.2933 | | 0.4387 | 86.54 | 9000 | 0.2834 | 0.2755 | | 0.4511 | 91.34 | 9500 | 0.2886 | 0.2787 | | 0.4112 | 96.15 | 10000 | 0.3093 | 0.2976 | | 0.4064 | 100.96 | 10500 | 0.3123 | 0.2863 | | 0.4047 | 105.77 | 11000 | 0.2968 | 0.2719 | | 0.3519 | 110.57 | 11500 | 0.3106 | 0.2832 | | 0.3719 | 115.38 | 12000 | 0.3030 | 0.2737 | | 0.3669 | 120.19 | 12500 | 0.2964 | 0.2714 | | 0.3386 | 125.0 | 13000 | 0.3101 | 0.2714 | | 0.3137 | 129.8 | 13500 | 0.3063 | 0.2710 | | 0.3008 | 134.61 | 14000 | 0.3082 | 0.2617 | | 0.301 | 139.42 | 14500 | 0.3121 | 0.2628 | | 0.3291 | 144.23 | 15000 | 0.3105 | 0.2612 | | 0.3133 | 149.04 | 15500 | 0.3114 | 0.2624 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
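The table above reports scores with and without an external language model, but no inference code is shown. Below is a hedged sketch of plain greedy CTC decoding (the "Decode with LM: No" setting); the audio path is a placeholder, 16 kHz mono input is assumed because that is what XLS-R checkpoints expect, and reproducing the with-LM numbers would additionally need a decoder such as pyctcdecode, which is not shown here.

```python
# Greedy CTC decoding sketch (corresponds to the "Decode with LM: No" rows above).
# "audio.wav" is a placeholder path; 16 kHz mono audio is assumed.
import librosa
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "chmanoj/xls-r-1B-te"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("audio.wav", sr=16_000)          # load and resample to 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```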
{"language": ["te"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "openslr_SLR66", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["openslr", "SLR66"], "metrics": ["wer"], "model-index": [{"name": "xls-r-1B-te", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Open SLR", "type": "openslr", "args": "SLR66"}, "metrics": [{"type": "wer", "value": 20.624, "name": "Test WER"}, {"type": "cer", "value": 3.979, "name": "Test CER"}, {"type": "wer", "value": 26.14777618364419, "name": "Test WER (without LM)"}, {"type": "cer", "value": 4.932543184970369, "name": "Test CER (without LM)"}]}]}]}
chmanoj/xls-r-1B-te
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "openslr_SLR66", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "te", "dataset:openslr", "dataset:SLR66", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "te" ]
TAGS #transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #openslr_SLR66 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #te #dataset-openslr #dataset-SLR66 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the OPENSLR\_SLR66 - NA dataset. It achieves the following results on the evaluation set: * Loss: 0.3119 * Wer: 0.2613 ### Evaluation metrics Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 150.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Evaluation metrics\n\n\n\nModel description\n-----------------\n\n\nMore information needed\n\n\nIntended uses & limitations\n---------------------------\n\n\nMore information needed\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 150.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #openslr_SLR66 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #te #dataset-openslr #dataset-SLR66 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Evaluation metrics\n\n\n\nModel description\n-----------------\n\n\nMore information needed\n\n\nIntended uses & limitations\n---------------------------\n\n\nMore information needed\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 150.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the OPENSLR_SLR66 - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.4253 - Wer: 0.5109 ### Evaluation metrics | Metric | Split | Decode with LM | Value | |:------:|:------:|:--------------:|:---------:| | WER | Train | No | | | CER | Train | No | | | WER | Test | No | | | CER | Test | No | | | WER | Train | Yes | | | CER | Train | Yes | | | WER | Test | Yes | | | CER | Test | Yes | | ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 12 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - learning_rate: 3e-6 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 150.0 - hidden_dropout: 0.15 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["te"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "openslr_SLR66", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["openslr", "SLR66"], "metrics": ["wer"], "model-index": [{"name": "xls-r-1B-te", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Open SLR", "type": "openslr", "args": "SLR66"}, "metrics": [{"type": "wer", "value": 0.51, "name": "Test WER"}, {"type": "cer", "value": 0.097, "name": "Test CER"}]}]}]}
chmanoj/xls-r-2B-te
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "openslr_SLR66", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "te", "dataset:openslr", "dataset:SLR66", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "te" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #openslr_SLR66 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #te #dataset-openslr #dataset-SLR66 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-2b on the OPENSLR\_SLR66 - NA dataset. It achieves the following results on the evaluation set: * Loss: 0.4253 * Wer: 0.5109 ### Evaluation metrics Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 12 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * learning\_rate: 3e-6 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 150.0 * hidden\_dropout: 0.15 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Evaluation metrics\n\n\n\nModel description\n-----------------\n\n\nMore information needed\n\n\nIntended uses & limitations\n---------------------------\n\n\nMore information needed\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 12\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* learning\\_rate: 3e-6\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 150.0\n* hidden\\_dropout: 0.15\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #openslr_SLR66 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #te #dataset-openslr #dataset-SLR66 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "### Evaluation metrics\n\n\n\nModel description\n-----------------\n\n\nMore information needed\n\n\nIntended uses & limitations\n---------------------------\n\n\nMore information needed\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 12\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* learning\\_rate: 3e-6\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 150.0\n* hidden\\_dropout: 0.15\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset. It achieves the following results on the evaluation set: - Loss: 0.8004 - Wer: 0.7139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.6683 | 1.45 | 500 | 1.7698 | 1.0041 | | 1.9548 | 2.91 | 1000 | 1.0890 | 0.8602 | | 1.9568 | 4.36 | 1500 | 1.0878 | 0.8680 | | 1.9497 | 5.81 | 2000 | 1.1501 | 0.8838 | | 1.8453 | 7.27 | 2500 | 1.0452 | 0.8418 | | 1.6952 | 8.72 | 3000 | 0.9153 | 0.7823 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu113 - Datasets 1.18.1.dev0 - Tokenizers 0.10.3
{"language": ["sv-SE"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
chmanoj/xls-r-300m-sv
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sv-SE" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - SV-SE dataset. It achieves the following results on the evaluation set: * Loss: 0.8004 * Wer: 0.7139 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 10.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.0+cu113 * Datasets 1.18.1.dev0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.1.dev0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.1.dev0\n* Tokenizers 0.10.3" ]